Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

December 9, 2017

Clojure 1.9 Hits the Streets!

Filed under: Clojure,Functional Programming,Merging,Topic Maps — Patrick Durusau @ 4:31 pm

Clojure 1.9 by Alex Miller.

From the post:

Clojure 1.9 is now available!

Clojure 1.9 introduces two major new features: integration with spec and command line tools.

spec (rationale, guide) is a library for describing the structure of data and functions with support for:

  • Validation
  • Error reporting
  • Destructuring
  • Instrumentation
  • Test-data generation
  • Generative test generation
  • Documentation

Clojure integrates spec via two new libraries (still in alpha):

  • spec.alpha – the spec implementation itself
  • core.specs.alpha – specs describing Clojure itself

This modularization facilitates refinement of spec separate from the Clojure release cycle.

The command line tools (getting started, guide, reference) provide:

  • Quick and easy install
  • Clojure REPL and runner
  • Use of Maven and local dependencies
  • A functional API for classpath management (tools.deps.alpha)

The installer is available for Mac developers in brew, for Linux users in a script, and for more platforms in the future.

Being interested in documentation, I followed the link to spec rationale and found:


Map specs should be of keysets only

Most systems for specifying structures conflate the specification of the key set (e.g. of keys in a map, fields in an object) with the specification of the values designated by those keys. I.e. in such approaches the schema for a map might say :a-key’s type is x-type and :b-key’s type is y-type. This is a major source of rigidity and redundancy.

In Clojure we gain power by dynamically composing, merging and building up maps. We routinely deal with optional and partial data, data produced by unreliable external sources, dynamic queries etc. These maps represent various sets, subsets, intersections and unions of the same keys, and in general ought to have the same semantic for the same key wherever it is used. Defining specifications of every subset/union/intersection, and then redundantly stating the semantic of each key is both an antipattern and unworkable in the most dynamic cases.

Decomplect maps/keys/values

Keep map (keyset) specs separate from attribute (key→value) specs. Encourage and support attribute-granularity specs of namespaced keyword to value-spec. Combining keys into sets (to specify maps) becomes orthogonal, and checking becomes possible in the fully-dynamic case, i.e. even when no map spec is present, attributes (key-values) can be checked.

Sets (maps) are about membership, that’s it

As per above, maps defining the details of the values at their keys is a fundamental complecting of concerns that will not be supported. Map specs detail required/optional keys (i.e. set membership things) and keyword/attr/value semantics are independent. Map checking is two-phase, required key presence then key/value conformance. The latter can be done even when the (namespace-qualified) keys present at runtime are not in the map spec. This is vital for composition and dynamicity.

The idea of checking keys separately from their values strikes me as valuable for processing topic maps.

Keys not allowed in a topic or proxy could signal an error (as in authoring), could be silently discarded depending upon your processing goals, or could be retained but ignored for merging purposes.

Thoughts?

Apache Kafka: Online Talk Series [Non-registration for 5 out of 6]

Filed under: Cybersecurity,ETL,Government,Kafka,Streams — Patrick Durusau @ 2:35 pm

Apache Kafka: Online Talk Series

From the webpage:

Watch this six-part series of online talks presented by Kafka experts. You will learn the key considerations in building a scalable platform for real-time stream data processing, with Apache Kafka at its core.

This series is targeted to those who want to understand all the foundational concepts behind Apache Kafka, streaming data, and real-time processing on streams. The sequence begins with an introduction to Kafka, the popular streaming engine used by many large scale data environments, and continues all the way through to key production planning, architectural and operational methods to consider.

Whether you’re just getting started or have already built stream processing applications for critical business functions, you will find actionable tips and deep insights that will help your enterprise further derive important business value from your data systems.

Video titles:

1. Introduction To Streaming Data and Stream Processing with Apache Kafka, Jay Kreps, Confluent CEO and Co-founder, Apache Kafka Co-creator.

2. Deep Dive into Apache Kafka by Jun Rao, Confluent Co-founder, Apache Kafka Co-creator.

3. Data Integration with Apache Kafka by David Tucker, Director, Partner Engineering and Alliances.

4. Demystifying Stream Processing with Apache Kafka, Neha Narkhede, Confluent CTO and Co-Founder, Apache Kafka Co-creator.

5. A Practical Guide to Selecting a Stream Processing Technology by Michael Noll, Product Manager, Confluent.

6. Streaming in Practice: Putting Kafka in Production by Roger Hoover, Engineer, Confluent. (Registration required. Anyone know a non-registration version of Hoover’s presentation?)

I was able to find versions of the first five videos that don’t require you to register to view them.

I make it a practice to dodge marketing department registrations whenever possible.

You?

Zero Days, Thousands of Nights [Zero-day – 6.9 Year Average Life Expectancy]

Filed under: Cybersecurity,Government,Security,Transparency — Patrick Durusau @ 11:41 am

Zero Days, Thousands of Nights – The Life and Times of Zero-Day Vulnerabilities and Their Exploits by Lillian Ablon, Timothy Bogart.

From the post:

Zero-day vulnerabilities — software vulnerabilities for which no patch or fix has been publicly released — and their exploits are useful in cyber operations — whether by criminals, militaries, or governments — as well as in defensive and academic settings.

This report provides findings from real-world zero-day vulnerability and exploit data that could augment conventional proxy examples and expert opinion, complement current efforts to create a framework for deciding whether to disclose or retain a cache of zero-day vulnerabilities and exploits, inform ongoing policy debates regarding stockpiling and vulnerability disclosure, and add extra context for those examining the implications and resulting liability of attacks and data breaches for U.S. consumers, companies, insurers, and for the civil justice system broadly.

The authors provide insights about the zero-day vulnerability research and exploit development industry; give information on what proportion of zero-day vulnerabilities are alive (undisclosed), dead (known), or somewhere in between; and establish some baseline metrics regarding the average lifespan of zero-day vulnerabilities, the likelihood of another party discovering a vulnerability within a given time period, and the time and costs involved in developing an exploit for a zero-day vulnerability.

Longevity and Discovery by Others

  • Zero-day exploits and their underlying vulnerabilities have a rather long average life expectancy (6.9 years). Only 25 percent of vulnerabilities do not survive to 1.51 years, and only 25 percent live more than 9.5 years.
  • No vulnerability characteristics indicated a long or short life; however, future analyses may want to examine Linux versus other platform types, the similarity of open and closed source code, and exploit class type.
  • For a given stockpile of zero-day vulnerabilities, after a year, approximately 5.7 percent have been publicly discovered and disclosed by another entity.

RAND researchers Ablon and Bogart attempt to interject facts into the debate over stockpiling zero-day vulnerabilities. It’s a great read, even though I doubt policy decisions over zero-day stockpiling will be fact-driven.

As an advocate of inadvertent or involuntary transparency (is there any other honest kind?), I take heart from the 6.9 year average life expectancy of zero-day exploits.

Researchers should take encouragement from the finding that within a given year, only 5.7 percent of all zero-day vulnerability discoveries overlap. That is, 94.3% of zero-day discoveries are unique. That indicates to me vulnerabilities are left undiscovered every year.
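
To see what that rate implies over a longer horizon, here is a quick back-of-the-envelope R sketch (my own simplifying assumption, treating the 5.7 percent annual rediscovery rate as if it applied independently each year, which is not necessarily the report’s model):

# Chance a stockpiled zero-day is still private after n years,
# assuming a constant, independent 5.7% rediscovery rate per year
survival <- function(n) (1 - 0.057)^n

round(survival(1:7), 3)
# 0.943 0.889 0.839 0.791 0.746 0.703 0.663
# i.e. after 7 years roughly two-thirds of a stockpile would still be undiscovered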

Voluntary transparency, like presidential press conferences, is an opportunity to shape and manipulate your opinions. Zero-day vulnerabilities, on the other hand, can empower honest/involuntary transparency.

Won’t you help?

Shopping for the Intelligence Community (IC) [Needl]

Filed under: Government,Intelligence — Patrick Durusau @ 10:54 am

The holiday season in various traditions has arrived for 2017!

With it returns the vexing question: What to get for the Intelligence Community (IC)?

They have spent all year violating your privacy, undermining legitimate government institutions, supporting illegitimate governments, mocking any notion of human rights and siphoning government resources that could benefit the public for themselves and their contractors.

The excesses of your government’s intelligence agencies will be special to you but in truth, they are all equally loathsome and merit some acknowledgement at this special time of the year.

Needl is a gift for the intelligence community this holiday season and one that can keep on giving all year long.

Take back your privacy. Lose yourself in the haystack.

Your ISP is most likely tracking your browsing habits and selling them to marketing agencies (albeit anonymised). Or worse, making your browsing history available to law enforcement at the hint of a Subpoena. Needl will generate random Internet traffic in an attempt to conceal your legitimate traffic, essentially making your data the Needle in the haystack and thus harder to find. The goal is to make it harder for your ISP, government, etc to track your browsing history and habits.

…(graphic omitted)

Implemented modules:

  • Google: generates a random search string, searches Google and clicks on a random result.
  • Alexa: visits a website from the Alexa Top 1 Million list. (warning: contains a lot of porn websites)
  • Twitter: generates a popular English name and visits their profile; performs random keyword searches
  • DNS: produces random DNS queries from the Alexa Top 1 Million list.
  • Spotify: random searches for Spotify artists

Module ideas:

  • WhatsApp
  • Facebook Messenger

… (emphasis in original)

Not for people with metered access but otherwise, a must for home PCs and enterprise PC farms.

No doubt annoying, but running Needl through Tor with a list of trigger words/phrases (searches for explosives, viruses, CBW topics with locations, etc.) would create festive blinking red lights for the intelligence community.

Lisp at the Frontier of Computation

Filed under: Computer Science,Lisp,Quantum — Patrick Durusau @ 10:18 am

Abstract:

Since the 1950s, Lisp has been used to describe and calculate in cutting-edge fields like artificial intelligence, robotics, symbolic mathematics, and advanced optimizing compilers. It is no surprise that Lisp has also found relevance in quantum computation, both in academia and industry. Hosted at Rigetti Computing, a quantum computing startup in Berkeley, Robert Smith will provide a pragmatic view of the technical, sociological, and psychological aspects of working with an interdisciplinary team, writing Lisp, to build the next generation of technology resource: the quantum computer.

ABOUT THE SPEAKER: Robert has been using Lisp for over a decade, and has been fortunate to work with and manage expert teams of Lisp programmers to build embedded fingerprint analysis systems, machine learning-based product recommendation software, metamaterial phased-array antennas, discrete differential geometric computer graphics software, and now quantum computers. As Director of Software Engineering, Robert is responsible for building the publicly available Rigetti Forest platform, powered by both a real quantum computer and one of the fastest single-node quantum computer simulators in the world.

Video notes mention “poor audio quality.” Not the best but clear and audible to me.

The coverage of the quantum computer work is great, but the talk is mostly a general promotion of Lisp.

Important links:

Forest (beta): Forest provides development access to our 30-qubit simulator, the Quantum Virtual Machine™, and limited access to our quantum hardware systems for select partners. Workshop video plus numerous other resources.

A Practical Quantum Instruction Set Architecture by Robert S. Smith, Michael J. Curtis, William J. Zeng. (speaker plus two of his colleagues)

December 8, 2017

Google About to Publicly Drop iPhone Exploit (More Holiday News!)

Filed under: Cybersecurity,FBI,Security — Patrick Durusau @ 5:41 pm

The Jailbreaking Community Is Bracing for Google to Publicly Drop an iPhone Exploit by Lorenzo Franceschi-Bicchierai.

From the post:


Because exploits are so valuable, it’s been a long time since we’ve seen a publicly accessible iPhone jailbreak even for older versions of iOS (let alone one in the wild for an up to date iPhone.) But a tweet sent by a Google researcher Wednesday has got the security and jailbreaking communities in a frenzy. The tweet suggests that Google is about to drop an exploit that is a major step toward an iPhone jailbreak, and other researchers say they will be able to take that exploit and turn it into a full jailbreak.

It might seem surprising that an iPhone exploit would be released by Google, Apple’s closest competitor, but the company has a history of doing so, albeit with less hype than this one is garnering.

Ian Beer is a Google Project Zero security researcher, and one of the most prolific iOS bug hunters. Wednesday, he told his followers to keep their “research-only” devices on iOS 11.1.2 because he was about to release “tfp0” soon. (tfp0 stands for “task for pid 0,” or the kernel task port, which gives you control of the core of the operating system.) He also hinted that this is just the first part of more releases to come. iOS 11.1.2 was just patched and updated last week by Apple; it is extremely rare for exploits for recent versions of iOS to be made public.

Another surprise in the offing for the holiday season! See Franceschi-Bicchierai’s post for much speculation and possibilities.

Benefits from a current iPhone Exploit

  • Security researchers obtain better access to research iPhone security issues
  • FBI told by courts to hire local hackers instead of badgering Apple
  • Who carries iPhones? (security clueless public officials)

From improving the lot of security researchers, to local employment for hackers, to greater exposure of public officials, what’s not to like?

Looking forward to the drop and security researchers jumping on it like a terrier pack on a rat.

Haystack: The Search Relevance Conference! (Proposals by Jan. 19, 2018) Updated

Filed under: Conferences,Relevance,Search Algorithms,Search Analytics,Searching — Patrick Durusau @ 5:16 pm

Haystack: The Search Relevance Conference!

From the webpage:

Haystack is the conference for improving search relevance. If you’re like us, you work to understand the shiny new tools or dense academic papers out there that promise the moon. Then you puzzle how to apply those insights to your search problem, in your search stack. But the path isn’t always easy, and the promised gains don’t always materialize.

Haystack is the no-holds-barred conference for organizations where search, matching, and relevance really matters to the bottom line. For search managers, developers & data scientists finding ways to innovate, see past the silver bullets, and share what actually has worked well for their unique problems. Please come share and learn!

… (inline form for submission proposals)

Welcome topics include

  • Information Retrieval
  • Learning to Rank
  • Query Understanding
  • Semantic Search
  • Applying NLP to search
  • Personalized Search
  • Search UX Strategy: Perceived relevance, smart snippeting
  • Measuring and testing search against business objectives
  • Nuts & bolts: plugins for Solr, Elasticsearch, Vespa, etc
  • Adjacent topics: recommendation systems, entity matching via search, and other topics

… (emphasis in original)

The first link for the conference I saw was http://mailchi.mp/e609fba68dc6/announcing-haystack-the-search-relevance-conference, which promised topics including:

  • Intent detection

The modest price of $75 covers our costs….

To see a solution to the problem of other minds and to discover their intent, all for $75, is quite a bargain. Especially since the $75 covers breakfast and lunch both days, plus dinner the first day in a beer hall. 😉

Even without solving philosophical problems, sponsorship by OpenSource Connections is enough to recommend this conference without reservation.

My expectation is this conference is going to rock for hard core search geeks!

PS: Ask if videos will be posted. Thanks!

Follow Manuel Uberti’s Excellent Adventure – Learning Haskell

Filed under: Functional Programming,Haskell — Patrick Durusau @ 4:38 pm

Learning Haskell

Manuel Uberti’s post:

Since my first baby steps in the world of Functional Programming, Haskell has been there. Like the enchanting music of a Siren, it has been luring me with promises of a new set of skills and a better understanding of the lambda calculus.

I refused to oblige at first. A bit of Scheme and my eventual move to Clojure occupied my mind and my daily activities. Truth be told, the odious warfare between dynamic types troopers and static types zealots didn’t help steering my enthusiasm towards Haskell.

Still, my curiosity is stoic and hard to kill and the Haskell Siren was becoming too tempting to resist any further. The Pragmatic Programmer in me knew it was the right thing to do. My knowledge portfolio is always reaching out for something new.

My journey began with the much praised Programming in Haskell. I kept track of the exercises only to soon discover this wasn’t the right book for me. A bit too terse and schematic, I needed something that could ease me in in a different way. I needed more focus on the basics, the roots of the language.

As I usually do, I sought help online. I don’t know many Haskell developers, but I know there are crazy guys in the Emacs community. Steve Purcell was kind and patient enough to introduce me to Haskell Programming From First Principles.

This is a huge book (nearly 1300 pages), but it just took the authors’ prefaces to hook me. Julie Moronuki words in particular resonated heavily with me. Unlike Julie I have experience in programming, but I felt exactly like her when it comes to approaching Haskell teaching materials.

So here I am, armed with Stack and Intero and ready to abandon myself to the depths and wonders of static typing and pure functional programming. I will track my progress and maybe report back here. I already have a project in mind, but my Haskell needs to get really good before starting any serious work.

May the lambda be with me.

Uberti’s post was short enough to quote in full and offers something to offset the grimness that the experience of 2017 promises for 2018.

We will all take to Twitter, Facebook, etc. in 2018 to vent our opinions but at the end of the year, finger exercise is all we will have to show for it.

Following Uberti’s plan, with Haskell, Clojure, category theory, ARM exploitation, or whatever best fits your interest, will see 2018 end with you possessing an expanded skill set.

Your call, finger exercise or an expanded skill set (skills you can use for your cause).

Journocode Data Journalism Dictionary

Filed under: Journalism,News,Reporting — Patrick Durusau @ 1:47 pm

Journocode Data Journalism Dictionary

From the webpage:

Navigating the field of data journalism, a field that borrows methods and terms from so many disciplines, can be hard – especially in the beginning. You need to speak the language in order to collaborate with others and knowing which words to type into a search engine is the first step to learning new things.

That’s why we started the Journocode Data Journalism Dictionary. It aims to explain technical terms from fields like programming, web development, statistics and graphics design in a way that every journalist and beginner can understand them.

Fifty-one (51) definitions as of today, 8 December 2017, and none will be unfamiliar to data scientists.

But, a useful resource for data scientists to gauge the terms already known to data journalists and perhaps a place to contribute other terms with definitions.

Don’t miss their DDJ Tools resource page while you’re visiting.

Contra Censors: Tor Bridges and Pluggable Transports [Please Donate to Tor]

Filed under: Censorship,Tor — Patrick Durusau @ 1:08 pm

Tor at the Heart: Bridges and Pluggable Transports by ssteele.

From the post:


Censors block Tor in two ways: they can block connections to the IP addresses of known Tor relays, and they can analyze network traffic to find use of the Tor protocol. Bridges are secret Tor relays—they don’t appear in any public list, so the censor doesn’t know which addresses to block. Pluggable transports disguise the Tor protocol by making it look like something else—for example like HTTP or completely random.

Ssteele points out that censorship, even censorship of Tor, is getting worse, so the time to learn these tools is now. Don’t wait until Tor has gone dark for you to respond.

December seems to be when all the begging bowls come out from a number of worthwhile projects.

I should be pitching my cause at this point but instead, please donate to support the Tor project.

Another Windows Critical Vulnerability (and I forgot to get MS anything)

Filed under: Cybersecurity,Microsoft,Security — Patrick Durusau @ 11:58 am

Microsoft Issues Emergency Windows Security Update For A Critical Vulnerability by Swati Khandelwal.

From the post:

If your computer is running Microsoft’s Windows operating system, then you need to apply this emergency patch immediately. By immediately, I mean now!

Microsoft has just released an emergency security patch to address a critical remote code execution (RCE) vulnerability in its Malware Protection Engine (MPE) that could allow an attacker to take full control of a victim’s PC.

Enabled by default, Microsoft Malware Protection Engine offers the core cybersecurity capabilities, like scanning, detection, and cleaning, for the company’s antivirus and antimalware programs in all of its products.

According to Microsoft, the vulnerability affects a large number of Microsoft security products, including Windows Defender and Microsoft Security Essentials along with Endpoint Protection, Forefront Endpoint Protection, and Exchange Server 2013 and 2016, impacting Windows 7, Windows 8.1, Windows 10, Windows RT 8.1, and Windows Server.

Tracked as CVE-2017-11937, the vulnerability is a memory corruption issue which is triggered when the Malware Protection Engine scans a specially crafted file to check for any potential threat.
… (emphasis in original)

I always feel bad when I read about newly discovered vulnerabilities in Microsoft Windows. Despite MS opening up computers around the world to the idly curious if not the malicious, I haven’t gotten them anything.

I’m sure Munich must be celebrating its plan to switch to Windows 10 for €50m. You wouldn’t think unintended governmental transparency would be that expensive. Munich could save everyone time and trouble by backing up all its files/data to an open S3 bucket on AWS. Thoughts?

Khandelwal also reports Microsoft says this vulnerability isn’t being used in the wild. Modulo the fact that the claim comes from the originator of the vulnerability. If it couldn’t or didn’t recognize the vulnerability in its own code, what are the odds it recognizes exploitation by others? Your call.

See Khandelwal’s post for more details.

December 7, 2017

Malpedia

Filed under: Cybersecurity,Malware — Patrick Durusau @ 8:55 pm

Malpedia

From the webpage:

Malpedia is a free service offered by Fraunhofer FKIE.

The primary goal of Malpedia is to provide a resource for rapid identification and actionable context when investigating malware. Openness to curated contributions shall ensure an accountable level of quality in order to foster meaningful and reproducible research.

Also, please be aware that not all content on Malpedia is publicly available.

More specifically, you will need an account to access all data (malware samples, non-public YARA rules, …).

In this regard, Malpedia is operated as an invite-only trust group.
…(emphasis in original)

You are probably already aware of Malpedia but I wasn’t.

Enjoy!

A Guide to Reproducible Code in Ecology and Evolution

Filed under: Bioinformatics,Biology,Replication,Research Methods,Science — Patrick Durusau @ 3:33 pm

A Guide to Reproducible Code in Ecology and Evolution by British Ecological Society.

Natilie Cooper, Natural History Museum, UK and Pen-Yuan Hsing, Durham University, UK, write in the introduction:

The way we do science is changing — data are getting bigger, analyses are getting more complex, and governments, funding agencies and the scientific method itself demand more transparency and accountability in research. One way to deal with these changes is to make our research more reproducible, especially our code.

Although most of us now write code to perform our analyses, it is often not very reproducible. We have all come back to a piece of work we have not looked at for a while and had no idea what our code was doing or which of the many “final_analysis” scripts truly was the final analysis! Unfortunately, the number of tools for reproducibility and all the jargon can leave new users feeling overwhelmed, with no idea how to start making their code more reproducible. So, we have put together this guide to help.

A Guide to Reproducible Code covers all the basic tools and information you will need to start making your code more reproducible. We focus on R and Python, but many of the tips apply to any programming language. Anna Krystalli introduces some ways to organise files on your computer and to document your workflows. Laura Graham writes about how to make your code more reproducible and readable. François Michonneau explains how to write reproducible reports. Tamora James breaks down the basics of version control. Finally, Mike Croucher describes how to archive your code. We have also included a selection of helpful tips from other scientists.

True reproducibility is really hard. But do not let this put you off. We would not expect anyone to follow all of the advice in this booklet at once. Instead, challenge yourself to add one more aspect to each of your projects. Remember, partially reproducible research is much better than completely non-reproducible research.

Good luck!
… (emphasis in original)

Not counting front and back matter, 39 pages total. A lot to grasp in one reading but if you don’t already have reproducible research habits, keep a copy of this publication on top of your desk. Yes, on top of the incoming mail, today’s newspaper, forms and chart requests from administrators, etc. On top means just that, on top.

At some future date, when the pages are too worn, creased, folded, dog eared and annotated to be read easily, reprint it and transfer your annotations to a clean copy.

I first saw this in David Smith’s The British Ecological Society’s Guide to Reproducible Science.

PS: The same rules apply to data science.

CatBoost: Yandex’s machine learning algorithm (here be Russians)

Filed under: CERN,Machine Learning — Patrick Durusau @ 3:08 pm

CatBoost: Yandex’s machine learning algorithm is available free of charge by Victoria Zavyalova.

From the post:

Russia’s Internet giant Yandex has launched CatBoost, an open source machine learning service. The algorithm has already been integrated by the European Organization for Nuclear Research to analyze data from the Large Hadron Collider, the world’s most sophisticated experimental facility.

Machine learning helps make decisions by analyzing data and can be used in many different areas, including music choice and facial recognition. Yandex, one of Russia’s leading tech companies, has made its advanced machine learning algorithm, CatBoost, available free of charge for developers around the globe.

“This is the first Russian machine learning technology that’s an open source,” said Mikhail Bilenko, Yandex’s head of machine intelligence and research.

I called out the Russian origin of the CatBoost algorithm not because I have any nationalistic tendencies, but because you can find frothing paranoids in U.S. government agencies and their familiars who do. In those cases, avoid CatBoost.

If you work in saner environments, or need to use categorical data (read: not converted to numbers), give CatBoost a close look!

Enjoy!

The Top-100 rated Devoxx Belgium 2017 talks (or the full 207)

Filed under: Conferences,Programming — Patrick Durusau @ 1:47 pm

The Top-100 rated Devoxx Belgium 2017 talks

The top-100 list has Devoxx Belgium 2017 talks sorted in voting order, with hyperlinks to the top 50.

If you are looking for more comprehensive coverage of Devoxx Belgium 2017, try the Devoxx Belgium 2017 YouTube Playlist, with 207 videos!

Kudos to Devoxx for putting conference content online to spread the word about technology.

The Computer Science behind a modern distributed data store

Filed under: ArangoDB,Computer Science,Distributed Computing,Distributed Consistency — Patrick Durusau @ 1:34 pm

From the description:

What we see in the modern data store world is a race between different approaches to achieve a distributed and resilient storage of data. Every application needs a stateful layer which holds the data. There are at least three necessary ingredients which are everything else than trivial to combine and of course even more challenging when heading for an acceptable performance.

Over the past years there has been significant progress in respect in both the science and practical implementations of such data stores. In his talk Max Neunhöffer will introduce the audience to some of the needed ingredients, address the difficulties of their interplay and show four modern approaches of distributed open-source data stores.

Topics are:

  • Challenges in developing a distributed, resilient data store
  • Consensus, distributed transactions, distributed query optimization and execution
  • The inner workings of ArangoDB, Cassandra, Cockroach and RethinkDB

The talk will touch complex and difficult computer science, but will at the same time be accessible to and enjoyable by a wide range of developers.

I haven’t found the slides for this presentation but did stumble across ArangoDB Tech Talks and Slides.

Neunhöffer’s presentation will make you look at ArangoDB more closely.

December 6, 2017

Security Analyst Summit – #TheSAS2017

Filed under: Cybersecurity,Security — Patrick Durusau @ 9:42 pm

Security Analyst Summit – #TheSAS2017

From the webpage:

The Kaspersky Security Analyst Summit (SAS) is a unique annual event connecting anti-malware researchers and developers, global law enforcement agencies and CERTs and members of the security research community.

The summit is one of the best places to learn, debate, share and showcase cutting-edge research, new technologies and discuss ways to improve collaboration in the fight against cyber-crime.

Now you have a chance to get access to the unique videos of the presentations given at #TheSAS2017

Registration required but where are you going to hide from Kaspersky anyway? 😉

I count sixty-three (63) videos.

If you want to start 2018 with a broad overview of security issues, this is one place to start.

Enjoy!

PS: Any favorites?

AlphaZero: Mastering Unambiguous, Low-Dimensional Data

Filed under: Ambiguity,Artificial Intelligence,High Dimensionality,Machine Learning — Patrick Durusau @ 8:57 pm

Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm by David Silver, et al.

Abstract:

The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.

The achievements by the AlphaZero team and their algorithm merit joyous celebration.

Joyous celebration recognizing that AlphaZero masters unambiguous, low-dimensional data governed by deterministic rules that define the outcomes for any state, more quickly and completely than any human.

Chess, Shogi and Go appear complex to humans due to the large number of potential outcomes. But every outcome is the result of the application of deterministic rules to unambiguous, low-dimensional data. Something that AlphaZero excels at doing.

What hasn’t been shown is equivalent performance on ambiguous, high-dimensional data, governed by rules that are only partially known (if that), and then only for a limited set of sub-cases. For those cases, well, you need a human being.

That’s not to take anything away from the AlphaZero team, but to recognize the strengths of AlphaZero and to avoid its application where it is weak.

Champing at the Cyberbit [Shouldn’t that be: Chomping on Cyberbit?]

Filed under: Cybersecurity,Government,Politics — Patrick Durusau @ 5:10 pm

Champing at the Cyberbit: Ethiopian Dissidents Targeted with New Commercial Spyware by Bill Marczak, Geoffrey Alexander, Sarah McKune, John Scott-Railton, and Ron Deibert.

From the post:

Key Findings

  • This report describes how Ethiopian dissidents in the US, UK, and other countries were targeted with emails containing sophisticated commercial spyware posing as Adobe Flash updates and PDF plugins. Targets include a US-based Ethiopian diaspora media outlet, the Oromia Media Network (OMN), a PhD student, and a lawyer. During the course of our investigation, one of the authors of this report was also targeted.
  • We found a public logfile on the spyware’s command and control server and monitored this logfile over the course of more than a year. We saw the spyware’s operators connecting from Ethiopia, and infected computers connecting from IP addresses in 20 countries, including IP addresses we traced to Eritrean companies and government agencies.
  • Our analysis of the spyware indicates it is a product known as PC Surveillance System (PSS), a commercial spyware product with a novel exploit-free architecture. PSS is offered by Cyberbit — an Israel-based cyber security company that is a wholly-owned subsidiary of Elbit Systems — and marketed to intelligence and law enforcement agencies.
  • We conducted Internet scanning to find other servers associated with PSS and found several servers that appear to be operated by Cyberbit themselves. The public logfiles on these servers seem to have tracked Cyberbit employees as they carried infected laptops around the world, apparently providing demonstrations of PSS to the Royal Thai Army, Uzbekistan’s National Security Service, Zambia’s Financial Intelligence Centre, the Philippine President’s Malacañang Palace, ISS World Europe 2017 in Prague, and Milipol 2017 in Paris. Cyberbit also appears to have provided other demos of PSS in France, Vietnam, Kazakhstan, Rwanda, Serbia, and Nigeria.

Detailed research and reporting, the like of which is absent in reporting about election year “hacks” in the United States.

Despite the excellence of reporting in this post, I find it disappointing that Citizen Lab sees this as an occasion for raising legal and regulatory issues. Especially in light of the last substantive paragraph noting:

As we explore in a separate analysis, while lawful access and intercept tools have legitimate uses, the significant insecurities and illegitimate targeting we have documented that arise from their abuse cannot be ignored. In the absence of stronger norms and incentives to induce state restraint, as well as more robust regulation of spyware companies, we expect that authoritarian and other politically corrupt leaders will continue to obtain and use spyware to covertly surveil and invisibly sabotage the individuals and institutions that hold them to account.

Exposing the abuse of peaceful citizens by their governments is a powerful tool but for me, it falls far short of holding them to account. I have always thought being “held to account” meant there were negative consequences associated with undesirable behavior.

Do you know of any examples of governments holding Cyberbit or similar entities accountable?

I am aware that the U.S. Congress has from time to time passed legislation “regulating the CIA” and other agencies, all of which was ignored by the regulated agencies. That doesn’t sound like accountability to me.

You?

PS: Despite my disagreement on the call for action, this is a great example of how to provide credible details about malicious cyberactivity. Would that members of the IC would read it and take it to heart.

When Good Enough—Isn’t [Search Engine vs. Librarian Challenge]

Filed under: Library,Search Behavior,Searching — Patrick Durusau @ 3:13 pm

When Good Enough—Isn’t by Patti Brennan.

From the post:

Why do we need librarians when we have Google?

What is the role of a librarian now that we can google anything?

How often have you heard that?

Let’s face it: We have all become enticed by the immediacy of the answers that search engines provide, and we’ve come to accept the good-enough answer—even when good enough isn’t.

When I ask a librarian for help, I am tapping not only into his or her expertise, but also into that of countless others behind the scenes.

From the staff who purposefully and thoughtfully develop the collection—guided by a collection development manual other librarians have carefully crafted and considered—to the team of catalogers and indexers who assign metadata to the items we acquire, to the technical staff who design the systems that make automated search possible, we’ve got a small army of librarians supporting my personal act of discovery…and yours.
… (emphasis in original)

A great read to pass along to search fans in your office!

The image of tapping into the wisdom of countless others (dare I say the “crowd?”) behind every librarian is an apt one.

With search engines, you are limited to your expertise and yours alone. No backdrop of indexers, catalogers, metadata experts, to say nothing of those contributing to all those areas.

Compared to a librarian, you are outclassed and overmatched, badly.

Are you ready to take Brennan’s challenge:

Let me offer a challenge: The next time you have a substantive question, ask a librarian and then report back here about how it went.

Ping me if you take Brennan up on that challenge. We all want to benefit from your experience.

PS: Topic maps can build a backdrop of staff wisdom for you or you can wing every decision anew. Which one do you think works better?

Paradise Papers – The Hand Job Edition – Some Small Joy

Filed under: Graphs,Neo4j,Paradise Papers — Patrick Durusau @ 11:18 am

I need to revise my assessment in Neo4j Desktop Download of Paradise Papers [It’s Not What You Hope For, Disappointment Ahead] to say it is disappointing, but it does deliver a hand job version of the Paradise Papers data for use in other programs.

Assuming you have made the AppImage file executable, here are the steps on Linux:

1. At the Linux command line type: ./neo4j-desktop-for-icij-1.0.0-x86_64.AppImage

2. Your initial start screen: (screenshot omitted)

3. Notice the Manage Offshore Leaks Graph button: (screenshot omitted)

4. The results of selecting “manage:” (screenshot omitted)

5. Follow the natural progression to data/databases/graph.db and you will find, among other files:

  • neostore.labelscanstore.db (729.1 KB)
  • neostore.nodestore.db (18.1 MB)
  • neostore.propertystore.db (347.9 MB)
  • neostore.propertystore.db.strings (414.9 MB)
  • neostore.relationshipstore.db (64.6 MB)

The files are, of course, in some binary format, but that’s solved easily enough.

6. Export the data following Michael Hunger’s “Export a (sub)graph to Cypher script and import it again” post.

7. Load into your favorite graph tool for data exploration.

People who profit from stolen data are very sensitive to licensing issues. Neo4j released this AppImage and its contents under GNU and some parts under an Apache license.

Looking forward to the day when you and the general public can explore all of the Paradise papers, not just selected facts others have chosen for you.

INFILTRATE 2018 – Vote on Papers – Closes 14 December 2017

Filed under: Conferences,Cybersecurity,Security — Patrick Durusau @ 9:59 am

INFILTRATE 2018 – OPEN CFP

Cast your vote for the talks you want to see at INFILTRATE 2018.

As of today, 6 December 2017, I count 26 presentations.

The titles alone are enough to sell the conference:

  1. Energy Larceny-Breaking into a solar power plant
  2. Chainspotting: Building Exploit Chains with Logic Bugs
  3. Back To The Future – Going Back In Time To Abuse Android's JIT
  4. Windows Offender: Attacking The Windows Defender Emulator
  5. Bypassing Mitigations by Attacking JIT Server in Microsoft Edge
  6. A year of inadvertent macOS bugs
  7. L'art de l’Évasion: Modern VMWare Exploitation techniques
  8. Unboxing your VirtualBoxes: A close look at a desktop hypervisor
  9. Fuzzing the ‘Unfuzzable’
  10. How to become a Penetration tester – an attempt to guide the next generation of hackers
  11. Parasite OS
  12. Detecting Reverse Engineering with Canaries
  13. Discovering & exploiting a Cisco ASA pre-auth RCE vulnerability
  14. Synthetic Reality; Breaking macOS One Click at a Time
  15. Dissecting QNX – Analyzing & Breaking QNX Exploit Mitigations and Secure Random Number Generators
  16. Malware tradecrafts and nasty secrets of evading to escalating
  17. Sandbox evasion using VBA Referencing
  18. Exploits in Wetware
  19. How to escalate privileges to SYSTEM in Windows 10
  20. Pack your Android: Everything you need to know about Android Boxing
  21. How to hide your browser 0-days
  22. So you think IoT DDoS botnets are dangerous – Bypassing ISP and Enterprise Anti-DDoS with 90's techn
  23. Making love to Enterprise Software
  24. I Did it Thrawn’s Way- Spiels and the Symbiosis of Red Teaming & Threat Intelligence Analysis
  25. Digital Vengeance: Exploiting Notorious C&C Toolkits
  26. Advanced Social Engineering and OSINT for Penetration Testing

Another example of open sharing as opposed to the hoard and privilege approach of the defensive cybersecurity community. White hats are fortunate to only be a decade behind. Consider it the paranoia penalty. Fear of sharing knowledge harms you more than anyone else.

Speaking of sharing, the archives for INFILTRATE 2011 through INFILTRATE 2017 are online.

May not be true for any particular exploit, but given the lagging nature of cyberdefense, not to mention shoddy patch application, any technique less than ten years old is likely still viable. Remember SQL injection turned 17 this year and remains the #1 threat to websites.

Vote on your favorite papers for INFILTRATE 2018 – OPEN CFP and let’s see some great tweet coverage for the conference!

INFILTRATE Security Conference, April 26 & 27 2018, @Fountainbleau Hotel

INFILTRATE is a deep technical conference that focuses entirely on offensive security issues. Groundbreaking researchers focused on the latest technical issues will demonstrate techniques that you cannot find elsewhere. INFILTRATE is the single-most important event for those who are focused on the technical aspects of offensive security issues, for example, computer and network exploitation, vulnerability discovery, and rootkit and trojan covert protocols. INFILTRATE eschews policy and high-level presentations in favor of just hard-core thought-provoking technical meat.

Registration: infiltrate@immunityincdotcom

Twitter: @InfiltrateCon.

Enjoy!

Don’t trust NGOs, they have their own agendas (edited)

Filed under: Environment,Journalism,News,Reporting — Patrick Durusau @ 8:50 am

The direct quote is “Don’t trust NGOs, they may have their own agendas.”

I took out the “may” because NGOs are committed to themselves and their staffs before any cause or others. That alone justifies removing the “may.” They have their own agendas and you need to keep that in mind.

Wildlife Crimes: Focus On The Villain, Not The Victim by Ufrieda Ho, says in part:

Ease up on the blood shots, ditch the undercover ploys and think crime story, not animal story.

These are top tips from Bryan Christy, author, investigative journalist and National Geographic Society Fellow. He says environmental trafficking and smuggling should be treated like a “whodunnits” rather than yet another depressing tale of gore and horror.

Christy, a panelist at this morning’s GIJN session on Environmental Crime and Wildlife Smuggling, says: “We need to stop telling the rhino-victim story and start thinking about the trafficker-villain story.”

Christy says shifting the editorial telling of stories in this way is a tool to fight “sad story” fatigue. It trains the audience to follow the trail of a villain through plot-driven action rather than to be turned off by feeling hopeless and despairing in the face of another climate change story or another report on a butchered elephant.

“The criminal plot is also a pack horse – it can pack in a lot of information,” says Christy, understanding that the nature of environmental investigations on smuggling and trafficking is about exploring intricate webs.

That sounds like a data mining/science angle to wildlife crime to me!

There will be people in the field, but connecting all the dots will require checking shipping and financial records, even the Panama Papers and Paradise Papers, for potential connections and leads.

December 5, 2017

Neo4j Desktop Download of Paradise Papers [It’s Not What You Hope For, Disappointment Ahead]

Filed under: Graphs,Journalism,Neo4j,News,Reporting — Patrick Durusau @ 8:52 pm

Neo4j Desktop Download of Paradise Papers

Not for the first time, Neo4j marketing raises false hopes among potential users.

When you or I read “Paradise Papers,” we quite naturally think of the reputed cache of:

…13.4 million leaked files from a combination of offshore service providers and the company registries of some of the world’s most secretive countries.

Well, you aren’t going to find those “Paradise Papers” in the Neo4j Desktop download.

What you will find is highly processed data summarized as:


Data contained in the Paradise Papers:

  • Officer: a person or company who plays a role in an offshore entity.
  • Intermediary: go-between for someone seeking an offshore corporation and an offshore service provider — usually a law-firm or a middleman that asks an offshore service provider to create an offshore firm for a client.
  • Entity: a company, trust or fund created in a low-tax, offshore jurisdiction by an agent.
  • Address: postal address as it appears in the original databases obtained by ICIJ.
  • Other: additional information items.

Make no mistake, the International Consortium of Investigative Journalists (ICIJ) does vital work that isn’t being done by anyone else. For that they merit full marks. Not to mention the quality of their data mining and reporting on the data they collect.

However, their hoarding of primary source materials deprives other journalists and indeed the general public of the ability to judge the accuracy and fairness of their reporting.

Using data derived from those hoarded materials to create a teaser database such as the “Paradise Papers” distributed by Neo4j only adds insult to injury. A journalist or member of the public can learn who is mentioned but is denied access to the primary materials that would make that mention meaningful.

You can learn a lot about Neo4j from the “Paradise Papers,” but about the people and transactions mentioned in the actual Paradise Papers, not so much.

Imagine this as a public resource for citizens and law enforcement around the world, with links back to the primary documents.

That could make a difference for the citizens of entire countries, instead of for the insiders journalists managing the access to and use of the Paradise Papers.

PS: Have you thought about how you would extract the graph data from the .AppImage file?

Building a Telecom Dictionary scraping web using rvest in R [Tunable Transparency]

Filed under: Dictionary,R,Web Scrapers — Patrick Durusau @ 8:04 pm

Building a Telecom Dictionary scraping web using rvest in R by Abdul Majed Raja.

From the post:

One of the biggest problems in Business to carry out any analysis is the availability of Data. That is where in many cases, Web Scraping comes very handy in creating that data that’s required. Consider the following case: To perform text analysis on Textual Data collected in a Telecom Company as part of Customer Feedback or Reviews, primarily requires a dictionary of Telecom Keywords. But such a dictionary is hard to find out-of-box. Hence as an Analyst, the most obvious thing to do when such dictionary doesn’t exist is to build one. Hence this article aims to help beginners get started with web scraping with rvest in R and at the same time, building a Telecom Dictionary by the end of this exercise.
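
A minimal rvest sketch of the idea might look like the following (the URL and the dt/dd selectors are assumptions for illustration; Raja’s post walks through the real target page):

library(rvest)

# Hypothetical glossary page -- substitute the page you actually want to scrape
url <- "https://example.com/telecom-glossary"
page <- read_html(url)

# Assumes the glossary is marked up as <dt>term</dt> / <dd>definition</dd> pairs;
# adjust the CSS selectors to match the real page
terms <- page %>% html_nodes("dt") %>% html_text(trim = TRUE)
defs  <- page %>% html_nodes("dd") %>% html_text(trim = TRUE)

dictionary <- data.frame(term = terms, definition = defs, stringsAsFactors = FALSE)
write.csv(dictionary, "telecom_dictionary.csv", row.names = FALSE)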

Great for scraping an existing glossary but as always, it isn’t possible to extract information that isn’t captured by the original glossary.

Things like the scope of applicability for the terms, language, author, organization, even characteristics of the subjects the terms represent.

Of course, if your department invested in collecting that information for every subject in the glossary, there is no external requirement that on export all that information be included.

That is, your “data silo” can have tunable transparency: you enable others to use your data with as much or as little semantic friction as the situation merits.

For some data borrowers, they get opaque spreadsheet field names, column1, column2, etc.

Other data borrowers, perhaps those willing to help defray the cost of semantic annotation, well, they get a more transparent view of the data.

One possible method of making semantic annotation and its maintenance a revenue center as opposed to a cost one.

Reflecting on Haskell in 2017

Filed under: Functional Programming,Haskell — Patrick Durusau @ 7:42 pm

Reflecting on Haskell in 2017 by Stephen Diehl.

From the post:

Alas, another year has come and gone. It feels like just yesterday I was writing the last reflection blog post on my flight back to Boston for Christmas. I’ve spent most of the last year traveling and working in Europe, meeting a lot of new Haskellers and putting a lot of faces to names.

Haskell has had a great year and 2017 was defined by vast quantities of new code, including 14,000 new Haskell projects on Github . The amount of writing this year was voluminous and my list of interesting work is eight times as large as last year. At least seven new companies came into existence and many existing firms unexpectedly dropped large open source Haskell projects into the public sphere. Driven by a lot of software catastrophes, the intersection of security, software correctness and formal methods have been become quite an active area of investment and research across both industry and academia. It’s really never been an easier and more exciting time to be programming professionally in the world’s most advanced (yet usable) statically typed language.

Per what I guess is now a tradition, I will write my end of year retrospective on my highlights of what happened in the Haskell scene in retrospect.

This reading list will occupy you until Reflecting on Haskell in 2018 appears and beyond.

Assuming you are already conversant with Haskell. 😉

If you’re not, well, there’s no point in getting further behind!

BTW, this is a great example of how to write a year-end summary for a language. Some generalities but enough specifics for readers to plot their own course.

Australian Census Data and Same Sex Marriage

Filed under: Census Data,R — Patrick Durusau @ 5:59 pm

Combining Australian Census data with the Same Sex Marriage Postal Survey in R by Miles McBain.

Last week I put out a post that showed you how to tidy the Same Sex Marriage Postal Survey Data in R. In this post we’ll visualise that data in combination with the 2016 Australian Census. Note to people just here for the R — the main challenge here is actually just navigating the ABS’s Census DataPack, but I’ve tried to include a few pearls of wisdom on joining datasets to keep things interesting for you.

Decoding the “datapack” is an early task:


The datapack consists of 59 encoded csv files and 3 metadata excel files that will help us decode their meaning. What? You didn’t think this was going to be straight forward did you?

When I say encoded, I mean the csv’s have inscrutable names like ‘2016Census_G09C.csv’ and contain column names like ‘Se_d_r_or_t_h_t_Tot_NofB_0_ib’ (H.T. @hughparsonage).

Two of the metadata files in /Metadata/ have useful applications for us. ‘2016Census_geog_desc_1st_and_2nd_release.xlsx’ will help us resolve encoded geographic areas to federal electorate names. ‘Metadata_2016_GCP_DataPack.xlsx’ lists the topics of each of the 59 tables and will allow us to replace a short and uninformative column name with a much longer, and slightly more informative name….

Followed by the joys of joining and analyzing the data sets.
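
The basic shape of that join, as a sketch (the file names come from the post; the key columns are my assumptions, so check them against the metadata workbooks):

library(readxl)
library(readr)
library(dplyr)

# Metadata workbook that maps encoded geographic areas to electorate names
geo  <- read_excel("Metadata/2016Census_geog_desc_1st_and_2nd_release.xlsx")

# One of the 59 encoded census tables, e.g. G09C
g09c <- read_csv("2016Census_G09C.csv")

# Resolve the encoded area ids to electorate names; the decoded table can then
# be joined to the tidied survey data on electorate
decoded <- g09c %>%
  left_join(geo, by = c("CED_CODE_2016" = "Census_Code_2016"))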

McBain develops an original analysis of the data that demonstrates a relationship between having children and opinions on the impact of same-sex marriage on children.

No, I won’t repeat his insight. Read his post, it’s quite entertaining.

Name a bitch badder than Taylor Swift

Filed under: Feminism,R,Twitter — Patrick Durusau @ 4:27 pm

It all began innocently enough, a tweet with this image and title by Nutella.

Maëlle Salmon reports in Names of b…..s badder than Taylor Swift, a class in women’s studies? that her first pass on tweets quoting Nutella’s tweet netted 15,653 tweets! (Salmon posted on 05 December 2017 so a later tweet count will be higher.)

Salmon uses rtweet to obtain the tweets, cleanNLP to extract entities, and then enhances those entities with Wikidata.
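
The collection step alone is only a few lines of rtweet (a sketch assuming an authorised rtweet token; the exact query string is my guess, Salmon’s post has hers):

library(rtweet)

# Pull recent tweets echoing the meme, excluding plain retweets
swift_tweets <- search_tweets(
  q = "\"name a bitch badder than Taylor Swift\"",
  n = 18000,
  include_rts = FALSE
)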

There’s a lot going on in this one post!

Enjoy the post and remember to follow Maëlle Salmon on Twitter!

Other value-adds for this data set?

Tabula: Extracting A Hit (sorry) Security List From PDF Report

Filed under: Cybersecurity,Extraction,Government,PDF,Security — Patrick Durusau @ 11:44 am

Benchmarking U.S. Government Websites by Daniel Castro, Galia Nurko, and Alan McQuinn, provides a quick assessment of 468 of the most popular federal websites for “…page-load speed, mobile friendliness, security, and accessibility.”

Unfortunately, it has an ugly table layout:

Double column listings with the same headers?

There are 476 results on Stackoverflow this morning for extracting tables from PDF.

However, I need a one-cup-of-coffee, maybe two-cups-of-coffee answer to extracting data from these tables.

Enter Tabula.

If you’ve ever tried to do anything with data provided to you in PDFs, you know how painful it is — there’s no easy way to copy-and-paste rows of data out of PDF files. Tabula allows you to extract that data into a CSV or Microsoft Excel spreadsheet using a simple, easy-to-use interface. Tabula works on Mac, Windows and Linux.

Tabula is easy to use: download, extract, start, point your web browser to http://localhost:8080 (or http://127.0.0.1:8080), load your PDF file, select the table and export the content.

I tried selecting the columns separately (one page at a time) but then used table recognition and selected the entirety of Table 6 (security evaluation). I don’t think it made any difference in the errors I was seeing in the result (dropping first letter of site domains, but check with your data.)

Warning: For some unknown reason, possibly a defect in the PDF and/or Tabula, the leading character from the second domain field was dropped on some entries. Not all, not consistently, but it was dropped. Not to mention missing the last line of entries on a couple of pages. Proofing is required!

Not to mention there were other recognition issues.

Capture wasn’t perfect due to underlying differences in the PDF:

cancer.gov,100,901,fdic.gov,100,"3,284"
weather.gov,100,904,blm.gov,100,"3,307"
transportation.gov,,,100,,,"3,340",,,ecreation.gov,,,100,,,"9,012",
"regulations.gov1003,390data.gov1009,103",,,,,,,,,,,,,,,,
nga.gov,,,100,,,"3,462",,,irstgov.gov,,,100,,,"9,112",
"nrel.gov1003,623nationalservice.gov1009,127",,,,,,,,,,,,,,,,
hrsa.gov,,,100,,,"3,635",,,topbullying.gov,,,100,,,"9,285",
"consumerfinance.gov1004,144section508.gov1009,391",,,,,,,,,,,,,,,,

With proofing, we are way beyond two cups of coffee but once proofed, I tossed it into Calc and produced a single column CSV file: 2017-Benchmarking-US-Government-Websites-Security-Table-6.csv.
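
If you would rather skip the Calc step, stacking Tabula’s two-column export in R is one alternative (a sketch; the column labels are mine and the rows still need proofing):

library(readr)
library(dplyr)

# Read Tabula's export, labelling the left-hand and right-hand halves of the table
raw <- read_csv("tabula-table6.csv",
                col_names = c("domain_a", "score_a", "rank_a",
                              "domain_b", "score_b", "rank_b"))

# Stack the two halves of each page into a single column set
single <- bind_rows(
  raw %>% select(domain = domain_a, score = score_a, rank = rank_a),
  raw %>% select(domain = domain_b, score = score_b, rank = rank_b)
)

write_csv(single, "table6-single-column.csv")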

Enjoy!

PS: I discovered a LibreOffice Calc “gotcha” in this exercise. If you select a column for the top and attempt to paste it under an existing column (same or different spreadsheet), you get the error message: “There is not enough room on the sheet to insert here.”

When you select a column from the top, it copies all the blank cells in that column so there truly isn’t sufficient space to paste it under another column. Tip: Always copy columns in Calc from the bottom of the column up.

December 4, 2017

Finding Interesting Amazon S3 Buckets

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:59 am

Bucket Stream

From the webpage:

This tool simply listens to various certificate transparency logs (via certstream) and attempts to find public S3 buckets from permutations of the certificates domain name.

(graphic omitted)

Be responsible. I mainly created this tool to highlight the risks associated with public S3 buckets and to put a different spin on the usual dictionary based attacks.
… (emphasis in original)

If you find the March of Dimes or the International Federation of the Red Cross and Red Crescent with an insecure Amazon S3 bucket, take the author’s advice and report it.

If asked about Amazon S3 buckets belonging to groups, organizations and governments actively seeking to harm others, I would answer differently.

You?

