Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

July 4, 2016

Cybersecurity By Design?

Filed under: Cybersecurity, Programming, Rust — Patrick Durusau @ 9:08 pm

Shaun Nichols reports in Mozilla emits nightly builds of heir-to-Firefox browser engine Servo:

Mozilla has started publishing nightly in-development builds of its experimental Servo browser engine so anyone can track the project’s progress.

Executables for macOS and GNU/Linux are available right here to download and test drive even if you’re not a developer. If you are, the open-source engine’s code is here if you want to build it from scratch, fix bugs, or contribute to the effort.

Right now, the software is very much in a work-in-progress state, with a very simple user interface built out of HTML. It’s more of a technology demonstration than a viable web browser, although Mozilla has pitched Servo as a potential successor to Firefox’s Gecko engine.

Crucially, Servo is written using Rust – Mozilla’s more-secure C-like systems programming language. If Google has the language of Go, Moz has the language of No: Rust. It works hard to stop coders making common mistakes that lead to exploitable security bugs, and we literally mean stop: the compiler won’t build the application if it thinks dangerous code is present.

Rust focuses on safety and speed: its security measures impose no run-time cost because the safety mechanisms are built into the language by design. For example, every value in Rust has a single owner and a lifetime, and other code can borrow it rather than take ownership. While a value is mutably borrowed, nothing else can use it. This is meant to enforce memory safety and prevent data races between threads.

It also forces the programmer to stop and think about their software’s design – Rust is not something for novices to pick up and quickly bash out code on.
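
For readers who have not written any Rust, here is a minimal sketch of the ownership and borrowing rules described above. It is my own toy example, not code from the article; the last line stays commented out because the compiler refuses to build it otherwise.

    fn main() {
        // Ownership: `s` owns the String.
        let s = String::from("servo");

        // Assignment moves ownership from `s` to `t`.
        let t = s;

        // Borrowing: `r` borrows `t` without taking ownership.
        let r = &t;
        println!("borrowed: {}", r);

        // Uncommenting the next line stops the build: `s` was moved
        // above and can no longer be used.
        // println!("moved: {}", s);
    }

The same ownership and borrowing rules are what let the compiler reject code that would share mutable data between threads, which is the data-race protection mentioned above.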

Even though it is pre-release and rough, I was fairly excited until I read:


One little problem is that Servo relies on Mozilla’s SpiderMonkey JavaScript engine, which is written in C/C++. So while the HTML-rendering engine will run secured Rust code, fingers crossed nothing terrible happens within the JS engine.

Really?

But then I checked Mozilla JavaScript-C Engine – SpiderMonkey at BlackDuck | Security, which shows zero (0) vulnerabilities over the last 10 versions.

Other than SpiderMonkey vulnerabilities known to the NSA, any others you care to mention?

Support the new rendering engine, participate, and submit bug reports, but don’t forget about the JavaScript engine.

July 13, 2015

The impact of fast networks on graph analytics, part 1

Filed under: Graph Analytics, Graphs, GraphX, Rust — Patrick Durusau @ 3:58 pm

The impact of fast networks on graph analytics, part 1 by Frank McSherry.

From the post:

This is a joint post with Malte Schwarzkopf, cross-blogged here and at the CamSaS blog.

tl;dr: A recent NSDI paper argued that data analytics stacks don’t get much faster at tasks like PageRank when given better networking, but this is likely just a property of the stack they evaluated (Spark and GraphX) rather than generally true. A different framework (timely dataflow) goes 6x faster than GraphX on a 1G network, which improves by 3x to 15-17x faster than GraphX on a 10G network.

I spent the past few weeks visiting the CamSaS folks at the University of Cambridge Computer Lab. Together, we did some interesting work, which we – Malte Schwarzkopf and I – are now going to tell you about.

Recently, a paper entitled “Making Sense of Performance in Data Analytics Frameworks” appeared at NSDI 2015. This paper contains some surprising results: in particular, it argues that data analytics stacks are limited more by CPU than they are by network or disk IO. Specifically,

“Network optimizations can only reduce job completion time by a median of at most 2%. The network is not a bottleneck because much less data is sent over the network than is transferred to and from disk. As a result, network I/O is mostly irrelevant to overall performance, even on 1Gbps networks.” (§1)

The measurements were done using Spark, but the authors argue that they generalize to other systems. We thought that this was surprising, as it doesn’t match our experience with other data processing systems. In this blog post, we will look into whether these observations do indeed generalize.

One of the three workloads in the paper is the BDBench query set from Berkeley, which includes a “page-rank-like computation”. Moreover, PageRank also appears as an extra example in the NSDI slide deck (slide 38-39), used there to illustrate that at most a 10% improvement in job completion time can be had even for a network-intensive workload.

This was especially surprising to us because of the recent discussion around whether graph computations require distributed data processing systems at all. Several distributed systems get beat by a simple, single-threaded implementation on a laptop for various graph computations. The common interpretation is that graph computations are communication-limited; the network gets in the way, and you are better off with one machine if the computation fits.[footnote omitted]
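
To make the laptop argument concrete, here is a minimal single-threaded PageRank sketch in Rust. It is my own toy example, not McSherry’s code; the graph, iteration count, and damping factor are made up for illustration, and dangling nodes are not handled.

    // Single-threaded PageRank over an in-memory edge list.
    fn pagerank(num_nodes: usize, edges: &[(usize, usize)],
                iterations: usize, damping: f64) -> Vec<f64> {
        // Out-degree of each node, used to split a node's rank among its targets.
        let mut out_degree = vec![0usize; num_nodes];
        for &(src, _) in edges {
            out_degree[src] += 1;
        }

        // Start from a uniform distribution.
        let mut ranks = vec![1.0 / num_nodes as f64; num_nodes];
        for _ in 0..iterations {
            // Teleport share, plus rank pushed along each edge.
            let mut next = vec![(1.0 - damping) / num_nodes as f64; num_nodes];
            for &(src, dst) in edges {
                next[dst] += damping * ranks[src] / out_degree[src] as f64;
            }
            ranks = next;
        }
        ranks
    }

    fn main() {
        // A toy 4-node graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0, 3 -> 2.
        let edges = [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)];
        let ranks = pagerank(4, &edges, 20, 0.85);
        for (node, rank) in ranks.iter().enumerate() {
            println!("node {}: {:.4}", node, rank);
        }
    }

Nothing here is distributed; the point of the discussion above is that, for graphs that fit on one machine, a loop like this turns out to be a surprisingly strong baseline.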

The authors introduce Rust and timely dataflow to achieve rather remarkable performance gains. That is, if you think a 6x-17x speedup over GraphX on the same hardware is a performance gain. (Most do.)

Code and instructions are available so you can test their conclusions for yourself. Hardware is your responsibility.
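
If you want a taste of timely dataflow before digging into their code and instructions, a hello-world sketch looks roughly like this. It follows the timely crate’s own introductory example rather than anything from the post, and exact APIs may differ between versions, so treat it as an assumption.

    extern crate timely;

    use timely::dataflow::operators::{ToStream, Inspect};

    fn main() {
        // Build and run a single-worker dataflow: stream the numbers 0..10
        // through an operator that prints each record it sees.
        timely::example(|scope| {
            (0..10)
                .to_stream(scope)
                .inspect(|x| println!("seen: {:?}", x));
        });
    }

The PageRank code in the post is built from the same kind of operators, with data exchange between workers and iteration added on top.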

While you are waiting for part 2 to arrive, try Frank’s homepage for some fascinating reading.
