Archive for the ‘Profiling’ Category

Watch your Python script with strace

Sunday, September 11th, 2016

Description:

Modern operating systems sandbox each process inside of a virtual memory map from which direct I/O operations are generally impossible. Instead, a process has to ask the operating system every time it wants to modify a file or communicate bytes over the network. By using operating system specific tools to watch the system calls a Python script is making — using “strace” under Linux or “truss” under Mac OS X — you can study how a program is behaving and address several different kinds of bugs.
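As a small illustration of the point above, the script below does nothing but ordinary Python file I/O; every high-level call in it has to cross into the kernel as a system call, which strace can print. (A hedged sketch: the file name and the suggested strace invocation are illustrative, not from the presentation.)

```python
# A tiny script to watch under strace: each ordinary Python file
# operation below is carried out by the kernel via a system call.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "hello.txt")

with open(path, "w") as f:      # shows up as openat(...)
    f.write("hello, strace\n")  # buffered; flushed to write(...) on close,
                                # followed by close(...)

with open(path) as f:           # openat(...) again, then read(...)
    print(f.read(), end="")

# Under Linux, run it (saved as, say, watchme.py) with:
#   strace -e trace=openat,read,write,close python3 watchme.py
# and strace prints each system call with its arguments and return value.
```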

Brandon Rhodes does a delightful presentation on using strace with Python.

Slides for Tracing Python with strace or truss.

I deeply enjoyed this presentation, which I discovered while looking at a Python regex issue.

I anticipate running strace on a Python script this week and will report back on any results, or on the failure to obtain results! (Unlike in academic publishing, experiments and investigations do fail.)

Debugging

Tuesday, August 23rd, 2016

Julia Evans tweeted:

[image: Julia Evans’s tweet]

It’s been two days without another suggestion.

Considering Brendan D. Gregg’s homepage, do you have another suggestion?

Too rich a resource not to write down.

Besides, for some subjects and their relationships, you need specialized tooling to see them.

Not to mention that if you can spot patterns in subjects, detecting an unknown 0-day may be easier.

Of course, you can leave USB sticks at popular eateries near Fort Meade, MD 20755-6248, but some people prefer to work for their 0-day exploits.

😉

PAPERS ARE AMAZING: Profiling threaded programs with Coz

Saturday, October 31st, 2015

PAPERS ARE AMAZING: Profiling threaded programs with Coz by Julia Evans.

I don’t often mention profiling at all but I mention Julia’s post because:

  1. It reports an insight about profiling threaded programs that is non-intuitive (at least until you have seen it).
  2. Julia writes a great post on new ideas with perf.

From the post:

The core idea in this paper is – if you have a line of code in a thread, and you want to know if it’s making your program slow, speed up that line of code to see if it makes the whole program faster!

Of course, you can’t actually speed up a thread. But you can slow down all other threads! So that’s what they do. The implementation here is super super super interesting – they use the perf Linux system to do this, and in particular they can do it without modifying the program’s code. So this is a) wizardry, and b) uses perf.

Which are both things we love here (omg perf). I’m going to refer you to the paper for now to learn more about how they use perf to slow down threads, because I honestly don’t totally understand it myself yet. There are some difficult details like “if the thread is already waiting on another thread, should we slow it down even more?” that they get into.

The insight that slowing down all but one thread is equivalent to speeding up the thread of interest for performance evaluation sounds obvious when mentioned. But only after it is mentioned.

I suspect the ability to have that type of insight isn’t teachable other than by demonstration across a wide range of cases. If you know of other such insights, ping me.

For those interested in “real world” application of insights, Julia mentions the use of this profiler on SQLite and Memcached.

See Julia’s post for the paper and other references.

If you aren’t already checking Julia’s blog on a regular basis you might want to start.