Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

September 14, 2014

Cassandra Performance Testing with cstar_perf

Filed under: Cassandra,Performance — Patrick Durusau @ 6:32 am

Cassandra Performance Testing with cstar_perf by Ryan Mcguire.

From the post:

It’s frequently been reiterated on this blog that performance testing of Cassandra is often done incorrectly. In my role as a Cassandra test engineer at DataStax, I’ve certainly done it incorrectly myself, numerous times. I’m convinced that the only way to do it right, consistently, is through automation – there are simply too many variables to keep track of when doing things by hand.

cstar_perf is an easy to use tool to run performance tests on Cassandra clusters. A brief outline of what it does for you:

  • Downloads and builds Cassandra source code.
  • Configures your cassandra.yaml and environment settings.
  • Bootstraps nodes on a real cluster.
  • Runs a series of test operations on multiple versions or configs.
  • Collects and aggregates cluster performance metrics.
  • Creates easy to read performance charts comparing multiple test configurations in one view.
  • Runs a web frontend for convenient test scheduling, monitoring and reporting.

A great tool for Cassandra developers and a reminder of the first requirement of performance testing: automation. How’s your performance testing?
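For contrast with doing things by hand, here is a rough, hypothetical sketch of the kind of one-off timing loop that cstar_perf is designed to replace, written against the DataStax Java driver (the contact point and statement are placeholders and have nothing to do with cstar_perf itself):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class HandRolledTiming {
        public static void main(String[] args) {
            // Placeholder contact point; a real test would target a dedicated cluster.
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {
                long start = System.nanoTime();
                for (int i = 0; i < 10_000; i++) {
                    session.execute("SELECT release_version FROM system.local");
                }
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                // One number, one run, one configuration: exactly the kind of result
                // that is hard to compare without automation.
                System.out.println("10,000 reads in " + elapsedMs + " ms");
            }
        }
    }

Every variable cstar_perf tracks for you (Cassandra version, yaml settings, cluster topology, workload) is invisible in a snippet like this, which is the point of the post.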

I first saw this in a tweet by Jason Brown.

May 14, 2014

Spy On Your CPU

Filed under: Linux OS,Performance,Programming — Patrick Durusau @ 3:45 pm

I can spy on my CPU cycles with perf! by Julia Evans.

From the post:

Yesterday I talked about using perf to profile assembly instructions. Today I learned how to make flame graphs with perf and it is THE BEST. I found this because Graydon Hoare pointed me to Brendan Gregg’s excellent page on how to use perf.

Julia is up to her elbows in her CPU.

You can throw hardware at a problem or you can tune the program you are running on hardware.

Julia’s posts are about the latter.

July 30, 2013

Lucene 4 Performance Tuning

Filed under: Indexing,Lucene,Performance,Searching — Patrick Durusau @ 6:47 pm

From the description:

Apache Lucene has undergone a major overhaul that dramatically influences many of its key characteristics. New features and modifications allow for new as well as fundamentally different ways of tuning the engine for best performance.

Tuning performance is essential for almost every Lucene-based application these days; Search & Performance are almost synonyms. Knowing the details of the underlying software provides the basic tools to get the best out of your application. Knowing the limitations can save you and your company a massive amount of time and money. This talk tries to explain design decisions made in Lucene 4 compared to older versions and provides technical details on how those implementations and design decisions can help improve the performance of your application. The talk will mainly focus on core features like:

  • Realtime & Batch Indexing
  • Filter and Query performance
  • Highlighting and Custom Scoring

The talk will contain a lot of technical details that require a basic understanding of Lucene, data structures and algorithms. You don’t need to be an expert to attend, but be prepared for a deep dive into Lucene. Attendees don’t need to be direct Lucene users; the fundamentals provided in this talk are also essential for Apache Solr or elasticsearch users.
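As a concrete taste of the realtime indexing the talk covers, here is a minimal near-real-time search sketch against the Lucene 4.x API (the analyzer choice and field names are placeholders, not recommendations from the talk):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.store.RAMDirectory;
    import org.apache.lucene.util.Version;

    public class NrtSketch {
        public static void main(String[] args) throws Exception {
            RAMDirectory dir = new RAMDirectory();
            IndexWriterConfig cfg =
                new IndexWriterConfig(Version.LUCENE_40, new StandardAnalyzer(Version.LUCENE_40));
            IndexWriter writer = new IndexWriter(dir, cfg);

            Document doc = new Document();
            doc.add(new TextField("body", "lucene 4 performance tuning", Field.Store.NO));
            writer.addDocument(doc);

            // Near-real-time reader: sees the new document without a full commit and reopen.
            DirectoryReader reader = DirectoryReader.open(writer, true);
            IndexSearcher searcher = new IndexSearcher(reader);
            long hits = searcher.search(new TermQuery(new Term("body", "tuning")), 10).totalHits;
            System.out.println("hits: " + hits);

            reader.close();
            writer.close();
            dir.close();
        }
    }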

If you want to catch some of the highlights of Lucene 4, this is the presentation for you!

It will be hard not to dig deeper into a number of areas.

The new codec features were particularly impressive!

May 9, 2013

Metrics2: The New Hotness for Apache HBase Metrics

Filed under: HBase,Performance — Patrick Durusau @ 11:02 am

Metrics2: The New Hotness for Apache HBase Metrics by Elliott Clark.

From the post:

Apache HBase is a distributed big data store modeled after Google’s Bigtable paper. As with all distributed systems, knowing what’s happening at a given time can help spot problems before they arise, debug on-going issues, evaluate new usage patterns, and provide insight into capacity planning.

Since October 2008, version 0.19.0 (HBASE-625), HBase has been using Apache Hadoop’s metrics system to export metrics to JMX, Ganglia, and other metrics sinks. As the code base grew, more and more metrics were added by different developers. New features got metrics. When users needed more data on issues, they added more metrics. These new metrics were not always consistently named, and some were not well documented.

As HBase’s metrics system grew organically, Hadoop developers were making a new version of the Metrics system called Metrics2. In HADOOP-6728 and subsequent JIRAs, a new version of the metrics system was created. This new subsystem has a new name space, different sinks, different sources, more features, and is more complete than the old metrics. When the Metrics2 system was completed, the old system (aka Metrics1) was deprecated. With all of these things in mind, it was time to update HBase’s metrics system so HBASE-4050 was started. I also wanted to clean up the implementation cruft that had accumulated.

Welcome news of a consistent metric system for HBase!

If you can’t measure it, it’s hard to brag about it. 😉
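If Metrics2 is new to you, here is a rough, hypothetical sketch of what a Metrics2 source looks like (the class, record, and metric names are placeholders; HBase’s real sources are considerably richer):

    import org.apache.hadoop.metrics2.MetricsCollector;
    import org.apache.hadoop.metrics2.MetricsSource;
    import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
    import org.apache.hadoop.metrics2.lib.Interns;

    // Hypothetical source; HBase's real sources wrap server internals.
    public class RequestMetricsSource implements MetricsSource {
        private volatile long requestCount = 0;

        public void incrementRequests() { requestCount++; }

        @Override
        public void getMetrics(MetricsCollector collector, boolean all) {
            // Each call snapshots current values into a named record for the configured sinks.
            collector.addRecord("Requests")
                     .setContext("myapp")
                     .addCounter(Interns.info("totalRequests", "Requests seen so far"), requestCount);
        }

        public static void main(String[] args) {
            DefaultMetricsSystem.initialize("myapp");
            DefaultMetricsSystem.instance().register("RequestSource", "Demo source", new RequestMetricsSource());
        }
    }

Sinks (JMX, Ganglia, files) are configured separately in hadoop-metrics2.properties, which is what makes the consistent naming the post talks about so valuable.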

April 19, 2013

Aerospike

Filed under: Aerospike,NoSQL,Performance — Patrick Durusau @ 1:01 pm

Aerospike

From the architecture overview:

Aerospike is a fast Key Value Store or Distributed Hash Table architected to be a flexible NoSQL platform for today’s high scale Apps. Designed to meet the reliability or ACID requirements of traditional databases, there is no single point of failure (SPOF) and data is never lost. Aerospike can be used as an in-memory database and is uniquely optimized to take advantage of the dramatic cost benefits of flash storage. Written in C, Aerospike runs on Linux.

Based on our own experiences developing mission-critical applications with high scale databases and our interactions with customers, we’ve developed a general philosophy of operational efficiency that guides product development. Three principles drive Aerospike architecture: NoSQL flexibility, traditional database reliability, and operational efficiency.

Technical details were first published in the Proceedings of the VLDB (Very Large Data Bases): Citrusleaf: A Real-Time NoSQL DB which Preserves ACID by V. Srinivasan and Brian Bulkowski.

You can guess why they changed the name. 😉

There is a free community edition, along with an SDK and documentation.
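As a taste of that SDK, here is a minimal key-value round trip using the Aerospike Java client (the host, namespace, set, and bin names are placeholders):

    import com.aerospike.client.AerospikeClient;
    import com.aerospike.client.Bin;
    import com.aerospike.client.Key;
    import com.aerospike.client.Record;

    public class AerospikeRoundTrip {
        public static void main(String[] args) {
            // Placeholder host/port; point this at your own cluster.
            AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);
            try {
                Key key = new Key("test", "demo", "user:42");
                client.put(null, key, new Bin("name", "Ada"));   // null = default write policy
                Record record = client.get(null, key);           // null = default read policy
                System.out.println("name = " + record.getString("name"));
            } finally {
                client.close();
            }
        }
    }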

Relies on RAM and SSDs.

Timo Elliott was speculating about entirely RAM-based computing in: In-Memory Computing.

Imagine losing all the special coding tricks to get performance despite disk storage.

Simpler code and fewer operations should result in higher speed.

February 19, 2013

Really Large Queries: Advanced Optimization Techniques, Feb. 27

Filed under: MySQL,Performance,SQL — Patrick Durusau @ 11:10 am

Percona MySQL Webinar: Really Large Queries: Advanced Optimization Techniques, Feb. 27 by Peter Boros.

From the post:

Do you have a query you never dared to touch?
Do you know it’s bad, but it’s needed?
Does it fit your screen?
Does it really have to be that expensive?
Do you want to do something about it?

During the next Percona webinar on February 27, I will present some techniques that can be useful when troubleshooting such queries. We will go through case studies (each case study is made from multiple real-world cases). In these cases we were often able to reduce query execution time from 10s of seconds to a fraction of a second.

If you have SQL queries in your workflow, this will definitely be of interest.
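The webinar’s techniques are not reproduced here, but a cheap first step with any scary query is to look at its EXPLAIN plan. A minimal JDBC sketch (the connection URL, credentials, and query are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ExplainQuery {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/mydb", "user", "password");
                 Statement stmt = conn.createStatement();
                 // EXPLAIN returns the access plan without running the expensive query itself.
                 ResultSet rs = stmt.executeQuery(
                     "EXPLAIN SELECT * FROM orders WHERE customer_id = 42")) {
                int cols = rs.getMetaData().getColumnCount();
                while (rs.next()) {
                    StringBuilder row = new StringBuilder();
                    for (int i = 1; i <= cols; i++) {
                        row.append(rs.getMetaData().getColumnLabel(i)).append('=')
                           .append(rs.getString(i)).append(' ');
                    }
                    System.out.println(row);
                }
            }
        }
    }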

December 14, 2012

Semantic Technology ROI: Article of Faith? or Benchmarks for 1.28% of the web?

Filed under: Benchmarks,Marketing,Performance,RDFa,Semantic Web — Patrick Durusau @ 3:58 pm

Orri Erling, in LDBC: A Socio-technical Perspective, writes in part:

I had a conversation with Michael at a DERI meeting a couple of years ago about measuring the total cost of technology adoption, thus including socio-technical aspects such as acceptance by users, learning curves of various stakeholders, whether in fact one could demonstrate an overall gain in productivity arising from semantic technologies. [in my words, paraphrased]

“Can one measure the effectiveness of different approaches to data integration?” asked I.

“Of course one can,” answered Michael, “this only involves carrying out the same task with two different technologies, two different teams and then doing a double blind test with users. However, this never happens. Nobody does this because doing the task even once in a large organization is enormously costly and nobody will even seriously consider doubling the expense.”

LDBC does in fact intend to address technical aspects of data integration, i.e., schema conversion, entity resolution, and the like. Addressing the sociotechnical aspects of this (whether one should integrate in the first place, whether the integration result adds value, whether it violates privacy or security concerns, whether users will understand the result, what the learning curves are, etc.) is simply too diverse and so totally domain dependent that a general purpose metric cannot be developed, at least not in the time and budget constraints of the project. Further, adding a large human element in the experimental setting (e.g., how skilled the developers are, how well the stakeholders can explain their needs, how often these needs change, etc.) will lead to experiments that are so expensive to carry out and whose results will have so many unquantifiable factors that these will constitute an insuperable barrier to adoption.

The need for parallel systems to judge the benefits of a new technology is a straw man. And one that is easy to dispel.

For example, if your company provides technical support, you are tracking metrics on how quickly your staff can answer questions. And probably customer satisfaction with your technical support.

Both are common metrics in use today.

Assume someone suggests using linked data to improve technical support for your products. You begin with a pilot project to measure the benefit of the suggested change.

If the length of support calls goes down, or customer satisfaction goes up, or both, change to linked data. If not, don’t.

Naming a technology as “semantic” doesn’t change how you measure the benefits of a change in process.

LDBC will find purely machine-based performance measures easier to produce than answers to more difficult socio-technical questions.

But of what value are great benchmarks for a technology that no one wants to use?

See my comments under: Web Data Commons (2012) – [RDFa at 1.28% of 40.5 million websites]. Benchmarks for 1.28% of the web?

October 31, 2012

Coming soon on JAXenter: videos from JAX London [What Does Hardware Know?]

Filed under: CS Lectures,Java,Performance,Processing,Programming — Patrick Durusau @ 5:57 pm

Coming soon on JAXenter: videos from JAX London by Elliot Bentley.

From the post:

Can you believe it’s only been two weeks since JAX London? We’re already planning for the next one at JAX Towers (yes, really).

Yet if you’re already getting nostalgic, never fear – JAXenter is on hand to help you relive those glorious yet fleeting days, and give a taste of what you may have missed.

For a start, we’ve got videos of almost every session in the main room, including keynotes from Doug Cutting, Patrick Debois, Steve Poole and Martijn Verburg & Kirk Pepperdine, which we’ll be releasing gradually onto the site over the coming weeks. Slides for the rest of JAX London’s sessions are already freely available on SlideShare.

Pepperdine and Verburg, “Java and the Machine,” remark:

There’s no such thing as a process as far as the hardware is concerned.

A riff I need to steal to say:

There’s no such thing as semantics as far as the hardware is concerned.

We attribute semantics to data for input, we attribute semantics to processing of data by hardware, we attribute semantics to computational results.

I didn’t see a place for hardware in that statement. Do you?

September 25, 2012

New Tool: JMXC – JMX Console

Filed under: Java,Performance — Patrick Durusau @ 1:38 pm

New Tool: JMXC – JMX Console

From the post:

When you are obsessed with performance and run a performance monitoring service like Sematext does, you need a quick and easy way to inspect Java apps’ MBeans in JMX. We just open-sourced JMXC, our 1-class tool for dumping the contents of JMX, or specific MBeans. This is a true and super-simple, no external dependencies console tool that can connect to JMX via Java application PID or via JMX URL and can dump either all MBeans or those specified on the command line.

JMXC lives at https://github.com/sematext/jmxc along with other Sematext open-source tools. Feedback and pull requests welcome! Enjoy!

If that sounds a tad cryptic, try reading: Introducing MBeans.
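In the same spirit as JMXC, though much cruder, here is a hedged sketch that dumps MBean attributes for the current JVM only (JMXC itself can also attach by PID or JMX URL; this does neither):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanAttributeInfo;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class LocalMBeanDump {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // null/null matches every registered MBean.
            for (ObjectName name : server.queryNames(null, null)) {
                System.out.println(name);
                for (MBeanAttributeInfo attr : server.getMBeanInfo(name).getAttributes()) {
                    Object value;
                    try {
                        value = server.getAttribute(name, attr.getName());
                    } catch (Exception e) {
                        value = "<unavailable>";   // some attributes cannot be read
                    }
                    System.out.println("  " + attr.getName() + " = " + value);
                }
            }
        }
    }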

Too good an opportunity to miss for highlighting Sematext’s open source tools.

September 1, 2012

Web Performance Power Tool: HTTP Archive (HAR)

Filed under: Interface Research/Design,Performance,Web Server — Patrick Durusau @ 2:52 pm

Web Performance Power Tool: HTTP Archive (HAR) by Ilya Grigorik.

From the post:

When it comes to analyzing page performance, the network waterfall tab of your favorite HTTP monitoring tool (e.g. Chrome Dev Tools, Firebug, Fiddler, etc) is arguably the single most useful power tool at our disposal. Now, wouldn’t it be nice if we could export the waterfall for better bug reports, performance monitoring, or later in-depth analysis?

Well, good news, that is precisely what the HTTP Archive (HAR) data format was created for. Even better, chances are, your favorite monitoring tool already knows how to speak in HAR, which opens up a lot of possibilities – let’s explore.

If you are tuning or developing a web interface, there is much here you will find helpful.

The gathering of information for later analysis, by other tools, was what interested me the most.
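Since a HAR file is plain JSON, feeding it into your own tools is straightforward. A minimal sketch using Jackson (the file name is a placeholder; the field names follow the HAR spec):

    import java.io.File;

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class HarSummary {
        public static void main(String[] args) throws Exception {
            // Placeholder file; export a HAR from your browser's dev tools or monitoring proxy.
            JsonNode root = new ObjectMapper().readTree(new File("capture.har"));
            double total = 0;
            for (JsonNode entry : root.path("log").path("entries")) {
                double ms = entry.path("time").asDouble();      // per-request time in milliseconds
                String url = entry.path("request").path("url").asText();
                System.out.printf("%8.1f ms  %s%n", ms, url);
                total += ms;
            }
            System.out.printf("total: %.1f ms%n", total);
        }
    }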

August 9, 2012

…Creating Reliable Billion Page View Web Services

Filed under: Performance,Systems Administration,Web Analytics,Web Server — Patrick Durusau @ 3:40 pm

In 3 Tips and Tools for Creating Reliable Billion Page View Web Services, High Scalability reports on an article by Amir Salihefendic that suggests:

  • Realtime monitor everything
  • Be proactive
  • Be notified when crashes happen

These are three tips to follow on the hunt for a reliable billion page view web service.
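The third tip can start very small on the JVM: install a default uncaught-exception handler that pushes an alert. A hedged sketch (notifyOps is a hypothetical stand-in for whatever paging or chat alerting you actually use):

    public class CrashNotifier {
        public static void main(String[] args) {
            Thread.setDefaultUncaughtExceptionHandler((thread, error) ->
                // Hypothetical alerting hook; replace with email/SMS/chat integration.
                notifyOps("Uncaught in " + thread.getName() + ": " + error));
            throw new IllegalStateException("simulated crash");
        }

        static void notifyOps(String message) {
            System.err.println("ALERT: " + message);   // stand-in for a real notification channel
        }
    }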

I’m a few short of that number but it was still an interesting post. 😉

And you can never tell; I might snag a client that is more likely to reach those numbers.

May 2, 2012

12 Ways to Increase Throughput by 32X and Reduce Latency by 20X

Filed under: Java,Messaging,Performance — Patrick Durusau @ 3:31 pm

12 Ways to Increase Throughput by 32X and Reduce Latency by 20X

From the post:

Martin Thompson, a high-performance technology geek, has written an awesome post, Fun with my-Channels Nirvana and Azul Zing. In it Martin shows the process and techniques he used to take an existing messaging product, written in Java, and increase throughput by 32X and reduce latency by 20X. The article is very well written with lots of interesting details that make it well worth reading.

You might want to start with the High Scalability summary before tackling the “real thing.”

Of interest to subject-centric applications that rely on messaging. And anyone interested in performance for the sheer pleasure of it.
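Martin’s specific techniques are not reproduced here, but the discipline behind them starts with measuring throughput and latency before and after every change. A minimal, hedged harness around an in-memory queue (the queue choice and message count are arbitrary):

    import java.util.Arrays;
    import java.util.concurrent.ArrayBlockingQueue;

    public class QueueLatencyHarness {
        public static void main(String[] args) throws Exception {
            final int messages = 100_000;
            final ArrayBlockingQueue<Long> queue = new ArrayBlockingQueue<>(1024);
            final long[] latencies = new long[messages];

            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < messages; i++) {
                        long sentAt = queue.take();              // blocking hand-off
                        latencies[i] = System.nanoTime() - sentAt;
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            consumer.start();

            long start = System.nanoTime();
            for (int i = 0; i < messages; i++) {
                queue.put(System.nanoTime());                    // the message is its own send timestamp
            }
            consumer.join();
            long elapsed = System.nanoTime() - start;

            Arrays.sort(latencies);
            System.out.println("throughput: " + (messages * 1_000_000_000L / elapsed) + " msgs/sec");
            System.out.println("p99 latency: " + latencies[(int) (messages * 0.99)] + " ns");
        }
    }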
