Apache Kafka for Beginners by Gwen Shapira and Jeff Holoman.
From the post:
When used in the right way and for the right use case, Kafka has unique attributes that make it a highly attractive option for data integration.
Apache Kafka is creating a lot of buzz these days. While LinkedIn, where Kafka was founded, is the most well known user, there are many companies successfully using this technology.
So now that the word is out, it seems the world wants to know: What does it do? Why does everyone want to use it? How is it better than existing solutions? Do the benefits justify replacing existing systems and infrastructure?
In this post, we’ll try to answer those questions. We’ll begin by briefly introducing Kafka, and then demonstrate some of Kafka’s unique features by walking through an example scenario. We’ll also cover some additional use cases and compare Kafka to existing solutions.
What is Kafka?
Kafka is one of those systems that is very simple to describe at a high level, but has an incredible depth of technical detail when you dig deeper. The Kafka documentation does an excellent job of explaining the many design and implementation subtleties in the system, so we will not attempt to explain them all here. In summary, Kafka is a distributed publish-subscribe messaging system that is designed to be fast, scalable, and durable. (emphasis in original)
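If you have never touched the API, here is a minimal sketch of that publish-subscribe model using the current Java clients (a newer API than the one the post was written against). The broker address localhost:9092, the topic name "events", and the consumer group "example-group" are placeholders for illustration only.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class KafkaPubSubSketch {
    public static void main(String[] args) {
        // Publisher side: send one event to the "events" topic.
        Properties prodProps = new Properties();
        prodProps.put("bootstrap.servers", "localhost:9092");
        prodProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        prodProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(prodProps)) {
            producer.send(new ProducerRecord<>("events", "user-42", "page_view"));
        }

        // Subscriber side: join a consumer group, subscribe to the topic, and read what arrives.
        Properties consProps = new Properties();
        consProps.put("bootstrap.servers", "localhost:9092");
        consProps.put("group.id", "example-group");
        consProps.put("auto.offset.reset", "earliest");
        consProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consProps)) {
            consumer.subscribe(Collections.singletonList("events"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("key=%s value=%s offset=%d%n",
                        record.key(), record.value(), record.offset());
            }
        }
    }
}
```

The point of the sketch is the decoupling: the producer only knows the topic name, and any number of consumer groups can subscribe to that same topic independently, each keeping its own offset.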
A great reference to use when making your case for Kafka to technical management. In particular, note the line:
even a small three-node cluster can process close to a million events per second with an average latency of 3ms.
Sure, there are applications with more stringent processing requirements, but far more applications handle well under a million events per second.
Does your topic map system get updated more than a million times a second?