Watch & Download the summit presentation

Stream Processing, Choosing the Right Tool for the Job

Due to the increasing interest in real-time processing, many stream processing frameworks have been developed. However, no clear guidelines have been established for choosing one for a specific use case. In this talk, we take two different scenarios and walk the audience through the thought process and the questions to ask when choosing the right tool. The stream processing frameworks discussed are Spark Streaming, Structured Streaming, Flink and Kafka Streams.

The main questions are:

  • How much data does it need to process? (throughput)
  • Does it need to be fast? (latency)
  • Who will build it? (supported languages, level of API, SQL capabilities, built-in windowing and joining functionality, etc.)
  • Is accurate ordering important? (event time vs. processing time)
  • Is there a batch component? (integration of batch API)
  • How do we want it to run? (deployment options: standalone, YARN, Mesos, …)
  • How much state do we have? (state store options)
  • What if a message gets lost? (message delivery guarantees, checkpointing)
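The ordering question (event time vs. processing time) is the one that most often surprises newcomers. The minimal, framework-agnostic sketch below illustrates it with a tumbling-window sum over a hypothetical stream in plain Python; the event data, window size and function names are invented for illustration and are not any framework's API.

```python
from collections import defaultdict

# Hypothetical stream of (event_time_seconds, value) records.
# The last record is late: it happened at t=4 but arrives after
# the t=11 record has already been processed.
events = [(1, 10), (4, 20), (11, 5), (4, 7)]
arrival_times = [1, 4, 11, 12]  # hypothetical wall-clock arrival times

WINDOW = 5  # tumbling window size in seconds


def tumbling_window(ts, size=WINDOW):
    """Map a timestamp to the start of its tumbling window."""
    return (ts // size) * size


# Event-time windowing: the late record is still assigned to the
# window matching when it actually happened.
event_time_sums = defaultdict(int)
for ts, value in events:
    event_time_sums[tumbling_window(ts)] += value
print(dict(event_time_sums))  # {0: 37, 10: 5}

# Processing-time windowing: timestamps are taken at arrival, so
# the late record is counted in whatever window is open when it
# shows up, skewing the result.
processing_time_sums = defaultdict(int)
for arrival, (_, value) in zip(arrival_times, events):
    processing_time_sums[tumbling_window(arrival)] += value
print(dict(processing_time_sums))  # {0: 30, 10: 12}
```

Real frameworks layer watermarks and allowed-lateness policies on top of this basic distinction, which is exactly where their answers to the question start to diverge.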

For each of these questions, we look at how each framework tackles it and what the main differences are. The content is based on the PhD research of Giselle van Dongen on benchmarking stream processing frameworks in several scenarios using latency, throughput and resource utilization.

Author

Giselle van Dongen, Lead Data Scientist Klarrio.

Dr. Giselle van Dongen is Lead Data Scientist at Klarrio, specializing in real-time data analysis, processing and visualization. Following her successful PhD defense on benchmarking real-time distributed processing systems such as Spark Streaming, Structured Streaming, Flink and Kafka Streams, the entire benchmark code base has now been open-sourced.
