Streaming ETL on the Shoulders of Giants

A presentation at MongoDB World 2019 in New York, NY, USA by Hans-Peter Grahsl

Without doubt, stream processing is a big deal these days, and we often find Apache Kafka acting as the central nervous system of company-wide data architectures. However, many real-world use cases simply need an operational data store that is flexible, robust, and scalable enough to live up to diverse application-related requirements and challenges. This session discusses different options for building solid data integration pipelines between MongoDB and Apache Kafka. The focus lies on configuration-based data-in-motion scenarios that leverage the Kafka Connect framework to lay out streaming ETL pipeline examples without writing a single line of code.
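To make the configuration-based approach concrete, here is a minimal sketch of what such a no-code pipeline definition can look like, assuming the official MongoDB Kafka sink connector and hypothetical topic, database, and connection values:

```json
{
  "name": "mongodb-sink-example",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "orders",
    "connection.uri": "mongodb://localhost:27017",
    "database": "shop",
    "collection": "orders",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter"
  }
}
```

Posting a JSON document like this to the Kafka Connect REST API is all it takes to start streaming records from a Kafka topic into a MongoDB collection; the names above are placeholders for illustration only.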
