Streams Must Flow: Deploying fault-tolerant stream processing applications with Kafka Streams and Kubernetes

A presentation by Viktor Gamov at the Raleigh Apache Kafka® Meetup, April 2019, Raleigh, NC, USA

Slide 1

Streams Must Flow: fault-tolerant stream processing apps on Kubernetes. April 2019, Raleigh, NC. @gamussa | #raleighKafka | @ConfluentInc

Slide 2

Special thanks! @gwenshap @MatthiasJSax

Slide 3

Agenda
• Kafka Streams 101
• How do Kafka Streams applications scale?
• Kubernetes 101
• Recommendations for Kafka Streams

Slide 4

https://gamov.dev/kstreams-k8s-pr

Slide 5

Kafka Streams 101 (diagram): Your App embeds Kafka Streams and reads from and writes to Kafka, while Kafka Connect moves data between Kafka and other systems on both the input and the output side.

Slide 6

Stock Trade Stats Example

KStream<String, Trade> source = builder.stream(STOCK_TOPIC);
KStream<Windowed<String>, TradeStats> stats = source
    .groupByKey()
    .windowedBy(TimeWindows.of(5000).advanceBy(1000))
    .aggregate(TradeStats::new,
        (k, v, tradestats) -> tradestats.add(v),
        Materialized.<String, TradeStats, WindowStore<Bytes, byte[]>>as("trade-aggregates")
            .withValueSerde(new TradeStatsSerde()))
    .toStream()
    .mapValues(TradeStats::computeAvgPrice);
stats.to(STATS_OUT_TOPIC,
    Produced.keySerde(WindowedSerdes.timeWindowedSerdeFrom(String.class)));
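
For context, a minimal sketch of how this topology would be wired up and started; the property values and the shutdown hook are illustrative additions, not part of the original slides:

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "trade-stats-app");   // also names the consumer group and state directories
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");     // illustrative broker address

StreamsBuilder builder = new StreamsBuilder();
// ... build the Stock Trade Stats topology shown above ...

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();
// Close cleanly on shutdown so this instance's tasks can be rebalanced to the surviving instances.
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));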

Slide 7

(Same Stock Trade Stats example as Slide 6, highlighted step by step.)

Slide 8

(Same Stock Trade Stats example as Slide 6, highlighted step by step.)

Slide 9

(Same Stock Trade Stats example as Slide 6, highlighted step by step.)

Slide 10

Topologies (diagram): builder.stream() creates a source node; groupByKey().windowedBy(...).aggregate(...) becomes a processor node backed by state stores; mapValues() is another processor node; to(...) creates a sink node. Together these nodes form the processor topology.
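
One way to see this wiring at runtime is Topology#describe(); a minimal sketch, assuming the builder from the Stock Trade Stats example:

Topology topology = builder.build();
// Prints the source, processor, and sink nodes along with their connected state stores.
System.out.println(topology.describe());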

Slide 11

How Do Kafka Streams Applications Scale?

Slide 12

Partitions, Tasks, and Consumer Groups (diagram): each task executes the processor topology for one input topic partition, so 4 input topic partitions => 4 tasks writing to the result topic. The tasks form one consumer group and can be executed with 1-4 threads on 1-4 machines.
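
Scaling within a single instance is just configuration; a minimal sketch (the application id, broker address, and thread count are illustrative):

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "trade-stats-app");   // every instance with this id joins the same consumer group
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");     // illustrative
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);               // with 4 input partitions there are 4 tasks, so extra threads would sit idle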

Slide 13

Scaling with State (“no state”): Trade Stats App, Instance 1

Slide 14

Scaling with State (“no state”): Trade Stats App, Instances 1 and 2

Slide 15

Scaling with State (“no state”): Trade Stats App, Instances 1, 2, and 3

Slide 16

Scaling and Fault Tolerance: Two Sides of the Same Coin

Slide 17

Fault-Tolerance (diagram): Trade Stats App, Instances 1, 2, and 3

Slide 18

Fault-Tolerant State (diagram): the app reads from the input topic, state updates are backed up to a changelog topic, and output goes to the result topic.

Slide 19

Migrate State (diagram): when Instance 1 of the Trade Stats App goes away, Instance 2 restores its state from the changelog topic.
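
Not shown on the slide, but directly related: Kafka Streams can keep a warm copy of each state store on another instance so this restore step is much shorter. A minimal sketch using the standard num.standby.replicas setting, added to the streams Properties shown earlier:

props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);   // maintain one shadow copy of each state store on a different instance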

Slide 20

Recovery Time
• Changelog topics are log compacted
• Size of changelog topic is linear in size of state
• Large state implies high recovery times

Slide 21

Recovery Overhead
• Recovery overhead is proportional to segment-size / state-size
• Segment-size smaller than state-size => reduced overhead
• Update the changelog topic segment size accordingly (see the sketch below)
  ○ topic config: segment.bytes
  ○ log cleaner interval is important, too
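
One way to apply that recommendation from inside the application, assuming the trade-aggregates store from the earlier example (the segment size and cleaner ratio are illustrative values):

Map<String, String> changelogConfig = new HashMap<>();
changelogConfig.put("segment.bytes", String.valueOf(64 * 1024 * 1024));   // smaller segments => less data to replay on restore
changelogConfig.put("min.cleanable.dirty.ratio", "0.1");                  // lets the log cleaner compact more eagerly

// Pass this Materialized into the aggregate(...) call from the earlier example.
Materialized.<String, TradeStats, WindowStore<Bytes, byte[]>>as("trade-aggregates")
    .withValueSerde(new TradeStatsSerde())
    .withLoggingEnabled(changelogConfig);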

Slide 22

Kubernetes Fundamentals

Slide 23

https://twitter.com/sahrizv/status/1018184792611827712

Slide 24

Slide 25

Orchestration
● Compute
● Networking
● Storage
● Service Discovery

Slide 26

Kubernetes
● Schedules and allocates resources
● Networking between Pods
● Storage
● Service Discovery

Slide 27

Refresher: Kubernetes Architecture (diagram with kubectl; source: https://thenewstack.io/kubernetes-an-overview/)

Slide 28

Pod
• Basic unit of deployment in Kubernetes
• A collection of containers sharing:
  • Namespace
  • Network
  • Volumes

Slide 29

Storage
• Persistent Volume (PV) & Persistent Volume Claim (PVC)
• Both PV and PVC are ‘resources’

Slide 30

Storage
• Persistent Volume (PV) & Persistent Volume Claim (PVC)
• A PV is a piece of storage, provisioned statically or dynamically, with a lifecycle independent of any individual Pod that uses it

Slide 31

Storage
• Persistent Volume (PV) & Persistent Volume Claim (PVC)
• A PVC is a request for storage by a user

Slide 32

Storage
• Persistent Volume (PV) & Persistent Volume Claim (PVC)
• PVCs consume PVs

Slide 33

Stateful Workloads

Slide 34

StatefulSet
● Relies on a Headless Service to provide network identity
● Ideal for highly available stateful workloads
(Diagram: Pod-0, Pod-1, Pod-2, each with its own containers and volumes)

Slide 35

Recommendations for Kafka Streams

Slide 36

(Diagram: three Stock Stats App instances, each embedding Kafka Streams: Instance 1, Instance 2, Instance 3)

Slide 37

(Diagram: three WordCount App instances, each embedding Kafka Streams: Instance 1, Instance 2, Instance 3)

Slide 38

StatefulSets are new and complicated. We don’t need them.

Slide 39

Recovering state takes time. Stateful is faster.

Slide 40

But I’ll want to scale out and back anyway.

Slide 41

Slide 42

I don’t really trust my storage admin anyway

Slide 43

Recommendations
● Keep changelog shards small
● If you trust your storage: use StatefulSets (see the state.dir sketch below)
● Use anti-affinity when possible
● Use “parallel” pod management
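
A small application-side detail that makes the StatefulSet recommendation pay off, sketched with an assumed mount path (/var/lib/kafka-streams is illustrative and should match the volume mount in the StatefulSet spec):

Properties props = new Properties();
props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");   // point RocksDB state at the pod's persistent volume
// With a persistent volume per pod, a restarted pod finds its local state on disk
// and only replays the tail of the changelog instead of doing a full restore.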

Slide 44

🛑 Stop! Demo time!

Slide 45

Summary
Kafka Streams has recoverable state, which gives streams apps easy elasticity and high availability.
Kubernetes makes it easy to scale applications.
It also has StatefulSets for applications with state.

Slide 46

Summary
Now you know how to deploy Kafka Streams on Kubernetes and take advantage of all the scalability and high-availability capabilities.

Slide 47

But what about Kafka itself?

Slide 48

https://www.confluent.io/resources/kafka-summit-new-york-2019/

Slide 49

Confluent Operator
• Automate provisioning
• Scale your Kafka and Confluent Platform clusters elastically
• Monitor with Confluent Control Center or Prometheus
• Operate at scale with enterprise support from Confluent

Slide 50

Resources and Next Steps
• https://cnfl.io/helm_video
• https://cnfl.io/cp-helm
• https://cnfl.io/k8s
• https://slackpass.io/confluentcommunity #kubernetes

Slide 51

Thanks! @gamussa viktor@confluent.io
We are hiring! https://www.confluent.io/careers/

Slide 52