Building an End-to-End Analytics Pipeline with PyFlink

A presentation at Data Science UA in November 2020 by Marta Paes

Slide 1

Building an E2E Analytics Pipeline with PyFlink
Marta Paes (@morsapaes), Developer Advocate

Slide 2

About Ververica
Original creators of Apache Flink®. Enterprise stream processing with Ververica Platform. Part of Alibaba Group.

Slide 3

Apache Flink
Flink is an open source framework and distributed engine for unified batch and stream processing.
Flink Runtime: Stateful Computations over Data Streams
Learn more: flink.apache.org

Slide 4

Apache Flink
Flink is an open source framework and distributed engine for unified batch and stream processing, built around high performance, fault tolerance, stateful processing, and flexible APIs.
Learn more: flink.apache.org

Slide 5

Use Cases
This gives you a robust foundation for a wide range of use cases:
● Stateful Stream Processing (streams, state, time)
● Streaming Analytics & ML (SQL, PyFlink, tables)
● Event-Driven Applications (Stateful Functions)
Learn more: flink.apache.org

Slide 6

Use Cases
Stateful Stream Processing: classical, core stream processing use cases that build on the primitives of streams, state and time.

Slide 7

Stateful Stream Processing
Classical, core stream processing use cases that build on the primitives of streams, state and time.
● Explicit control over these primitives
● Complex computations and customization
● Maximize performance and reliability
Example use cases: large-scale data pipelines, ML-based fraud detection, service monitoring & anomaly detection.

Slide 8

Use Cases
Streaming Analytics & ML: more high-level or domain-specific use cases that can be modeled with SQL or Python and dynamic tables.

Slide 9

Streaming Analytics & ML
More high-level or domain-specific use cases that can be modeled with SQL or Python and dynamic tables.
● Focus on logic, not implementation
● Mixed workloads (batch and streaming)
● Maximize developer speed and autonomy
Example use cases: unified online/offline model training, E2E streaming analytics pipelines, ML feature generation.

Slide 10

More Flink Users
Learn more: Powered by Flink, Speakers – Flink Forward San Francisco 2019, Speakers – Flink Forward Europe 2019

Slide 11

Why PyFlink?

Slide 12

Python is… pretty stacked?
Mature analytics stack, with libraries that are fast and intuitive.
Source: JetBrains’ Developer Ecosystem Report 2020

Slide 13

…and also timeless!
Mature analytics stack, with libraries that are fast and intuitive.
[Timeline of release years for major Python analytics libraries: 1995, 2001, 2003, 2008, 2015]

Slide 14

…and also timeless!
Older libraries are mostly restricted to a data size that fits in memory (RAM), and designed to run on a single core (CPU).

Slide 15

This is a problem, because: more data, more formats, more sources, moving faster, stricter SLAs.

Slide 16

But you still want to use these powerful libraries, right?

Slide 17

Why PyFlink?
Flink’s core APIs target the JVM: Java and Scala.

Slide 18

Why PyFlink?
Java, Scala, and now Python: expose the functionality of Flink beyond the JVM.

Slide 19

Why PyFlink?

Slide 20

Why PyFlink?
Distribute and scale the functionality of Python through Flink.

Slide 21

Flink at Alibaba Scale
Double 11 / Singles Day, incl. sub-second updates to the GMV dashboard.
Real-time data applications: search, recommendations, ads, BI, security.
● Throughput (peak): 4B events/sec
● Data size: 1.7EB
● State size (biggest): 100TB
● Latency: sub-second
Learn more: Alibaba Cloud Unveils ‘Magic’ Behind the World’s Largest Online Shopping Festival

Slide 22

Demo

Slide 23

Can we use PyFlink to identify the most frequent topics in the Flink User Mailing List?

Slide 24

The Demo Environment
● A PyFlink job, including a UDF that wraps an (awfully) trained LDA model, is submitted to the JobManager, which assigns and monitors tasks.
● TaskManagers execute the query tasks, reading from and writing to Postgres through the JDBC connector.
● The results in Postgres feed a visualization layer.

Slide 25

DEMO Step 1. Create the source and sink tables: specify the connector properties in a DDL statement and execute it against the table environment.
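The DDL for this step might look like the following sketch. All table names, columns, and connection properties here are assumptions for illustration, not the demo's actual schema:

```python
# Sketch of Step 1, assuming a Postgres database reachable from the cluster
# via Flink's JDBC connector. Table names, columns, and the JDBC URL are
# invented for illustration.

SOURCE_DDL = """
CREATE TABLE mailing_list (
    message_id BIGINT,
    subject STRING,
    body STRING
) WITH (
    'connector' = 'jdbc',
    'url' = 'jdbc:postgresql://postgres:5432/flink',
    'table-name' = 'mailing_list'
)"""

SINK_DDL = """
CREATE TABLE topic_counts (
    topic STRING,
    cnt BIGINT
) WITH (
    'connector' = 'jdbc',
    'url' = 'jdbc:postgresql://postgres:5432/flink',
    'table-name' = 'topic_counts'
)"""

def register_tables(t_env):
    """Execute both DDL statements against a PyFlink TableEnvironment."""
    t_env.execute_sql(SOURCE_DDL)
    t_env.execute_sql(SINK_DDL)
```

With a real cluster, `t_env` would come from `StreamTableEnvironment.create(...)`; `execute_sql` is the Flink 1.11 entry point for running DDL.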

Slide 26

DEMO Step 2. Write and register a UDF to clean and classify the messages: first the text pre-processing and classification logic, then the UDF registration. How would this perform if it were defined as a Pandas UDF?
Learn more: NLP with LDA - Analyzing Topics in the Enron Email dataset
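The body of such a UDF is plain Python. In this sketch, a keyword lookup stands in for the demo's trained LDA model, and all topic names are invented:

```python
import re

# Stand-in for the demo's (awfully) trained LDA model: a simple keyword
# lookup. The real demo loads a serialized LDA model instead; the topic
# names below are invented for illustration.
TOPIC_KEYWORDS = {
    "state": "State & Checkpointing",
    "checkpoint": "State & Checkpointing",
    "sql": "Table API & SQL",
    "kafka": "Connectors",
}

def classify(body: str) -> str:
    """Lower-case and tokenize a message, then map the first matching
    keyword to a topic; fall back to 'Other'."""
    tokens = re.findall(r"[a-z]+", (body or "").lower())
    for token in tokens:
        if token in TOPIC_KEYWORDS:
            return TOPIC_KEYWORDS[token]
    return "Other"
```

To make this callable from SQL, PyFlink 1.11 wraps the function and registers it under a name, roughly: `t_env.create_temporary_function("classify", udf(classify, result_type=DataTypes.STRING()))`.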

Slide 27

DEMO Step 3. Build the query that will insert your results into the sink table, using either SQL or the Table API.
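In SQL form, the query could be sketched as below. The identifiers (`classify`, `mailing_list`, `topic_counts`) are hypothetical, standing in for a registered UDF and the registered source and sink tables:

```python
# Sketch of Step 3: count messages per predicted topic and continuously
# write the result to the sink table. All identifiers are hypothetical.
INSERT_QUERY = """
INSERT INTO topic_counts
SELECT classify(body) AS topic, COUNT(*) AS cnt
FROM mailing_list
GROUP BY classify(body)
"""

def run_pipeline(t_env):
    # execute_sql submits the INSERT job; in streaming mode it keeps
    # running and updates the sink as new messages arrive.
    return t_env.execute_sql(INSERT_QUERY)
```

The Table API formulation would express the same logic as method calls, e.g. starting from `t_env.from_path("mailing_list")` and chaining `group_by`/`select`.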

Slide 28

DEMO Step 4. Submit the job (and dependencies) to the cluster:

docker-compose exec jobmanager ./bin/flink run -py /opt/pyflink-ff2020/pipeline.py \
    --pyArchives /opt/pyflink-ff2020/lda_model.zip#model \
    --pyFiles /opt/pyflink-ff2020/tokenizer.py -d

--pyArchives ships the LDA model and dictionary; --pyFiles ships the pre-processing class. Monitor the job in the Flink Web UI.

Slide 29

DEMO Step 5. Visualize in Superset!

Slide 30

PyFlink in a Nutshell*
● Native SQL integration
● Unified APIs for batch and streaming
● Support for a large set of operations (incl. complex joins, windowing, pattern matching/CEP)

* As of Flink 1.11, only the Table API is exposed through PyFlink. The low-level DataStream API is going to be supported in 1.12.

Slide 31

PyFlink in a Nutshell
● Execution: streaming and batch
● UDF support: Python UDFs and Pandas UDFs (UDTF and UDAF support coming in Flink 1.12)
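The Pandas UDF mentioned above differs from a plain Python UDF in that it receives whole batches as pandas.Series rather than one row at a time, so vectorized logic applies. A minimal sketch (the function and its registration are illustrative, not from the talk):

```python
import pandas as pd

# A vectorized function suitable for a Pandas UDF: it operates on a whole
# batch (pd.Series) at once instead of row-at-a-time, which avoids
# per-record Python call overhead.
def min_max_normalize(scores: pd.Series) -> pd.Series:
    return (scores - scores.min()) / (scores.max() - scores.min())
```

In Flink 1.11 such a function would be declared with something along the lines of `udf(min_max_normalize, result_type=DataTypes.DOUBLE(), udf_type="pandas")` and registered like any other UDF.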

Slide 32

PyFlink in a Nutshell
● Execution: streaming and batch
● UDF support: Python UDFs and Pandas UDFs (UDTF and UDAF support coming in Flink 1.12)
● Native connectors and formats: FileSystems, Apache Kafka, Kinesis, HBase, JDBC, Elasticsearch

Slide 33

PyFlink in a Nutshell
● Execution: streaming and batch
● UDF support: Python UDFs and Pandas UDFs (UDTF and UDAF support coming in Flink 1.12)
● Native connectors and formats: FileSystems, Apache Kafka, Kinesis, HBase, JDBC, Elasticsearch
● ML library (WIP, FLIP-39)
● Notebooks: Apache Zeppelin

Slide 34

Thank you, Data Science UA!
Follow me on Twitter: @morsapaes
Learn more about Flink: https://flink.apache.org/
© 2020 Ververica