The Mojo 🔥 language: A new hope for AI?

A presentation at BreizhCamp in June 2024 in Rennes, France by Jean-Luc Tromparent

Slide 1

Mojo 🔥 A New Hope for AI?

Slide 2

Mojo 🔥 The next AI language?

Slide 3

Slide 4

AI in 2024 · Value Proposition · Use Case · Conclusion

Slide 5

What is the current language of AI?

Slide 6

ChatGPT Advisor: AI, or artificial intelligence, can be developed and programmed in various programming languages. Some of the languages most commonly used to build AI systems include Python, Java, C++, and R, among others. Python is particularly popular in the AI field because of its simplicity, its flexibility, its wide range of AI-focused libraries and frameworks (such as TensorFlow, PyTorch, scikit-learn, etc.), and its active developer community.

Slide 7

StackOverflow Advisor

Slide 8

StackOverflow Advisor

Slide 9

StackOverflow Advisor

Slide 10

StackOverflow Advisor

Slide 11

StackOverflow Advisor

Slide 12

How to choose a framework? https://www.youtube.com/watch?v=k4Tfg6-7cyQ

Slide 13

AI Programming Landscape: Model → System → Hardware (CUDA, OpenCL, ROCm)

Slide 14

New Kid in Town! Mojo 🔥 02/05/2023

Slide 15

Mojo is born! https://www.modular.com/blog/the-future-of-ai-depends-on-modularity

Slide 16

Mojo is born! Modular Accelerated eXecution platform https://www.modular.com/blog/a-unified-extensible-platform-to-superpower-your-ai

Slide 17

Mojo is born!
• Member of the Python family (a superset of Python)
• Supports modern chip architectures (thanks to MLIR)
• Predictable low-level performance
https://www.modular.com/blog/a-unified-extensible-platform-to-superpower-your-ai

Slide 18

Mojo is born! Chris Lattner
2000 beginning of the LLVM project
2003 release of LLVM 1.0
2007 release of Clang 1.0
2008 Xcode 3.1
2011 Clang replaces gcc on macOS
2014 release of Swift 1.0
2018 beginning of MLIR
2022 creation of the Modular company
2023 🔥
https://www.nondot.org/sabre/

Slide 19

Mojo is blazing fast! https://www.modular.com/blog/how-mojo-gets-a-35-000x-speedup-over-python-part-1

Slide 20

Mojo is blazing fast! Changelog
2022/01 incorporation
2022/07 seed round ($30M)
2023/05 announcement of MAX & Mojo
2023/08 Series B ($100M)
2023/09 release of Mojo 0.2.1
2023/10 release of Mojo 0.4.0
…
2024/01 release of Mojo 0.7.0
2024/02 release of MAX & Mojo 24.1
2024/06 release of MAX & Mojo 24.4
https://www.modular.com/blog/how-mojo-gets-a-35-000x-speedup-over-python-part-1

Slide 21

Mojo is blazing fast! https://www.modular.com/blog/how-mojo-gets-a-35-000x-speedup-over-python-part-1

Slide 22

Performance matters!
Performance matters:
• for our users

Slide 23

Performance matters! Your resume is being processed

Slide 24

Performance matters!
Performance matters:
• for our users
• for (artificial) intelligence

Slide 25

Performance matters!

Slide 26

Performance matters!
Performance matters:
• for our users
• for (artificial) intelligence
• for the planet

Slide 27

Performance matters! https://haslab.github.io/SAFER/scp21.pdf

Slide 28

Meetup python-rennes https://www.meetup.com/fr-FR/python-rennes/ https://www.youtube.com/watch?v=gE6HUsmh554

Slide 29

Performance matters!
Performance matters:
• for our users
• for (artificial) intelligence
• for the planet

Slide 30

It's demo time! Laplacian filter (edge detection)

Slide 31

Edge Detection

Slide 32

Edge Detection: convolve the image with a kernel
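Before reaching for libraries, the filter itself is easy to state: slide the 3×3 kernel over every pixel and accumulate the products. A minimal pure-Python sketch of that convolution (the 3×3 Laplacian kernel and the clamping to [0, 255] follow the demo's recipe; the names here are illustrative):

```python
# Naive 2D convolution with a 3x3 Laplacian kernel (pure Python sketch).
LAPLACIAN = [
    [0,  1, 0],
    [1, -4, 1],
    [0,  1, 0],
]

def convolve2d(img, kernel):
    """Convolve img (a list of rows) with a 3x3 kernel, skipping the borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Element-wise product of the 3x3 neighbourhood and the kernel.
            acc = 0.0
            for k in range(3):
                for l in range(3):
                    acc += img[y - 1 + k][x - 1 + l] * kernel[k][l]
            # Clamp to the valid pixel range.
            out[y][x] = min(255.0, max(0.0, acc))
    return out
```

A flat region produces 0 (the kernel sums to zero), while a sharp intensity step lights up: that is the edge map.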

Slide 33

Hop hop hop 2D Convolution Animation — Michael Plotke, CC BY-SA 3.0 via Wikimedia Commons

Slide 34

Edge Detection 2D Convolution Animation — Michael Plotke, CC BY-SA 3.0 via Wikimedia Commons

Slide 35

Edge Detection 2D Convolution Animation — Michael Plotke, CC BY-SA 3.0 via Wikimedia Commons

Slide 36

Recap
naïve version: 500 ms
numpy mul: 250 ms (×2)
numpy+numba: 50 ms (×10)
opencv: 0.5 ms (×1000)
And now in Mojo?
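The "numpy mul" step above comes from replacing the two inner Python loops with whole-array operations: each of the nine kernel taps multiplies one shifted view of the image in a single vectorized step. A hedged sketch of that idea (not the benchmark code itself; names are illustrative):

```python
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float32)

def convolve_numpy(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Vectorized 3x3 convolution: one shifted-slice multiply per kernel tap."""
    h, w = img.shape
    acc = np.zeros((h - 2, w - 2), dtype=np.float32)
    for k in range(3):
        for l in range(3):
            # Each tap multiplies a whole shifted view of the image at once.
            acc += img[k:h - 2 + k, l:w - 2 + l] * kernel[k, l]
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = np.clip(acc, 0, 255)  # clamp interior, keep borders at 0
    return out
```

Nine vectorized multiplies instead of nine multiplies per pixel: that is where the ×2 over the naïve loop comes from, with numba and OpenCV pushing further by compiling the whole loop nest.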

Slide 37

And now in Mojo! https://www.modular.com/blog/implementing-numpy-style-matrix-slicing-in-mojo

Slide 38

Power! Unlimited Power! Credits to Georges L. and Corentin R.

Slide 39

“The greatest teacher, failure is”

Slide 40

It's demo time! Let's optimize!

Slide 41

SISD Architecture

Slide 42

SIMD Architecture
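The difference between the two architectures can be felt even from Python: NumPy below stands in for SIMD execution, where one instruction updates several lanes at once (the lane count nelts is an illustrative stand-in for the hardware's SIMD width):

```python
import numpy as np

nelts = 8  # stand-in for the hardware SIMD width (e.g. 8 float32 lanes in AVX2)

a = np.arange(nelts, dtype=np.float32)
b = np.full(nelts, 2.0, dtype=np.float32)

# SISD: one scalar multiply per loop iteration.
sisd = np.empty(nelts, dtype=np.float32)
for i in range(nelts):
    sisd[i] = a[i] * b[i]

# SIMD-style: one vector operation updates all nelts lanes at once.
simd = a * b

assert np.array_equal(sisd, simd)  # same result, nelts times fewer "instructions"
```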

Slide 43

Algorithm vectorization

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        for x in range(1, img.width - 1):
            # For each pixel, compute the element-wise product
            var acc: Float32 = 0
            for k in range(3):
                for l in range(3):
                    acc += img[y - 1 + k, x - 1 + l] * kernel[k, l]
            # Normalize the result
            result[y, x] = min(255, max(0, acc))
    return result

Slide 44

Algorithm vectorization

alias nelts = simdwidthof[DType.float32]()

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        for x in range(1, img.width - 1):
            # For each pixel, compute the element-wise product
            var acc: Float32 = 0
            for k in range(3):
                for l in range(3):
                    acc += img[y - 1 + k, x - 1 + l] * kernel[k, l]
            # Normalize the result
            result[y, x] = min(255, max(0, acc))
    return result

Slide 45

Algorithm vectorization

alias nelts = simdwidthof[DType.float32]()

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        for x in range(1, img.width - 1, nelts):
            # For each pixel, compute the element-wise product
            var acc: Float32 = 0
            for k in range(3):
                for l in range(3):
                    acc += img[y - 1 + k, x - 1 + l] * kernel[k, l]
            # Normalize the result
            result[y, x] = min(255, max(0, acc))
    return result

Slide 46

Algorithm vectorization

alias nelts = simdwidthof[DType.float32]()

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        for x in range(1, img.width - 1, nelts):
            # For each pixel, compute the element-wise product
            # var acc: Float32 = 0
            var acc: SIMD[DType.float32, nelts] = 0
            for k in range(3):
                for l in range(3):
                    acc += img[y - 1 + k, x - 1 + l] * kernel[k, l]
            # Normalize the result
            result[y, x] = min(255, max(0, acc))
    return result

Slide 47

Algorithm vectorization

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        for x in range(1, img.width - 1, nelts):
            # For each pixel, compute the element-wise product
            # var acc: Float32 = 0
            var acc: SIMD[DType.float32, nelts] = 0
            for k in range(3):
                for l in range(3):
                    # acc += img[y - 1 + k, x - 1 + l] * kernel[k, l]
                    acc += img.simd_load[nelts](y - 1 + k, x - 1 + l) * kernel[k, l]
            # Normalize the result
            # result[y, x] = min(255, max(0, acc))
            result.simd_store[nelts](y, x, min(255, max(0, acc)))
    return result

Slide 48

Algorithm vectorization

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        for x in range(1, img.width - 1, nelts):
            # For each pixel, compute the element-wise product
            # var acc: Float32 = 0
            var acc: SIMD[DType.float32, nelts] = 0
            for k in range(3):
                for l in range(3):
                    # acc += img[y - 1 + k, x - 1 + l] * kernel[k, l]
                    acc += img.simd_load[nelts](y - 1 + k, x - 1 + l) * kernel[k, l]
            # Normalize the result
            # result[y, x] = min(255, max(0, acc))
            result.simd_store[nelts](y, x, min(255, max(0, acc)))
        # Handle remaining elements with scalars.
        for n in range(nelts * ((img.width - 1) // nelts), img.width - 1):
            var acc: Float32 = 0
            for k in range(3):
                for l in range(3):
                    acc += img[y - 1 + k, n - 1 + l] * kernel[k, l]
            result[y, n] = min(255, max(0, acc))
    return result
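The pattern on this slide, a vector body that advances nelts elements per step plus a scalar tail for the remainder, is the classic way to vectorize a loop whose length is not a multiple of the vector width. A small Python analogue (nelts and sum_chunked are illustrative, not from the demo):

```python
nelts = 4  # illustrative vector width

def sum_chunked(data):
    """Sum in chunks of nelts, then handle the scalar remainder."""
    total = 0.0
    n_vec = (len(data) // nelts) * nelts  # largest multiple of nelts
    # "Vector" body: process nelts elements per step.
    for i in range(0, n_vec, nelts):
        total += sum(data[i:i + nelts])  # stands in for one SIMD add + reduce
    # Scalar tail: the remaining len(data) % nelts elements.
    for i in range(n_vec, len(data)):
        total += data[i]
    return total
```

Getting the chunk boundary wrong (note the parentheses in `(len(data) // nelts) * nelts`) either skips elements or reads past the end, which is exactly the kind of off-by-one the demo ran into.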

Slide 49

Algorithm vectorization

alias nelts = simdwidthof[DType.float32]()

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        @parameter
        fn dot[nelts: Int](x: Int):
            # For each pixel, compute the element-wise product
            var acc: SIMD[DType.float32, nelts] = 0
            for k in range(3):
                for l in range(3):
                    acc += img.simd_load[nelts](y - 1 + k, x - 1 + l) * kernel[k, l]
            # Normalize the result
            result.simd_store[nelts](y, x, min(255, max(0, acc)))
        vectorize[dot, nelts](img.width - 2)
    return result

Slide 50

Algorithm vectorization

alias nelts = simdwidthof[DType.float32]()

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        @parameter
        fn dot[nelts: Int](x: Int):
            # For each pixel, compute the element-wise product
            var acc: SIMD[DType.float32, nelts] = 0
            for k in range(3):
                for l in range(3):
                    acc += img.simd_load[nelts](y - 1 + k, x - 1 + l) * kernel[k, l]
            # Normalize the result
            result.simd_store[nelts](y, x, min(255, max(0, acc)))
        vectorize[dot, nelts](img.width - 2)
    return result

Slide 51

Algorithm vectorization

alias nelts = simdwidthof[DType.float32]()

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        @parameter
        fn dot[nelts: Int](x: Int):
            # For each pixel, compute the element-wise product
            var acc: SIMD[DType.float32, nelts] = 0
            for k in range(3):
                for l in range(3):
                    acc += img.simd_load[nelts](y - 1 + k, x + l) * kernel[k, l]
            # Normalize the result
            result.simd_store[nelts](y, x + 1, min(255, max(0, acc)))
        vectorize[dot, nelts](img.width - 2)
    return result

Slide 52

Recap
• Far from stable
• AOT or JIT compilation
• Python-friendly but not Python
• Dynamic Python vs static Mojo
• Python interoperability
• Predictable behavior with ownership semantics
• Low-level optimization
• Blazingly fast

Slide 53

Mojo 🔥 The next AI Language?

Slide 54

Conclusion
• Python is not dead yet! But it moves slowly.
• This is a great team! Will they be able to deploy their platform strategy?
• Will they be able to unite a community? To be open source or not to be.

Slide 55

Jean-Luc Tromparent, Principal Engineer @
Thank you!
https://linkedin.com/in/jltromparent
https://github.com/jiel/laplacian_filters_benchmark
https://noti.st/jlt/qheM0m
👉 I need your feedback 👈