Mojo, the future language of AI?

A presentation at Salon de la DATA in September 2024 in Nantes, France by Jean-Luc Tromparent

Slide 1

Mojo: The next AI language? Jean-Luc TROMPARENT, HELLOWORK

Slide 2

Use Case Value Proposition

Slide 3

What is the current language of AI?

Slide 4

ChatGPT Advisor: AI, or artificial intelligence, can be developed and programmed in various programming languages. Some of the languages most commonly used to build AI systems include Python, Java, C++, and R, among others. Python is particularly popular in the AI field because of its simplicity, its flexibility, its wide range of AI-focused libraries and frameworks (such as TensorFlow, PyTorch, scikit-learn, etc.), and its active developer community.

Slide 5

StackOverflow Advisor

Slide 6

StackOverflow Advisor

Slide 7

StackOverflow Advisor

Slide 8

StackOverflow Advisor

Slide 9

StackOverflow Advisor

Slide 10

AI Programming Landscape: Model / System / Hardware (CUDA, OpenCL, ROCm)

Slide 11

New kid in town! Mojo, 02/05/2023

Slide 12

Mojo is born! https://www.modular.com/blog/the-future-of-ai-depends-on-modularity

Slide 13

Mojo is born! Modular Accelerated eXecution platform https://www.modular.com/blog/a-unified-extensible-platform-to-superpower-your-ai

Slide 14

Mojo is born!
• Member of the Python family (a superset of Python)
• Supports modern chip architectures (thanks to MLIR)
• Predictable low-level performance
https://www.modular.com/blog/a-unified-extensible-platform-to-superpower-your-ai

Slide 15

Mojo is born! Chris Lattner
2000: beginning of the LLVM project
2003: release of LLVM 1.0
2007: release of Clang 1.0
2008: Xcode 3.1
2011: Clang replaces GCC on macOS
2014: release of Swift 1.0
2018: beginning of MLIR
2022: creation of the Modular company
2023: Mojo
https://www.nondot.org/sabre/

Slide 16

Mojo is blazing fast! https://www.modular.com/blog/how-mojo-gets-a-35-000x-speedup-over-python-part-1

Slide 17

Mojo is blazing fast! Changelog
2022/01: incorporation
2022/07: seed round ($30M)
2023/05: MAX & Mojo announced
2023/08: Series B ($100M)
2023/09: Mojo 0.2.1 released
2023/10: Mojo 0.4.0 released
...
2024/01: Mojo 0.7.0 released
2024/02: MAX & Mojo 24.1 released
2024/06: MAX & Mojo 24.4 released
https://www.modular.com/blog/how-mojo-gets-a-35-000x-speedup-over-python-part-1

Slide 18

Mojo is blazing fast! https://www.modular.com/blog/how-mojo-gets-a-35-000x-speedup-over-python-part-1

Slide 19

Performance matters!
Performance matters:
• for our users

Slide 20

Performance matters! Your resume is being processed

Slide 21

Performance matters!
Performance matters:
• for our users
• for (artificial) intelligence

Slide 22

Performance matters!

Slide 23

Performance matters!
Performance matters:
• for our users
• for (artificial) intelligence
• for the planet

Slide 24

Performance matters! https://haslab.github.io/SAFER/scp21.pdf

Slide 25

Meetup python-rennes https://www.meetup.com/fr-FR/python-rennes/ https://www.youtube.com/watch?v=gE6HUsmh554

Slide 26

Performance matters!
Performance matters:
• for our users
• for (artificial) intelligence
• for the planet

Slide 27

It's demo time! Laplacian filter (edge detection)

Slide 28

Edge Detection

Slide 29

Edge detection: kernel, convolve
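As a sketch of what this demo step computes, here is a plain NumPy version of a 3×3 edge-detection convolution. The Laplacian kernel values below are a common choice and an assumption, not necessarily the talk's exact kernel:

```python
import numpy as np

# A common 3x3 Laplacian kernel for edge detection (assumed; the
# talk's exact kernel may differ).
KERNEL = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]], dtype=np.float32)

def convolve_naive(img, kernel=KERNEL):
    """Naive 2D convolution, skipping the 1-pixel border and
    clamping results to the 0..255 range."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0.0
            for k in range(3):
                for l in range(3):
                    acc += img[y - 1 + k, x - 1 + l] * kernel[k, l]
            out[y, x] = min(255.0, max(0.0, acc))
    return out
```

On a constant image the Laplacian response is zero everywhere; it responds only where intensity changes, which is why it highlights edges.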

Slide 30

Hop hop hop! 2D Convolution Animation — Michael Plotke, CC BY-SA 3.0, via Wikimedia Commons

Slide 31

Edge Detection. 2D Convolution Animation — Michael Plotke, CC BY-SA 3.0, via Wikimedia Commons

Slide 32

Edge Detection. 2D Convolution Animation — Michael Plotke, CC BY-SA 3.0, via Wikimedia Commons

Slide 33

It's demo time! Python implementations

Slide 34

Recap:
• naïve version: 500 ms
• numpy mul: 250 ms (×2)
• numpy + numba: 50 ms (×10)
• opencv: 0.5 ms (×1000)
And now in Mojo?
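The "numpy mul" speedup likely comes from replacing the per-pixel Python loop with one multiply-add per kernel tap, each operating on a whole image slice at once. This is my reconstruction of that idea, not the talk's exact code:

```python
import numpy as np

def convolve_numpy(img, kernel):
    """Vectorized 3x3 convolution: for each of the 9 kernel taps,
    multiply a shifted view of the whole image and accumulate."""
    img = img.astype(np.float32)
    h, w = img.shape
    acc = np.zeros((h - 2, w - 2), dtype=np.float32)
    for k in range(3):
        for l in range(3):
            # Shifted (h-2, w-2) view of the image for tap (k, l)
            acc += img[k:h - 2 + k, l:w - 2 + l] * kernel[k, l]
    out = np.zeros((h, w), dtype=np.float32)
    out[1:-1, 1:-1] = np.clip(acc, 0, 255)  # clamp interior, keep border at 0
    return out
```

Only 9 NumPy operations run regardless of image size, which is why this roughly halves the naïve version's runtime in the recap above.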

Slide 35

Mojo : let’s create a Matrix

Slide 36

Mojo : module

Slide 37

Mojo : naive.mojo

Slide 38

Mojo : naive.mojo

Slide 39

Mojo : interoperability with Python

Slide 40

Mojo : interoperability with Python

Slide 41

Mojo : interoperability with Python

Slide 42

Mojo : loading PGM picture

Slide 43

Mojo : loading PGM picture
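The PGM-loading slides have no captured code; as an illustration of the file format being parsed, here is a minimal Python loader for binary (P5) PGM images. The function name and structure are mine, not the talk's:

```python
import numpy as np

def load_pgm(path):
    """Minimal loader for binary (P5) PGM images.
    Assumes maxval <= 255 (one byte per pixel)."""
    with open(path, "rb") as f:
        data = f.read()
    # Header: magic number, width, height, maxval (whitespace-separated,
    # possibly interleaved with '#' comment lines), then raw pixel bytes.
    tokens = []
    i = 0
    while len(tokens) < 4:
        while data[i:i + 1].isspace():          # skip whitespace
            i += 1
        if data[i:i + 1] == b"#":               # skip comment lines
            while data[i:i + 1] not in (b"\n", b""):
                i += 1
            continue
        start = i
        while not data[i:i + 1].isspace():      # read one token
            i += 1
        tokens.append(data[start:i])
    magic, w, h, maxval = tokens[0], int(tokens[1]), int(tokens[2]), int(tokens[3])
    assert magic == b"P5" and maxval <= 255
    # Exactly one whitespace byte separates maxval from the pixel data.
    pixels = np.frombuffer(data, dtype=np.uint8, count=w * h, offset=i + 1)
    return pixels.reshape(h, w)
```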

Slide 44

Mojo : naive.mojo

Slide 45

Mojo : naive.mojo

Slide 46

It's demo time! Let's optimize!

Slide 47

SISD Architecture

Slide 48

SIMD Architecture

Slide 49

Algorithm vectorization

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        for x in range(1, img.width - 1):
            # For each pixel, compute the product element-wise
            var acc: Float32 = 0
            for k in range(3):
                for l in range(3):
                    acc += img[y - 1 + k, x - 1 + l] * kernel[k, l]
            # Normalize the result
            result[y, x] = min(255, max(0, acc))
    return result

Slide 50

Algorithm vectorization

alias nelts = simdwidthof[DType.float32]()

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        for x in range(1, img.width - 1):
            # For each pixel, compute the product element-wise
            var acc: Float32 = 0
            for k in range(3):
                for l in range(3):
                    acc += img[y - 1 + k, x - 1 + l] * kernel[k, l]
            # Normalize the result
            result[y, x] = min(255, max(0, acc))
    return result

Slide 51

Algorithm vectorization

alias nelts = simdwidthof[DType.float32]()

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        for x in range(1, img.width - 1, nelts):
            # For each pixel, compute the product element-wise
            var acc: Float32 = 0
            for k in range(3):
                for l in range(3):
                    acc += img[y - 1 + k, x - 1 + l] * kernel[k, l]
            # Normalize the result
            result[y, x] = min(255, max(0, acc))
    return result

Slide 52

Algorithm vectorization

alias nelts = simdwidthof[DType.float32]()

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        for x in range(1, img.width - 1, nelts):
            # For each pixel, compute the product element-wise
            # var acc: Float32 = 0
            var acc: SIMD[DType.float32, nelts] = 0
            for k in range(3):
                for l in range(3):
                    acc += img[y - 1 + k, x - 1 + l] * kernel[k, l]
            # Normalize the result
            result[y, x] = min(255, max(0, acc))
    return result

Slide 53

Algorithm vectorization

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        for x in range(1, img.width - 1, nelts):
            # For each pixel, compute the product element-wise
            # var acc: Float32 = 0
            var acc: SIMD[DType.float32, nelts] = 0
            for k in range(3):
                for l in range(3):
                    # acc += img[y - 1 + k, x - 1 + l] * kernel[k, l]
                    acc += img.simd_load[nelts](y - 1 + k, x - 1 + l) * kernel[k, l]
            # Normalize the result
            # result[y, x] = min(255, max(0, acc))
            result.simd_store[nelts](y, x, min(255, max(0, acc)))
    return result
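In Python terms, the SIMD accumulator above can be mimicked with NumPy slices: keep a vector of nelts partial sums for nelts adjacent output pixels. This is an illustrative sketch (nelts=4 and the helper name are my choices), assuming the interior width is a multiple of nelts:

```python
import numpy as np

def convolve_chunked(img, kernel, nelts=4):
    """Process nelts adjacent output pixels per inner iteration,
    mimicking a SIMD accumulator. Assumes (width - 2) % nelts == 0."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(1, h - 1):
        for x in range(1, w - 1, nelts):
            acc = np.zeros(nelts, dtype=np.float32)  # SIMD-like accumulator
            for k in range(3):
                for l in range(3):
                    # One tap applied to nelts contiguous pixels at once
                    acc += img[y - 1 + k, x - 1 + l : x - 1 + l + nelts] * kernel[k, l]
            out[y, x : x + nelts] = np.clip(acc, 0, 255)
    return out
```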

Slide 54

Algorithm vectorization

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        for x in range(1, img.width - 1, nelts):
            # For each pixel, compute the product element-wise
            # var acc: Float32 = 0
            var acc: SIMD[DType.float32, nelts] = 0
            for k in range(3):
                for l in range(3):
                    # acc += img[y - 1 + k, x - 1 + l] * kernel[k, l]
                    acc += img.simd_load[nelts](y - 1 + k, x - 1 + l) * kernel[k, l]
            # Normalize the result
            # result[y, x] = min(255, max(0, acc))
            result.simd_store[nelts](y, x, min(255, max(0, acc)))
        # Handle remaining elements with scalars, starting at the first
        # column not covered by a full SIMD chunk.
        for n in range(1 + nelts * ((img.width - 2) // nelts), img.width - 1):
            var acc: Float32 = 0
            for k in range(3):
                for l in range(3):
                    acc += img[y - 1 + k, n - 1 + l] * kernel[k, l]
            result[y, n] = min(255, max(0, acc))
    return result
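The remainder bound in this slide is easy to get wrong (operator precedence makes a naive `width - 1 // nelts` parse as `width - (1 // nelts)`). A quick Python check of the arithmetic, with a helper name and indexing convention of my own choosing:

```python
def remainder_start(width, nelts):
    """First interior column not covered by a full SIMD chunk, when
    x runs over columns 1 .. width-2 in steps of nelts."""
    interior = width - 2        # number of interior columns
    full = interior // nelts    # number of complete nelts-wide chunks
    return 1 + full * nelts
```

Any column from this index up to width-2 must be handled by the scalar fallback loop.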

Slide 55

Algorithm vectorization

alias nelts = simdwidthof[DType.float32]()

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        @parameter
        fn dot[nelts: Int](x: Int):
            # For each pixel, compute the product element-wise
            var acc: SIMD[DType.float32, nelts] = 0
            for k in range(3):
                for l in range(3):
                    acc += img.simd_load[nelts](y - 1 + k, x - 1 + l) * kernel[k, l]
            # Normalize the result
            result.simd_store[nelts](y, x, min(255, max(0, acc)))
        vectorize[dot, nelts](img.width - 2)
    return result

Slide 56

Algorithm vectorization

alias nelts = simdwidthof[DType.float32]()

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        @parameter
        fn dot[nelts: Int](x: Int):
            # For each pixel, compute the product element-wise
            var acc: SIMD[DType.float32, nelts] = 0
            for k in range(3):
                for l in range(3):
                    acc += img.simd_load[nelts](y - 1 + k, x - 1 + l) * kernel[k, l]
            # Normalize the result
            result.simd_store[nelts](y, x, min(255, max(0, acc)))
        vectorize[dot, nelts](img.width - 2)
    return result

Slide 57

Algorithm vectorization

alias nelts = simdwidthof[DType.float32]()

fn naive(img: Matrix[DType.float32], kernel: Matrix[DType.float32]) -> Matrix[DType.float32]:
    var result = Matrix[DType.float32](img.height, img.width)
    # Loop through each pixel in the image
    # but skip the outer edges of the image
    for y in range(1, img.height - 1):
        @parameter
        fn dot[nelts: Int](x: Int):
            # For each pixel, compute the product element-wise
            var acc: SIMD[DType.float32, nelts] = 0
            for k in range(3):
                for l in range(3):
                    acc += img.simd_load[nelts](y - 1 + k, x + l) * kernel[k, l]
            # Normalize the result
            result.simd_store[nelts](y, x + 1, min(255, max(0, acc)))
        vectorize[dot, nelts](img.width - 2)
    return result

Slide 58

Recap
• Far from stable
• AOT or JIT compilation
• Python-friendly, but not Python
• Dynamic Python vs static Mojo
• Python interoperability
• Predictable behavior with ownership semantics
• Low-level optimization
• Blazingly fast

Slide 59

Mojo: The next AI language?

Slide 60

Conclusion
• Python is not dead yet! But it moves slowly
• This is a great team! Will they be able to deploy their platform strategy?
• Will they be able to unite a community? To be open-source or not to be

Slide 61

Jean-Luc Tromparent, Principal Engineer @ HELLOWORK
https://linkedin.com/in/jltromparent
Thank you!
https://github.com/jiel/laplacian_filters_benchmark
https://noti.st/jlt/fBMjCq