WebAssembly: To the Browser and Beyond!

A presentation at performance.now() in November 2019 in Amsterdam, Netherlands by Patrick Hamann

Slide 1

WebAssembly: To the Browser and Beyond!

Hello, Amsterdam! Firstly let me thank the organisers for inviting me to come and speak at this conference. I’m both honoured and excited to be here and share this stage with so many other wonderful humans. My name is Patrick Hamann, I’m a Principal Engineer at Fastly in the office of the CTO. You can catch me on the Twitters @patrickhamann, but please come and talk to me in the break or at the party if you want to chat more about anything.

Slide 2

Agenda

So I want to talk to you about one of my newly found passions, WebAssembly, which we’ve been exploring for the last two years in my team!
- I want to discuss why we even need WebAssembly
- How you can use it today (and take advantage of new languages such as Rust)
- Some practical use cases you can use it for
- Then we’ll look at recent advancements in moving WebAssembly beyond the browser
- And finally we’ll look at what the future holds for WebAssembly

Slide 3

Let’s talk about the Web Platform!

Before we discuss what WebAssembly is, let’s first look at why it exists. And to do that we need to take a look back at the history of the web platform.

Slide 4

So for the last 20 years, the Web Platform has largely consisted of three technologies: HTML, CSS and JavaScript. Initially we had HTML to describe documents and CSS for styles. Then, as the web grew and the demand for interactivity increased, we realised we needed a “glue language” that was easy for programmers to use, could be embedded within web browsers, and could be written directly in the web page markup. Therefore the high-level language JavaScript was created. It’s a true testament to the design of these technologies and languages that they’ve stood the test of time. Yes, we had the occasional fling with the likes of Adobe Flash or Java applets that attempted to change how we program for the web, but none of them have survived.

Slide 5

Later, with the birth of AJAX, we were able to request fragments of the document, or even just data payloads, from our servers, stitch these responses together and inject them into the DOM dynamically. This caused a paradigm shift from thick servers and thin clients to thick clients and thin servers. With the likes of React and other JavaScript frameworks, we now live in a world of single-page web applications where most of our heavy lifting and compute is happening on the client.

Slide 6

Is this really the most efficient model?

But is this really the best model? Or rather, is sending megabytes of JavaScript to the client really the most efficient use of resources? Is the client the best location for all of our logic to run? And what if there are some things we run on the server that would actually be better running directly on the client, to eliminate requests and reduce latency?

Slide 7

Challenges with JavaScript and the Web Platform

JavaScript was designed to be an easy-to-use, safe, high-level language (and by that I mean it can’t manage its own memory access or make system calls). However, this also means we can’t optimise it that easily to run efficiently on our hardware. As it’s written and distributed as plain text and interpreted by the runtime using just-in-time compilation, we pass around large files that are expensive to parse and execute. And while we have a single, very large ecosystem, we can’t easily benefit from tools built in other language ecosystems.

Slide 8

That statement sounded like I’m about to start hating on JavaScript. Which I’m not. JavaScript is the most ubiquitous programming language to have ever existed; it’s helped the web succeed and the web has helped it succeed. However, on the web all we have is JavaScript, and we realised we may need something else to augment the things that JS was never designed to do. You don’t have the choice of dropping down to a lower-level language to solve specific problems that JavaScript can’t, or to optimise certain hot paths of your applications.

Slide 9

So what can we do about this? We can’t implement C++ for the web or introduce runtimes for every popular language within browsers; that would be a maintenance nightmare and a security nightmare, and it would slow down the deployment of features. Simply put, it just wouldn’t scale.

Slide 10

So in an attempt to solve this problem, Mozilla created Emscripten, a C/C++-to-JavaScript compiler. This allows anyone to compile an existing codebase to JavaScript, but initially it resulted in quite large JavaScript bundles. They then started to notice patterns in the generated JavaScript, especially when it came to managing numbers and byte arrays, which led them to create asm.js, a JavaScript subset that Emscripten would compile programs to. This subset could be efficiently optimised by the JavaScript runtime and drastically increased the speed of the programs. The most interesting thing about Emscripten and asm.js is how they managed the memory of the C programs: the compiled JS would use a single array buffer to store the memory for the entire program. Other browser vendors then started to implement asm.js optimisations themselves.
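To make that concrete, here is a tiny, hand-written sketch in the style of asm.js rather than real Emscripten output: every value is coerced to an integer with `| 0`, and all of the program’s memory lives in a single typed-array heap that the host passes in.

```js
// A hand-written sketch in the asm.js style, not real Emscripten output.
function MyAsmModule(stdlib, foreign, heap) {
  "use asm";
  var HEAP32 = new stdlib.Int32Array(heap); // the single array buffer holding all memory

  function add(a, b) {
    a = a | 0;            // parameter type annotations
    b = b | 0;
    return (a + b) | 0;   // return type annotation
  }

  return { add: add };
}

// The host allocates the heap and links the module.
var heap = new ArrayBuffer(0x10000); // 64 KiB
var exports = MyAsmModule(globalThis, {}, heap);
console.log(exports.add(2, 3)); // 5
```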

Slide 11

This is where WebAssembly comes in. Vendors started to see the power in asm.js; however, it was still JavaScript and could only be optimised so much. So we needed a new compilation target like asm.js. In 2015 WebAssembly was born to solve these problems, combining some of the ideas from asm.js and Google’s Native Client (NaCl). It’s now an official standard with a working group and all the normal things you’d expect from a Web Platform feature.

Slide 12

What is WebAssembly?

So now we understand how WebAssembly came to exist, let’s look at what it is. Many people think it’s just C++ or C, but it’s so much more than that!

Slide 13

WebAssembly

WebAssembly https://webassembly.org It’s easier to first talk about what it is not. It’s not a programming language that you’d write by hand or view in plain text, and it’s also not C++. Most importantly, it’s another tool in the toolbox to solve problems on the web using other languages. WebAssembly is a compilation target for other languages to compile to, i.e. you don’t write it yourself; you write a host language (such as Rust, Go or C++) which the language toolchain then compiles to WebAssembly. If we were to look at the WebAssembly website, it describes itself as:

Slide 14

WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine.

Slide 15

So right now, you’re probably thinking: WTF does that mean? Let’s pick it apart.

Slide 16

A binary instruction format: this means it’s a set of instructions, or operations, for a machine to process that has been encoded in a binary format, i.e. it’s not plain text like JavaScript is. Using a binary format means it can be natively decoded much faster than JavaScript can be parsed (experiments show more than 20× faster). On mobile, large compiled codebases can easily take 20–40 seconds just to parse, so native decoding (especially when combined with other techniques like streaming, for better-than-gzip compression) is critical to providing a good cold-load user experience.
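To make the “binary format” part concrete, here is a minimal sketch you can run in a browser console: the smallest valid module is just eight bytes, the `\0asm` magic number followed by the version.

```js
// The smallest valid WebAssembly module: the "\0asm" magic number plus the version.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm" magic number
  0x01, 0x00, 0x00, 0x00, // binary format version 1 (little-endian)
]);

console.log(WebAssembly.validate(emptyModule)); // true

WebAssembly.instantiate(emptyModule).then(({ instance }) => {
  console.log(instance.exports); // an empty exports object: this module does nothing
});
```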

Slide 17

It’s a stack-based virtual machine. There are several different ways to represent machines, stack based or register based; your CPU is register based, but stack based is easier to compile to. A virtual machine is a processor that doesn’t actually exist, which makes it a target that is easier for other languages to compile to without knowing anything about the underlying hardware or CPU.

Slide 18

The WebAssembly spec is pretty simple: it just knows about numbers and memory. Essentially it’s a CPU, hence the virtual machine bit in the previous slide. It can’t interact with the outside world (yet), it just computes maths. It only has bytes, integers, floats and names. You also have operations/instructions to operate on those types: add, subtract, multiply etc. But really, a lot of complex programs can be reduced down to this plus moving memory around, for example processing an image or an audio stream.

Slide 19

So when you write some code in a lower-level language, normally the compiler will compile it to an intermediate representation (IR), which is the same regardless of the system architecture, and then it has a different backend for each architecture which it uses to compile the IR to your target machine. For WebAssembly, the code gets compiled to the instruction set for the WebAssembly virtual machine, normally in the form of a binary file with a .wasm extension. Because the WebAssembly instruction set is very small, it is then very easy for the runtime, normally a browser, to efficiently convert this to the actual instruction set of the machine.

Slide 20

WebAssembly’s design goals

So for it to succeed, WebAssembly had a few guiding design principles, to ensure it solves the problems it was designed to solve:
- Compilation target: it should be easy for other language toolchains to compile to.
- Fast execution: it should be fast to parse and execute, especially in browser environments.
- Compact: it should be a compact instruction format that is easy to distribute across the web. Web distribution is different to native applications: native apps are usually large and preinstalled on your machine, whereas on the web we have to download, parse and execute on every page load.
- Linear memory: native applications written in low-level languages, such as C and C++, have direct access to memory. However, I don’t want to allow any application a website downloads to have access to random bytes in memory. So in order to create a secure environment for the web, WebAssembly uses a linear memory model: a very specific section of memory and nothing else, which works like a typed array buffer in JavaScript (see the sketch after this list).
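Here is a minimal sketch of that linear memory model from the JavaScript side; the values are just for illustration.

```js
// Linear memory is one contiguous, growable buffer, viewed through typed arrays.
// A module sharing this memory can only ever touch these bytes.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB

const bytes = new Uint8Array(memory.buffer);
bytes[0] = 42;                          // the host writes a byte into the sandbox
console.log(bytes[0]);                  // 42

memory.grow(1);                         // grow by one more 64 KiB page
console.log(memory.buffer.byteLength);  // 131072
```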

Slide 21

What does it look like?

To understand what’s happening, I think it’s good to visualise the relationship between the source language, WebAssembly, and the machine code the runtime converts it to. Michael Bebenita from Mozilla created this cool tool called the WasmExplorer, which allows you to write some C and see what it looks like in WebAssembly, and then the assembly which Firefox compiles it to. WAT, the WebAssembly Text Format, is the text representation of Wasm; ASM is the text representation of the binary machine code. They look a bit different because one is a stack machine and one is a register machine.

Slide 22

Can I use it today?

You may be asking what the support is like, and this is one of the most exciting things: WebAssembly is now supported in all major browsers, making it the first new language since JavaScript shipped over 20 years ago to be supported across the web platform.
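Support is broad, but if you still care about older browsers, a small feature check before fetching any .wasm file is cheap. This is just a sketch.

```js
// A defensive feature check, in case you need a plain JavaScript fallback.
const wasmSupported =
  typeof WebAssembly === "object" &&
  typeof WebAssembly.instantiate === "function";

if (wasmSupported) {
  // load the .wasm build
} else {
  // fall back to the JavaScript implementation
}
```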

Slide 23

WebAssembly is:

- A new language for the web
- Compiled from other languages
- A portable representation of low-level languages
- Native speed, consistency and reliability
- Access to the ecosystems of other languages
- Polyfills and extends the web platform

So to summarise, WebAssembly is a new language for the web which is compiled from other languages. It gives us native speed, consistency and reliability in the browser. In other words, for the first time ever we have a portable representation of programs, regardless of the language they were written in, which we can universally distribute in browsers and run anywhere. THIS IS AMAZING.

Slide 24

How?

So hopefully we now have a better understanding of what WebAssembly is, let’s now look at how we can use it!

Slide 25

Loading WebAssembly in the browser

We now have a collection of new JavaScript APIs that let us load Wasm modules and invoke the functions they export. This shows how we can load a wasm module today, using the new instantiateStreaming API. It uses the fetch API to request the file and then streams the compilation; Firefox can now compile the wasm faster than it arrives over the wire. Loading modules via fetch this way also allows us to benefit from the implicit HTTP cache. Unlike JS, wasm will always compile to the same machine code regardless of input data, which means that once compiled, the compiled code can be cached and not recompiled every time the file is requested.
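The code from the slide isn’t captured in this transcript, but a minimal sketch of the streaming API it describes looks roughly like this; the module name, import object and `add` export are placeholders.

```js
// A sketch of streaming compilation: fetch the file and compile it as it downloads.
const importObject = { env: { /* any imported host functions go here */ } };

WebAssembly.instantiateStreaming(fetch("module.wasm"), importObject)
  .then(({ instance }) => {
    // Call whatever the module exports, e.g. a hypothetical add(a, b).
    console.log(instance.exports.add(2, 3));
  });
```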

Slide 26

Wasm Markdown parser

So let’s use this to solve a real use case. Imagine I am GitHub or a news website that accepts comments in the form of Markdown. I currently have a large JS library that parses and compiles the Markdown to HTML; whilst the library is OK, it’s very inefficient to do this in JavaScript. What if I wanted to take advantage of a fast and safe Markdown parser written in Rust to solve this problem? To showcase this, I’ve built a little demo at markdown.fastlylabs.com. Let’s take a dive into how this works.

Slide 27

lib.rs

Why have I chosen this as an example? Because it shows that I don’t even have to write or understand much of the source language to take advantage of its ecosystem. Here is the Rust library code. I’m importing the pulldown_cmark library’s html and Parser items at the top of the file. I’m then making a public (i.e. exported) function called render which accepts a string (our input) and returns a string (our HTML output). We then pass the input string to the Markdown library and write the HTML back into a new string called html_output. You may have noticed I’m also importing another library called wasm_bindgen, and have decorated my function with a wasm_bindgen macro, which tells the compiler to also generate the JavaScript bindings for this function.

Slide 28

wasm-bindgen

So why do we need to do that? If you remember from the previous section, WebAssembly only supports byte and number types. Therefore, we can’t just pass our JavaScript input string directly to the render function, nor can we write a Rust function that accepts a Rust string. We’d need to write some JS glue code that converts our string to a byte array in memory and passes a pointer to it. However, this isn’t ideal from a developer-experience perspective, having to manually wire up the functions like this. Which is where wasm_bindgen comes in: it automates the creation of the glue code, so we can continue to program at a high level and pass strings or objects to our Rust code. This is eventually going to be replaced by the future interface types proposal that I’ll discuss later.
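For a feel of what that hand-written glue would involve, here is a rough sketch of the kind of boilerplate wasm_bindgen generates for you; the `alloc` and `render` exports and their signatures are hypothetical.

```js
// Roughly what hand-written glue would have to do; `alloc` and `render`
// and their signatures are hypothetical example exports.
function renderMarkdown(instance, input) {
  const bytes = new TextEncoder().encode(input);     // JS string -> UTF-8 bytes

  const ptr = instance.exports.alloc(bytes.length);  // ask the module for space
  new Uint8Array(instance.exports.memory.buffer)
    .set(bytes, ptr);                                // copy into its linear memory

  // The module can only hand numbers back too, so the HTML would come back as
  // another pointer/length pair that we'd have to read out and decode.
  return instance.exports.render(ptr, bytes.length);
}
```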

Slide 29

wasm-pack

The next tool we’re going to use is wasm-pack, which is the one-stop shop for building and packaging Rust WebAssembly projects, much like webpack. It automates the compilation and wasm-bindgen exporting of your Rust libraries and even allows you to publish them as npm modules to the registry. wasm-bindgen and wasm-pack are both tools by the Rust WebAssembly working group, which has been doing some amazing work in this area to make it as frictionless as possible for people to adopt WebAssembly.

Slide 30

Command line:

So now that we’ve got our Rust file which exposes the Markdown render function, next we need to compile it to WebAssembly, along with our JS glue code, using wasm-pack build. Once we’ve run that command we’re left with the following files inside a pkg directory: our wasm file, and a corresponding JS file which we can import into our project.

Slide 31

main.js

Finally, inside our JS application, all we need to do now is import the JS module produced by wasm-pack and call the render function. We import it as we would any other ES6 module, bind an event listener to our input textarea, and every time we receive input we get the current value, pass it to our render function and set the innerHTML of the output element.
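The slide’s main.js isn’t in the transcript, but a minimal sketch of what it describes looks roughly like this; the element ids and package name are assumptions, and a bundler such as webpack is assumed to wire up loading of the .wasm file.

```js
// A sketch of the main.js described above. Element ids and the package
// name are assumptions; a bundler is assumed to load the .wasm file.
import { render } from "./pkg/markdown_parser.js";

const input = document.querySelector("#input");
const output = document.querySelector("#output");

input.addEventListener("input", () => {
  // Pass the raw Markdown string to Rust and inject the returned HTML.
  output.innerHTML = render(input.value);
});
```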

Slide 32

Side-by-side

It may be easier to understand what’s happening if we compare our JS application and Rust library side by side. wasm_bindgen exposes the render function in our Rust library as a JS module which we can pass a string to. Under the hood this converts the string to a byte array and passes it to the wasm module. With 8 lines of Rust and 8 lines of JavaScript we’ve been able to make a fast and type-safe Markdown parser on the web.

Slide 33

Demo

Demo https://markdown.fastlylabs.com/ Let’s have a look at what it’s like for a user. You can go and check it out at markdown.fastlylabs.com. Note the speed here: we haven’t had to debounce or throttle the input to get instant results from the parser. I don’t know about you, but I think that’s pretty cool!

Slide 34

Why?

So we know what WebAssembly is and how it works. Let’s look at a couple of practical use cases, and some research, to see why we should be using it to speed up our applications.

Slide 35

“Rendering PDF documents is a complex task: The format is 25 years old and has many edge cases. It wouldn’t be feasible for us to rewrite the equivalent of 500,000 lines of C++ code to JavaScript.” – A Real-World WebAssembly Benchmark, https://pspdfkit.com/blog/2018/a-real-world-webassembly-benchmark/

PSPDFKit is one of the most prolific PDF viewers on the web. It allows customers to render PDFs and embed them into any page, for instance a news article or a legal document to sign. They released their first web UI in 2016, but it required a server-side rendering component which introduced a lot of latency. They’ve now been able to replace that completely with a PDF renderer that instead uses WebAssembly in the browser.

Slide 36

squoosh.app

Squoosh.app is an application made by the Chrome developer relations team. It’s an image compression tool that allows you to upload an image and tweak settings before exporting.

Slide 37

To do this, they compiled various image libraries, such as MozJPEG and libwebp, written in C/C++ to WebAssembly via Emscripten and loaded them in the browser. The point here is that WebAssembly allowed them to polyfill the Web Platform: only some browsers have support for certain codecs. Therefore, they were able to have consistent codec support across all browsers using a battle-hardened C++ library (from 2014) that wasn’t written with the web in mind, and use it on the web anyway today with amazing performance.

Slide 38

As part of the project the team did some interesting research. One of the functions, image rotate, is quite easy to implement in JavaScript via the canvas API, so they decided to write this function in JS and then build WebAssembly ports in C, AssemblyScript and Rust. They benchmarked each implementation across four browsers and plotted the results. What is interesting here is that browsers with optimised JS engines perform nearly as fast as the WebAssembly, however the Wasm implementations are consistently fast across all browsers, proving that WebAssembly can be very useful for situations in which we need consistent performance.

Das Surma - WebAssembly for Web Developers (Google I/O ’19) - https://youtube.com/watch?v=njt-Qzw0mVY

Slide 39

WebAssembly at eBay - A real-world use case

They first ran an A/B test between not having a barcode scanner at all and having one; it showed that sellers are 30% more likely to complete their draft if a scanner is available. Next they tested three versions of the scanner: one JS implementation, one of their own implementations, and one called ZBar, which is one of the most popular open-source C++ implementations. ZBar contributed over 50% of completions, proving that a faster, more mature library from an existing ecosystem is better placed to solve this problem.

Slide 40

Use cases

So those were a few examples of genuine use cases in the wild, and hopefully they showed the power of WebAssembly and the speed benefits associated with it. I also hope they showed you that Wasm isn’t going to replace JavaScript; it’s here as another tool to augment JavaScript’s missing parts. Any parts of your application that you currently have to run on the server, or that are hot paths in your JS implementation today, are the things you should consider WebAssembly alternatives for, such as encoding of file formats, parallelisation of data, intensive data visualisation or cryptography. The list is endless!

Slide 41

Beyond the browser!

So far we’ve looked at WebAssembly in the browser. But what about beyond the browser? It is the title of the talk, after all.

Slide 42

So we already have JS outside of the browser with Node.js. Node is based on Chromium’s JavaScript engine, V8, so when V8 got support for WebAssembly, Node got support for WebAssembly. You can now use wasm modules inside your Node applications. What’s cool about this is that Node can now support other languages via Wasm, very efficiently and safely.
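A minimal sketch of what that looks like in Node; the module name and its `add` export are placeholders.

```js
// Loading a wasm module in Node.js: read the bytes, then instantiate them.
const fs = require("fs");

const bytes = fs.readFileSync("./module.wasm");

WebAssembly.instantiate(bytes, {}).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // `add` is a placeholder export
});
```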

Slide 43

V8 isolates

This safety factor is useful. One of the problems with running native software is its safety and trust guarantees: if it’s capable of reading and writing memory, it’s capable of being dangerous. Which is why we go to great efforts to isolate the software running on our servers, for instance by not sharing servers with other untrusted programs. Browsers handle this very efficiently with their sandbox models and concepts like tabs; for instance, the JavaScript running in one tab can’t access the memory of another. This is why some CDN vendors have used the browser sandboxing model inside V8 to isolate the code that runs on their servers. Traditionally you’d do this with virtual machines, like you get with AWS or GCP; however, the CPU and memory overhead makes that impossible to do on every request. So what if you could spin up a single instance of V8 and use the isolation model already built into it to run JavaScript and WebAssembly? I think this is a great use of the technology. However, there is still overhead to this: it’s designed for a browser.

Slide 44

Compute@Edge

“At 35.4 microseconds, Fastly’s Compute@Edge environment offers a 100x faster startup time than any other solution on the market.”

https://www.fastly.com/blog/join-the-beta-new-serverless-compute-environment-at-the-edge

We learnt that WebAssembly’s design means we get memory isolation for free. So maybe WebAssembly is really useful outside of JavaScript too. At Fastly we’ve been working on exactly this: what if we could take a WebAssembly module and run it on a server directly via a standalone runtime, with no browser or JavaScript engine needed? By doing this we get extremely performant processes. We can instantiate a wasm module in around 35 microseconds with just a few kilobytes of memory overhead; by comparison, starting up V8 takes around 5 ms and tens of megabytes.

Slide 45

Fastly Lucet

https://www.fastly.com/blog/announcing-lucet-fastly-native-webassembly-compiler-runtime

This is made possible by Lucet, our sandboxing WebAssembly compiler. It works by compiling the Wasm module ahead of time to a machine-code shared object, so instances can be loaded and torn down instantly, each with its own sandbox and trust guarantees.

Slide 46

What does this enable?

So what type of thing does this enable? For instance you could now run a GraphQL server at the edge of the network as close to your users as possible and execute it on every request with little to no performance overhead.

Slide 47

WASI - WebAssembly System Interface

However, one of the problems with this is, as we saw earlier, that WebAssembly can only operate on numbers and memory. It can’t talk to the outside world or the hardware on our devices via system interfaces, i.e. it can’t read files from disk or create network connections. These are all things we’d need if we wanted to write any non-trivial server or edge application. This is why Mozilla started to define the WebAssembly System Interface: a specification of a standardised interface between a WebAssembly module and the host system.

Slide 48

WASI - WebAssembly System Interface

This allows source compilers to easily compile system calls to an abstract interface, which the host WebAssembly runtime can then decide what to do with. I.e. if I wrote a println() statement in Rust, my browser could choose to implement that system interface as a call to console.log(); if running on Fastly, we could log-stream it to your destination of choice; or if running locally on a server, just log to stdout. Just like wasm is an abstraction over a CPU instruction set, WASI is an abstraction over an operating system, much like POSIX is for Unix systems. This now allows us to port any application to WebAssembly, not just the examples we saw previously which process numbers.

Slide 49

Bytecode Alliance

We’re so excited about what tools like Lucet and standards like WASI mean for the future of the web outside the browser, which is why we’ve teamed up with Mozilla, Intel and Red Hat to form the Bytecode Alliance: a new industry partnership to collaborate on implementing standards and proposing new ones.

Slide 50

Any code, anywhere, fast and secure

So at the beginning of the talk we discussed the problems with untrusted native code within the browser, but what we didn’t realise is that the solution actually created a universal format that we can use to run code safely anywhere, be that the browser, the edge or the server, blurring the lines between what is a web app and what is a native app.

Slide 51

I think this was greatly summarised by Solomon Hykes, the co-founder of Docker, in which he stated: “If Wasm + WASI existed in 2008, we wouldn’t have needed to create Docker. That’s how important it is. WebAssembly on the server is the future of computing. A standardised system interface was the missing link.” I mean, at this point I could just walk off the stage now and be done with it.

https://twitter.com/solomonstre/status/1111004913222324225?lang=en

Slide 52

The future!

So to finish I want to take a look at the future for WebAssembly.

Slide 53

We’re only at the MVP stage of WebAssembly!

When it first shipped in browsers, many thought it was finished and only useful for processing numbers efficiently. This couldn’t be further from the truth; it’s only just the beginning.

Slide 54

Interface types

As we saw in the Markdown demo, one of WebAssembly’s biggest pain points is how we call a module’s exported functions and pass data to them from other languages. WebAssembly only knows about numbers, but what if we want to pass other, more complex data types, such as strings, arrays or objects? It should be possible to ship a single WebAssembly module and have it run anywhere, without making life hard for either the module’s user or its developer. What we need is some kind of common interface type. The type systems in each language are different, so we need a common translation layer so the host knows how to convert between them. The interface types proposal defines a way for a host runtime to map its type system to WebAssembly’s, i.e. how do I pass this JavaScript string or object to a WebAssembly module? This is a Lin Clark cartoon; if you haven’t seen Lin’s work, go check it out.

Slide 55

Without interface types:

This means we would no longer need the JS glue code that we saw in the Markdown demo and could import WebAssembly modules directly in our JavaScript applications. This is great, as the JavaScript glue code has all the same problems as any JS: it’s text and not as fast as WebAssembly. And with interface types, the host doesn’t need to serialise the data on every call.
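As a purely hypothetical illustration of where this is heading (none of this had shipped at the time of this talk), importing a module could eventually look as simple as:

```js
// Hypothetical: the proposed ES module integration plus interface types
// would let us import a .wasm module directly and pass strings to it.
import { render } from "./markdown_parser.wasm";

document.querySelector("#output").innerHTML = render("# Hello, Amsterdam!");
```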

Slide 56

Interface types

The mind-blowing moment for me is when you realise what possibilities this opens up: we can have JS dependencies that import wasm modules, which themselves import other modules.

Slide 57

Runtimes like Node or Python’s CPython often allow you to write modules in low-level languages like C++, too. That’s because these low-level languages are often much faster. So you can use native modules in Node, or extension modules in Python. But these modules are often hard to use because they need to be compiled on the user’s device. With a WebAssembly “native” module, you can get most of the speed without the complication.

Slide 58

Node WASI support

Node is starting to take the first steps towards this, with experimental support for WASI landing soon. This, coupled with interface types in V8, will allow full interop between Node and Wasm. No longer will we need node-gyp to run node-sass or other native libraries; authors can precompile their libraries to WebAssembly and ship them inside node modules: one module for all platforms, and no more recompiling the binary every time you download it.
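A sketch of what this could look like with Node’s experimental WASI API; it sat behind an experimental flag at the time and may have changed since, and hello_wasi.wasm is a placeholder for a module compiled against WASI.

```js
// Running a WASI module in Node.js with the experimental `wasi` built-in.
const fs = require("fs");
const { WASI } = require("wasi");

const wasi = new WASI({ args: process.argv, env: process.env });

(async () => {
  const wasm = await WebAssembly.compile(fs.readFileSync("./hello_wasi.wasm"));
  // The import namespace depends on which WASI snapshot the module targets.
  const instance = await WebAssembly.instantiate(wasm, {
    wasi_snapshot_preview1: wasi.wasiImport,
  });
  wasi.start(instance); // calls the module's _start and wires up stdout etc.
})();
```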

Slide 59

Other proposals

I don’t have enough time to go into all of the upcoming proposals, but some of the exciting things being actively developed are:
- Threads: first, we need support for multithreading. Modern-day computers have multiple cores, and threads would give us massive performance boosts for applications that take advantage of them. For this to work we need SharedArrayBuffers to be re-enabled in browsers.
- SIMD: alongside threading, there’s another technique that utilises modern hardware and enables you to process things in parallel. With threads and SIMD we can achieve speed that was previously impossible.
- Garbage collection: we need integration with the browser’s garbage collector. Until this is possible we can’t easily compile garbage-collected languages such as JavaScript, Elm or Go to WebAssembly.
- Debugging: it’s currently very hard to debug Wasm modules in browser and server environments.

Slide 60

Frameworks

I wanted to leave you with an exciting thought. Once we have threads and garbage collection support, we will be in a position where JavaScript frameworks can start to take full advantage of Wasm. For example, in React we could rewrite the virtual DOM diffing algorithm in Rust, which has very ergonomic multithreading support, parallelise that algorithm, and keep that process off the main thread by spawning it inside a Web Worker. This is the kind of future that really excites me!

Slide 61

Takeaways

So I’d like to recap and end with some takeaways from all of this.

Slide 62

What have we learnt?

- WebAssembly isn’t going to kill JS
- WebAssembly allows us to extend the Web Platform
- WebAssembly gives us portability, speed and safety
- WebAssembly isn’t just for the browser
- WebAssembly allows us to move logic closer to the user

What have we learnt from this talk? WebAssembly isn’t going to kill JS; it’s here to augment the holes in the Web Platform in a fast and secure way. It’s not just for the browser: its portability is the future of fast, composable and safe software interop between languages and runtimes. It allows us to use the correct tool for the problem and run that logic closer to the user, on the edge or the client or both.

Slide 63

What can you do today?

- Try it out!
- Identify hot paths in your applications and consider porting them to Wasm
- Identify parts of your server-side logic which could move to the client or the edge
- Find new tools from existing ecosystems which could solve your problem
- Share your findings

What can you do today? Try out WebAssembly and Rust; they’re awesome. Profile and identify hot paths in your applications and consider porting them to Wasm. Identify parts of your server-side applications which could benefit from moving to the edge or the client. Find tools from existing ecosystems outside of JavaScript which can solve your problems more efficiently, and share your findings.

Slide 64

WebAssembly is the new standard for portability and safety on the web and beyond.

Slide 65

The future is bright!

And the future is bright!

Slide 66

You can find the talk transcript, links, further reading and more on my notist page here

Slide 67

Thank you!