Better performance for component-based web apps

A presentation at Codemotion Madrid in Madrid, Spain by José M. Pérez

Better performance for component-based web apps by José M. Pérez - @jmperezperez

Welcome to the era of <components />

Serving a large bundle in the old days

Modern tools are fun to use and they are useful to create websites with great performance.

This talk aims to show how to apply ideas like code-splitting, lazy-loading and CSS-in-JS (aka JSS) to websites built with components. This represents a superior way of delivering to the user only the assets that are needed and it's essential for creating fast loading sites.

We might be suffering from JS fatigue, trying to keep up to date with the latest and greatest. We don't need to rush, just understand what these techniques do and how they fit in the bigger picture.


Transcript

Better performance for component-based web apps

This talk explores the latest techniques and tools to create modern websites with very good performance.

I will not talk about the typical tips such as minifying or gzipping our assets. I want to focus on some performance benefits that we can get using state-of-the-art tooling.

We will go through a high level description on how we are building websites these days, from the frontend point of view. We will see how ES6 has set a before and after thanks to its definition of modules and their dependencies. I will also describe why code splitting is important when it comes to performance, and I will describe how CSS-in-JS helps with splitting CSS code.

We will end up combining the described tools to create sites that load fast. They will download and process as few resources as possible.

All of this while trying to survive the so-called JavaScript fatigue.

Who I am

My name is José and I'm currently working as a Senior Software Engineer at Spotify. I'm also a Google Developer Expert in Web Technologies and I speak and blog about web performance.

During my 6+ years at Spotify I have worked in lots of web projects. Some of the most fun ones have been the Spotify applications for TV, web, and desktop.

We love modern tools

I think I'm not wrong if I say we like using modern tools to do our job. We are living in very exciting times where web development is advancing at a fast rate. Every week we see the release of some interesting browser API, some library to do UI rendering and handle state, and some exciting environment to deploy our project. It's difficult to stay up to date and easy to feel overwhelmed.

A couple of weeks ago the State of JavaScript 2018 was released. It shows feedback from 20,000 web developers, who report which libraries they are using, which ones they would like to continue using, and which ones they would like to learn, among lots of other data.

Over 75% of respondents have used or would like to learn React, Vue.js, Redux, GraphQL, ES6 and TypeScript. They are well established in the standard stack for building websites despite some of them being rather young (GraphQL and Redux are 3 years old and Vue.js is 4).

Developers tend to prioritize developer ergonomics when choosing the tech stack and tools. Rarely do we put the user experience upfront. There is a general belief that by using these solutions we build features faster and with fewer bugs. This eventually benefits the user experience, since users can enjoy a more complete and solid product that can be adapted over time to their needs.

I believe it is important to consider the user experience when making decisions on tooling. Thinking of users sets constraints that lead developers to better decisions: What is going to be the data payload when using X library? How will a new API or syntax addition be polyfilled in browsers that don't support it? How do we ensure those polyfills don't penalize users with a capable browser?

The devices and network conditions that we use as developers are not representative of what a regular user experiences.

Do not blame the user

Every time someone talks about a new browser API there is the typical reaction, from excitement at first (oh my god, this fixes everything!) to sadness when looking at the support table on caniuse.com.

It is natural to blame browser vendors who are late in the game, but also users who are using old versions of browsers. As developers we are paid to build products and the more users that can use them, the better.

Instead of complaining about having to support old or cheap Android phones that can barely run our nice animations, let's think of the user. They might not be able to afford a better phone. I see this more and more at Spotify, as we expand to emerging countries where people don't (or can't) use the devices we are used to as privileged developers in the Western world. Another case might be teenagers using phones that have been "retired" by their parents, who got upgraded ones.

Instead of complaining about old browsers think that maybe the user can't install newer versions on that computer. They might be in a corporate or educational environment with no admin permissions, so even evergreen browsers will be stuck in older versions.

Instead of complaining about slow networks, and assuming that everyone enjoys a 4G connection on mobile and fiber on desktop, think about users on a plane using the limited Wifi onboard. Think about users on a public shared wifi at a local café. Or users with a limited data plan who have run out of data traffic and are now browsing at a very low speed.

Taking all of this into account helps you decide on tools and on how to build web experiences.

Welcome to the era of <components />

I care about users and I have always been a big proponent of progressive enhancement. In the past I would even build sites that were completely functional without JavaScript, and that offered an improved experience if JavaScript was available. That's why I have approached the modern JS-based tech stack with caution.

Components are here to stay. Whether you are a fan of React, Vue or a similar library, it makes sense to build a site like Lego, creating a complex project out of smaller, solid modules.

Components are easy to develop and unit test in isolation, reducing the need to write browser tests. They are also a great way to communicate and collaborate with designers, using tools like Sketch or Storybook that are blurring the line between design and code.

Components often render elements on the screen, but not always. They can embed behaviours like routing or subscription to a shared global state in a neat way. In a component-based web app everything can be modelled as components.

Let's take Spotify's web player as an example. It is built using React and uses components for a variety of things. Let's go through some examples:

  • All the routing between different paths is managed by several components provided by react-router.
  • TabBar, which is a component that renders a list of buttons that are used to load different content underneath.
  • Components can take other components as an input and add some behavior to them. These are called Higher-Order Components (HOCs for short). In this case, <CoverArt> represents an image and <LazyLoad> adds lazy loading to it.
  • Another HOC is <Connect>, provided by react-redux, which subscribes the component to changes in a shared state and gets data from it.
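The Higher-Order Component idea can be sketched without any framework: a HOC is just a function that takes a component and returns an enhanced one. In this minimal sketch, components are modelled as plain render functions, and the names (withLazyLoad, CoverArt) are illustrative rather than Spotify's actual code:

```javascript
// Illustrative HOC sketch, with components modelled as plain render
// functions (no React involved).
const CoverArt = (props) => `<img src="${props.src}" alt="cover art" />`;

// The HOC wraps any component and adds lazy-loading behaviour:
// render a placeholder until the element is known to be visible.
const withLazyLoad = (Component) => (props) =>
  props.isVisible ? Component(props) : '<div class="placeholder"></div>';

const LazyCoverArt = withLazyLoad(CoverArt);

console.log(LazyCoverArt({ src: 'cover.jpg', isVisible: false }));
// → <div class="placeholder"></div>
console.log(LazyCoverArt({ src: 'cover.jpg', isVisible: true }));
// → <img src="cover.jpg" alt="cover art" />
```

The enhanced component keeps the same interface as the original, which is what makes HOCs composable.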

Serving a large bundle in the old days

When we first started working on Single Page Applications it was quite normal to include all the JavaScript in a big file and serve that to the user from a mostly empty page.

This resulted in a blank page until the browser downloaded, parsed, and executed the script, especially noticeable in slower networks.

In that big bundle, our application code is in many cases smaller than the libraries/frameworks we are including. Using these tools makes our application code smaller since we don't need to deal with some common functionality that now lives in those tools. However, I have seen many small and medium projects where a better choice of tooling would have an important impact on performance.

There are many case studies showing how improvements in performance (measured in loading time, speed index, first paint, etc) improve business metrics like user engagement and retention. Delays cause stress for our users, even more than watching a horror movie.

A way to solve those empty pages is Server-Side Rendering (SSR). With SSR we can serve some content, which gets rehydrated when the JavaScript kicks in. SSR is not straightforward to implement. It adds load to the server, which now needs to do the heavier work of data fetching and template rendering. If the server is not Node.js, then the rendering either needs to be moved to Node.js, or a Node.js process must run in parallel, receiving the data about the request and returning the result of the call.

When the delays are caused by a large bundle, SSR is a patch that won't solve the problem but merely push it further down the line.

In Client-Side Rendering the browser waits for the server response, shows a blank page, executes the JavaScript and it's done.

In SSR the server usually takes longer to create the response since it needs to do more work and the payload is larger, then it can render the page but it's not interactive, finally it executes the JavaScript and it's done.

With SSR we get faster painting times, but we delay the Time to Interactive (TTI). We show content that the user can't interact with until the scripts are executed. This is sometimes called the Uncanny Valley.

Clearly, we can do better. The way forward is to reduce the size of the JavaScript bundle, and serve to the user only what is needed.

How do we know what is needed? Traditionally we imported several scripts with the dependencies for our application code. In many cases we requested scripts for carousels, UI components, and widgets even if that specific page didn't need them. Same applied to CSS.

In short, we didn't have a way to define dependencies for our projects. Well, there were a couple of ways using YUI Loader or Require.JS that got very little traction. Some large companies developed their own approach, like Facebook's Haste and Bootloader and Google's Module Server.

Then ES6 came in. Lots of developers might see ES6 as some nice syntactic sugar, but its capability to import and export modules is a game-changer. In this example we have a math library that exports some functions and constants, and a file that imports part of it.

Since we don't use a constant like somethingElse there is no need to include it in the output JavaScript file. Bundlers like webpack, parcel and rollup are able to leave unused parts of the code out in a process called tree shaking.
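A sketch of what that math library and its consumer might look like. In a real project these would be two files; to keep the snippet self-contained here, the "library" is inlined as a data: URL and loaded with a dynamic import (the names add, PI and somethingElse are illustrative):

```javascript
// "math.js", inlined as a string so this snippet runs on its own.
const mathJs = `
  export const add = (a, b) => a + b;
  export const PI = 3.14159;
  export const somethingElse = 'dead code'; // exported but never imported
`;

// The consumer imports only part of the library. Because nothing references
// somethingElse, a bundler can tree-shake it out of the final output.
import('data:text/javascript,' + encodeURIComponent(mathJs)).then(({ add, PI }) => {
  console.log(add(2, 3), PI); // → 5 3.14159
});
```

In project code the consumer would simply write `import { add, PI } from './math.js';` and the bundler would do the rest.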

Imagine we are modelling a web site as a set of pages defined as dependencies. We would import all the pages and we would check which page we need to render based on the current path. Although this works, it's not optimal. Why load the contact page if it's not the current one? What if it rarely gets accessed?

A better approach is to use Dynamic Loading. The idea is to require the page that needs to be served. We use import to fetch the proper module and get a Promise back. When it succeeds, we render the page.
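A sketch of that idea, with pages keyed by path. Real routes would point at files like './pages/contact.js'; here each page module is inlined as a data: URL so the snippet is self-contained, and the names (asModule, render) are illustrative:

```javascript
// Helper that turns inline source into an importable module URL (only so
// this example runs on its own; a real app would import file paths).
const asModule = (src) => 'data:text/javascript,' + encodeURIComponent(src);

// Each route maps to a dynamic import: import() returns a Promise for the
// module, so a page's code is only fetched when its path is visited.
const routes = {
  '/': () => import(asModule('export const render = () => "Home";')),
  '/contact': () => import(asModule('export const render = () => "Contact";')),
};

const currentPath = '/contact';
routes[currentPath]().then((page) => {
  console.log(page.render()); // → Contact
});
```

Only the module for the requested path is ever loaded; the others remain plain functions that were never called.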

This way of importing modules allows bundlers to split the code. Since they return a Promise we stop assuming the dependency is already there, and that Promise can wrap a network request to get that module in the shape of a JavaScript chunk.

This is exactly how code splitting works. Code splitting is easy to implement at a path level, adding those dynamic imports in the router. This lets us move from a large bundle to a smaller one with common functionality, plus other bundles that get loaded when the user visits a specific path.

This is a very good way to break up the bundle file, serving just the code that is needed for the current path.

More importantly, it allows us to add features without incurring an increase in the payload size or rendering time. A site with 10 pages shouldn't load slower when it grows to 20 pages. Also, if one of the pages of the web site has many dependencies, it will only affect the performance of loading that page.

The idea is to move from a situation where loading time grows with the project, to one where loading time stays constant regardless of the size of the project.

Another technique that has been used for a long time is lazy loading. The idea is to delay some requests so they are made when needed.

If we imagine a regular website, we will want to load the content that is visible in the viewport (current screen), but there is no need to load content that the user won't see. A good candidate is a long page with data-heavy elements, or images in a carousel.

Lazy loading is typically applied to images, but in reality it works with any asset. Take web fonts, for instance: we could save requests for web fonts if that text isn't rendered. Some of the biggest savings can be accomplished by lazy loading other components.
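One common way to detect visibility is the IntersectionObserver API that comes up later in this talk. Here is a framework-free sketch, with the observer implementation injected so the logic also runs outside a browser; the names lazyLoad and load are illustrative:

```javascript
// Lazy-load sketch: start loading an element's real asset only when it
// enters the viewport. ObserverImpl would be IntersectionObserver in a
// browser; injecting it keeps the logic testable anywhere.
function lazyLoad(elements, ObserverImpl, load) {
  const observer = new ObserverImpl((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        load(entry.target);               // e.g. copy img.dataset.src to img.src
        observer.unobserve(entry.target); // load each element only once
      }
    }
  });
  elements.forEach((el) => observer.observe(el));
  return observer;
}
```

In a browser this could be wired up as `lazyLoad(document.querySelectorAll('img[data-src]'), IntersectionObserver, (img) => { img.src = img.dataset.src; })`.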

This is a very good deal. Instead of serving the code for the current path we can serve the code for the current screen.

In line with serving just what is needed, I wanted to show another example. As I said, we should try to support the browsers our users are using, and one of the ways we can do it is by introducing polyfills. In short, a polyfill is some code that implements a certain capability on a browser that doesn't natively support it.

One example is the IntersectionObserver API. This is a handy browser API that can be used to know when an element is shown on the screen. It's handy for lazy loading and also to track how many users see a certain element (ideal for measuring ad impressions). This API is not supported everywhere, with Safari being the main browser without support at the time of this writing (December 2018).

Historically we have been serving the same bundle to all browsers. It is definitely difficult to do a correct feature detection on the server. It usually involves user agent sniffing, which is brittle and costly to maintain. Feature detection on the client is the way to go. How do we do it in a good way, without introducing large delays?

One way is to use lazy loading and polyfill on demand. We can apply the dynamic loading technique and import the polyfill only when the browser needs it. If the browser supports that feature natively we save the overhead of fetching and executing unneeded code.
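A sketch of that pattern. The feature check and the polyfill loader are passed in so the logic is environment-agnostic; ensureFeature is an illustrative name, and in a browser the loader would be something like `() => import('intersection-observer')` (the npm package name of a common IntersectionObserver polyfill):

```javascript
// Polyfill on demand: resolve immediately when the feature exists natively,
// otherwise dynamically load the polyfill (one extra request, only paid by
// browsers that need it).
function ensureFeature(globalObject, featureName, loadPolyfill) {
  if (featureName in globalObject) {
    return Promise.resolve('native'); // no extra request
  }
  return loadPolyfill().then(() => 'polyfilled');
}

// In a browser this might be used as:
// ensureFeature(window, 'IntersectionObserver', () => import('intersection-observer'))
//   .then(() => { /* safe to use IntersectionObserver here */ });
```

Because the loader is a dynamic import, bundlers put the polyfill in its own chunk, so capable browsers never even download it.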

Have a look at this example. It shows a website that loads additional content, in this case a map from Google, when the user scrolls down. The example uses the aforementioned IntersectionObserver API.

Since Safari doesn't support the IntersectionObserver API at this time, there will be an extra request for the polyfill when using this browser.

This is an additional improvement because instead of serving the code for the current screen we also take into account the current browser.

We have been talking a lot about JavaScript, but we can apply similar techniques to CSS. Splitting and serving CSS on demand presents the challenge of a Flash of Unstyled Content (FOUC). In short, the style of the current content could change when the asynchronously loaded CSS is added to the page.

CSS-in-JS is a technique that has been getting some traction lately. Some people like it because it helps define the styles closer to the component's logic, in the same way JSX was a shift that helped colocate the HTML markup with the definition of a component. Its detractors claim that CSS-in-JS is used by people who don't understand the cascade.

CSS-in-JS solutions look similar to this code snippet. In this case we are styling a modal component. We import a styled function and we create components with some style attached to them.
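The core idea can be sketched in a few lines: a styled function pairs a tag with a CSS string and hands back a component. This toy version (not styled-components itself, and with illustrative names) renders to an HTML string with inline styles; real libraries generate unique class names and inject stylesheets instead:

```javascript
// Toy CSS-in-JS sketch: styled(tag)(css) returns a component whose style
// lives next to its logic. Real libraries (styled-components, emotion, JSS)
// generate class names rather than inline styles.
const styled = (tag) => (css) => (children = '') =>
  `<${tag} style="${css.trim().replace(/\s+/g, ' ')}">${children}</${tag}>`;

const ModalWrapper = styled('div')(`
  position: fixed;
  background: white;
`);

console.log(ModalWrapper('Hello'));
// → <div style="position: fixed; background: white;">Hello</div>
```

Since the style is just a value attached to the component, it travels with the component through the same bundling and code-splitting pipeline as the logic.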

A great thing about CSS-in-JS is that since the styles are part of the component, we can take advantage of the same techniques we have seen so far.

Before using code-splitting we had large CSS and JS bundles.

After using code-splitting with CSS-in-JS we get a smaller common bundle and other bundles that will be loaded dynamically. These bundles contain the JS logic but also the CSS for the components included in them.

This is great because instead of serving just the JS code for the current screen and browser, we can finally break down the monolith of JS and CSS in an elegant way.

Wrapping up, I would like to send the message that we don't need to fall into JS fatigue. It's easy to want to try and use the latest and greatest right away. However, I think it's smarter to read a bit about these tools, understand where they fit, and add them to our toolbox.

In isolation they might feel useless, but when you look at the bigger picture you might find interesting new patterns and ways of building sites that are both developer and user friendly.

Thank you so much! If you appreciate this topic and enjoyed the talk, you might want to follow me on Twitter or have a look at my website.

Video

Resources

The following resources were mentioned during the presentation or are useful additional information.

  • Slides of the presentation on slides.com

  • Code Splitting with React and React Router

    Code splitting has gained popularity recently for its ability to allow you to split your app into separate bundles your users can progressively load. In this post, Tyler will take a look at not only what code splitting is and how to do it, but also how to implement it with React Router.

  • Code Splitting with React

    A look at React's Lazy and Suspense to achieve code splitting

  • Increase the Performance of your Site with Lazy-Loading and Code-Splitting

    A post explaining how to implement a Higher-Order Component to lazy-load other components and polyfills.

  • The State of JavaScript 2018

    The State Of JavaScript report, 2018 edition. It shows data collected from over 20,000 developers, asking them questions on topics ranging from front-end frameworks to testing.

  • When everything's important, nothing is!

    Do libraries and frameworks prioritize components on boot? If so, how, and if not what can we do? And, in exploring that question, Paul Lewis discovered that Server-Side Rendering isn't a silver bullet!

  • Front End Tech Talk - Facebook

    A blog post that summarizes Facebook's approach to dependency and resource management on the web.

  • Module Server by Google

    Module server is a system for efficient serving of CommonJS modules to web browsers. The core feature is that it supports incremental loading of modules and their dependencies with exactly 1 HTTP request per incremental load.

    The serving system implements the following constraints:

    • Requesting a module initiates exactly 1 HTTP request. This single request contains the requested module and all its dependencies.
    • Incremental loading (every module request after the first one) of additional modules only downloads dependencies that have not been requested already.
    • The client does not need to download a dependency tree to decide which additional dependencies to download.
  • Page about the talk on Codemotion Madrid 2018's website

Buzz and feedback

Here’s what was said about this presentation on social media.