Fun with Browser and Sensor APIs

A presentation at DDD Perth in August 2019 in Perth WA, Australia by Mandy Michael

Slide 1

Slide 2

Slide 3

Hi everyone, my name is Mandy, I am a front end developer and the development manager at Seven West Media here in Perth. My passion is experimenting with front end technologies - I like to see how we can combine techniques that we already use on the front end to create new and exciting experiences in our web projects. So that is what we are going to be doing today - we are going to look at some experiments.

Mandy Michael @mandymichael @mandy_kerr

Slide 4

Specifically we are going to have some fun with device sensors and browser APIs. Given the short time we have, I will focus specifically on the Speech Recognition API, the Device Orientation API and the Light Sensor.

Slide 5

What I would like you to remember is that the talk is called “FUN” with sensors and browser APIs, because I like to play and tinker, so that is what my examples are going to be. I haven’t built anything particularly serious with them, but I see this as a good opportunity to start thinking about how you might be able to make the most of them in your work, experiments or side projects. Some of my examples will have really good browser support, and some are still very experimental, but I’d say remember that everything we have on the web started out as a piece of experimental technology, so let’s see what we can do with it and have a little fun along the way.

Slide 6

Support legend: full support / behind a flag / next version

Slide 7

So to start off we’ll look at some audio and speech demos.

Slide 8

“ The Web Speech API enables you to incorporate voice data into web apps. ”

I want to start with the Web Speech API, which is made up of two separate interfaces: SpeechRecognition, for understanding human voice and turning it into text, and SpeechSynthesis, which reads out text in a computer generated voice. We are going to look at SpeechRecognition today, but if you want to check out synthesis I recommend the Mozilla docs - they have some great examples.

Slide 9

Demo

Slide 10

To get started we’ll need a few things. First, the SpeechRecognition() constructor, which will create a new SpeechRecognition object instance and give us access to all the API’s methods and properties. There are quite a few of them, but we’ll keep it pretty simple today.
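
Something like this - a quick sketch, noting that Chromium-based browsers still expose the constructor behind a webkit prefix:

const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();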

Slide 11

Next we need two methods. recognition.start() - the start() method starts the speech recognition service listening to incoming audio. recognition.stop() - stops the speech recognition service from listening to audio, and attempts to return a result from the audio captured.
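
On the recognition instance from the previous step, that looks like:

recognition.start(); // start listening to incoming audio
recognition.stop(); // stop listening and attempt to return a result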

Slide 12

Then we’ll also need a couple of event handlers. Most of them simply listen for changes in the recognition status. onresult is a property that will run when the service returns a result. This is special because it’s executed every time the user speaks a word or several words in quick succession. onspeechend will run when speech has stopped being detected.
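
Wiring those up on the same recognition instance might look like this:

recognition.onresult = (event) => {
  // runs every time the service returns a result
};

recognition.onspeechend = () => {
  // runs when speech is no longer being detected, so stop listening
  recognition.stop();
};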

Slide 13

The last thing we need is to access the results, which we can do with the SpeechRecognitionResultList object, which is basically a collection of information about the result. This contains the first result object. The result contains an item called a SpeechRecognitionAlternative object, and the SpeechRecognitionAlternative contains what we really want - the transcript, which is what it thinks it heard from the mic. This sounds confusing, but the reason it exists is that a result can, if you decide to, contain multiple “alternative” interpretations of what you said. By default it’s only 1, but if you want to get fancy you can return multiple result options, each with a “confidence rating”. We’ll just focus on a single result today.

Slide 14

So first we get the list object, then we access the first result, then the first alternative, then the transcript.
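
Against the event passed to onresult, that chain is a single line:

// first result, first alternative, then the transcript
const transcript = event.results[0][0].transcript;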

Slide 15

So if we put that all together we get something like this
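
A minimal sketch of the combined code - the .output element here is just a hypothetical place to display what was heard:

const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
const output = document.querySelector('.output');

recognition.onresult = (event) => {
  const transcript = event.results[0][0].transcript;
  output.textContent = transcript;
};

recognition.onspeechend = () => {
  recognition.stop();
};

recognition.start();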

Slide 16

Once you have these basics, you can start combining them to do all sorts of things.

Slide 17

“ Audio & speech allow for more interactive and more accessible experiences. ”

Slide 18

Orientation Sensors

Slide 19

“ The accelerometer detects a change to the orientation of the device. ” Many of our web-enabled devices have a gyroscope and accelerometer that can determine a device’s orientation. This is commonly seen in mobile devices, where it’s used to automatically rotate the display so that it’s the correct way up.

Slide 20

DeviceOrientationEvent

So there are a few key things we’ll need. First, the DeviceOrientationEvent, which is what provides information on the physical orientation of the device that we are using. Then we can access the DeviceOrientationEvent properties Alpha, Beta and Gamma. These each represent different number values depending on the device’s orientation.

Slide 21

event.alpha;

Alpha represents the rotation on a flat surface, and can be a number from 0 to 360, kind of like a compass.

Slide 22

event.beta;

Beta represents tilting forward and backwards, with a range of -180 to 180.

Slide 23

event.gamma;

Gamma is tilting left or right, which represents a range from -90 to 90.

Slide 24

if (window.DeviceOrientationEvent) {
  window.addEventListener('deviceorientation', deviceOrientationHandler);
}

First we want to check to see if the DeviceOrientationEvent exists - this is just to make sure that we have access to the device information before we try to do anything. If we have access to the event we can add a deviceorientation event listener to the window and pass in a function that executes some code.

Slide 25

function deviceOrientationHandler(event) {
  const currentGamma = event.gamma;
  if (currentGamma < -50) {
    // do something
  }
}

Inside the function we can then access the alpha, gamma and beta properties I mentioned earlier in order to do something.
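
For example, a demo might feed gamma into a CSS custom property - this is just a sketch, and tiltHandler and --tilt are made-up names for whatever you want to drive in CSS (a transform, a variable font axis, and so on):

function tiltHandler(event) {
  // gamma ranges from -90 to 90, so normalise it to a value between 0 and 1
  const tilt = (event.gamma + 90) / 180;
  // expose it to CSS as a custom property
  document.body.style.setProperty('--tilt', tilt.toFixed(2));
}

window.addEventListener('deviceorientation', tiltHandler);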

Slide 26

Demo

Slide 27

Demo

Slide 28

Demo

Slide 29

“ We can move on from static, predetermined interactions and allow the user to shape the experience. ”

The use of orientation and motion is often seen in games or augmented reality. But it’s also used on the web for updating map information, like turn by turn navigation, or even subtle UI effects like parallax. While people are making more use of it for those subtle UI effects, what I think is really exciting is that it offers us the opportunity to add an additional dimension to our storytelling and interactive experiences. Right now a lot of what we create on the web is static, aside from basic things like hover states or predetermined animations. But with device orientation we can create an experience that takes information from our user’s environment and uses that to tell the story. To me that is really amazing, and it allows us to begin to create more immersive and engaging stories and lets the user be a part of that experience.

Slide 30

Light Sensors

Slide 31

“ A photodetector that is used to sense the amount of ambient light present. Ambient light sensors can be found in all sorts of devices like mobile phones, laptops and TVs. It senses the amount of ambient light in the room and is most often used to dim screen brightness. ”

Slide 32

const sensor = new AmbientLightSensor(); // creates a new AmbientLightSensor object

Much like the Device Orientation API, we need to create a new instance, which will give us access to the AmbientLightSensor.

Slide 33

sensor.onreading // called when a new reading is taken

We’ll also need the onreading event handler - this is not specifically part of the Ambient Light Sensor API; instead it’s part of the Sensor interface, which contains a bunch of properties, event handlers and methods that any sensor can use. The onreading EventHandler is called when a reading is taken. The reading frequency is decided by you - you can pass an optional value to the sensor’s constructor which will specify the number of readings per second.
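
For example, asking for roughly ten readings per second would look like this:

const sensor = new AmbientLightSensor({ frequency: 10 }); // about ten readings per second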

Slide 34

sensor.illuminance // returns the current light level (lux)

Finally, we can access illuminance, which is the only property attached to the Ambient Light Sensor.

Slide 35

if ('AmbientLightSensor' in window) {
  const sensor = new AmbientLightSensor();
  sensor.onreading = () => {
    if (sensor.illuminance < 20) {
      element.style.setProperty('--bg', 'black');
    } else {
      element.style.setProperty('--bg', 'blue');
    }
  };
  sensor.start();
}

Slide 36

Demo

Slide 37

Demo

Font: Tiny by Jack Halten Fahnestock

We can improve the legibility of our text with new features allowing us to query the characteristics of the user’s display or light level. This demo currently only works behind a flag - the support isn’t great, but the opportunity is there for us in the future.

Slide 38

“ Don’t be limited by what we can already do, the web is still young and there is so much for us to create. ”

Slide 39

Demo

You could create puzzles or other interactive experiences that require multiple inputs in order to complete them, like an escape room.

Slide 40

Demo

Slide 41

developer.mozilla.org/docs/Web/API
developer.mozilla.org/docs/Web/API/Sensor_APIs
developer.mozilla.org/docs/Web/API/SpeechRecognition
developer.mozilla.org/docs/Web/API/AmbientLightSensor
developer.mozilla.org/docs/Web/API/DeviceOrientationEvent