Live and (Machine) Learn

A presentation at Women Who Code Tokyo in Tokyo, Japan by Sarah Drasner

The life we live online increasingly informs the way we live offline as well. Businesses live and die by search algorithms and SEO, humans are sorted by government systems, and we make large, life-governing decisions based on what the web shows us: buying a home, where to live, what to eat, and who we're in regular contact with. The first shift we as web developers saw was people living and learning on the web more and more, which excited us. But as we start to automate those tasks through machine learning algorithms, a lot of us feel trepidation. We know these systems have flaws, so what are the political and social consequences?

In this talk we'll explore this paradigm shift and some of its dangers, but we'll also talk about the good impacts technology can bring. Helping people who need it, automating tasks for humans with disabilities, communication for emergency services: the possibilities for positive influence are endless. We'll explore just some of the tools that are out there and how, with a little creativity, we can use these technologies for good. We as developers have a voice and a chance to make a difference.

Code

The following code examples from the presentation can be tried out live.

  • Dynamically Generated Alt Text with Azure's Computer Vision API

    I kept hearing about machine learning being used for evil and wanted to use it for something good. Social media posts typically don't offer a way to enter alt text, and the only users I see who reliably remember to add descriptions to their posts are accessibility experts or blind people. Hopefully this allows good alt text to be a bit more ubiquitous. A minimal sketch of the underlying API call appears after this list. You can find more information on how Azure's Computer Vision API works, as well as how to use it in your own projects, here: https://aka.ms/Uzrshc

  • Azure's Emotion API and Emoji

  • Three.js, Vue, and LUIS

    I have wanted to be able to update a three.js visualization on the fly with Vue for a little while now. This app started with the base concepts outlined in this repo and refactors/extends them so the visualization can be manipulated by your emotion, based on speech. You can update the visualization (through state in Vuex) by using LUIS to analyze your speech.

    LUIS is a machine learning-based service for building natural language understanding through custom models that can continuously improve, and it can be used in apps, bots, and even IoT devices. Here we're guiding our visualization, first by telling it our mood, and then we're able to control it with our voice to update it on the fly, without the use of our hands. The purpose of this demo is to create a biofeedback visualization for those who are trying to guide themselves through healing. A rough sketch of the LUIS call appears after this list.

  • Firefighter Demo

    Shows how to improve offline functionality for an app that firefighters use; a minimal service worker sketch appears after this list. Live demo at https://sdras.github.io/firefighter-demo/
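
The alt text demo centers on the Computer Vision API's "describe" operation: you send an image URL and get back ranked captions. Below is a minimal sketch of that kind of call, not the demo's actual code; the region in the endpoint, the API version, and the generateAltText helper name are assumptions, and you would substitute the key and endpoint from your own Azure resource.

    // Send an image URL to Azure's Computer Vision "describe" operation and
    // return the top-ranked caption, which can be used as alt text.
    const endpoint = 'https://westus.api.cognitive.microsoft.com/vision/v2.0/describe';

    async function generateAltText(imageUrl, subscriptionKey) {
      const response = await fetch(endpoint, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Ocp-Apim-Subscription-Key': subscriptionKey
        },
        body: JSON.stringify({ url: imageUrl })
      });
      const data = await response.json();
      // The response ranks candidate captions; the first is the most confident
      return data.description.captions[0].text;
    }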

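The LUIS demo follows a similar pattern: transcribe what the user says (for example with the browser's Web Speech API), send the utterance to a LUIS app, and commit the top-scoring intent to the Vuex store the three.js scene is watching. A rough sketch, with a hypothetical app id, mutation name, and helper rather than the demo's real ones:

    // Ask LUIS which intent (mood) an utterance expresses, then push it into Vuex.
    const luisEndpoint =
      'https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR_APP_ID';

    async function applyMood(utterance, store, subscriptionKey) {
      const url = `${luisEndpoint}?subscription-key=${subscriptionKey}` +
        `&q=${encodeURIComponent(utterance)}`;
      const result = await (await fetch(url)).json();

      // LUIS scores the utterance against the custom model's intents
      const mood = result.topScoringIntent.intent;

      // Committing to Vuex lets the three.js visualization react to the new state
      store.commit('setMood', mood);
    }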
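
Offline functionality for a web app like the firefighter demo typically comes from a service worker that pre-caches the application shell and answers requests from the cache when the network is unavailable. A minimal cache-first sketch, with a hypothetical cache name and asset list rather than the live demo's own implementation:

    // sw.js: cache the app shell on install, then serve requests cache-first
    const CACHE = 'firefighter-static-v1';
    const ASSETS = ['/', '/index.html', '/app.js', '/app.css'];

    self.addEventListener('install', event => {
      // Pre-cache the shell so the app can boot with no network at all
      event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(ASSETS)));
    });

    self.addEventListener('fetch', event => {
      // Answer from the cache first, falling back to the network for anything new
      event.respondWith(
        caches.match(event.request).then(cached => cached || fetch(event.request))
      );
    });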