A presentation at Tech Wednesday in Birmingham, UK by Stuart Langridge
The UX of Text Stuart Langridge @sil kryogenix.org
17,000 years ago, the world's first modern French person stood in a cave and tried to find a way to tell us, three hundred generations into the future, what was on their mind.
The survival of the tribe
The majesty of the animals that shared their world
The glory of the hunt
The stars in the sky
And they drew it, as best as they could. These are the cave paintings at Lascaux in southern France.
The Great Bull is 17 feet long; it's the largest cave art animal anywhere in the world.
The Crossed Bison tells us something about the artist; they understood perspective and the idea of representing three dimensions in two-dimensional painting.
Four thousand years ago, an unknown author first wrote down a story about the king of Uruk, a city in what's now Iraq, and it opens like this:
I will proclaim to the world the deeds of Gilgamesh. This was the man to whom all things were known; this was the king who knew the countries of the world. He was wise, he saw mysteries and knew secret things, he brought us a tale of the days before the flood. He went on a long journey, was weary, worn-out with labour, returning he rested, he engraved on a stone the whole story.
That's the stone: the first tablet of the Epic of Gilgamesh, the world's oldest surviving work of literature.
Gilgamesh was two-thirds god and one-third man. He fought and defeated the monster Humbaba. He sought eternal life.
He is famous enough to receive the ultimate accolade: four hundred years into our own future, Jean-Luc Picard tells his story to Dathon at El-Adrel.
The French chap? He liked animals. Don't know a lot else. What's the difference?
Words
words are good
We talk to computers by pointing and tapping and clicking. Like a caveman. We talk to one another with words; pure, unvarnished text.
There will be some of you now thinking, oh god, what he means by using text is this: the command line. That is not what I mean.
Everyone is familiar with the "computer says no" thing. The issue is that to talk to computers, we require that people speak computerese, and that doesn't work. We've avoided that by not having people converse at all; press a little picture instead. But there's a better way.
Importantly, we don't want to try to make a computer be a human either. Computers are really bad at pretending to be human. The Turing test is hard, and nobody's even close to passing it. And you don't need to pass it. What's required is that someone, interacting with your app through language, knows that it's a computer but doesn't have to talk to it like a computer.
A little story. When I got my first Amazon Echo, I did all the usual things with it: Alexa, set a countdown timer for 20 minutes; Alexa, what's in the news; Alexa, play Roads by Portishead. One day, I got a phone call while it was playing music. And, without thinking, I snapped at it "Alexa, shut up!" And it did. This is how you make your electronics come alive. I actually felt guilty for being mean to it.
What we need to do is help people have a conversation with your app. To lead them down a path that lets them get what they want. These are all the same things you're doing anyway: plotting a user journey, doing interaction design, understanding the user experience. It doesn't have to be about graphics and whether your buttons have rounded corners: designing the user experience applies to everything.
But now it's about language. About how to ask questions in a way that helps people to do what they want. We don't need artists: we need poets.
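To make that a little more concrete, here's a minimal sketch, in Python, of what "leading people down a path" can look like underneath. Everything in it is hypothetical, a made-up table-booking flow; the point is just that each step asks one plain question and picks the next step from what the person actually said.

# A hypothetical guided conversation: each step asks one question and
# routes to the next step based on keywords in the person's reply.
STEPS = {
    "start": {
        "ask": "Hi! Do you want to book a table, or check an existing booking?",
        "next": {"book": "party_size", "check": "reference"},
    },
    "party_size": {
        "ask": "How many people is the table for?",
        "next": {"": "time"},  # the empty keyword matches anything, so any answer moves us on
    },
    "time": {
        "ask": "And what time would you like?",
        "next": {"": "done"},
    },
    "reference": {
        "ask": "What's your booking reference?",
        "next": {"": "done"},
    },
    "done": {
        "ask": "Lovely, that's everything I need. Thanks!",
        "next": {},
    },
}

def reply(state, user_text):
    """Given where we are and what was said, return (next_state, next_question)."""
    text = user_text.strip().lower()
    next_state = state  # if nothing matches, stay put and ask again
    for keyword, target in STEPS[state]["next"].items():
        if keyword in text:  # "" is in every string, so it acts as a default branch
            next_state = target
            break
    return next_state, STEPS[next_state]["ask"]

state = "start"
print(STEPS[state]["ask"])
state, question = reply(state, "I'd like to book a table please")
print(question)  # -> "How many people is the table for?"

Real platforms give you much better intent matching than keyword spotting, but the shape is the same: a path, one clear question at each step, and ordinary language in between.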
So where does this language, this conversation, take place? I've already mentioned one place: Alexa. But there are actually two kinds of place. There are home assistants: Alexa, Google Assistant and Google Home, maybe Siri.
And there are messaging apps: Facebook Messenger, WhatsApp, Slack. Two completely different kinds of platform that everybody's already using, so why shouldn't they be using them to work with you? That's a huge userbase.
And you don't have to worry so much about installation. Nobody has to go off to the app store and install your thing, or remember your URL. Interacting with your service is just "Alexa, ask Your Thing about Some Thing". Or sending you a message, right from the messaging app they're already in for most of the day.
The beauty here is that from a development point of view these are actually very similar services. If you build a textual way of interacting with your service, you can then plug it in everywhere. Once it exists, you can put it on Alexa, and then putting it on Google Home or HomePod is doable too, and putting it on Facebook Messenger and WhatsApp is just as doable, and it's all the same interface. Your work to help people interact works everywhere. Because this isn't really about the technology. It's about the user experience. And then you've got the reach to get everywhere. Everyone's on at least one chat app, and maybe on twenty. Voice assistants will be in 40% of homes this year.
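As a rough illustration of that "build it once, plug it in everywhere" idea, here's a sketch along the same lines. The names and the request/response shapes are entirely made up, simplified stand-ins rather than any real platform's API; the point is that the product logic is one plain text-in, text-out function, and each platform only needs a thin adapter around it.

# The one textual interface: all of the product logic lives in here.
def handle_message(user_text):
    text = user_text.lower()
    if "opening hours" in text or "open" in text:
        return "We're open 9am to 5pm, Monday to Friday."
    if "parking ticket" in text:
        return "Okay. Tell me the date on the ticket and I'll help you appeal it."
    return "Sorry, I didn't catch that. Ask me about opening hours, or about a parking ticket."

# Thin, platform-shaped wrappers. These payloads are simplified
# stand-ins, not the real Alexa or Messenger formats.
def voice_assistant_adapter(spoken_request):
    return {"speech": handle_message(spoken_request["query"])}

def messaging_app_adapter(incoming_event):
    return {
        "to": incoming_event["sender"],
        "text": handle_message(incoming_event["text"]),
    }

print(voice_assistant_adapter({"query": "what are your opening hours"}))
print(messaging_app_adapter({"sender": "user-123", "text": "I got a parking ticket"}))

The part that matters, the questions, the wording, the path, lives in the shared function; the adapters are trivial, which is why reaching all of those platforms at once is realistic.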
An example. This isn't just theoretical. One chap in London set up a chatbot lawyer to talk people through the process of appealing a parking ticket. And with the bot's help, 160,000 parking tickets were overturned. A hundred and sixty THOUSAND. That's what you get if you apply an understandable user experience to something complex like the legal system.
And because this is something new, just by being involved, you're ahead. Nobody's going to write a headline saying "new startup has a website". But "Birmingham firm an early adopter of new chat technology"? Maybe, yeah. You're getting to people where they are.
Words are good. Virginia Woolf said that language is wine upon the lips. I'm not sure if it's also pizza and beer upon the lips, but we can find out in half an hour.
I’m happy to talk more about this to any of you. About bots, and how they shouldn't pretend to be humans. About how user experience is more than just user interface. About language, and how a word can be worth a thousand pictures. About le mot juste. About where we go next.
Any questions?