What Alan Turing can teach us about UX
MLUX is rolling off our lips quite a lot, it’s true. But with good reason: every other day we come across a new insight at the crossroads of ML and UX. And sometimes, it just takes an old anecdote to show the relevance of MLUX today.
The father of computer science. The man who cracked the Enigma code. The face on the £50 note. Let’s face it: your resume is nothing compared to Alan Turing’s (don’t worry, neither is ours). The man even lends his name to one of technology’s most famous tests: the Turing test. And even today, this decades-old test gives us insights into emerging technologies. Allow us to elaborate.
Turing test in today’s world
Quick memory refresh: the Turing test is a thought experiment that asks whether a machine can behave in a way that is indistinguishable from a human. If a judge chatting with it can’t reliably tell machine from person, the machine passes. But do you realise we conduct this test multiple times every single day?
Imagine opening a messaging app. You can chat with whoever is on the other side and ask them anything. How would you know whether you’re chatting with an AI or a real person? In a way, you’re performing a Turing test with every conversation. If you’re unsure, or if you mistake an AI for a human, that AI has passed the Turing test. But has someone, or something, ever actually passed it?
Eugene, the machine
Let’s take you back to 2014. That year, a panel of judges at a Turing test competition was introduced to Eugene Goostman, a 13-year-old Ukrainian boy they could chat with. Little did the judges know that Eugene was in reality an AI in disguise. Yet Eugene managed to fool a third of the people who chatted with him. Enough, by the competition’s rules, to pass the Turing test.
But it wasn’t the achievement that sparked our interest; it was the critique. Admittedly, the chatbot wasn’t very powerful and stumbled over even basic questions. But the thought of chatting with a 13-year-old with a limited grasp of English grammar lowered the judges’ expectations significantly and made them forgive a lot of errors. The persona primed them to be forgiving; it managed their expectations.
Context changes perception
Imagine if Eugene had been framed as a 50-year-old linguistics professor. His chances of passing the Turing test would have plummeted. But given the right context, the very same chatbot can pass as human. The experiment shows that when we understand and manage users’ expectations, they become far more forgiving of what an AI can and cannot do.
Every day, we interact with AI models, whether it’s a dating app serving up your best match, a health-tracking app giving you advice, or your banking app showing you the best loan. But the goal is no longer to trick users into believing a human is behind these recommendations. It’s far more important to give them the right context and the right cues to know when to trust, and when to doubt, the AI’s predictions.
The beauty of not having to pretend to be human is that we don’t have to stick to plain dialogue. We can use the best practices of UI and UX design to deliver a tailor-made message to the user: one that provides context and manages expectations in a way a human in a chat window never could. Here’s a rough sketch of what that can look like.
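As a minimal illustration (the prediction shape, thresholds and wording below are assumptions made up for this example, not any real product or API), a UI could pair a model’s output with expectation-setting copy instead of presenting it as a certain answer:

```typescript
// Hypothetical sketch: the Prediction shape, thresholds and copy are
// assumptions made up for illustration, not a real product API.
interface Prediction {
  label: string;       // e.g. "Personal loan at 4.2%"
  confidence: number;  // the model's own score, between 0 and 1
}

// Pair the raw prediction with copy that tells the user how much to trust it,
// instead of presenting it as a human-certified answer.
function describePrediction(p: Prediction): string {
  if (p.confidence >= 0.9) {
    return `${p.label}. A strong match based on your recent activity.`;
  }
  if (p.confidence >= 0.6) {
    return `${p.label}. Probably a good fit, but worth double-checking the details.`;
  }
  return `${p.label}. A rough suggestion; we don't have enough data to be sure yet.`;
}

// Example: a mid-confidence recommendation gets a hedge, not a hard sell.
console.log(describePrediction({ label: "Personal loan at 4.2%", confidence: 0.72 }));
```

The exact thresholds don’t matter. What matters is that the interface itself sets expectations, exactly the job Eugene’s teenage persona did for the judges.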
Don’t say ML. Say MLUX.
That’s why you can’t ignore UX when working with AI or ML solutions. Put your UX design to good use by providing context for your users. Tell them what they can expect of your AI model, explain what it is capable of, and aim to complement human behaviour instead of mimicking it. So whenever you say ML, make sure to add UX into the mix.
Make the Turing test your minimum threshold and dare to go beyond.