Bots: What is behind the curtain?

Scott Smith is Media Future Week’s favorite trend watcher. On Wednesday morning, April 20th, he took us to a future where bots are part of our daily lives. What are bots? What do they do? And which topics matter when we look at how bots are being used?

The so-called ‘Turing test’ is more than 50 years old, yet it is still used today to check whether a bot comes across as believable. You chat with a robot, but the question is: do you realize that it is a robot, or is the robot convincing as a human being? The test was originally devised as a guessing game: could you work out the identity of a person hidden behind a curtain (e.g. male or female) from their written answers alone, without hearing their voice?
Bots are not new. Experiments with the chatbot Eliza were already being carried out in the 1960s. Later came the chatbot SmarterChild (quite a scary name when you think about it) and, in more recent years, the Asian chatbot XiaoIce. Today’s bots can learn from previous conversations and therefore give the impression that they really know you. The technology behind bots has become cheaper and less complex, so bots are now commonplace.

What do bots do?
There are several different types of bots. There is, for example, a bot that combines different headlines on Twitter in interesting (and sometimes weird) ways. There are bots that game systems, like the bots used by artists on Spotify to make sure their music is played more frequently. Bots can also do illegal things, like the ‘Random Darknet Shopper’, a bot that randomly orders things on the Internet. Its creators had the strangest things delivered to them, including illegal drugs! Bots can also learn a language, understand context and adapt to it, scale up, develop, multiply and hide.

Activist bots
Bots are also used to encourage societal and/or political activity. There are bots that try to influence the conversation on Twitter by tweeting about certain candidates or issues, and it is not immediately obvious that bots are doing the tweeting. There is also a bot that asks people to correct their tweets if they use a particular word or combination of words (for example, when people tweet about immigrants, they are asked to leave out the word ‘illegal’).
Then there are bots that ask questions on Twitter to try to initiate a discussion. Once the discussion is up and running, the work is continued by humans. Bots can also be active behind the scenes, monitoring and tracking tweets and/or changes to websites; these are a kind of ‘transparency bot’. You also have activist bots that recruit humans to their cause on Twitter, like the ones used by ISIS. The group ‘Anonymous’ used bots to undermine the impact of the ISIS recruitment bots. Activist bots against other activist bots: are you still following?

Bots in media
In the media, bots are used to write simple articles, such as financial or sports news flashes. Increasingly, bots are also used to split news items into smaller pieces and deliver them to the user through a chat service: after each news item you can decide either to read more about the topic or to move on to the next one. The news is presented as instant (chat) messages. It looks like the work of a bot, but human editors are often still involved in presenting the news this way. So you are probably wondering who is behind the curtain – are you chatting with a human or a bot?

Scott predicts this is also going to happen with Facebook Messenger. Facebook recently announced that Messenger will be opened up to brands and news organizations. It will become a place where chatbots can send you messages: a mix of news, editorial and commercial content. But will you, as a user, still be able to recognize which type of message you are looking at?

Developments are rapid, and Scott sees many companies starting to use bots without considering the user experience. How would users like to be addressed? According to Scott, Silicon Valley needs to involve poets, writers and screenplay editors in shaping the conversations bots have. In that respect, bots can learn a lot from theater and poetry.

Where are the limits?
Do you really want to know that it is a robot addressing you? Do you need to know? Where is the limit? What are the ethical boundaries and when do you cross them? Would it matter if the ‘Man Booker Prize’ was awarded to a book written by a robot? Does it matter whether it is a human or a bot hiding behind the curtain?

And if you take a look at the messages produced by bots, how much is code and how much is human? And who is responsible for the output? If a bot learns from previous experience and thereby turns racist, who is at fault? Should bots be allowed to make decisions on their own or should there always be human supervision?

Or are we going to adapt to the behavior of robots? After all, you still do not know what or who is behind the curtain of the ‘Turing test’.

Scott Smith @ MFW16 from iMMovator on Vimeo.