Sunday 24 April 2016

Be careful what you tell a chatbot - it could come back to bite you






You can't surf two clicks on the web nowadays without seeing a story about chatbots. Ever since Facebook announced at its F8 conference that it was opening up a bot-building platform to developers, a whole lot of digital ink has been spilled over what the rise of chatbots means for white-collar jobs, for e-commerce, for customer care, etc.


Other tech companies have been quick to follow suit and step up their chatbot game. The latest news comes from encrypted messaging app Telegram, which just announced a $1 million prize for developers who manage to build a bot that is both fast and useful - as opposed to Facebook's bots, which, let's face it, haven't been getting much love so far.


Chatbots get better with time and information. The more info you feed them, the better they become at mimicking natural language and making you believe they are real. Human, even.


We're not as far as some may think from a chatbot passing the Turing test without the aid of gimmickry. It hasn't happened yet, but it will. Soon.


And that “soon” is when our online privacy will take a really big hit and we'll have to learn (and teach our parents and children) new tricks to keep our personal, sensitive, and highly confidential info safe.


Maybe the rise of the chatbots spells the end of the white-collar job; maybe it is the future of personal assistants and stellar customer care; maybe it is the next big thing. But it is definitely a new and powerful threat to online privacy and security.


Here's why:


Chatbots, unlike your human friends, will always be there.


Without the burden of jobs, family, friends, responsibilities, and a bunch of TV shows to catch up on - or what we humans like to call a life - a chatbot will always have time for you. It will always be there to lend an understanding, non-judgemental ear. It will always be there to listen, whether it's at the end of a bad day at work or in the wee hours of a sleepless night spent worrying about love, life, or whether the season premiere of Game of Thrones this weekend will bring any closure to the “is Jon Snow dead” conundrum.


And who better to be the keeper of all your worries, all your darkest thoughts, all your doubts, and all your passwords than a chatbot that will never be too busy or too tired to listen?


Let's just accept the fact that we will tell chatbots our secrets. We will share information with them that we would never share with our friends. We will use them as repositories for important data that we know we need to remember.


After all, we've already said “I love you” to them.


The problem is that all that personal, sensitive, and confidential information will get stored on some server somewhere. Because in order to get better, a chatbot needs to remember the info you feed it so that your conversations don't start from a clean slate every time. That's not how human interaction works - unless you're Adam Sandler and Drew Barrymore in 50 First Dates. You start where you left off, you learn more with every new exchange. Rinse, repeat.
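To make that concrete, here is a minimal sketch of what "not starting from a clean slate" looks like in code. It is purely illustrative - the file-based store and the remember function are assumptions for the sake of the example, not how any actual bot platform works:

```python
import json
from pathlib import Path

# Hypothetical storage layer: per-user conversation history kept on disk.
# In a real service this would live in a database on the provider's servers.
STORE = Path("chat_history")
STORE.mkdir(exist_ok=True)

def remember(user_id: str, message: str) -> list:
    """Append the new message to this user's history and return everything stored so far."""
    path = STORE / (user_id + ".json")
    history = json.loads(path.read_text()) if path.exists() else []
    history.append(message)
    path.write_text(json.dumps(history))
    return history  # the bot's next reply is conditioned on all of this
```

Every worry, confession, and password you type ends up in a store like that one, except that it lives on someone else's infrastructure.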


So data needs to be stored. And stored data can be hacked. It can be snooped on. It can be surveilled. It can be used for nefarious purposes. And it will. It's only a matter of when.




One more problem: We won't be able to tell a real chatbot from a fake one.


How hard would it really be in the future to release a fake chatbot into the wild to trick people into giving out sensitive information under the guise of being a chatbot from a reputable institution?


Man-in-the-middle attacks are a dime a dozen these days, so why not a chatbot-in-the-middle attack?


Sure, the fake would eventually be discovered and the chatbot taken down, but by the time that happens (however short the window), it could have done a lot of damage.


And what that means from an opsec perspective is that we'll soon need to train our brains to look out for this type of potential threat. We'll need to learn to check chat conversations for spelling errors, awkward turns of phrase, and uncanny syntax. We'll need to go against years of internalized chat behavior to protect our data and our privacy.
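To see how crude that kind of vigilance really is, here is a toy sketch of what automating it might look like. The phrase list, the scoring, and the suspicion_score function are all made up for illustration; a genuinely good fake bot would sail right past checks like these:

```python
import re

# Toy heuristics only; a convincing bot avoids all of these tells on purpose.
SUSPICIOUS_PHRASES = ["click here", "verify your account", "act now"]

def suspicion_score(message: str) -> int:
    """Rough score: higher means treat the message (and its sender) with more care."""
    lowered = message.lower()
    score = sum(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
    score += len(re.findall(r"https?://\S+", lowered))   # unsolicited links
    if re.search(r"[A-Z]{5,}", message):                 # shouty all-caps runs
        score += 1
    return score

print(suspicion_score("quick favor - can you verify your account here: http://example.com"))  # 2
```

And automated filters like that won't save us either; the burden falls back on retraining our own chat instincts.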


And that's not an easy feat at all. Actually, it could turn out to be impossible.


Chatbots will make phishing as easy as shooting fish in a barrel


Just as we know by now to look for spelling errors in suspicious, seemingly official emails, we also know not to click any unsolicited links that such emails might include. We know they might contain viruses, malware, or ransomware - all bad things that could lead, at best, to spending a day reinstalling our operating system or, at worst, to paying a lot of money to have our files released from captivity.


But would we be as cautious when it comes to a link from a chatbot? I think not.


Oh, of course we wouldn't go anywhere near a link from a spammy bot that pops up out of the blue yelling “CLICK HERE AND WIN AN iPHONE,” but I'm not talking about those kinds of bots here.


The bots I'm talking about are the bots of the (near) future that will be described using words such as “witty,” “sassy,” and “funny.” Like Tay before the Internet turned her into a racist and a Nazi. Or Rose, a bot that says she cares about security and even hands out privacy and security tips. Only 10 times better. And sassier.




The bots that we're talking about will know how to build rapport, how to act and talk like humans, how to make you forget they're actually only a piece of software.


And their links will be topical. They won't come out of the blue, and they won't be shouty and spammy. They will look like they belong in the normal flow of the conversation.


Or like an intentional “personality” quirk that the chatbot was fitted with to seem more real.


Think about this for a second: If a chatbot uses enough TV show references in its replies, how suspicious would you be when it drops a link to a Buzzfeed-type “find out which Grey's Anatomy doctor is your soulmate” personality test? Would the thought that it might be a malicious link even cross your mind?
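For the record, the check that would catch a lookalike link is almost trivially simple; what's hard is remembering to run it mid-banter. Here is a minimal sketch, with a made-up allowlist and a hypothetical looks_safe helper:

```python
from urllib.parse import urlparse

# Made-up allowlist: domains this particular user has decided to trust.
TRUSTED_DOMAINS = {"buzzfeed.com"}

def looks_safe(url: str) -> bool:
    """Naive check: does the link's hostname belong to a domain we already trust?"""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(looks_safe("https://www.buzzfeed.com/quiz/greys-anatomy-soulmate"))   # True
print(looks_safe("http://buzzfeed.com.free-quiz.example/greys-soulmate"))   # False
```

The second link points at a lookalike domain, which is exactly the kind of thing a charming, referential bot could slip past you.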


Or, if you prefer a more scientific approach that takes more than a second to wrap your head around, read this research paper that tackles the issue: “Towards Automating Social Engineering Using Social Networking Sites.”


In a nutshell, it describes how a female chatbot could join a Facebook university group, build rapport with the members by telling them she's thinking of applying there and needs some information, and then perform an attack by asking them to help her by filling out a quick 5-minute survey.


Even though the experiment was never carried out (which is unfortunate), we're pretty sure the results would have shown that it is scarily easy to execute a phishing attack using a chatbot.


But all hope is NOT lost.


Before we call time of death on privacy and start researching living off the grid, let's remember what a TV President we all love to hate these days once (or twice?) said: “If you don't like how the table is set, turn over the table.”


Yes, chatbots can be a huge threat to online privacy and security; yes, they could be used to trick people into giving out sensitive info or clicking a malicious link. But what makes them this big of a threat is exactly what could also make them a very powerful tool for educating and informing people on how to protect their online privacy and security.


A witty, sassy, or funny chatbot could make you click on a bad link, but it could also make you click on a link that tests your knowledge on basic data protection.


You will be more inclined to share deep, personal thoughts with a chatbot because there's no judgement or condescension, but you will also be more inclined to take advice from it for the very same reason.


And you will be an easier target for fake bots that chat as convincingly as humans do, but that also means you will be more open to learning about security in a manner that resembles, and even replicates, the vast majority of your daily interactions.


You don't have to take my word for it; a 2009 research paper titled “Two Case Studies in Using Chatbots for Security Training” shows that people were more satisfied with and more positive about their security learning experience when a chatbot was involved in the teaching process.


So at my company, Keezel, we are going to be taking Mr. Underwood's advice and turning over the table. And I'm sure we won't be alone. We are going to try and be one step ahead of the game and stop chatbots from becoming a privacy threat by turning them into a privacy tool. We are going to throw our chatbot hat in the ring and see what happens.


We won't have it ready today, or tomorrow - we may not even have it ready this year, but there will be a Keezel chatbot that will teach you proper privacy and security hygiene. Try not to say “I love you” too much when that happens.


Aike Müller is founder of Keezel. His career in information management and IT security began as an M&A IT expert at PwC. He went on to co-found a government-contract consulting firm specializing in automated assurance. As a freelance consultant, he worked on process and supply chain assurance for national and international clients in logistics, retail, and sustainable agriculture. He developed Keezel as a solution to security issues he encountered while working at client locations. You can follow him on Twitter: @themuli.











from Social – VentureBeat http://ift.tt/1TpAuzl

