As adoption of chatbots and conversational interfaces continues to grow, how will businesses keep their brand safe and their customers’ data safer?

From deliberate infiltration of systems to bugs that cause accidental data leakage, the exposure or loss of personal data now occupies a large part of almost every self-respecting CIO’s mind, especially since the EU has just slapped its first defendant with a GDPR fine.

Over the last 10-15 years, through the rise of the “interactive” web and social media, many companies have learned the hard way about the importance of techniques like hashing passwords stored in databases and sanitising user input before it is used in database queries. However, as the use of chatbots continues to grow, conversational systems are almost certain to become an attractive attack vector for discerning hackers.

In this article I’m going to talk about some different types of chatbot attacks that we might start to see and what could be done to prevent them.

Man-in-the-Middle Attacks

In a man-in-the-middle attack, the adversary intercepts traffic between the many components that make up a chatbot. Baddies might inject code into a library that your beautiful UX depends on, logging everything your user says, or, if you aren’t using HTTPS, they might not need to change any code at all.

The chat interface on your device communicates (hopefully securely over HTTPS) with a server that the developer operates, which may in turn communicate with an external NLU provider. If someone were able to mount a man-in-the-middle attack between any of these components, it could be a big problem.

These sorts of attacks are clearly a serious problem for any chatbot that will be talking to users about personal information. Even if your chatbot is designed to answer frequently asked questions without any specific link to personal accounts, vulnerability to this attack could give away personal information that the user has inadvertently shared (from “Do you have kids’ meals?” and “Do you deliver to Example Street?” we can infer that the user has children and lives on Example Street).

Mitigation

Developers of chatbots should make sure that bots are using the latest security standards: at a minimum, all communication should be encrypted at the transport layer (e.g. HTTPS), but you might also consider encrypting the messages themselves before they are transmitted. If you rely on external open source libraries, run regular security checks on your codebase to make sure those libraries can be trusted. If you are deploying a bot in a commercial context, independent security/penetration testing of chatbots should be a key part of your quality assurance process.
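To make the second point concrete, here’s a minimal sketch of encrypting message bodies at the application layer before they’re sent, using the Python cryptography package’s Fernet recipe. The key handling is deliberately simplified and the function names are my own; a real deployment would provision keys per user or session from a proper key store.

```python
# Minimal sketch: encrypting chat messages at the application layer before
# they travel over an (already HTTPS-protected) connection.
# Assumes the `cryptography` package is installed. Key management is
# deliberately simplified -- in practice the key would be provisioned per
# user/session from a key store, never generated ad hoc like this.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in reality: fetched from your key store
cipher = Fernet(key)

def encrypt_message(plaintext: str) -> bytes:
    """Encrypt a chat utterance before it is transmitted."""
    return cipher.encrypt(plaintext.encode("utf-8"))

def decrypt_message(token: bytes) -> str:
    """Decrypt a received chat utterance on the server side."""
    return cipher.decrypt(token).decode("utf-8")

# Usage: only the ciphertext ever crosses the wire.
token = encrypt_message("Do you deliver to Example Street?")
print(decrypt_message(token))
```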

Exploitation of Third Party Services

The chatbot has often been seen as the “silver bullet” for quickly acquiring usage. No longer do you need to build an app that users have to install on their devices; simply integrate with the platforms that people already use, e.g. Facebook, Google Home, Alexa and others. However, it’s important to remember the security consequences of this approach, especially in use cases that involve sensitive personal information, where the stakes of a data leak are high.

Facebook, Alexa, WhatsApp, Telegram, Google Home and other platforms use this pattern: your device communicates with the chat service you are engaging with, which in turn sends messages back to your service via a “webhook”.

In this scenario, your bot’s security is heavily reliant on the security of the messaging platform you deploy onto. For the most part these platforms have sensible security procedures, but it’s important to remember that large companies and platforms are desirable targets for hackers because of the huge personal-data payoff from a successful breach.

Of course it’s not just the “Messenger Platform” part of this system that’s of interest to attackers. The “External NLU provider” in our diagram above could also be attacked and user utterances stolen from it. Remember that any external service, however useful, should be regarded with healthy scepticism where security is concerned.

Mitigation

If you are building chatbots tied to third party platforms, then you can try to mitigate risks by coding defensively and sharing information sparingly. For example, never have your chatbot ask the user for things like passwords or credit card numbers through one of these portals. Instead, use your companion app or website to gather this information securely and tie the user’s Messenger ID to their user account within your infrastructure.
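As a rough illustration of that hand-off (the function names, URLs and in-memory stores here are purely illustrative, not a real platform API), the bot below never collects card details itself; it only links the platform’s sender ID to an account on your own infrastructure and points the user at a secure web flow.

```python
# Sketch: keep sensitive data collection out of the chat channel.
# The bot stores only a link between the platform's sender ID and your own
# user account, and hands anything sensitive off to a secure web flow.
import secrets

messenger_id_to_user: dict[str, str] = {}   # platform sender ID -> internal user ID
pending_link_tokens: dict[str, str] = {}    # one-time token -> platform sender ID

def handle_payment_request(sender_id: str) -> str:
    if sender_id not in messenger_id_to_user:
        # Generate a one-time token the user redeems on your website after
        # logging in; that ties sender_id to their account server-side.
        token = secrets.token_urlsafe(16)
        pending_link_tokens[token] = sender_id
        return f"Please link your account first: https://example.com/link?token={token}"
    # Even for linked accounts, card details are entered on the website,
    # never typed into the chat window.
    return "You can update your card securely here: https://example.com/account/payment"
```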

When it comes to using external NLU, a good practice is to anonymise input utterances, removing things like names, addresses and phone numbers, before passing them on to the service. You might also consider an on-premise NLU solution so that chat utterances never have to leave your secure environment once they’ve been received.
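A very rough sketch of that anonymisation step might look like the following; the regexes and placeholders are illustrative only, and a production system would use a proper PII-detection or NER component rather than hand-rolled patterns.

```python
# Rough sketch of redacting obvious PII from an utterance before it is
# forwarded to an external NLU service. Illustrative regexes only.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "<PHONE>"),
    (re.compile(r"\b\d{1,4}\s+\w+\s+(Street|Road|Avenue|Lane)\b", re.I), "<ADDRESS>"),
]

def anonymise(utterance: str) -> str:
    """Replace recognisable PII with placeholders before the text leaves your environment."""
    for pattern, placeholder in REDACTIONS:
        utterance = pattern.sub(placeholder, utterance)
    return utterance

# Usage: only the redacted text is sent to the external NLU provider.
print(anonymise("Do you deliver to 12 Example Street? Call me on 07700 900123"))
# -> "Do you deliver to <ADDRESS>? Call me on <PHONE>"
```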

Webhook Exploits

When your bot relies on an external messaging platform, as in the scenario above, the webhook can be another point of weakness. If hackers can find your webhook’s URL, they can probe it and send it messages that look like they’re from the messaging platform.

Mitigation

Make sure that your webhook requires authentication, and follow the guidelines of whichever messaging platform you are using to verify all incoming messages. Never process messages that fail these checks.
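As a sketch of what that verification might look like, here’s a small Flask webhook that checks an HMAC signature on each request, modelled on the X-Hub-Signature-256 scheme that platforms such as Facebook Messenger document. The exact header name and algorithm vary by platform, so treat this as an outline rather than a drop-in implementation.

```python
# Sketch of validating incoming webhook calls before processing them.
# APP_SECRET is assumed to come from configuration, never hard-coded.
import hashlib
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
APP_SECRET = os.environ["APP_SECRET"].encode("utf-8")

def signature_is_valid(raw_body: bytes, header_value: str) -> bool:
    expected = "sha256=" + hmac.new(APP_SECRET, raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid leaking information via timing.
    return hmac.compare_digest(expected, header_value or "")

@app.route("/webhook", methods=["POST"])
def webhook():
    signature = request.headers.get("X-Hub-Signature-256", "")
    if not signature_is_valid(request.get_data(), signature):
        abort(403)  # never process messages that fail the check
    # ... safe to parse and handle the message here ...
    return "ok", 200
```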

Unprotected Device Attacks

Have you ever left your computer unlocked and gone to the water cooler? How about handing your mobile phone to a friend in order to make a call or look at a funny meme? Most people have done this at least once and if you haven’t, well done!

You should be prepared for opportunistic attackers posing as legitimate users of your chatbot. They might ask probing questions to extract the real user’s information: “What delivery address do you have for me again?” or “What credit card am I using?”

Mitigation

Remember to code and design defensively. Something like “I’m sorry, I don’t know that, but you can find out by logging in to the secure preferences page [URL Here]” is a relatively good response.

Of course, there’s not much you can do if the user leaves their passwords written on a sticky note next to the terminal or leaves their password manager app unlocked, but by requiring users to log in to access sensitive personal information, we’ve taken some sensible precautions.
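One simple way to bake that defensiveness into the bot’s logic is to treat intents that touch personal data as a special class that never echoes stored values back into the chat; the intent names and URL below are made up for illustration.

```python
# Sketch: never echo stored personal data into the chat channel. Intents
# that touch sensitive fields get a pointer to an authenticated page instead.
SENSITIVE_INTENTS = {"get_delivery_address", "get_payment_method", "get_account_details"}

def respond(intent: str, user_id: str) -> str:
    if intent in SENSITIVE_INTENTS:
        return ("I'm sorry, I can't share that here, but you can view it after "
                "logging in to your account page: https://example.com/account")
    return handle_normal_intent(intent, user_id)  # non-sensitive flow continues as usual

def handle_normal_intent(intent: str, user_id: str) -> str:
    return "Happy to help with that!"
```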

Brand Poisoning Attacks

Microsoft Tay is one of the most famous examples of a brand poisoning attack.

User data and proprietary information are clearly a high priority, but there are other risks to your chatbot that you should also be mindful of. An adversary could manipulate the way your chatbot responds in order to screenshot it saying something controversial and start a defamation campaign, poisoning your brand and putting you in a sticky situation.

In March 2016, Microsoft brought online an experimental chatbot called “Tay” which was designed to learn to respond in new ways by interacting with its users over time. From a technical perspective, Tay was an incredible piece of kit, combining state-of-the-art natural language processing with online machine learning. However, the developers didn’t bank on swathes of Twitter trolls poisoning Tay’s memory bank and turning her into a Holocaust-denying racist.

This attack was possible because of Tay’s state-of-the-art architecture, which allowed her to learn from interactions and change her vocabulary and responses over time. In 2018, most bots still use a combination of intent detection and static rules to work out how to reply to users, which means most bots probably aren’t susceptible to this kind of attack.
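To see why, here’s a toy illustration of the intent-plus-static-rules pattern: the reply text lives in a fixed table keyed by the detected intent, so nothing a user says ever becomes part of the bot’s future answers. The intents and keyword matching here are obviously simplified placeholders.

```python
# Illustration of why a rules-based bot resists "memory poisoning":
# replies come from a fixed template table keyed by the detected intent.
RESPONSES = {
    "opening_hours": "We're open 9am-5pm, Monday to Saturday.",
    "delivery_area": "We deliver anywhere within 10 miles of the store.",
    "fallback": "Sorry, I didn't understand that. Could you rephrase?",
}

def detect_intent(utterance: str) -> str:
    # Placeholder for a real intent classifier (e.g. an NLU service).
    if "open" in utterance.lower():
        return "opening_hours"
    if "deliver" in utterance.lower():
        return "delivery_area"
    return "fallback"

def reply(utterance: str) -> str:
    # Whatever the user types, the bot can only ever answer from RESPONSES.
    return RESPONSES[detect_intent(utterance)]
```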

However, there are still ways this kind of attack can trip you up. It all hinges on how your bot reacts to abusive messages and whether it’s allowed to repeat back things the user has said.

Take the example conversation pictured here. It’s not exactly undeniable proof of wrongdoing by Joe’s Shoe Emporium, but a well-timed social media post or BuzzFeed article with “#NotADenial #BoycottJoes #ChildLabour” could be enough to really do a number on Joe’s brand.

Mitigation

So how can we avoid this kind of thing? Well, a good start would be to check the user input for profanity as part of validation and then refuse to continue the conversation if things turn hairy. Think of this a bit like a real contact centre handler who has been trained to hang up the phone if the customer gets angry or aggressive. IBM advocates for all chatbots being able to detect and react to profanity, and there’s a great post here about some approaches to doing that. Ultimately, the way your bot reacts to rude input – whether passive, humorous or a simple shut down – will depend on how you want your brand to come across.
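A bare-bones version of that check, with a placeholder word list standing in for a proper abuse-detection service, might look like this:

```python
# Sketch of profanity screening as part of input validation. A real
# deployment would use a maintained abuse-detection service or library;
# the word list, strike threshold and replies are placeholders.
ABUSIVE_TERMS = {"badword1", "badword2"}

def contains_abuse(utterance: str) -> bool:
    return bool(set(utterance.lower().split()) & ABUSIVE_TERMS)

def handle_turn(utterance: str, strikes: int) -> tuple[str, int, bool]:
    """Return (reply, updated strike count, whether to end the conversation)."""
    if contains_abuse(utterance):
        strikes += 1
        if strikes >= 2:
            return ("I'm ending this conversation now.", strikes, True)
        return ("I can't help with that kind of language. "
                "Shall we get back to your order?", strikes, False)
    return (normal_reply(utterance), strikes, False)

def normal_reply(utterance: str) -> str:
    return "Thanks! Let me look into that for you."
```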

I’d advocate for “dealing with aggressive/subversive user interactions” being high on the chatbot QA team’s to-do list.