Workplace chatbots: too little, too soon?
“The reaction to ELIZA showed me the enormously exaggerated attributions an audience is capable of making to a technology it does not understand” – Joseph Weizenbaum on ELIZA, the first chatbot, which he created in the mid-1960s
The desirability of workplace chatbots is undeniably strong, as Andrew Pope and Chris McGrath have recently argued. I too have advocated the view that the digital workplace should act like a workplace concierge, and I would be delighted if chatbots could deliver. In the research we’ve been doing for our next Intranets in-a-box report, we’ve also seen that many vendors will be adding bot frameworks to their offer, implying that interest levels are high.
However, I’m concerned not so much about the technology as about organisations’ ability to implement chatbots in practice. With the current hype levels, I can see a backlash in 2-3 years, with people saying “our company created a chatbot but it never has the answers and nobody maintains it”.
Penny Wilson reports that disillusionment with consumer chatbots has already begun, so is the digital workplace leaping in just as the rest of the world is having second thoughts? We shouldn’t be seduced into thinking that this time the technology will solve our problems.
Chatbots are nothing new, so there’s no specific reason to say that they should be adopted now because they have suddenly broken a barrier. Their ability to parse natural language queries has improved, but there’s little to no AI going on in generating an answer once the query has been interpreted.
What has changed in the consumer world is the dominance of smartphone use. This makes a simpler interface approach much more appealing. We shouldn’t over-interpret consumer patterns though – most knowledge work still uses large screens – but there is still appeal for floor and field worker scenarios and for the many times an office worker is away from their desk.
What hasn’t changed is a need for good information management skills to implement them. Chatbot frameworks make the task code-free, but just like search, they need good content and ongoing maintenance to work well.
You can ask me anything
A primary appeal of a chatbot lies in its promise as a universal interface. Rather than saying “here is a menu of the things I can do”, the chat dialogue allows unlimited requests.
This is particularly useful on smartphones because cascading menus work poorly with touch, and complex forms are harder to fill in. Even on the desktop, integrating with people-to-people chat environments such as Slack, Workplace by Facebook and Microsoft Teams is convenient because simple tasks like booking a meeting room can be done without jumping out of the conversation that may have triggered it.
A paradox of the chatbot is that the more universal it gets, the harder it is to perform well, because there are more options to disambiguate. My big fear is that companies will try to simplify by releasing an HR chatbot, then a Facilities Management one, then an IT one. Before you know it, we’ll be back to the bad old days of intranet information architecture where everything was structured by department.
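To make the disambiguation paradox concrete, here is a toy sketch in Python. The intents and keywords are invented for illustration (no real bot framework scores intents this crudely), but it shows how a query that would be unambiguous for a single-department bot starts matching several intents once more departments are added:

```python
# Hypothetical intents from different departments. As more are added,
# their vocabularies overlap and a single query matches several of them.
INTENTS = {
    "hr.book_leave":     {"book", "holiday", "leave", "time", "off"},
    "fm.book_room":      {"book", "meeting", "room"},
    "it.reset_password": {"reset", "password", "account"},
    "hr.parental_leave": {"parental", "leave", "policy"},
}

def rank_intents(query: str):
    """Score each intent by keyword overlap with the query, best first."""
    words = set(query.lower().split())
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# "book leave" is a clear winner for HR, but it also partially matches
# the Facilities intent - the bot must disambiguate between departments.
print(rank_intents("can i book leave next week"))
```

The same "book" keyword now belongs to two departments, which is exactly the ambiguity a per-department bot never had to face.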
Don’t blame me, I only work here
The second challenge is that our digital workplaces are not well interconnected at the moment. That’s not the chatbot’s fault: most workplace applications for HR, expenses and travel have terrible user experiences of their own, let alone playing nicely with other people’s. But just as intranet managers get blamed for the clenchingly-bad expense systems they link to, it is the chatbot interface that will take the grief. The Beezy bot uses Nintex as glue, and that at least gives some interoperability.
In the consumer world we’re not there yet either. Alexa requires you to know the name of a skill before you can hook into other services. I have to say “Alexa ask Hive to turn the thermostat to 21 degrees”. Why doesn’t Alexa know that Hive is the right service to delegate temperature to?
Get your search right first
For many queries, your workplace bot is just going to be an alternative search interface. So ask yourself: would the same query in our enterprise search box give reliable results? If not, start there.
Bot frameworks such as Microsoft’s do well at spelling correction and parsing, but they need a lot of training for anything specific to your company. Just as Siri struggles when you ask her to call a friend with an unusual name, chatbots will need significant manual intervention to handle customer names or product details that aren’t spelled correctly (and mobile interfaces are more typo-prone).
Search results too must work within the limitations of the chatbot interface. It’s no good if your mobile messaging bot says “I found this” and links to a hulking PowerPoint file. What’s needed is something more like card-based search results, and that may require a rethink of your content.
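The difference can be sketched in a few lines. The field names below are illustrative rather than any particular bot framework’s card schema, but they show the shape of it: instead of handing back a raw link, the bot trims a search hit into a self-contained, chat-sized answer with the file link demoted to an optional action:

```python
def as_card(hit: dict) -> dict:
    """Trim a search hit down to a self-contained, chat-sized card."""
    return {
        "title": hit["title"],
        "summary": hit["summary"][:140],  # keep it skimmable on a phone
        "action": {"label": "Open full document", "url": hit["url"]},
    }

# A hypothetical search hit that would otherwise surface as a bare link.
hit = {
    "title": "Expenses policy",
    "summary": "Hotel stays need prior approval and a receipt for any claim...",
    "url": "https://intranet.example/docs/expenses.pptx",
}
print(as_card(hit))
```

The rethink of content is that the summary field has to exist and be useful, which means someone has to write it.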
More positively, chatbot search queries tend to be richer, so they give us more context to improve results. And chatbots can make people search more: if someone asks the team a question on Slack, the chatbot can intervene and supply an answer using search.
You can’t afford it
Right now, chatbots are like MS-DOS on steroids. You have to be quite accurate with the command or you won’t get an answer, and yet the interface gives no clues about what you can ask. This is the concept of affordance – popularised by the graphical user interface, where the options should be clearly laid out. It is fine when you know what you want (“claim hotel expenses”) but not when you forget what is available and it becomes tedious to keep asking “what employee benefits are there?”.
Something like Vacation Tracker in Slack gets it right – it’s more of an app integrated into the Slack interface than a natural language interface. To start it you type ‘vacation [command]’, and you then see a more traditional set of dialog boxes to make your selection.
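The appeal of that pattern is easy to sketch. A fixed keyword plus an explicit subcommand restores affordance: when the user gets it wrong, the bot can enumerate exactly what it supports instead of failing silently. The commands and replies below are invented for illustration, not Vacation Tracker’s actual behaviour:

```python
# Hypothetical 'vacation [command]' handler: a closed command set means
# the bot always knows - and can say - what it is able to do.
COMMANDS = {
    "book":    "Opening the date picker...",
    "balance": "You have 12 days remaining.",
    "cancel":  "Which booking should I cancel?",
}

def handle(message: str) -> str:
    parts = message.strip().split()
    if not parts or parts[0] != "vacation":
        return ""  # not addressed to this bot
    if len(parts) == 1 or parts[1] not in COMMANDS:
        # Unknown subcommand: the fixed menu lets us state what IS possible.
        return "Try: " + ", ".join(f"vacation {c}" for c in sorted(COMMANDS))
    return COMMANDS[parts[1]]

print(handle("vacation balance"))
print(handle("vacation hmm"))  # falls back to listing the real options
```

Contrast this with a free-text interface, which can only reply “I didn’t understand that” with no hint of what would have worked.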
Without clear affordance, people tend to over-generalise AI competence. In an excellent piece on mistaken AI predictions, Rodney Brooks points out that “We all use cues about how people perform some particular task to estimate how well they might perform some different task”.
So when a chatbot does one thing well we tend to assume it can do a wide range of things equally well, but that isn’t the case. This will lead to frustration.
We still want a chatbot
Chatbots still make sense for specific niches: to support tasks that are naturally mobile-oriented, for example, and where the scope of what is supported is sufficiently clear and constrained that there is a fighting chance of maintaining it.
Chris McGrath is realistic when he says it will take 5 years for most companies to have a chatbot of some sort and 10 years before they can handle any question. I’m less convinced when he says the technology of today is ready for it.
I also doubt that companies are in a position to put the work in unassisted, leading to poor maintenance. In general, consumer chatbots don’t yet work well, and, like website search, a whole lot more resources get thrown into them than into their enterprise equivalents. Few companies have demonstrated that they can adequately and sustainably resource enterprise search, so I see no reason to think that workplace chatbots will be different.
Similarly, much of the promise of chatbots is in gluing services together. That integration hasn’t happened yet, and it really should. The risk is that we dilute our efforts by also trying to add the chatbot on top, rather than getting the fundamentals right. And even if we can pull together all our workplace services into a single consistent interface, I predict most people would rather work with a graphical interface than a conversational one.