The AI Wave: New AI features in employee experience platforms
I recently hosted a webinar where I shared details of what AI features are available right now in employee experience platforms, plus what’s coming in the next six months or so.
Generally, I avoided the wider-ranging debates about AI, but addressed these four points for each example I shared:
- Which features will genuinely save time
- Which features will create a better experience for end users
- Where human intervention might be needed
- The implications for admin teams and organisations.
Suggested answers in search
In mid-2023 Haiilo launched a ‘likely answers’ feature that will be familiar from internet search engines like Google, where an answer is given instead of simply returning a document or page. In this instance an AI is generating the answer from the intranet database of content. If you’re looking for a file, it’ll tell you where to navigate to find it, and the file results are displayed underneath for ease of access.
This is a helpful feature, although, as the accompanying tag warns, “generated answers can be wrong”. I’d therefore suggest that intranet teams have a simple process or policy in place on what to do should the AI hallucinate and provide inaccurate information. Employees should have a clear way to report inaccurate information, and the intranet team should understand how they can effect change.
The value of this sort of approach relies in part on the quality of the analytics in the background. I’ve not seen Haiilo’s offering in this regard, but analytics are generally a weak area for these sorts of products. I hope vendors will consider how intranet teams can enact positive change and train the AI, as well as focussing on the front-end user experience.
A ‘likely answers’ approach from Haiilo using AI to generate a result.
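The pattern behind a ‘likely answers’ feature can be illustrated with a toy sketch. This is not Haiilo’s implementation: the function name, the keyword-overlap scoring, and the document structure are all assumptions standing in for a real retrieval-and-generation pipeline. The point it demonstrates is that the answer should always ship with a disclaimer and its source documents, so readers can verify it:

```python
def answer_with_sources(query: str, documents: dict[str, str]) -> dict:
    """Return a 'likely answer' plus the documents it was drawn from.

    A toy stand-in for a real pipeline: rank documents by keyword
    overlap with the query, surface the best match as the answer, and
    always attach a can-be-wrong disclaimer plus the source list so
    the reader can check the underlying files.
    """
    terms = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    best_name, best_text = scored[0]
    return {
        "answer": best_text,  # snippet standing in for a generated answer
        "disclaimer": "Generated answers can be wrong",
        "sources": [name for name, _ in scored[:3]],  # files shown underneath
    }
```

Shipping the sources alongside the answer is what makes the inaccuracy-reporting process suggested above workable: employees can see which document the wrong answer came from.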
Profiles and audience targeting
The Firstup platform includes very detailed profile pages, which can be populated from HR systems or updated by the individual, for example so they can add pronouns. What’s included in the profile is then available for audience targeting.
Once the audience is selected the publisher has the option to let Firstup send the message to the individuals within that group. The message will be delivered at the best time for the individual, and also for the priority level of the message. This is on an individual basis, so Jameel may receive the message at 9am on a Monday in Paris, while Philippa receives it at 11am on a Tuesday in Dublin.
The AI delivery is helpful for the end-user, as it matches the individual’s behaviour and helps reduce overwhelm. The AI also adjusts itself based on profile changes, so if Philippa travels to New York the AI will adjust based on her new behaviour and time zone.
For publishers this will help reach the right people at the right time, with the additional support of a fatigue rating so they know whether this audience has been targeted too much with messages. There is a way for publishers to manually choose when to publish too, so those who are nervous about AI can regain control.
As with any audience targeting, the Firstup AI relies on audience data being present and maintained, which can be a project in its own right for some organisations. It would be helpful for platforms like this to offer reminders to prompt people to keep their profile updated and offer an easy way to contact IT or HR to get incorrect information changed.
Firstup’s audience targeting and personalisation features are excellent, although rely on good profile data.
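The per-person scheduling idea can be sketched in a few lines. This is an assumption-laden illustration, not Firstup’s algorithm: the `Profile` fields, the single `typical_open_hour` signal, and the priority handling are all hypothetical simplifications of what would really be a learned behavioural model.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

@dataclass
class Profile:
    name: str
    timezone: str            # e.g. "Europe/Paris", kept current as the person travels
    typical_open_hour: int   # hour of day this person most often reads messages

def next_delivery_time(profile: Profile, priority: str, now_utc: datetime) -> datetime:
    """Pick the next delivery slot in the recipient's local time.

    Urgent messages go out immediately; everything else waits for the
    hour this individual usually engages, in their own time zone.
    """
    if priority == "urgent":
        return now_utc
    local_now = now_utc.astimezone(ZoneInfo(profile.timezone))
    slot = local_now.replace(hour=profile.typical_open_hour,
                             minute=0, second=0, microsecond=0)
    if slot <= local_now:          # today's slot already passed, so deliver tomorrow
        slot += timedelta(days=1)
    return slot.astimezone(ZoneInfo("UTC"))
```

Because the schedule is computed from the profile each time, updating Philippa’s time zone when she travels to New York automatically shifts her delivery slot, which is the behaviour described above.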
Content companion (generative AI)
The first example of generative AI is Staffbase Companion. Staffbase have said they are “taking things slow-ish” with generative AI, as they are aware there are risks and conversations still happening across organisations around its usage. The Companion has therefore been designed to offer support, rather than be an end-to-end publishing tool. For example, it will shorten content length, so those of us who like writing a lot can self-edit using AI. It will also provide a draft that must be manually edited before it can be published, as well as generating metadata such as titles and summaries from the body of the content.
The Companion helps provide a nice starting point for publishers, while relying on them to flesh out and edit what’s provided. Publishers will also need to understand what’s changed and decide whether it has changed for the better. Ultimately, while the Companion will save time with initial drafts and editing down content, the publisher needs to understand whether the output is appropriate for their reader. As with all generative AI features, the skills around editing (for accuracy and phrasing choices) therefore shouldn’t be underestimated and organisations must start thinking now about what skills publishers may need in future.
Staffbase Companion offers a light-touch generative AI tool to give publishers a helping hand.
Embedded prompts (generative AI)
Haiilo includes an integration with ChatGPT where publishers can access the AI from within the post they’re crafting. The screenshot shows the editing features for existing text, which include making the tone more or less formal, changing the length, and simplifying the content. Haiilo also includes generative AI features that will create a draft based on prompts. What’s returned isn’t confined to intranet or company data; it’s a direct pull from ChatGPT, so publishers will have to carefully check the accuracy and appropriateness of tone. There are security considerations here too that organisations should be discussing now, regardless of whether AI is present in the business yet.
As with the Staffbase example, the importance of editing what’s returned shouldn’t be underestimated. It’s also worth noting there’s no flag or other indication that content was created using AI, so organisations should consider whether they’re comfortable with that or whether a note should be manually added to the end of the content. In my opinion, being upfront about AI-generated content is important and I’d like to see these tags automatically added to the resulting content, as well as in the back end.
The ChatGPT integration in Haiilo is easy to access directly within the body of a post that’s being created.
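Until vendors add automatic tagging, an intranet team could bolt a disclosure step onto its own publishing workflow. This is a hypothetical sketch, not a hook any of the platforms discussed here actually expose; the notice wording and function name are mine:

```python
AI_NOTICE = "Note: this content was drafted with AI assistance and edited by the publisher."

def with_ai_disclosure(body: str, used_ai: bool) -> str:
    """Append a disclosure line to post content produced with generative AI.

    Idempotent: running it twice won't stack duplicate notices, and
    content written without AI passes through untouched.
    """
    if not used_ai or body.rstrip().endswith(AI_NOTICE):
        return body                      # nothing to add, or already tagged
    return body.rstrip() + "\n\n" + AI_NOTICE
```

Making the step idempotent matters in practice, because drafts are often saved and re-saved several times before publication.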
A form approach to generative AI
Unily takes a different approach to generative AI features. This form-like approach prompts publishers to make choices as they progress through the content generation process.
Unily has an ‘intranet’ filter in the back end, which will sieve results from the Microsoft AI to make sure it’s appropriate for an internal audience. It will also gather context from the type of content that’s being created, so if the publisher is creating an ‘event’ then that context is overlaid to make sure what’s created is appropriate. Again, there is no flag to show it’s been AI-generated, and publishers will have to carefully edit the output.
When a Unily rep showed me this feature, they highlighted that the company is considering the laws, legislation and conversations on the topic of AI as it develops this tool. So this, and other products, may well change as these conversations evolve.
Unily includes a simple form approach for AI generated content.
Like Unily, Atlas (an intranet product that requires SharePoint in the back end) includes a form approach to AI. Admins can control who has access to this tool as well as which sites include this AI capability. Plus, customers can set up approval workflows so there can be more than one editor of the generated content.
In the form below there’s a prompt for subject metadata to be applied, which helps provide useful context for the article and for the AI. The generated article looks attractive, with good formatting applied, making this a helpful feature. However, there’s no ‘generated by AI’ tag again and, as with the others, the publisher or an additional editor will need to carefully check and edit the content.
The Atlas form is simple, but the output includes helpful formatting that can be easily adjusted.
The final example is from Interact, where AI supports publishers with language choice. The feature can help with aspects like inclusive language selection – ‘folks’ instead of ‘guys,’ or ‘allow list’ instead of ‘white list’ for example. It will also gauge the sentiment of the content, highlighting where negative language might be used so publishers may choose to adjust their wording.
While these features don’t save time for the publisher, they do help to create a better experience for readers – by presenting information with inclusive and positive wording. As we all know from experience with spelling and grammar checkers, these suggestions shouldn’t always be applied – but the fact Interact flags them is very helpful. In my opinion this is a positive use of AI, supporting and complementing someone’s writing skills to make small but important improvements.
Interact’s language checker will help publishers use more inclusive and positive language where they feel it’s appropriate.
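The mechanics of this kind of checker are simple enough to sketch. This is not Interact’s implementation: the substitution table and function below are hypothetical, and a real product would use a much richer dictionary plus sentiment analysis. The key design choice it mirrors is that the tool only suggests; the publisher decides whether each change fits.

```python
import re

# Hypothetical substitution table in the spirit of the examples above.
INCLUSIVE_TERMS = {
    "guys": "folks",
    "whitelist": "allow list",
    "white list": "allow list",
    "blacklist": "deny list",
}

def language_suggestions(text: str) -> list[tuple[str, str]]:
    """Return (found term, suggested alternative) pairs, case-insensitively.

    Word boundaries stop false positives (e.g. 'disguise' won't match
    'guys'), and nothing is rewritten automatically.
    """
    hits = []
    for term, better in INCLUSIVE_TERMS.items():
        if re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
            hits.append((term, better))
    return hits
```

Returning suggestions rather than rewriting in place is what keeps the publisher in control, exactly as with a spelling or grammar checker.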
My next article shows the new features coming to employee experience platforms in early 2024.
A version of this article was first published by Reworked.