In this series Jeroen Thissen (Creative Director) and Erik Rave (Technology Director) from Creative Digital Agency CODE D’AZUR look back on three years of developing voice applications for the likes of KLM Royal Dutch Airlines and LeasePlan. Each episode will highlight a new learning.
Previously, we focused on the need to stay flexible. Our next point is very much connected to this: Focus on your architecture.
It’s not just your organization that has to stay flexible, but also your voice service itself. The system behind the voice application might be considered even more important than the application itself.
It’s important to get started with your first action (otherwise there will never be an architecture). But realize at the same time that this action is very likely to be altered in the future, or will even fail and be killed off. You’ll have to ask yourself what happens if it does. Do you have a framework set up where you can easily plug in different actions for different platforms? And what happens if your A.I. ecosystem broadens to, let’s say, also your customer service team? Much like your website, there’s a front end and a back end (and an API). Don’t forget to think about the long-term ecosystem as well.
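To make the "plug in different actions" idea concrete, here is a minimal sketch of such a framework. All names (`Action`, `ActionRegistry`, `FlightStatusAction`) are hypothetical illustrations, not the actual KLM or LeasePlan codebase: actions register with a central back end and can be removed again when one fails or gets killed, without touching the rest of the system.

```python
from abc import ABC, abstractmethod


class Action(ABC):
    """One voice action (e.g. a flight-status lookup), independent of any platform."""

    name: str

    @abstractmethod
    def handle(self, intent: str, params: dict) -> str:
        """Return a plain-text reply for the recognised intent."""


class ActionRegistry:
    """Central back end: actions plug in and out without touching the front ends."""

    def __init__(self) -> None:
        self._actions: dict[str, Action] = {}

    def register(self, action: Action) -> None:
        self._actions[action.name] = action

    def unregister(self, name: str) -> None:
        # An action that fails or gets killed is simply removed.
        self._actions.pop(name, None)

    def dispatch(self, name: str, intent: str, params: dict) -> str:
        action = self._actions.get(name)
        if action is None:
            # Graceful fallback when an action no longer exists.
            return "Sorry, that service is no longer available."
        return action.handle(intent, params)


class FlightStatusAction(Action):
    name = "flight_status"

    def handle(self, intent: str, params: dict) -> str:
        return f"Flight {params.get('flight', '?')} is on time."


registry = ActionRegistry()
registry.register(FlightStatusAction())
print(registry.dispatch("flight_status", "get_status", {"flight": "KL1234"}))
```

The point of the registry is that the front ends (Google, Alexa, a website) only ever talk to `dispatch`, so an individual action can be replaced or retired without a redeploy of everything around it.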
You want to ensure you offer a consistent user experience across all channels. If a customer tells you something via Google Home, that information should be available to the human agent in the service center who contacts them the next day.
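One way to achieve this, sketched below with hypothetical names (`CustomerContext`, `record`, `history`), is a shared context store that every channel writes to and reads from; in practice this would be a database behind an API rather than an in-memory dict:

```python
from datetime import datetime, timezone


class CustomerContext:
    """Shared store of what a customer told us, readable from any channel."""

    def __init__(self) -> None:
        self._events: dict[str, list[dict]] = {}

    def record(self, customer_id: str, channel: str, utterance: str) -> None:
        self._events.setdefault(customer_id, []).append({
            "channel": channel,
            "utterance": utterance,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def history(self, customer_id: str) -> list[dict]:
        return list(self._events.get(customer_id, []))


# The voice app writes...
ctx = CustomerContext()
ctx.record("cust-42", "google_home", "I need a wheelchair at the gate")

# ...and the service-centre tooling reads the same record the next day.
for event in ctx.history("cust-42"):
    print(f"[{event['channel']}] {event['utterance']}")
```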
An important strategy in our collaboration with KLM was to not build everything on Google’s Dialogflow. This would have been the easiest thing to do, but it would also have made us dependent on its functionality to control our actions. On top of that, all of our data would run through Google, and with an eye on data ownership and privacy, this isn’t desirable. So instead, we built our own autonomous A.I. ecosystem that sends information to Google when requested. Our NLP is a service to us alone; Google isn’t able to use us as a service. This means that (in theory) we could swap Google out for any other system at any time, if we deem that to serve our interests better. It also made it easier for us to ‘plug and play’ new actions and make our services available on Alexa.
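This inversion can be sketched as a thin adapter per platform, so the NLP core never depends on Google or Alexa directly. The adapter classes and the `answer` function below are hypothetical, and the webhook payload shapes are deliberately simplified versions of the Dialogflow and Alexa formats, not full implementations:

```python
from abc import ABC, abstractmethod


class PlatformAdapter(ABC):
    """Thin translation layer per platform; the NLP core never sees Google or Alexa."""

    @abstractmethod
    def parse(self, payload: dict) -> tuple[str, dict]:
        """Extract (intent, params) from the platform-specific webhook payload."""

    @abstractmethod
    def render(self, reply: str) -> dict:
        """Wrap a plain reply in the platform-specific response format."""


class GoogleAdapter(PlatformAdapter):
    def parse(self, payload: dict) -> tuple[str, dict]:
        q = payload["queryResult"]          # Dialogflow webhook shape (simplified)
        return q["intent"], q.get("parameters", {})

    def render(self, reply: str) -> dict:
        return {"fulfillmentText": reply}


class AlexaAdapter(PlatformAdapter):
    def parse(self, payload: dict) -> tuple[str, dict]:
        req = payload["request"]["intent"]  # Alexa request shape (simplified)
        return req["name"], req.get("slots", {})

    def render(self, reply: str) -> dict:
        return {"response": {"outputSpeech": {"type": "PlainText", "text": reply}}}


def answer(intent: str, params: dict) -> str:
    # Placeholder for our own NLP/business logic: the same core answers every platform.
    return f"Handling '{intent}' in our own ecosystem."


def handle_webhook(adapter: PlatformAdapter, payload: dict) -> dict:
    intent, params = adapter.parse(payload)
    return adapter.render(answer(intent, params))


print(handle_webhook(GoogleAdapter(), {"queryResult": {"intent": "book_flight"}}))
```

Because each platform is just another adapter, adding Alexa (or swapping Google out entirely) means writing one new `parse`/`render` pair, while the data and the NLP core stay in our own ecosystem.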
So, we’ve seen that staying flexible is key, both in your organization as well as your technical architecture. Next, we’ll look at another key learning that fits in the slipstream of this topic: why it is important to focus on the intent of your user rather than the conversation itself.