Why voice assistants could be the future of user experience
This article was contributed by Ramu Sunkara, CEO and cofounder of Alan AI, and Andrey Ryabov, CTO and cofounder of Alan AI.
You created the next killer app, and you’re a few steps away from making history. As soon as you roll out the first version, users will fall in love with it. They will recommend it to their friends, network effects will kick in, putting you ahead of your competitors and ensuring your success. All you have to do is figure out how to make the app user-friendly.
It sounds easy, but that last part, the user-friendliness, is easier said than done. And it happens to be one of the most important and most difficult parts of creating products.
As anyone with experience in the software industry can attest, users’ reactions to the first version of your application will likely be very different from your expectations. You’ll witness confusion, frustration, and churn as users struggle to figure out how to use your application and experience its true value.
First impressions are very important. When you launch a new application, you have a very small window of opportunity to learn from your users and adjust. You must identify pain points and continuously adjust the application’s interface to make sure your users receive the optimal experience.
Previously, this endeavor was a slow and painful process: you made expensive changes to the graphical user interface and hoped they worked out. Fortunately, with the advent of a new generation of app-centric, AI-powered voice assistants, the equation is about to change.
Why do good applications fail?
The gap between developer vision and user experience is the reason many applications die. A relevant case study is Hipstamatic, the application that introduced photo filters to mobile photography in 2009. While Hipstamatic was built on an excellent idea, its design choices were poor, its user interface introduced a lot of friction, and it lacked features that would have made it appealing to users.
Hipstamatic failed to learn from its flaws and fix them in time. As a result, it gave way to Instagram, a then lesser-known app that was much more appealing to users and was later acquired by Facebook for $1 billion.
Hipstamatic is one of many examples of good products that die because their teams don’t learn to adapt to their users’ needs and preferences. Today’s applications — especially in the enterprise and workplace domain — have very complicated user interfaces and features. It is very easy to confuse users and hard to find the best layout that will put the right features front and center.
Creating the optimal user interface and experience hinges on two key factors. First, developers and product managers need the right tools to gather relevant data and learn from users’ interactions with their application. And second, they need the tools to quickly iterate on and update their user interface.
Wealthy software companies can overcome these challenges by hiring many developers working in parallel on different versions of an application’s user interface. They can roll out and manage complicated A/B/n tests and hire analytics experts to steer their way toward the optimal user interface. They might be able to afford expensive in-person studies and surveys to spot the reasons users leave the conversion funnel.
But for a small startup that is burning investor cash and has limited time and resources, learning can be too expensive — which is why many developers resort to launching their app and praying that it works.
This is about to change with the new generation of voice assistants.
Improving the user journey
The first impressions and experience of an app have a profound impact on user retention. If a user quickly finds their way around the interface and gets to experience the app’s true value, they will likely use it again and recommend it to their friends. If they get confused, chances are they will become disenchanted and divert their attention to something else. The problem, however, is that users come with different backgrounds, experiences, and expectations. You’ll rarely find a user interface that appeals to all your users.
Now imagine a voice assistant that is deeply integrated in your application and can guide the user through its features. If users are struggling to find something in the app, they can just ask the assistant, and it will either take them there or guide them to it. This can be extremely helpful in the onboarding process, where users often become confused and need guidance. As users become familiar with the application, the assistant’s role will gradually shift from guidance to optimization, helping them automate tasks and take shortcuts to their favorite features. In applications where users need a hands-free experience or quick access to information, the in-app voice assistant will become an invaluable interface.
The in-app voice assistant provides unprecedented flexibility to adjust the application to the user’s level of knowledge, experience, and expertise. You can’t create a single user interface that appeals to every user, and you would need limitless resources to create numerous versions of your application to appeal to each of them. A voice assistant, however, can act as a dynamic interface that can be used in various ways, providing each user with a unique experience.
In essence, instead of forcing your users to adapt themselves to a convoluted user interface, an in-app voice assistant lets a simple user interface adapt to your users.
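To make the idea concrete, here is a minimal, language-agnostic sketch of voice-driven navigation. All names here (the intent table, the routes, the function) are illustrative assumptions, not any particular vendor's SDK: utterances are matched against keyword triggers, and each intent maps to a screen in the app, so a lost user can simply ask for what they want.

```python
from typing import Optional

# Hypothetical intent table: keyword triggers mapped to app routes.
# Routes and keywords are made up for illustration.
NAV_INTENTS = {
    "open_settings": (["settings", "preferences"], "/settings"),
    "show_invoices": (["invoice", "billing"], "/billing/invoices"),
    "start_export": (["export", "download report"], "/reports/export"),
}

def route_utterance(utterance: str) -> Optional[str]:
    """Return the app route for the first intent whose keywords match."""
    text = utterance.lower()
    for keywords, route in NAV_INTENTS.values():
        if any(keyword in text for keyword in keywords):
            return route
    return None  # fall back to generic "I didn't catch that" guidance

print(route_utterance("Where can I find my invoices?"))  # -> /billing/invoices
```

A production assistant would use a trained language-understanding model rather than keyword matching, but the shape is the same: spoken intent in, in-app action out.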
For both new and experienced users, the voice assistant can be a huge differentiating factor that can improve conversion and retention rates.
Improving product development and management
The flipside of the user experience is the product development and management process. Here, time is of the essence. Your success largely depends on how fast you can get feedback from your users, learn from their experience, and adjust your application.
Having an in-app voice assistant is the closest thing you can get to being physically present when users are interacting with your app. As you gather voice and app analytics data, you’ll be able to answer pertinent questions such as “On which pages are users getting stuck?” “What features are they struggling to find?” “What are the most asked questions?” “What features do users expect the app to have?” Through this data, you’ll be able to glean important behavior patterns that will steer you in the right direction.
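As a sketch of how those questions might be answered, consider mining the assistant's logs for product signals. The log format below is a hypothetical assumption: each entry records the raw utterance and whether the assistant matched it to a known intent, and the most frequent unmatched utterances point at features users expect but can't find.

```python
from collections import Counter

# Hypothetical log entries from an in-app voice assistant.
logs = [
    {"utterance": "export to csv", "matched": False},
    {"utterance": "open settings", "matched": True},
    {"utterance": "export to csv", "matched": False},
    {"utterance": "dark mode", "matched": False},
]

# Rank the requests the assistant could not fulfill: a prioritized
# backlog of what users are asking for, in their own words.
unmatched = Counter(e["utterance"] for e in logs if not e["matched"])
for utterance, count in unmatched.most_common(3):
    print(f"{count}x  {utterance}")
```

Unlike click analytics, which show where users went, these logs capture what users wanted, which is exactly the behavior-pattern signal described above.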
Discovering users’ needs is one side of the equation. Responding to them is another, equally challenging part of creating good products. The classic product development paradigm requires you to redesign your application’s user interface, submit it to app stores, wait for it to be vetted and published, and then roll it out to users. For web applications, you’ll have to go through multiple designs, run A/B tests, choose the best new design, and then roll it out to all users.
With in-app voice assistants, the interface is already there, so in most cases you won’t need to make any changes to the graphical interface and can roll out new features on the server side with minimal friction.
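A rough sketch of that server-side rollout idea, under the assumption (the config schema and names here are invented for illustration) that the client hard-codes no intents and instead loads them from a config the server can update at any time, so new voice capabilities ship without an app-store release:

```python
import json

def load_intents(config_json: str) -> dict:
    """Parse an intent config the server would return."""
    config = json.loads(config_json)
    return {i["name"]: i["keywords"] for i in config["intents"]}

# Simulated server response; "export_report" was added server-side,
# with no change to the client binary.
server_response = json.dumps({
    "version": 2,
    "intents": [
        {"name": "open_settings", "keywords": ["settings"]},
        {"name": "export_report", "keywords": ["export", "report"]},
    ],
})

intents = load_intents(server_response)
print(sorted(intents))  # -> ['export_report', 'open_settings']
```

The next time the client refreshes its config, users can simply ask for the new feature; no redesign, resubmission, or forced update is needed.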
In-app voice assistants provide a smooth shortcut to the finish line. Instead of feeling your way in the dark, you’ll be shining a bright light on your app and directing your resources with laser focus, saving a great deal of time and money. Instead of taking weeks or months to deliver new versions of your app, you’ll be able to iterate several times per week, or even per day.
Voice assistants have been around for a decade. So why should you be focusing on in-app voice experience now?
There are a couple of reasons. First, the first generation of assistants, such as Siri, Alexa, and Cortana, has helped bring about wide acceptance of voice user interfaces. Today, a wide array of consumer and industrial devices support voice assistants. Millions of families across the world use smart speakers and other voice-enabled devices. Voice accounts for a substantial share of online search queries.
At the same time, first-generation voice assistants have distinct limitations that restrict them to simple tasks such as invoking apps, reading emails, running online searches, and setting timers. When it comes to specialized, multi-step tasks, classic assistants are of little use because they can’t keep track of user context and intent. These assistants live outside applications and are tied to their vendors’ platforms. Because they are separate from the application’s graphical interface and blind to the user context, it is impossible for them to fully understand user intent or provide visual feedback to users.
The shortcomings of current voice assistants are especially evident in the enterprise sector, where companies are spending millions of dollars to build mobile and web applications for their internal workflows to improve productivity. These applications could benefit greatly from voice assistant support, but only if it is tightly integrated into the specific workflows that support these businesses.
To solve these challenges, the next generation of voice assistants will live inside applications and will be deeply integrated with the app’s user interface, workflow, taxonomy, and user context. This shift in architecture will enable developers to use various data sources and contexts to improve the quality and precision of in-app voice recognition and language understanding. Users will see the voice assistant leverage the existing UI to confirm that it has correctly understood and recorded their input, which helps avoid the friction and frustration that arise when older voice assistants are applied to complex tasks. This new generation of assistants makes it possible for voice to become an integral part of the app experience.
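One way the context integration described above can sharpen recognition, sketched here with invented data and function names: when the speech recognizer returns several candidate transcriptions, the assistant can prefer the one whose words overlap most with the vocabulary of the screen the user is currently on.

```python
def pick_hypothesis(candidates: list[str], screen_vocab: set[str]) -> str:
    """Re-rank transcription candidates by overlap with the current screen's vocabulary."""
    def score(text: str) -> int:
        return sum(1 for word in text.lower().split() if word in screen_vocab)
    return max(candidates, key=score)

# Hypothetical example: on a billing screen, the in-app context
# resolves an ambiguous transcription that a platform-level
# assistant, blind to the screen, could not.
invoice_screen = {"invoice", "billing", "due", "pay"}
candidates = ["pay my invoice", "play my in voice"]
print(pick_hypothesis(candidates, invoice_screen))  # -> pay my invoice
```

Real systems bias the language model itself rather than re-ranking strings, but the principle is the same: the app's context is the signal that platform assistants lack.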
The new era of voice user interfaces is just beginning. This is a great opportunity for developers and product managers to turn their great ideas into successful applications and create significant ROI, especially in the enterprise sector.
Ramu Sunkara is the CEO and cofounder of Alan AI.
Andrey Ryabov is the CTO and cofounder of Alan AI.