The Web doesn’t really look like a book, but we still refer to its “pages.” We also still tap on images of paper file folders, even though they obviously don’t contain any paper. “Notebook” is as synonymous with a mobile computer as it is with a pad for writing things down by hand.
User interface design may be an area of great innovation, but there’s a reason we tend to be cautious in how we describe new UIs as we move from one generation of invention to the next. Good interface design means recognizing that an initial learning curve may exist, and that part of creating a great customer experience involves building a bridge between the old and the new, one that makes users comfortable and leads them to use applications with confidence.
As the world enters an era of voice-activated virtual assistants like Siri, Alexa and Google Assistant, heads-up display systems in cars and ‘smart adhesives,’ it is easy to forget the giant leap society took when traditional physical interfaces gave way to those that control digital experiences.
Think of an oven, for example. You turn on a burner with a knob or a button and decide how hot it should be. This meets the basic definition of an interface: the point where input commands are transferred, validated and turned into output. You might also say the interface is the point where a transfer of value is confirmed; in other words, when you engage with a UI, you should get back what you want or need.
For a while, increased functionality in a device tended to mean more challenging or complicated UIs. TV sets and entertainment systems can still involve working with multiple remote controls, for instance. The early Web may look antiquated now, but its maturity has led to an evolution in interface design that is both ubiquitous and, at least in some cases, much more seamless.
Researchers are already exploring applications of “smart paper” that can react to human gestures, and virtual keyboards that might be both easier to use and more secure. Audio input, combined with artificial intelligence (AI) software that can anticipate what users will want, may eventually make typing seem an antiquated mode of human-computer interaction (HCI). These technology trends could make computing not only quicker but also more accessible to those with literacy and other challenges.
Innovative technology alone is only the beginning. As we keep moving into a world of AI and connected computing, the more we can harken back to and harvest (or extrapolate) from well-known experiential frameworks, just as the early Web did with our bibliographic experience, the stronger our results will be.
So, in contemplating the future of the interface, we suggest paying attention to a few key principles:
- Break Through the Experiential Wall
An interface should be adopted in such a way that its newness soon wears off and it can be taken for granted. We hit an experiential wall, however, when something banal and “everyday” comes between a human expectation of how a tool should behave and the actual human experience of the machine’s behavior. For instance, a voice interface like Siri may be built on the most advanced technologies, but that will mean nothing if it can’t understand a particular user’s accent.
- Use Cases Should Be as Intuitive as the User Interfaces Themselves
There was a time when it was difficult to imagine why anyone would want to carry around something like the Sony Walkman. When you saw someone wearing a Walkman while running, however, those questions disappeared. Users may not be actively looking for what a company sees as the solution to a problem. When we do forward-looking research, we need not only to better understand current ways of doing things but also to probe the desired experiential outcomes: these help us identify product opportunities and show us how to communicate as-yet-unimagined use cases.
- Choice Should Not Become Chore
When retail experiences first began migrating to the Web, customization and personalization became all the rage. But several early instances of customization failed because users were (a) confused and overwhelmed by having to start from a blank slate and (b) unwilling to invest the time and effort required to manually customize their interface and experience. As interfaces became adaptive, customization and personalization became not only commonplace but (as in the case of smartphones) a table-stakes expectation. When designing interfaces, it is important to remember that there is a limit to how much effort people are willing to put into an experience, regardless of the UI’s ability to respond to those efforts.
Some Final Thoughts
The Internet of Things (IoT) is allowing everyday objects and even the walls around us to be connected via sensors to digital applications and data. In imagining the future of the interface, we need to understand the holistic nature of a consumer’s experience so that the design of a UI and the applications that tie into it are focused on the right outcomes and the appropriate emotional context.
Indeed, the interface is something the user might not even notice in the long run. It is not that the interface will cease to exist, but that it will be so ubiquitous and so embedded in the spaces and flow of everyday life that its success will depend directly on our being completely unaware of it.
Arnie Guha, Ph.D., is a partner at Phase 5 and the leader of the User Experience Strategy and Design practice. Widely regarded as an expert in online user groups and environments, Arnie helps his clients – financial institutions, technology companies, life sciences firms, media companies, publishers and information providers, as well as government organizations – develop winning UX solutions and strategies that best respond to market needs and business objectives.