We are on the cusp of an entirely new way of interacting with the digital world, one that forgoes traditional, intentional user interfaces. Instead, it is driven by hyper-context-aware AI assistants that converse with users through AR glasses and other wearables. No longer will our actions be mediated through a Google search or a forum post. Instead, our assistants will anticipate our needs without us ever making a single keypress.
These assistants will share our perception of reality. They'll go where we go, see what we see, and hear what we hear. Beyond being plugged into our physical lives, they'll be intimately familiar with our digital expression – checking our calendars, sifting our contact lists, and reading our tweets. And they'll do all this with the entire internet behind them, ready to provide the right information at the exact moment we need it.
In the pursuit of productivity, many think this future is inevitable. Assuming it is, we need to intervene now to ensure that ethics and privacy are considered alongside utility as we design the biggest revolution in interface since the keyboard.
This short talk will introduce a framework for understanding and evaluating AI-mediated spatial (VR/AR) interfaces. We'll draw upon some classic design frameworks and show how these emergent technologies might rewrite the book on interaction. Alongside this, we'll highlight the ticking data-privacy time bomb, as companies like Meta, Google and Apple prepare to hoover up personal and biometric information on a scale few of us can imagine.
Finally, we'll demonstrate how IA thinkers are especially well suited to bridge this gap in understanding, helping designers and engineers build experiences that let us enjoy the real world without getting lost in the virtual.