For the UX Immersive at General Assembly, I was asked to give a 30-minute talk on a UX topic of my choice. As an INTJ, I'm always thinking strategically about the future. After digging into current trends and asking current UX designers what they thought the future would bring for the field, it became apparent that Voice User Interfaces (and immersive interfaces) would become increasingly important in the next few years with the rise of voice assistants such as Alexa (Amazon Echo) and Google Assistant.
Currently, as far as I know, there aren't many great resources out there for learning about Voice User Interfaces. UX designers in the field (who are trained in research, user flows, and visual design) are now being tasked with designing VUI experiences, while many current applications are being built by machine learning, linguistics, and other language professionals. Needless to say, machine learning and linguistics professionals are not UX designers, though there is quite a bit of opportunity for the two groups to work together to design great VUIs.
The following is just a snapshot of what VUIs are in 2017, how they work, and where they might go next (and it's likely already outdated, since I last researched this in October 2017). Over the next few weeks, I will post my own VUI work, with the goal of showing my process for designing a VUI for a new "app" concept.
Note: The original talk was done in Google Slides (with a SlidesCarnival template) + voiceovers + slide notes. I will do my best to recreate the research in a blog format.