AI has sparked a major change in every aspect of daily life–integrated into everything from Google searches to custom emojis–and AI’s ubiquity shows no signs of slowing.
To envision what these changes might look like, members of GALE’s Experience Design team examined AI’s rapid evolution, including how it is transforming our craft and the traditional web and app features we’ve grown accustomed to.
1: Faster consumption and instant synthesis
AI's ability to summarize content is now built into everything from Google searches to messaging apps–and these abilities will only continue to improve.
What might we see soon?
- A “Summarize” capability: With AI taking CliffsNotes to the next level, we envision summarization tools built into everything from articles to shopping sites, providing quick content snapshots.
- Comparison made easy: Comparing multiple sources of information in seconds, rather than manually cross-referencing checkboxes, tools, and charts, will streamline decision-making.
- From scanning to skimming: All of us scan headlines for a quick glance at important news, but AI will be able to generate critical insights in the same amount of time, even synthesizing multiple news sources at once.
2: Intelligent conversations
Newer AI models are finally delivering on the promise of the chatbots of the past. They can respond intelligently to virtually any request and instantly format the content however you like: bullet points, charts, and more.
This could mean:
- The end of FAQs: Why read through pre-canned FAQs when AI can generate accurate, conversational responses? Get issues resolved instantly, without navigating through a massive help center.
- Tasks done for you: AI can not only search for you but also complete time-consuming tasks–booking flights, for instance. This functionality is already emerging with OpenAI’s Operator.
- Customized learning: AI can explain complex topics with imagery and diagrams for visual learners, tailoring its approach to suit individual learning styles.
3: Less searching, more finding
Why search, filter, and scan through multiple websites, when we can now ask AI anything and get exactly what we need?
Many people have learned to work around the phrasing limitations of search engines to get the best results. With LLMs, that’s no longer an issue. Simply type your stream-of-thought, run-on-sentence question into ChatGPT, Claude, or Perplexity, and you’ll still find what you’re looking for.
Imagine:
- An effortless search: Now you can simply converse with AI—“I need a monitor that works for my whole family. Show me options for school, work, and gaming under $1000.”—and it will search the entire web to show relevant results.
- Search results beyond text: While ChatGPT is known for its text-based responses, OpenAI and other companies are already enhancing results with engaging visuals, making them as dynamic and adaptable as the questions you ask.
4: Personalization on an unimaginable scale
Whenever you see personalized content, it’s the result of UX teams like ours carefully analyzing user needs, mapping out every possible variation, and crafting the design and copy to match each unique scenario.
Soon, much of this will be automated. AI on your phone will learn from your photos, texts, emails, location, etc., offering hyper-specific content and predicting your needs before you even realize them.
This could look like:
- Planning around your needs: AI can plan, book, and share your next trip based on your preferences, from hotel vibes to dietary-friendly restaurants to family-friendly itineraries.
- Proactive alerts: AI has the potential to continuously monitor your health data. For example, after noticing potential signs of sleep apnea in your smartwatch data, it could proactively prompt you to make an appointment.
5: Voice and visual interfaces
We’ve been working on useful voice interactions–think Alexa or Siri–for a long time. But, like chatbots, they’ve had limitations. Today, instead of “Sorry, I didn’t understand that,” AI can ask us for clarification, making conversations smarter and more productive.
Here’s how combining the conversational and predictive intelligence of AI with common interactions could change the way we navigate the world:
- The rise of the "Invisible Interface": As voice becomes a primary tool for interaction, the concept of a VUI (Voice User Interface) might shift focus away from visible screens and buttons, enabling smoother, more natural conversations.
- Blinks over buttons: Several devices can now use eye tracking and movement to select apps and navigate content, signaling a shift from traditional interfaces to gesture-based interactions.
- The end of individual apps: A conceptual, chatbot-powered “AI phone” is in the works–an app-less device that primarily controls actions through voice, suggesting a future where AI could amalgamate all content into one interface.
Voice and visual capabilities may never eliminate the need for a traditional interface entirely, for accessibility and inclusivity reasons among others, but they will quickly become the rule rather than the exception.
The Trust Barrier
Despite these advancements, important questions remain. If everything is summarized by AI, how do we know which sources are trustworthy–and what risks does this pose to society at large? If the future is moving toward one generic interface, how do brands stay relevant and stand out? What balance is needed between human interaction and AI in customer service?
As the bridge between technology, information, and users, Experience Designers’ job remains the same: to understand user needs and minimize friction, ultimately placing users at the heart of every experience. That includes everything from balancing automation with human interaction to ensuring ethical implementation. By prioritizing thoughtful design and keeping human experience at the center of innovation, we can shape our platforms in a way that empowers, rather than alienates, the people they serve.