Expedia
Usability testing in-lab to improve the “My Trips” feature at Expedia
Usability | In-Person Usability Testing | Travel
Expedia & UW
As part of a graduate-level Usability Testing course at UW, I collaborated with four other UW HCDE classmates to conduct in-lab usability sessions. Working with our Expedia sponsor, we studied how travelers plan and manage full itineraries on Expedia, and our findings informed key enhancements to the Trips feature.
The Problem
Expedia’s My Trips feature helps travelers organize their bookings, but past feedback and behavioral data had indicated some areas of friction, especially around how users discovered and interacted with their saved itineraries.
The product team wanted to better understand where users were getting stuck (the pain points), how they expected the experience to flow, and what features would increase trust and utility. Our role as HCDE students was to uncover actionable insights through moderated in-lab usability testing, focusing on both short-term usability fixes and longer-term strategic improvements.
Approach
To evaluate how users navigated the My Trips experience, we conducted in-lab moderated usability testing. We recruited a mix of new and returning Expedia users who were asked to complete key tasks related to planning, viewing, and managing trips.
We combined think-aloud protocols, task success metrics, and post-task UEQ (User Experience Questionnaire) surveys to get both behavioral and attitudinal feedback.
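As an illustration of how that behavioral and attitudinal data can be tabulated together, here is a minimal Python sketch that computes per-task completion rates and mean UEQ item scores (on the UEQ's standard -3 to +3 item scale). The records, field names, and values are hypothetical, not our actual study data.

```python
from statistics import mean

# Hypothetical session records: one entry per participant per task.
# "completed" comes from the moderator's task-success log;
# "ueq" holds post-task UEQ item scores on the -3..+3 scale.
sessions = [
    {"task": "find_saved_hotel", "completed": True,  "ueq": [2, 1, 2]},
    {"task": "find_saved_hotel", "completed": False, "ueq": [-1, 0, -2]},
    {"task": "view_past_trips",  "completed": True,  "ueq": [1, 1, 0]},
]

def summarize(records):
    """Roll up completion rate and mean UEQ score per task."""
    tasks = {}
    for r in records:
        tasks.setdefault(r["task"], []).append(r)
    summary = {}
    for task, recs in tasks.items():
        completion = sum(r["completed"] for r in recs) / len(recs)
        ueq_mean = mean(score for r in recs for score in r["ueq"])
        summary[task] = {"completion_rate": completion, "mean_ueq": ueq_mean}
    return summary

for task, stats in summarize(sessions).items():
    print(f"{task}: {stats['completion_rate']:.0%} completed, "
          f"mean UEQ {stats['mean_ueq']:+.2f}")
```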
Key Insights
Our research uncovered several usability issues, including:
Poor visibility of the map icon, leading users to miss saved hotels when viewing trips.
Low engagement with the “past trips” feature, stemming from unclear labeling and low perceived value.
Inconsistent mental models—users expected “Trips” to include both booked and tentative plans, not just confirmed reservations.
We rated the severity of each issue along three criteria:
Impact: How severely does the issue affect the user experience?
Frequency: How many participants encountered the problem?
Persistence: How often does the issue affect the same user?
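This write-up doesn't show the exact scale or weighting we used, but one simple, hypothetical way to turn these criteria into a prioritized list is to rate each issue on each dimension and rank by the combined score. The sketch below assumes ratings of 1 (low) to 4 (high) and a plain sum; the scores shown are illustrative only.

```python
# Hypothetical severity scoring: each issue is rated 1 (low) to 4 (high)
# on impact, frequency, and persistence, then ranked by the summed score.
issues = [
    {"issue": "Map icon has poor visibility",   "impact": 3, "frequency": 4, "persistence": 3},
    {"issue": "Past trips label unclear",       "impact": 2, "frequency": 3, "persistence": 2},
    {"issue": "Trips excludes tentative plans", "impact": 3, "frequency": 2, "persistence": 3},
]

def severity(issue):
    """Combine the three ratings into a single severity score."""
    return issue["impact"] + issue["frequency"] + issue["persistence"]

for issue in sorted(issues, key=severity, reverse=True):
    print(f"{severity(issue):2d}  {issue['issue']}")
```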
Impact
After we presented this work to Expedia's Design and Research team, several usability improvements were implemented:
Updated Map Features
Based on our finding that the map display offered little value to trip planners, the team added icons for favorited hotels to the trip map, allowing users to explore where they will be staying on their trip.
Easier Trips Comparison & Collaboration
Trip detail cards for Stays were updated to include photos and collaboration features (likes and comments), making it easier to compare stays on the Trips page.
Improved Association Between the “Heart” and Saved Trips
We found that when saving a hotel, the confirmation “toast” at the bottom of the screen often went unnoticed, and users struggled to connect the heart icon with the Trips feature. To address this, the team added a “drawer” that opens when the heart is clicked, reinforcing the link between saving and Trips.
Improved “Trips” Navigation and Findability
After we found that navigation to the Trips page was confusing, the team added “Trips” to the Expedia homepage and added clearer content to help users find the Trips page.
My Role
I collaborated with my four teammates to design, moderate, analyze, and present the end-to-end research process. As the only team member with prior UX research experience, I helped lead the team on:
Designing and facilitating moderated in-lab sessions
Creating the moderator guide and participant tasks
Developing the analysis strategy
Packaging usability findings for our Expedia sponsor
What I learned
As this was a part of a graduate-level usability testing course, I gained hands-on experience with key usability and HCI research practices.
This project deepened my understanding of usability testing in a real-world setting. Conducting research in Expedia’s in-house lab highlighted the value of observing user behavior in a controlled environment.
I learned how powerful behavioral measures, like task completion, click patterns, and frustration indicators, can be when paired with think-aloud protocols and post-task ratings.
I also learned the importance of clear research planning and analysis frameworks, which made synthesizing and reporting on a large dataset much easier.