Aetna’s customer web portal, Navigator, was built over a decade ago. The portal was constantly updated with new features, but without any clear strategy for how to present the product. As a result, more than half of its consumers were unable to complete their tasks.
Please note that due to an NDA, I am unable to share certain pieces of information!
My job as a summer UX research intern was to analyze and understand the current state of the web portal.
Because the product was built without a clear plan, the information architecture (IA) was very confusing, even to the developers. To understand how the product was organized, I first took time to create a visual representation of the application: an application map.
I approached the mapping process with the Hagan Rivers method, in which every task and link available on a page is represented visually in a diagram. Yellow items (spokes) are tasks, and gray items (hubs) represent groupings of data, more spokes, and other items. I also added other item types to fully capture the interactive capacity of the application (for instance, blue represents data filters).
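For readers who like to see the structure spelled out, here is a minimal sketch of how a page from such a map could be modeled as typed nodes. The page, labels, and items below are hypothetical placeholders, not Aetna’s actual structure.

```python
# A minimal, hypothetical sketch of one page from an application map:
# gray "hubs" group content, yellow "spokes" are tasks, blue nodes are filters.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    kind: str                      # "hub", "spoke" (task), or "filter"
    children: list = field(default_factory=list)

# Placeholder page structure, for illustration only
claims_hub = Node("Claims", "hub", children=[
    Node("View claim details", "spoke"),
    Node("Dispute a claim", "spoke"),
    Node("Filter by date range", "filter"),
])

def count_by_kind(node, counts=None):
    """Walk the map and tally hubs, spokes, and filters under a branch."""
    counts = counts if counts is not None else {}
    counts[node.kind] = counts.get(node.kind, 0) + 1
    for child in node.children:
        count_by_kind(child, counts)
    return counts

print(count_by_kind(claims_hub))   # e.g. {'hub': 1, 'spoke': 2, 'filter': 1}
```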
The full map was much larger than the image above, with seven more categories in total. While creating this map may seem slow and uninteresting, it was crucial, because I found out that:
Luckily, I had the chance to look at a competitor’s equivalent product. I created a map of their product as well to understand how it was organized.
Next, I wanted to understand whether the current categories established in the application map were appropriate. Additionally, to understand the relative performance of the Aetna product, I decided to run a card sort on the competitor’s product as well. For this project, I asked:
After running a mock test with a fellow intern using open-source card sorting software, I ran the actual test on UserZoom.
Here’s an idea of what average dendrograms looked like:
1. There were clear grouping patterns: some groups were based on definitions, some on linguistic similarities, and some on desired tasks (for instance, all pharmacy items were grouped together).
2. The way users labeled certain functions (e.g., “account settings” vs. “manage account”) became clearer.
3. There were strong indications that certain cards fit better with clusters other than the ones they currently belonged to.
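As an aside for the curious: dendrograms like the ones above are typically produced by hierarchically clustering a card-by-card agreement matrix. Below is a rough sketch of that process in Python using scipy; the cards and participant sorts are invented placeholders, not the actual study data.

```python
# Hypothetical sketch: build a dendrogram from card sort agreement data.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform
import matplotlib.pyplot as plt

cards = ["Refill a prescription", "Find a pharmacy", "View claims", "Update address"]

# Each row is one participant's sort: a group label per card (placeholder data)
sorts = [
    [0, 0, 1, 2],
    [0, 0, 1, 1],
    [0, 0, 2, 1],
]

# Co-occurrence: how often each pair of cards landed in the same group
n = len(cards)
co_occurrence = np.zeros((n, n))
for sort in sorts:
    for i in range(n):
        for j in range(n):
            if sort[i] == sort[j]:
                co_occurrence[i, j] += 1

# Convert agreement into distance, then cluster with average linkage
distance = 1 - co_occurrence / len(sorts)
linkage_matrix = linkage(squareform(distance, checks=False), method="average")

dendrogram(linkage_matrix, labels=cards)
plt.show()
```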
Now that the pain points in the IA were clearer, I decided to run a tree test to explore possible solutions. For this process, I asked:
To understand each tree’s effectiveness, I compiled a list of tasks by reviewing the card sort results and internal survey data, and by talking with Product Managers on the team.
After several drafts, I constructed a new tree (Tree A) based on the card sort results. Changes included wording changes, re-categorizing items, and creating or deleting top-level items (e.g., Pharmacy). Again, after running a mock test with the same intern friend, I used UserZoom to test Tree A against the current tree.
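To give a sense of the kind of comparison this sets up, here is a tiny, hypothetical sketch of computing per-task success rates for the current tree versus Tree A. The task names and numbers are placeholders, not the real results.

```python
# Placeholder tree test tallies: (successes, attempts) per task, per tree
results = {
    "Find pharmacy benefits": {"current": (14, 40), "tree_a": (29, 40)},
    "Update contact info":    {"current": (31, 40), "tree_a": (30, 40)},
}

for task, trees in results.items():
    current_rate = trees["current"][0] / trees["current"][1]
    tree_a_rate = trees["tree_a"][0] / trees["tree_a"][1]
    print(f"{task}: current {current_rate:.0%} -> Tree A {tree_a_rate:.0%}")
```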
In general, there was not a big change in the average task success rate. However, the following patterns were observable:
Because my time as an intern came to an end before the tree test results came in, I received the results via email. If I had more time, I would have run another round of tree tests to explore more effective solutions (e.g., improving the success rate of Category C).
Overall, I’m super grateful for the experience, because I was able to learn so much! Here are some of the things I learned:
Thank you!
Special shoutouts to the boys who let me run pilot studies on them!