Aetna Navigator UX Research

Aetna’s customer web portal, Navigator, was built over a decade ago. The portal was constantly updated with new features, but without a clear strategy for how to present the product. As a result, more than half of its customers were unable to complete their tasks.

Please note that due to an NDA, I am unable to share certain pieces of information!

My Role

My job as a summer UX research intern was to analyze and understand the current state of the web portal.

Discover - Application Map

Because the product was built without a clear plan, the information architecture (IA) was confusing, even to the developers. To understand how the product was organized, I started by creating a visual representation of the application: an application map.

I approached the mapping process with the Hagan Rivers method, in which every task and link available on a page is visually represented in a diagram. The yellow items (spokes) represent tasks, and the gray items (hubs) represent groupings of data, more spokes, and other items. I also added other item types to fully represent the interactive capacity of the application (for instance, blue represents data filters).

The full map was much larger than the image above, with a total of 7 more categories. While creating this map may seem slow and uninteresting, it proved crucial, because it revealed that:

  • There were loops, dead links, and items that jumped in and out of other categories (the sketch after this list shows how such issues can be caught programmatically).
  • The language used was redundant or confusing.
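
Once the map exists as data, issues like the loops and dead links above can be caught automatically. Here is a minimal sketch in Python, assuming a toy hub-and-spoke map with entirely hypothetical page names (the actual deliverable was a visual diagram, not code):

```python
# A toy application map as a typed, directed graph. Every page name
# here is hypothetical; the real map is under NDA.
NODES = {
    "Home":         {"type": "hub",    "links": ["Claims", "Coverage", "Find Care"]},
    "Claims":       {"type": "hub",    "links": ["Claim Detail", "Home"]},  # links back to Home
    "Claim Detail": {"type": "spoke",  "links": ["Date Filter"]},
    "Date Filter":  {"type": "filter", "links": []},
    "Coverage":     {"type": "hub",    "links": ["Plan Summary", "Old FAQ"]},  # "Old FAQ" no longer exists
    "Plan Summary": {"type": "spoke",  "links": []},
    "Find Care":    {"type": "spoke",  "links": []},
}

def dead_links(nodes):
    """Links that point at pages that no longer exist."""
    return [(src, dst)
            for src, meta in nodes.items()
            for dst in meta["links"]
            if dst not in nodes]

def find_loops(nodes, start="Home"):
    """Depth-first search for pages that link back into their own path."""
    loops, path = [], []
    def dfs(node):
        if node in path:  # revisiting a page on the current path = loop
            loops.append(path[path.index(node):] + [node])
            return
        if node not in nodes:  # dead link; reported separately
            return
        path.append(node)
        for nxt in nodes[node]["links"]:
            dfs(nxt)
        path.pop()
    dfs(start)
    return loops

print("Dead links:", dead_links(NODES))  # [('Coverage', 'Old FAQ')]
print("Loops:", find_loops(NODES))       # [['Home', 'Claims', 'Home']]
```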

Luckily, I had the chance to look at a competitor's equivalent product. I created a map for it as well to understand how it was organized.

Understand - Card Sort

Next, I wanted to understand whether the categories established in the application map were appropriate. Additionally, to understand the relative performance of the Aetna product, I decided to run a card sort on the competitor's product as well. For this project, I asked:

  • Is this the best way to categorize information?
  • Is this the best way to label the categories?

After running a mock test with a fellow intern using open-source card sorting software, I ran the actual test on UserZoom.

*this is not the actual card sort

Logistics

Card Items

  • Pulled from the first and/or second navigation level as necessary; the smallest set that could fully represent the entire application
  • Added cards representing features planned for implementation
  • 55 cards for Aetna, 44 cards for the competitor

Subjects

  • Goal: 100 participants
    • 109 actual participants
    • 98 clean participants
  • Task
    • Group cards into clusters
    • Name each cluster
    • Explain naming

Results

Here’s an idea of what average dendrograms looked like:
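
(The actual dendrograms are under NDA.) For a rough idea of the math behind a dendrogram: count how often each pair of cards was grouped together, turn that similarity into a distance, and run hierarchical clustering on it. Below is a minimal sketch in Python with made-up cards and participants; the real study relied on UserZoom's built-in analysis rather than this code:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# Hypothetical sorts from three participants; group names are whatever
# each participant typed, so only equality within a sort matters.
cards = ["Refill Prescription", "Find a Pharmacy", "View Claims", "Update Address"]
sorts = [
    {"Refill Prescription": "pharmacy", "Find a Pharmacy": "pharmacy",
     "View Claims": "claims", "Update Address": "account"},
    {"Refill Prescription": "meds", "Find a Pharmacy": "meds",
     "View Claims": "money stuff", "Update Address": "money stuff"},
    {"Refill Prescription": "rx", "Find a Pharmacy": "find care",
     "View Claims": "claims", "Update Address": "profile"},
]

# Co-occurrence: how often each pair of cards landed in the same group.
n = len(cards)
co = np.zeros((n, n))
for sort in sorts:
    for i in range(n):
        for j in range(n):
            if sort[cards[i]] == sort[cards[j]]:
                co[i, j] += 1

# Turn similarity into distance and cluster (average linkage).
dist = 1 - co / len(sorts)
Z = linkage(squareform(dist, checks=False), method="average")
dendrogram(Z, labels=cards)
plt.show()
```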

Takeaways

1. There were clear grouping patterns: some groups were based on meaning, some on linguistic similarity, and some on the tasks users wanted to accomplish (for instance, all pharmacy items were grouped together).

2. The way users labeled certain functions (e.g., "account settings" vs. "manage account") became clearer.

3. There were strong indications that certain cards belonged with clusters other than the ones the current IA placed them in.

Explore - Tree Test

Now that the pain points in the IA were clearer, I decided to run a tree test to explore possible solutions. For this process, I asked:

  • What are new ways to organize information that would reflect the card sort results?
  • How should each item and category be named?

To measure the effectiveness of candidate trees, I compiled a list of tasks by reviewing the card sort results and internal survey data, and by talking with the Product Managers on the team.

After several drafts, I constructed a new tree (Tree A) based on the card sort results. Changes included revised language, re-categorized items, and the creation or deletion of top-level items (e.g., Pharmacy). Again, after running a mock test with the same intern friend, I used UserZoom to test Tree A against the current tree.
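
To give a sense of what "testing a tree" means: a tree test presents participants with only the navigation labels, no visual design, and asks them to click down to where they would expect to find something. Here is a minimal sketch of how two candidate trees might be laid out as data, with hypothetical labels standing in for the NDA'd originals:

```python
# Two candidate navigation trees as nested dicts: keys are labels,
# values are sub-trees (an empty dict is a leaf). Labels are hypothetical.
CURRENT_TREE = {
    "Claims & Balances": {"View Claims": {}, "Refill Prescription": {}},
    "Coverage": {"Plan Summary": {}, "Find a Pharmacy": {}},
}

TREE_A = {
    "Claims": {"View Claims": {}},
    "Coverage": {"Plan Summary": {}},
    # New top-level category suggested by the card sort results:
    "Pharmacy": {"Refill Prescription": {}, "Find a Pharmacy": {}},
}

def paths(tree, prefix=()):
    """Every label path a participant could click through."""
    for label, subtree in tree.items():
        yield prefix + (label,)
        yield from paths(subtree, prefix + (label,))

for p in paths(TREE_A):
    print(" > ".join(p))  # e.g. "Pharmacy > Refill Prescription"
```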

Logistics

Task Set

  • 12 Tasks
  • Each provided a scenario and asked users to locate a piece of information or a task.
  • E.g. “You are looking for a general practitioner that accepts your insurance plan. How would you look for that information on the healthcare website?”

Subjects

  • Goal: 50 participants
    • 51 actual participants
    • 50 clean participants
  • Task
    • Each participant was randomly assigned 9 of the 12 tasks

Results

In general, the average task success rate did not change much between the two trees. However, the following patterns were observable:

  • The language in Category C was changed; tasks in this category showed roughly a 15-20% drop in success.
  • Certain tasks had overlapping language that was rewritten so the labels no longer overlapped; these tasks showed a large increase in success (the sketch below shows the kind of per-task comparison behind these observations).
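
A minimal sketch of that per-task comparison, assuming made-up task names and pass/fail outcomes (the real numbers came out of UserZoom and are under NDA):

```python
# Hypothetical per-participant outcomes: 1 = found the right node, 0 = failed.
results = {
    "current": {"Find a GP": [1, 0, 1, 1, 0], "Refill Rx": [0, 0, 1, 0, 1]},
    "tree_a":  {"Find a GP": [1, 1, 1, 1, 0], "Refill Rx": [1, 1, 0, 1, 1]},
}

def success_rate(outcomes):
    return sum(outcomes) / len(outcomes)

# Compare success rates task by task across the two trees.
for task in results["current"]:
    cur = success_rate(results["current"][task])
    new = success_rate(results["tree_a"][task])
    print(f"{task}: current {cur:.0%} -> Tree A {new:.0%} ({new - cur:+.0%})")
```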

Wrap Up

Because my internship ended before the tree test results came in, I received the results via email. With more time, I would have run another round of tree tests to explore better solutions (e.g., improving the success rate of Category C tasks).

Overall, I’m super grateful for the experience, because I learned so much! Here are some of the things I learned:

  • How to ask the right questions in order to answer the big question.
  • How to choose the right tools to answer those questions.
  • How to organize the answers so that everyone is on the same page.

Thank you!

Special shoutouts to the boys who let me run pilot studies on them!