Generative and Evaluative Research

edited by Emily Remirez
Feb. 2019

Details

Work Type
Client - NAMI Central, TX
My Role
User Experience Designer and Researcher
Time Frame
Feb. - May 2019
Skills
  • Generative Research
    • Prioritized Users
    • Defined Objectives of Research
    • Conducted Moderated Card Sorting Sessions
  • Evaluative Research
    • Wrote Usability Test Protocol
    • Conducted Moderated Tree Test Sessions
  • Synthesized Research to Determine Design Recommendations

Summary

In this project my team was tasked with completely redesigning the NAMI Central, TX website. We were successful in delivering the project to the client and even exceeded their expectations. Evaluation of the project exceeded the original timeline, but I was fortunate to continue the work with another team, running another Tree Test with the new navigation. That study is still ongoing, but so far we have seen an overall success rate of 86%.

Through this project, I came to fully understand and practice the two types of UX research: generative and evaluative. Generative research is about exploring unknown space and learning, while evaluative research determines whether a design succeeds in meeting users' goals. Before this project I thought all research was just research; this project's larger scale demanded more detailed and intentional research, and that is how I learned the difference.

While analyzing the results from the second round of Tree Testing, I noticed a pattern. We explicitly told users who each task was for and what to do, so once users saw the first question, they learned the pattern and could simply search for clues. This introduced a confound: in some ways we were testing our ability to give clues more than the learnability of the navigation. Going forward, I'd like to avoid enabling participants to keyword-search for clues and instead create tasks and scenarios that get them thinking. In the real world, users don't arrive with a clear understanding of which role a task is meant for, what they want to accomplish, or what the service calls it--yet that's often how we set up the Tree Test. I'd like my future Tree Tests to model the real world, making the test more valuable and the knowledge gained more informative.

The Problem

The NAMI team had a deep understanding of their users and presented us with two main user problems:

  1. The website looked dated
  2. Users had trouble using the site

Investigating the first problem, we visited the site and confirmed that its aesthetic was dated.

Figure 1. The original NAMI Central, TX website design.

As for the second problem, NAMI had qualitative data: users would call and ask questions about how to use the site.

Figure 2. In this screenshot, "Calendar" is located under the "About" tab. In Tree Testing and interviews, users didn't expect "Calendar" to be under "About" and were confused.

Research

Full details begin in the "Discovering and Prioritizing Users" section of this case study.

Design

We designed a new navigation structure. Through moderated Card Sorting sessions we learned that users organize content by intended audience. This finding informed our final design for the information architecture and led us to take a “task then user role” approach.

Figure 3. The proposed design of the NAMI website site map.
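To make the "task then user role" structure concrete, here is a minimal sketch of such a navigation tree as a nested Python dictionary. Every label is a hypothetical placeholder for illustration, not the final IA we delivered.

```python
# A minimal sketch of a "task then user role" navigation tree.
# Top-level nodes are tasks and second-level nodes are user roles;
# all labels are hypothetical placeholders, not the final IA.
site_map = {
    "Get Support & Education": {
        "For Myself": ["Classes", "Support Groups"],
        "For a Family Member": ["Classes", "Resources"],
    },
    "Get Involved": {
        "Volunteer": ["Opportunities", "Training"],
        "Donate": ["One-Time Gift", "Recurring Gift"],
    },
    "Calendar": ["Upcoming Classes", "Events"],
}

def print_tree(node, depth=0):
    """Recursively print the navigation tree with indentation."""
    if isinstance(node, dict):
        for label, child in node.items():
            print("  " * depth + label)
            print_tree(child, depth + 1)
    else:  # a list of leaf pages
        for label in node:
            print("  " * depth + label)

print_tree(site_map)
```

Leading with the task and nesting the role underneath mirrors how participants talked about the content: what they wanted to do first, then who it was for.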

Discovering and Prioritizing Users

We had to focus research on Problem 2:

  1. The website looked dated
  2. Users had trouble using the site

Problem 1 was clear cut--we could redesign the website with a contemporary audience in mind. Problem 2, however, needed to be explored, and generative research was necessary to fully understand it. Who were our users? What trouble were they having? Why were they having issues?

Who are the Users?

NAMI explained that there were three user types on the site.

Figure 4.1. The user types presented to us by NAMI.

The team needed to determine each user's goals and tasks in order to better understand what real value NAMI could deliver to them. Based on our initial meeting with our stakeholders, we made informed assumptions.

Figure 4.2. The users' tasks and goals.

Prioritized Users

We had to prioritize users because of project timeline and feasibility. In order to understand where to focus, we asked ourselves one question:

Which users' tasks and goals were not being met?

As for donors, our client believed that the dated look was what was affecting them most, and a design refresh was already part of the project plan. Because the donation process was handled through a third party, that experience was largely outside of our scope.

The client expressed the tremendous value of volunteers, saying that they "provided something money couldn't." The client also pointed out that volunteers were often the ones solving the issues that MHCs encountered, so there was an opportunity to study the ways they were already solving problems.

NAMI reported receiving numerous calls from MHCs that made it clear the site was not serving these users. We quickly realized that the needs of MHCs should be explored--they were the ones already directly voicing problems.

Figure 4.3. The users, prioritized based on project needs.

Because there was so much at stake for the volunteers and MHCs, and because donors would be served by the later design refresh, we prioritized Volunteers and MHCs.

Generative Research

There were two main Generative Research objectives:

  1. Validate our assumptions of users' tasks and goals
  2. Learn how users attempt to accomplish their tasks and goals

Let’s unpack these objectives.

  1. Validate our assumptions of users' tasks and goals
    • As always, it was important to be clear about what was an assumption, because if our assumptions were wrong, we would be designing for an idea rather than an actual person. By validating the users' tasks and goals, we confirmed what would provide real value for them.
  2. Learn how users attempt to accomplish their tasks and goals
    • Understanding how users attempt to accomplish their tasks and goals was important because it gave us an opportunity to see their reasoning, reactions, and guiding principles. We saw firsthand how users reason through the website, watched their emotional reactions when things worked or didn't, and learned what overall ideas guide their actions.

Research Synthesis

Through interviews conducted by other members of my team, we learned that there was confusion around the names of all the classes offered by NAMI; the names seemingly had no relation to what the classes actually were. We also discovered an overall confusion about how the content was organized. The excerpts below give you an idea of the frustration this was causing for users:

If you don't know what you are looking for, idk how you would identify those things

I don't understand these names

Calendar is under the "About" section

Can't find the volunteer Button

It was clear that more research was needed.

The interviews left us with an idea: OK, so clearly people don't understand the class names, and clearly people don't understand how the content is organized--but we only knew this in a general sense.

This lack of specificity led us to a Tree Test.

Tree Test

Why a Tree Test? The issues discovered in the interviews were specifically around content organization, and a Tree Test is an effective way to test this: it gives us an opportunity to create specific scenarios and see which tasks users have problems with, so we can focus on the tasks that users fail. We designed our test to have participants self-identify their user type as donor, volunteer, or MHC, so the Tree Test told us which user types had issues and where those issues were. Here is a screenshot of the results of the Tree Test:

Figure 5. Results from a selection of the first round of Tree Testing on the original site navigation.
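For context on how results like these are computed, here is a minimal sketch of tallying per-user-type success rates from raw Tree Test records. The field names and sample data are hypothetical, not our actual study data.

```python
from collections import defaultdict

# Hypothetical Tree Test records: (user_type, task, reached_correct_node).
# Field names and sample values are illustrative, not our actual data.
results = [
    ("Volunteer", "Find the calendar", True),
    ("Volunteer", "Sign up for a class", True),
    ("MHC", "Find the calendar", False),
    ("MHC", "Sign up for a class", True),
]

tally = defaultdict(lambda: [0, 0])  # user_type -> [successes, attempts]
for user_type, task, success in results:
    tally[user_type][0] += int(success)
    tally[user_type][1] += 1

for user_type, (successes, attempts) in tally.items():
    print(f"{user_type}: {successes}/{attempts} tasks "
          f"({successes / attempts:.0%} success rate)")
```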

To summarize the results: Volunteers understood the jargon and were therefore mostly successful, while MHCs did not understand the jargon and failed. After the interviews and Tree Testing, we had a clearer understanding of the confusion, where it was, and who was confused, but we still needed to know: How do we fix it?

Card Sorting

To answer this question, we needed to understand how an MHC would actually organize the content, so we conducted a moderated open Card Sort. Participants were presented with a deck of cards containing descriptions of the classes offered by NAMI, asked to group them into categories that made sense to them, and finally asked to name each category. They could rename categories and move cards between groups as they saw fit.

Figure 6. A user participating in a Card Sort session.

This type of test gave us insight into several things, including users' reasoning for organizing content and the language they use. Most participants organized content by who it was intended for and what they were trying to do--for example, "I'm an MHC and I want to get some support and education (task); I need it for myself (user role)."
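A common way to synthesize open Card Sort data like this is a co-occurrence count: how often each pair of cards lands in the same group across participants. Here is a minimal sketch, with hypothetical card names and groupings.

```python
from collections import Counter
from itertools import combinations

# Each participant's sort is a list of groups; each group is a set of cards.
# Card names and groupings here are hypothetical.
sorts = [
    [{"Peer Class", "Family Class"}, {"Support Group", "Helpline"}],
    [{"Peer Class", "Family Class", "Support Group"}, {"Helpline"}],
]

co_occurrence = Counter()
for groups in sorts:
    for group in groups:
        for pair in combinations(sorted(group), 2):
            co_occurrence[pair] += 1

# Pairs grouped together most often hint at the categories users expect.
for (a, b), count in co_occurrence.most_common():
    print(f"{a} + {b}: grouped together by {count} participant(s)")
```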

After the interviews and Tree Test, we had a good idea of the problem, where it was, and who it affected, but we needed to know how to solve it. The Card Sort told us how users thought about NAMI's offerings and the language they used to organize them. We were now ready to start designing.

Evaluative Research

Evaluation of the project exceeded the original timeline, but I was fortunate to continue this project with another team. With this team, we wrote a research protocol to detail exactly what we were trying to learn. Our design hypothesized that reorganizing the IA around users' mental models would better enable them to browse and locate what they are seeking. To validate this prediction, we needed to run a usability test.

Research Protocol

What did we want to test? Learnability and confidence: how learnable is the content, and how confident are users that they have selected the correct path?

Who should we test with? In order to get a true one-to-one comparison with the first round of research, we needed to test with the same prioritized users. Volunteers had been successful with the original navigation, but now that we had changed it, would they continue to be? They are a key user, so we needed to confirm that we hadn't disenfranchised them. MHCs clearly had to be included, not only because the site was initially so confusing for them, but more importantly because serving them is NAMI's purpose. Without solving this user's problems, there is no service provided and the NAMI vision isn't met.

How would we test it? A Tree Test would tell us the success rate and thus learnability, but we also needed to test confidence. We chose to measure confidence with a simple Likert scale: participants rated the statement "I felt very confident about my choices" on a 5-point scale, from strongly disagree to strongly agree. The Tree Test showed volunteers with a success rate of 92.5%, while MHCs were at 81.75%.
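As a sketch of how the two measures could be reported together, assuming Likert responses are coded 1 (strongly disagree) through 5 (strongly agree); the names and sample values below are hypothetical, not our study data.

```python
from statistics import mean

# Hypothetical session records: (user_type, task_success_rate, confidence).
# Confidence is the Likert response coded 1 (strongly disagree)
# through 5 (strongly agree); all values here are made up.
sessions = [
    ("Volunteer", 0.95, 5),
    ("Volunteer", 0.90, 4),
    ("MHC", 0.80, 3),
    ("MHC", 0.83, 4),
]

for group in ("Volunteer", "MHC"):
    rates = [s for u, s, _ in sessions if u == group]
    confidences = [c for u, _, c in sessions if u == group]
    print(f"{group}: success {mean(rates):.1%}, "
          f"mean confidence {mean(confidences):.1f}/5")
```

Reporting success and confidence side by side keeps both learnability and users' trust in their own choices visible for each user type.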

