
During user evaluation, we were primarily interested in the users' reactions during the test, and we were looking for critical incidents.

For each user position, the critical incidents we observed and the design changes they prompted were:

Salesperson

Critical Incidents:
Create Trip
- "What am I supposed to do?": couldn't click on "Add"
- Confusion between "arrival" and "destination"
- Didn't understand the difference between "Add", "Save", and "Submit"
Approve Trip
- Tasks accomplished easily
Analyze Trip
- "I understand what is being shown"
- "The graph is messy"
- Took a minute to find the date slider; noted that the gray color blended into the background

Design Changes:
Create Trip
- Textbox helper text appeared in black (rather than gray) in Firefox; test on more browsers than just Chrome
- Abandon the one-line experiment; add multiple lines, as previously discussed, so that the user inputs one leg by default
- Use more descriptive terms than "Save", "Add", and "Submit"
Approve Trip
- None
Analyze Trip
- Clean up the graphs
- Change the contrast/hue of the date slider so it stands out more

Sales Manager

Critical Incidents:
Create Trip
- Confusion between "Save" and "Submit"
- Did not notice mileage
Approve Trip
- None
Analyze Trip
- Did not notice the date range on the slider

Design Changes:
Create Trip
- See the Salesperson changes above
- Add a box showing the total mileage for the trip
Approve Trip
- None
Analyze Trip
- Include more labeling on the slider

Auditor

Critical Incidents:
Create Trip
- None
Approve Trip
- None
Analyze Trip
- "Very useful interface to compare histories"
- "Very useful when going through employees"
- "Time bar was confusing"

Design Changes:
Create Trip
- None
Approve Trip
- None
Analyze Trip
- Improve the usability of the slider based on the comments above

Reflection

Overall, our team was happy with the results of our project, which did not change much from the initial planning stages.

GR1 - Project Proposal and Analysis

We think that this part of the design process was probably the most critical to our eventual success. Factors that worked well for us were:

  • Task Analysis: Limiting the scope of the project so that we could accomplish what we needed to do in three months.
  • User Analysis: Spending a lot of time finding real, representative samples of our user population to interview in depth. This set the design criteria and guided decisions for the rest of the project; we referred back to the interview notes quite a bit over the following months.

Parts that probably could have been done better:

  • Domain Analysis: We could have done a better job thinking about multiplicities. For instance, we missed the entity "legs of a journey", which might have led us to devote a little more room to it in the final design.

GR2 - Designs

We split up, and each group member came in with two options for each task, giving us a total of six designs per task. The design meeting was spent discussing the pros and cons of each design. The final design that emerged from this process was a blend that incorporated aspects from all three people: in general, one design "won" for each task, but small embellishments from the other designs were blended in for support.

GR3 - Paper Prototyping

This step was critical in improving the usability of our interface. In particular, the Create Trip and Analyze Trip interfaces underwent major redesigns as a result of this process and became far more efficient, learnable, and visually appealing. We did have some difficulty with the process of the user test itself: our briefing was far too detailed to elicit really good feedback from users. In the end, we found that the briefing/task list should strike a balance between guidance and freedom, and that a bad briefing/task list can severely compromise the effectiveness of the test.