Design
The overall design of our interface was meant to be a clean, simple application for an enterprise audience. We spent a great deal of time researching enterprise design patterns and found that using grays with a single highlighting color (red, in our case) made for a professional-looking application. High-level navigation between interfaces was done through a horizontal tab system familiar from many corporate sites. Each tab is generally self-contained in terms of functionality, although the underlying trip database is shared across tabs. As the user navigates, we wanted each tab to maintain its state independently, because some users may use multiple interfaces at once (e.g., a manager approving trips for subordinates while planning a trip of their own).
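A minimal sketch of that tab setup is shown below, assuming jQuery UI tabs and hypothetical panel IDs. Because switching tabs only hides a panel's DOM rather than destroying it, in-progress work such as a half-filled form survives navigation between interfaces:

```javascript
// Hypothetical tab wiring: #tabs wraps one <div> per interface
// (#create-trip, #approve-trip, #analyze-trip). jQuery UI tabs
// show/hide these panels, so each tab's state persists while the
// user moves between interfaces.
$(function () {
  $('#tabs').tabs();
});
```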
During the initial development, we decided to restrict our "canvas" to a constant size across tabs, both for consistency and to simplify resizing.
The Create Trip interface, inspired by Google's homepage, was designed to optimize learnability while remaining powerful. Originally we had multiple rows of input fields arranged vertically, but paper prototyping revealed that a simpler version with just three boxes would be much more learnable, leading to a major design overhaul. The page itself used a high-level table structure to keep most elements aligned, with the intentional exception of the "Save Trip" and "Submit Trip" buttons, which were center-justified to stand out.
The Approve Trip interface was designed around efficiency (while still supporting full CRUD) because our interviews showed that managers spent very little time, usually on Fridays, approving expenses. The "front page" is therefore designed so the user can quickly scan the high-level details of a trip (who, what, where, when, how much) and approve or reject from this page alone. This functionality changed little from the paper prototype beyond minor labeling adjustments. During heuristic evaluation, an expert recommended a "select all" widget, which made a lot of sense given the efficiency goal.
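The "select all" widget could be as simple as the sketch below; the #select-all and .trip-checkbox selectors are hypothetical names of our own, not part of the actual implementation:

```javascript
// One header checkbox toggles every per-trip checkbox, so a manager
// can approve or reject a whole batch of trips in two clicks.
// (jQuery 1.5.2 predates .prop(), so .attr() sets the checked state.)
$('#select-all').change(function () {
  $('.trip-checkbox').attr('checked', this.checked);
});
```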
The Analyze Trip interface was designed to maximize user control. We found during interviews that auditors used a lot of creativity in their analyses and that their tasks were rarely standardized, so we wanted the user to be able to jump around and drill into details along a variety of dimensions. In the same vein, we included an option to export all data to Excel in tabular format. We looked to mint.com as inspiration for this interface.
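A rough sketch of how such an export can work, assuming the analysis data sits in an HTML table (the #analysis-table ID is hypothetical): each row is flattened to a comma-separated line and handed to the browser as a CSV file, which Excel opens as a spreadsheet.

```javascript
// Flatten the analysis table into CSV and trigger a download via a
// data URI. Cells are quoted so embedded commas don't break columns.
function exportToCsv() {
  var lines = [];
  $('#analysis-table tr').each(function () {
    var cells = $(this).find('th, td').map(function () {
      return '"' + $(this).text().replace(/"/g, '""') + '"';
    }).get();
    lines.push(cells.join(','));
  });
  window.location = 'data:text/csv;charset=utf-8,' +
                    encodeURIComponent(lines.join('\n'));
}
```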
Implementation
The web application was implemented using a mixture of HTML, CSS, JavaScript, and jQuery (version 1.5.2, with jQuery UI 1.8.11). HTML and CSS defined the static framework of the application, including tables, forms, fonts, colors, and element sizes.
JavaScript was used extensively to respond to dynamic user input. In general, we updated the page by rewriting the HTML within specific elements rather than regenerating whole pages.
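An illustrative example of that pattern, with hypothetical selectors and markup: user input triggers a rewrite of the HTML inside one container, leaving the rest of the page untouched.

```javascript
// When the destination field changes, regenerate only the trip
// summary element rather than re-rendering the whole page.
$('#destination').change(function () {
  $('#trip-summary').html(
    '<strong>Destination:</strong> ' + $(this).val()
  );
});
```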
jQuery was used mainly for direct-manipulation features such as the drag-and-drop functionality in Create Trip and the date slider in Analyze Trip. We kept local copies of all library code rather than linking to web-hosted versions.
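For instance, the Analyze Trip date slider can be built on jQuery UI's slider widget in range mode, roughly as sketched here (the day offsets and element IDs are assumptions for illustration):

```javascript
// A two-handled slider selecting a window of days; on each drag we
// update the label and would re-filter the graphs for the new range.
$('#date-slider').slider({
  range: true,
  min: 0,            // days since the start of the data set
  max: 365,
  values: [0, 90],   // initial window
  slide: function (event, ui) {
    $('#date-label').text('Day ' + ui.values[0] +
                          ' to day ' + ui.values[1]);
    // ...re-filter the trip data and redraw the graphs here...
  }
});
```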
In terms of code structure, each tab was self-contained, which made it easy for the team to work on different parts collaboratively.
Where relevant, we relied on open-source JavaScript and jQuery plugins (see the wiring sketch after this list). These plugins included:
- The calendar widget (JavaScript)
- The date slider (jQuery)
- Table Drag n' Drop (jQuery)
- The tabs (jQuery)
- The data graphing functionality (jQuery)
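Wiring these up on page load looked roughly like the sketch below; the element ID is hypothetical, and the tableDnD call follows our reading of that plugin's documented usage:

```javascript
$(function () {
  $('#itinerary-table').tableDnD({   // Table Drag n' Drop plugin
    onDrop: function (table, row) {
      // Persist the new ordering of trip legs after a row is dragged.
    }
  });
  // ...similar one-line initializers for the tabs, calendar widget,
  // date slider, and graphing plugin...
});
```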
We had also intended to use Google APIs for the calendar and map in the Create Trip interface. However, the Calendar API turned out to be far less robust than assumed. For example, we intended to let the user drag and drop meetings on the calendar, just as in Google Calendar. We discovered after the Computer Prototype stage that the Google Calendar API only allows the user to view an embedded calendar on a webpage: changes must be submitted to Google's servers through forms, with no direct manipulation of the calendar itself. As a result, we soured on the Google API and never implemented the calendar or map. This definitely affected user testing, because we had effectively planned on Google being the "brains" behind our interface's intelligence, which in turn was supposed to make the interface much more learnable and powerful.
If we were to continue development, we would probably have to implement a drag-and-drop calendar ourselves and write a substantial amount of JavaScript to take Google's place in intelligently optimizing the user's trip. We think the Google Maps API is still usable for display.
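A hand-rolled version of the calendar's direct manipulation could lean on jQuery UI's draggable and droppable widgets, roughly as below (the .meeting and .day-cell classes are hypothetical):

```javascript
// Meetings become draggable blocks; each day cell accepts drops and
// re-parents the meeting, after which our own optimization logic
// (replacing Google's) would recompute the trip.
$('.meeting').draggable({ revert: 'invalid' });
$('.day-cell').droppable({
  drop: function (event, ui) {
    ui.draggable.appendTo(this);
  }
});
```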
In terms of jQuery plugins, we ran into problems with plugins written against older versions of jQuery conflicting with one another. This is a problem with open-source code in general, and we were able to work around it adequately.
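One workaround we know of (sketched here with illustrative version numbers) is jQuery's noConflict mechanism: load the jQuery version a legacy plugin expects, let the plugin attach to it, release the global names, then load the main jQuery.

```javascript
// Script order matters: old jQuery, then the legacy plugin, then
// this file, then the current jQuery.
//   <script src="jquery-1.4.4.js"></script>
//   <script src="legacy-plugin.js"></script>
//   <script src="noconflict.js"></script>  (contains the line below)
//   <script src="jquery-1.5.2.js"></script>
var jqLegacy = jQuery.noConflict(true);
// $ and jQuery now refer to 1.5.2, while the legacy plugin stays
// reachable through jqLegacy, e.g. jqLegacy('#widget').somePlugin().
```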
Evaluation
Two user tests were done in person and one over the phone. Each test involved only one developer, who acted as both facilitator and observer. The users were a former salesperson, a sales manager, and a corporate auditor. Two of the three (the auditor and the sales manager) had been interviewed previously, so they were already somewhat familiar with the project.
After a very brief introduction to the purpose of the application, all three were given the same set of tasks (written on a sheet of paper) to perform, including:
- Creating a trip with three legs
- Deleting a personal trip from "My Trips"
- Approving a trip
- Rejecting a trip after detailed investigation
- Investigating IT reimbursement data
- Finding out who approves trips in the Sales Department
- Changing the date range on a graph
We did not include a demo because we felt that the design should be easily learnable right off the bat.
During user evaluation, we were primarily interested in the users' reactions during the test, and we were looking for critical incidents.
Position | Critical Incidents | Design Changes
---|---|---
Salesperson | Create Trip | Create Trip
Sales Manager | Create Trip | Create Trip
Auditor | Create Trip | Create Trip
Reflection
Overall, our team was happy with the results of our project, which did not change much from the initial planning stages.
GR1 - Project Proposal and Analysis
We think that this part of the design process was probably the most critical to our eventual success. Factors that worked well for us were:
- Task Analysis: Limiting the scope of the project so that we could accomplish what we needed to do in three months.
- User Analysis: Spending a lot of time finding truly representative members of our user population to interview in depth. This set the design criteria and guided decisions for the rest of the project; we referred back to the interview notes quite a bit over the following months.
Parts that probably could have been done better:
- Domain Analysis: We could have done a better job thinking about multiplicities. For instance, we missed the entity "legs of a journey," which might have led us to devote a little more room to it in the final design.
GR2 - Designs
We split up, and each group member came in with two options for each task, giving us a total of six designs per task. The design meeting was spent discussing the pros and cons of each. The final design that emerged from this process was a blend incorporating aspects from all three people: in general, one design "won" for each task, but small embellishments from the other designs were blended in for support.
GR3 - Paper Prototyping
This step was critical in improving the usability of our interface. In particular, the Create Trip and Analyze Trip interfaces underwent major redesigns as a result of this process and became far more efficient, learnable, and visually appealing. We had some difficulties with the user-testing process itself: we found that our briefing was far too detailed to elicit really good feedback from users. In the end, we concluded that the briefing and task list should strike a balance between guidance and freedom, and that a poor briefing or task list can severely compromise the effectiveness of a test.