Prototype Photos
Prototype A
Prototype B
We used a small notebook as a physical prop to represent an Android phone.
We taped a paper version of the phone's screen to it and swapped small sheets of paper to change screens. Here, the home screen is shown.
If a user selects any of the channels, the screen slides left and one of the four channel screens takes its place (depending on which channel they selected).
On the new screen, the user sees thumbnail plots of the channel's components on the left, along with some statistics about each component (mean, variance, last value).
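As a rough illustration of how these statistics could be kept up to date as messages stream in, here is a minimal sketch (our own hypothetical class, not code from the app) that uses Welford's online update for the mean and variance:

// Hypothetical sketch, not the app's actual code: per-component statistics
// (mean, variance, last value) updated incrementally as values arrive,
// so the whole history never has to be re-scanned.
class ComponentStats {
    private long n = 0;
    private double mean = 0.0, m2 = 0.0, last = Double.NaN;

    void add(double value) {
        last = value;
        n++;
        double delta = value - mean;
        mean += delta / n;
        m2 += delta * (value - mean);
    }

    double getMean()     { return mean; }
    double getVariance() { return n > 1 ? m2 / n : 0.0; }  // population variance
    double getLast()     { return last; }
}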
From the main screen, users can also tap the "Custom Plot" button, which again slides the screen left and lets them create a custom plot. They can set the plot's name (a name is automatically suggested unless they change it) to whatever they want and select components from different channels. Each component appears with its own color and connector line, and the selections show up immediately in the plot.
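To make the custom-plot state concrete, a minimal sketch might look like the following; the class and field names are our own assumptions, not the app's real code. Each selection is just a (channel, component) pair with its own series color:

import java.util.ArrayList;
import java.util.List;

// Rough sketch (illustrative names only) of the state behind a custom plot:
// a user-editable name plus the selected (channel, component) pairs,
// each drawn with its own color.
class CustomPlot {
    static class Series {
        final String channel;    // e.g. "POSITION"
        final String component;  // e.g. "x"
        final int argbColor;     // color of the line and connector
        Series(String channel, String component, int argbColor) {
            this.channel = channel;
            this.component = component;
            this.argbColor = argbColor;
        }
    }

    String name;  // auto-suggested by the app, but the user can override it
    final List<Series> series = new ArrayList<>();

    CustomPlot(String suggestedName) { this.name = suggestedName; }

    void addSeries(String channel, String component, int argbColor) {
        series.add(new Series(channel, component, argbColor));
    }
}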
Another feature we decided to add is the ability to publish messages on a channel. Whenever the user has a specific channel open, they can tap the publish button next to the channel name, and a sliding animation again presents a new screen from the right.
This screen shows the specific channel the user is about to publish on and offers a starting template (since the values on a channel can look like arbitrary real numbers from a human perspective). There are two kinds of starting templates, for two kinds of use: the last message on the channel and the last message the user published. The values are set up via text fields using the on-screen keyboard.
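The two templates essentially amount to remembering two messages per channel. A hedged sketch, with messages represented simply as maps of field names to values and all names being our own invention, could look like this:

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the publish screen's template logic: offer either
// the last message seen on the channel or the last one this user published.
class PublishTemplates {
    enum Template { LAST_ON_CHANNEL, LAST_PUBLISHED_BY_ME }

    private Map<String, Double> lastSeenOnChannel;    // updated by our channel subscription
    private Map<String, Double> lastPublishedByUser;  // remembered after each publish

    void onMessageSeen(Map<String, Double> fields)      { lastSeenOnChannel = fields; }
    void onMessagePublished(Map<String, Double> fields) { lastPublishedByUser = fields; }

    /** Returns a copy used to prefill the text fields, or an empty map if none exists yet. */
    Map<String, Double> startingValues(Template choice) {
        Map<String, Double> source =
            choice == Template.LAST_ON_CHANNEL ? lastSeenOnChannel : lastPublishedByUser;
        return source == null ? new LinkedHashMap<>() : new LinkedHashMap<>(source);
    }
}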
TODO upload photos to the wiki
Briefing
Hi,
Our project is a robotic system monitor on a mobile device (we chose Android).
Every robot is, in some sense, a computer that runs multiple processes, such as sensors, logical units, and actuators. These processes usually communicate with each other via a shared medium.
In the specific types of systems we consider, communication happens on different channels, and every process can decide whether to publish or subscribe to any channel.
Messages are sent on every channel, and each message is a data structure whose layout differs from channel to channel.
The purpose of our interface is to let the user observe the data flowing on these channels and to publish messages in order to alter the robot's state.
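To make the channel model concrete, here is a small, purely illustrative sketch; it is not the robot middleware's actual API, just a minimal in-memory version of publish/subscribe over named channels of structured messages:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Purely illustrative sketch of the channel model described above; the real
// middleware has its own API, which this does not attempt to reproduce.
public class ChannelSketch {
    /** A message is a bundle of named values; the schema differs per channel. */
    static class Message {
        final Map<String, Double> fields = new LinkedHashMap<>();
        Message put(String name, double value) { fields.put(name, value); return this; }
    }

    /** A channel fans every published message out to all of its subscribers. */
    static class Channel {
        final String name;
        private final List<Consumer<Message>> subscribers = new ArrayList<>();
        Channel(String name) { this.name = name; }
        void subscribe(Consumer<Message> subscriber) { subscribers.add(subscriber); }
        void publish(Message m) { subscribers.forEach(s -> s.accept(m)); }
    }

    public static void main(String[] args) {
        Channel position = new Channel("POSITION");
        // The monitor subscribes to observe the data flowing on the channel...
        position.subscribe(m -> System.out.println("POSITION update: " + m.fields));
        // ...and can also publish a message in order to alter the robot's state.
        position.publish(new Message().put("x", 10.5).put("y", 0.0));
    }
}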
Tasks
Before the first task, we showed the users the storyline.
Task 1: Debugging the robot
Use the interface to find what is wrong with the robot.
(Successful completion of this task means that the user has found the plot of the POSITION channel and noticed its sharp discontinuity, which is the bug.)
Task 2: Plot multiple graphs on the same plot
Combine all the POSITION channel fields in one figure.
Task 3: Publish a message on a channel
Publish a new waypoint at x=10.5, y=0.
Observations
User 1 (Wednesday)
Our first user was confused about the whole storyline and only completed the first task. We probably didn't do a very good job of explaining the domain.
Drop-down menus suck - it is hard to figure out what they mean.
If we have several drop-down menus with dependencies (i.e. you set the first one and the options in the second change), there needs to be something that makes the dependency visible to the user.
User 2 (Wednesday)
User 3 (Friday)
We felt this user was closest to potential users of the application because she understood the semantics of the scenario.
User 4 (Friday)
Prototype iteration
We actually started with two prototypes. We took the two most diverse designs from GR2 and created a prototype around each design. Each team member implemented the prototype whose design was proposed by the other team member; this way we better understood each other's ideas and were able to avoid emotional attachment to specific features in the designs.
Our iteration consists of merging the ideas that we found worked in each prototype after the in-class testing.
Discussion
We are targeting a pretty narrow user base (robotics researchers and UROPs), so one of our biggest hurdles at this stage was grounding the test users in the task domain. In order to operate our interface, a user needs a certain background that cannot be fully and deeply explained in 5 minutes.
We also discovered that it is pretty hard to create affordances in paper prototypes with such low visual fidelity. Good mobile interfaces rely heavily on these cues and techniques to stand out. In short, a paper prototype is still useful for brainstorming design ideas for mobile interfaces, but we believe the gap between a paper prototype and a pixel-perfect prototype is significant.