Yay! Done! Going home!

One hour before I hop on the 28X to Pittsburgh Airport. At 7AM local time tomorrow I’ll be back in The Netherlands! Everything is done, I earned my Master’s degree in Human Computer Interaction. Cheer! Cheer! Cheer!

Okay, I’m gonna miss everyone here in the States. I’m going to be crying all day.

On top of that, I’m going to have to work my ass off again starting August 27th.

But it’s going to be fun.

Mixed feelings, but I guess it’s time for another change. I’m ready for it.

To my Dutch friends: See you soon!
To my US friends: Come and visit!

Oh, and here is the thing we created for our Google-sponsored project that I’m really proud of: Fiesta!

PicturePal – having fun with friends

The course Interface and Interaction Design (IID) teaches how to design an entirely new experience that improves your life. The difficulty of this endeavor cannot be overstated. Most usability practices are based on improving products. The research approach (as taught at the TU/e) is to make fundamental, measurable, and above all generalizable improvements to interactions that are fundamental to human behavior (for instance: how can we make menu selection in Windows applications faster?). The usability practitioners’ approach is to improve existing work practices (as taught at CMU in Methods) or even existing user interfaces (as taught in Programming Usable Interfaces). The interaction design approach is about something completely different. It is about finding undiscovered needs and desires.

It is incredibly difficult to design products using the interaction design approach. But when it works, you get powerful products, like the iPod. Products that are not an obvious fix to an identifiable problem, but nevertheless create a very deep connection with the user, and consequently leave their mark on our society.

My first somewhat successful application of this approach would be the final project I did with Daniel and Sushmita for the IID course. The product is called PicturePal.

Our assignment was to explore the opportunities and design challenges around the idea of an intelligent agent working in a home. The goal was to improve the quality of people’s lives. First thing to do was to select an audience; we chose to help roommates live together.

We started off doing some exploratory research using directed storytelling. We asked people for their horror-stories concerning roommates and living together. Maybe we could help people with chores and cleaning? Answer: “You can’t make people clean!!” Maybe people are in need for house rules? Answer: “It’s not a marriage, so you have to bend a little.” How about bills and money? Isn’t that an issue? Answer: “Someone just buys stuff without being asked.” Well, how about people making noise? Answer: “There are no rules like no noise after midnight.” It seemed that people didn’t actually have any concrete troubles living together. Yes, there were issues sometimes, but people always found a way to deal with that. They were doing fine without any help, and they were actually very proud of that!

We therefore decided to explore the positive side of the design space. Our statement was to design something that makes you feel like you’re a good roommate. We explored about 70 concepts in this space, and picked the best twelve or so for a validation session. From the validation we found that people valued spending time with their roommates, sharing memories together, and being connected to the home at all times. We used these values to iterate on one of the concepts that got the most positive feedback, which resulted in PicturePal.

PicturePal is a digital photo frame with a built-in camera. It can be mounted on a wall, and – when turned on – takes pictures around the room (different angles are possible if you use extra cameras) at regular intervals. This relieves you from having to bring your camera to every party and prevents being just too late to capture that crazy moment. The pictures can be shown in the frame itself, on a TV (in which case the frame serves as a remote), or on a cellphone (so you can always check what’s going on at your place).

The concept is tailored to roommates having lots of little ad-hoc parties and funny moments, but the product would also work for new parents, nursing homes, or clubs and bars (though each would need a substantially different marketing strategy).

The project got really good feedback from the class, and I submitted it to TNO’s “Not Invented Yet” contest (see below). In the first round, I finished in second place! This means that I’m through to the finals that start early November. Everyone who voted for me in the first round: Thank you! And I hope I can count on you again in the finals!

Intelligent Tutoring Systems for Computer Software

The text below is loosely based on a paper I wrote for the class Applications of Cognitive Science.

Most readers of my blog know that I’m a techno-optimist. They also know that the biggest concern I have with most technological applications is that they are just very complicated. This is especially true for software. Some companies call their software “intelligent”… I think that is because they make us feel incredibly stupid.

Luckily, there are heroes called usability engineers that try to improve the interactions we have with our systems. They iteratively design and test the user interface, using various design techniques and usability evaluation methods. However, even with this approach, it is next-to-impossible to make user interfaces usable for everyone.

So what is this “usable systems problem” anyway? How come we can’t use our systems? As Norman points out, there are two gulfs between the user and the system: the gulf of execution and the gulf of evaluation. Users have some sort of goal, but in order to fulfill this goal they have to translate it to specific actions on the system. This gulf entails forming an intention, specifying an action and executing that action. After that, the system does something, hopefully. Now the evaluation gulf comes into play: the user has to notice a change in the system, interpret this change, and evaluate whether this was the correct change in relation to the goal. People have trouble using systems because they have to constantly bridge these gulfs. The smaller the gulfs, the better the interface.

Why do these gulfs exist? Norman gives us the following conceptual answer: There are three conceptual models. First, there’s the designer model. This model represents the way the designer of the interface maps the functionality of the system to the designed interactions. A play button to start your iPod. A forward button to skip a song. Then the system is built, and there’s the system image, which is basically a physical version of the designer model (the interface itself). After that, a user buys the system and starts using it. By using the system, the user creates a use model: from the appearance of the interface and their reflection on their interaction with the system, they derive their own model of how the system works. The gulfs appear when the user thinks that the system works differently than it actually does, in other words, when the use model doesn’t align with the system image. Designing a good user interface, therefore, is making the gulfs as small as possible, by making a system image that can be easily interpreted and translated to a correct use model.

People fail in using systems because they don’t understand the system image. What do you do when you don’t understand something? You take a class! I have been a computer tutor for many years, and I have seen many people struggling with computer software. As expected, the problems are extremely varied: what is almost insultingly easy for one person can be almost ungraspable for the other. When I probe into problematic situations, virtually all of the problems are due to inadequate use models. Fortunately, making the use model explicit works extremely well as a way to teach.

Now for a solution. Intelligent tutoring systems (ITSs) are computerized tutors. An ITS selectively presents problems to students, and corrects the students if they make mistakes. In order to do this, an intelligent tutoring system has three main parts: an expert model, model tracing, and knowledge tracing. The expert model is the model that the system has of the solution to the problem at hand. This solution mimics the steps a knowledgeable person would take to solve the task. Model tracing means matching the observed behavior of the student to productions in the expert model. This way the tutor can understand why the student did something wrong, and show the student the correct step, or give the student a hint. Knowledge tracing means figuring out the competence level of the student. Using knowledge tracing, the tutor can gradually introduce new concepts and strengthen old ones.
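To make the three parts concrete, here is a minimal sketch in Python. The problem domain (one algebra step), the rule names, and the probabilities are all invented for illustration; the knowledge-tracing update is the standard Bayesian one (with slip, guess, and learn parameters), just with made-up numbers.

```python
# Minimal ITS sketch: an expert model as production rules, model tracing by
# matching the student's step against the expert step, and a Bayesian
# knowledge-tracing update. All rule names and parameters are illustrative.

# Expert model: each skill maps a problem state to the correct next step.
EXPERT_MODEL = {
    "isolate-variable": lambda state: f"subtract {state['b']} from both sides",
    "divide-coefficient": lambda state: f"divide both sides by {state['a']}",
}

def trace_model(skill, state, student_step):
    """Model tracing: compare the student's step to the expert step."""
    expert_step = EXPERT_MODEL[skill](state)
    correct = (student_step == expert_step)
    hint = None if correct else f"Hint: try to {expert_step}."
    return correct, hint

def trace_knowledge(p_known, correct, p_learn=0.2, p_slip=0.1, p_guess=0.2):
    """Knowledge tracing: Bayesian update of P(student knows the skill)."""
    if correct:
        evidence = p_known * (1 - p_slip)
        p = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        evidence = p_known * p_slip
        p = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    return p + (1 - p) * p_learn  # each step is a learning opportunity

# One tutoring step on "2x + 3 = 7": the student divides too early.
state = {"a": 2, "b": 3}
correct, hint = trace_model("isolate-variable", state, "divide both sides by 2")
p_known = trace_knowledge(0.5, correct)
```

Here the tutor catches the out-of-order step, offers the expert step as a hint, and lowers its estimate that the student has mastered the skill.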

The cool thing is that this is exactly what I do as a computer tutor. Model tracing means figuring out why a student makes a mistake. Knowledge tracing means figuring out what to present to the student. It also fits Norman’s representation of the usable systems problem: The expert model is the system image, and model tracing means interpreting the gulfs between user and system. Conclusion: Intelligent tutoring systems can solve the usable systems problem!

So, we can make an Intelligent Tutoring System to help people use software. You first figure out the user’s goal, then you match the user’s actions to correct production rules for that goal. When the user takes an incorrect action, you either correct the mistake automatically or you derive the misconception in the use model and correct it. Finally, you trace the use model to see what the user knows about the system.
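The loop above could be sketched like this for a software tutor. The goal (“save as PDF”), the production rules, and the misconception table are all hypothetical examples, not taken from any real system.

```python
# Sketch of an ITS for software: match user actions against the production
# rules for a goal; on a mismatch, diagnose the use-model misconception if
# we recognize it, otherwise fall back to showing the expert step.

PRODUCTIONS = {  # correct action sequence per goal (hypothetical)
    "save-as-pdf": ["open File menu", "choose Export", "select PDF", "click Save"],
}

MISCONCEPTIONS = {  # wrong action -> likely flaw in the use model (hypothetical)
    "choose Print": "Export, not Print, converts the document to another format.",
}

def tutor_step(goal, step_index, user_action):
    """One iteration of model tracing against the goal's production rules."""
    expected = PRODUCTIONS[goal][step_index]
    if user_action == expected:
        return "ok", step_index + 1   # advance to the next production
    feedback = MISCONCEPTIONS.get(user_action, f"Try: {expected}")
    return feedback, step_index       # stay on this step until it is right

# The user tries to print instead of export:
status, i = tutor_step("save-as-pdf", 1, "choose Print")
```

Because the wrong action is in the misconception table, the tutor can explain the flaw in the use model instead of just naming the correct step.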

A real tutor can only be present “at tutoring time”. An ITS, however, can provide on-the-spot instructions whenever the user needs them: it tunes the use model while the user is doing his work. The system can also propose a goal structure that helps to define the appropriate intentions: in many cases the user knows what he/she wants to do, and what actions are available, but is lacking a plan that ties several actions together to attain the goal.

The most important benefit, however, is the fact that this system takes into account the variability of the user. With established usability engineering methods you can try to create the best system image: one that best reflects the use model. But not every user has the same use model! Everyone has a slightly different idea of how the system exactly works. With an intelligent tutoring system, you can dynamically determine the current user’s use model, and correct it on the go.

There is one more twist to make. When I was making this all up in my head I suddenly started thinking: Why do I want to adjust the use model to match the system model? Why not do it the other way around? I realized that I had mapped the system image to the expert model, making model tracing a case of altering the use model. I also realized that it would be radically different if I would map the use model to the expert model, and have model tracing adjust the system image. This would mean that the ITS still tries to interpret the user’s use model, but then instead of altering this model to match the system image, it would alter the system image to match the use model: adapting the software to the user!

This approach – which I call “reversed tutoring” – may very well be much more powerful than the ITS approach proposed above. Changing the system is definitely a lot less intrusive than changing the user. People are generally resistant to change, and from the user’s perspective it seems quite reasonable to ask the system to adjust to the user instead of the other way around.

Of course, reversed tutoring is not the holy grail. For one thing, use models often start out being rather incoherent, but it would be a fallacy to derive from this that we should make the system incoherent too. In reality, normal and reversed tutoring would work together to optimize the user experience and solve the usable systems problem.