The text below is loosely based on a paper I wrote for the class Applications of Cognitive Science.
Most readers of my blog know that I’m a techno-optimist. They also know that the biggest concern I have with most technological applications is that they are just very complicated. This is especially true for software. Some companies call their software “intelligent”… I think that is because they make us feel incredibly stupid.
Luckily, there are heroes called usability engineers that try to improve the interactions we have with our systems. They iteratively design and test the user interface, using various design techniques and usability evaluation methods. However, even with this approach, it is next-to-impossible to make user interfaces usable for everyone.
So what is this “usable systems problem” anyway? How come we can’t use our systems? As Norman points out, there are two gulfs between the user and the system: the gulf of execution and the gulf of evaluation. Users have some sort of goal, but in order to fulfill this goal they have to translate it into specific actions on the system. This gulf entails forming an intention, specifying an action, and executing that action. After that, the system does something, hopefully. Now the gulf of evaluation comes into play: the user has to notice a change in the system, interpret this change, and evaluate whether it was the correct change in relation to the goal. People have trouble using systems because they have to constantly bridge these gulfs. The smaller the gulfs, the better the interface.
Why do these gulfs exist? Norman gives us the following conceptual answer: There are three conceptual models. First, there’s the designer model. This model represents the way the designer of the interface maps the functionality of the system to the designed interactions. A play button to start your iPod. A forward button to skip a song. Then the system is built, and there’s the system image, which is basically a physical version of the designer model (the interface itself). After that, a user buys the system and starts using it. By using the system, the user creates a use model: from the appearance of the interface and their reflection on their interaction with the system, they derive their own model of how the system works. The gulfs appear when the user thinks the system works differently than it actually does; in other words, when the use model doesn’t align with the system image. Designing a good user interface, therefore, means making the gulfs as small as possible, by making a system image that can be easily interpreted and translated into a correct use model.
People fail at using systems because they don’t understand the system image. What do you do when you don’t understand something? You take a class! I have been a computer tutor for many years, and I have seen many people struggling with computer software. As expected, the problems are extremely varied: what is almost insultingly easy for one person can be almost ungraspable for another. When I probe problematic situations, virtually all of the problems are due to inadequate use models. Fortunately, making the use model explicit works extremely well as a way to teach.
Now for a solution. Intelligent tutoring systems (ITSs) are computerized tutors. An ITS selectively presents problems to students, and corrects the students when they make mistakes. In order to do this, an intelligent tutoring system has three main parts: an expert model, model tracing, and knowledge tracing. The expert model is the model the system has of the solution to the problem at hand. This solution mimics the steps a knowledgeable person would take to solve the task. Model tracing means matching the observed behavior of the student to productions in the expert model. This way the tutor can understand why the student did something wrong, and show the student the correct step, or give the student a hint. Knowledge tracing means figuring out the competence level of the student. Using knowledge tracing, the tutor can gradually introduce new concepts and strengthen old ones.
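To make the three components concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the production rules, the hint texts, the “save a file” task, and the crude mastery update are placeholders, not taken from any real ITS.

```python
from dataclasses import dataclass

@dataclass
class Production:
    """One step an expert would take: a named skill plus the correct action."""
    skill: str
    action: str
    hint: str

# Expert model: the ordered steps a knowledgeable person would take
# to solve one made-up task (saving a file under a new name).
EXPERT_MODEL = [
    Production("open_menu", "click File", "Start from the File menu."),
    Production("save_as", "click Save As", "Look for 'Save As' to pick a new name."),
]

def trace_model(step_index: int, observed_action: str) -> str:
    """Model tracing: match the student's action against the expert model,
    and hand back a hint when the action does not fit."""
    expected = EXPERT_MODEL[step_index]
    if observed_action == expected.action:
        return "correct"
    return "hint: " + expected.hint

def trace_knowledge(mastery: dict, skill: str, correct: bool) -> None:
    """Knowledge tracing: crudely update the estimated competence per skill."""
    p = mastery.get(skill, 0.2)  # invented prior: assume the skill is mostly unknown
    mastery[skill] = min(1.0, p + 0.2) if correct else max(0.0, p - 0.1)

mastery = {}
feedback = trace_model(0, "click Edit")  # a wrong first step
trace_knowledge(mastery, "open_menu", feedback == "correct")
print(feedback)  # hint: Start from the File menu.
```

A real knowledge tracer would use a proper probabilistic update rather than this fixed step, but the division of labor is the same: the expert model holds the correct steps, model tracing compares, knowledge tracing accumulates.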
The cool thing is that this is exactly what I do as a computer tutor. Model tracing means figuring out why a student makes a mistake. Knowledge tracing means figuring out what to present to the student. It also fits Norman’s representation of the usable systems problem: the expert model is the system image, and model tracing means interpreting the gulfs between user and system. Conclusion: intelligent tutoring systems can solve the usable systems problem!
So, we can make an intelligent tutoring system to help people use software. You first figure out the user’s goal, then you match the user’s actions to correct production rules for that goal. When the user takes an incorrect action, you either correct the mistake automatically or you derive the misconception in the use model and correct that use model. Finally, you trace the use model to see what the user knows about the system.
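The loop above can be sketched in a few lines. The goal, the production rules, and the misconception lookup are all invented placeholders; a real system would have to infer the goal and the misconceptions from observed behavior, which is the hard part.

```python
# Invented example goal and its production rules: the correct action per step.
GOAL = "attach a file to an email"
RULES = ["click Compose", "click Attach", "select file"]

# Invented misconception table: wrong action -> flaw in the use model.
MISCONCEPTIONS = {
    "drag file onto inbox": "believes attachments are added before composing",
}

def tutor(actions):
    """Match each user action against the rules; on a mismatch, report the
    likely misconception (or simply the expected action)."""
    log = []
    for expected, observed in zip(RULES, actions):
        if observed == expected:
            log.append("ok: " + observed)
        else:
            why = MISCONCEPTIONS.get(observed, "expected: " + expected)
            log.append("corrected " + observed + " (" + why + ")")
    return log

for line in tutor(["click Compose", "drag file onto inbox", "select file"]):
    print(line)
```

The interesting case is the middle step: instead of just flagging the action as wrong, the tutor names the flaw in the use model, which is what lets it correct the model rather than only the action.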
A real tutor can only be present “at tutoring time”. An ITS, however, can provide on-the-spot instructions whenever the user needs them: it tunes the use model while the user is doing their work. The system can also propose a goal structure that helps to define the appropriate intentions: in many cases the user knows what they want to do, and what actions are available, but lacks a plan that ties several actions together to attain the goal.
The most important benefit, however, is the fact that this system takes into account the variability of the user. With established usability engineering methods you can try to create the best system image: one that best reflects the use model. But not every user has the same use model! Everyone has a slightly different idea of how the system exactly works. With an intelligent tutoring system, you can dynamically determine the current user’s use model, and correct it on the go.
There is one more twist to make. When I was making this all up in my head I suddenly started thinking: Why do I want to adjust the use model to match the system model? Why not do it the other way around? I realized that I had mapped the system image to the expert model, making model tracing a case of altering the use model. I also realized that it would be radically different if I mapped the use model to the expert model, and had model tracing adjust the system image. This would mean that the ITS still tries to interpret the user’s use model, but then instead of altering this model to match the system image, it would alter the system image to match the use model: adapting the software to the user!
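One way reversed tutoring could look in practice: when model tracing reveals that the user keeps looking for a command in the “wrong” place, the interface moves the command to where the user expects it. The menu layout and the three-miss threshold below are invented for illustration.

```python
from collections import Counter

# Invented starting menus (the system image).
menus = {"File": ["Save", "Print"], "Edit": ["Copy", "Paste"]}
misses = Counter()  # where the user looked for a command, in vain

def look_for(command, opened_menu):
    """Record each failed search; after repeated misses, adapt the system
    image by also offering the command where the user expects it."""
    if command in menus[opened_menu]:
        return "found"
    misses[(command, opened_menu)] += 1
    if misses[(command, opened_menu)] >= 3:
        menus[opened_menu].append(command)  # adapt interface to the use model
        return "added where you looked"
    return "not here"

for _ in range(3):
    result = look_for("Print", "Edit")
print(result)         # added where you looked
print(menus["Edit"])  # ['Copy', 'Paste', 'Print']
```

Normal tutoring would instead answer the failed search with a hint (“Print lives in the File menu”); reversed tutoring bends the system image toward the user’s use model.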
This approach, which I call “reversed tutoring”, may very well be much more powerful than the ITS approach proposed above. Changing the system is definitely a lot less intrusive than changing the user. People are generally resistant to change, and from the user’s perspective it seems quite reasonable to ask the system to adjust to the user instead of the other way around.
Of course, reversed tutoring is not the holy grail. For one thing, use models often start out being rather incoherent, but it would be a fallacy to derive from this that we should make the system incoherent too. In reality, normal and reversed tutoring would work together to optimize the user experience and solve the usable systems problem.