Update!

Okay, I have seriously neglected my blog since I came back from the US, which was like two years ago.

My recent graduation has however sparked some interest in The Netherlands as well as overseas, so I’ll give a quick update. Please leave a comment if you wish to learn more. :-)

I recently finished my graduation project: a recommender system for energy-saving measures with an interface that adapts to the user’s decision-making style. The most pronounced effect was found in preference elicitation: I built two systems that let users indicate their preferences in different ways, by specifying attribute weights vs. by evaluating examples. The example-based version turned out to be better for novices; the weight-based version turned out to be better for experts.
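To make the contrast concrete, here is a minimal, purely illustrative sketch of the two elicitation styles. The items, attributes, and the crude inference rule are invented for this example and are not the actual thesis system; both styles just end up producing attribute weights that score the same item set.

```python
# Hypothetical energy-saving measures: (name, {attribute: value in [0, 1]})
ITEMS = [
    ("LED lighting",    {"cost": 0.9, "savings": 0.4, "effort": 0.9}),
    ("Wall insulation", {"cost": 0.2, "savings": 0.9, "effort": 0.3}),
    ("Solar panels",    {"cost": 0.1, "savings": 0.8, "effort": 0.4}),
]

def score(weights, attrs):
    """Weighted-sum utility of one item under the given attribute weights."""
    return sum(weights[a] * v for a, v in attrs.items())

def rank(weights):
    """All items, ordered best-first under the given weights."""
    return sorted(ITEMS, key=lambda it: score(weights, it[1]), reverse=True)

# 1) Weight-based elicitation: the user states attribute weights directly.
expert_weights = {"cost": 0.2, "savings": 0.7, "effort": 0.1}

# 2) Example-based elicitation: the user only picks a few examples they like,
#    and the system infers weights -- here by naively averaging the attribute
#    profiles of the liked examples (a stand-in for a real inference method).
def weights_from_examples(liked_names):
    liked = [attrs for name, attrs in ITEMS if name in liked_names]
    return {a: sum(ex[a] for ex in liked) / len(liked) for a in liked[0]}

novice_weights = weights_from_examples(["Wall insulation", "Solar panels"])

print([name for name, _ in rank(expert_weights)])
print([name for name, _ in rank(novice_weights)])
```

The point of the sketch is that both interfaces feed the same recommender; only the way the weights are obtained differs, which is exactly what makes the novice/expert comparison possible.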

This work was recently (today!) awarded the Best Poster/Short Paper Award at the ACM Conference on Recommender Systems 2009 (RecSys ’09).
In my grad project I extended this work further into an adaptive system that predicts the user’s expertise from their clicking behavior, and adapts the interface on the fly.
My full grad project is nominated for the Gerrit van der Veer prijs, the thesis award of the Dutch CHI chapter. The award session will be on November 12th in Delft. I invite everyone to be there!
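The on-the-fly adaptation idea can be sketched in a few lines. This is only a toy illustration of the concept: the event names, the update rule, and the threshold are all made up here and are not the actual model from the project.

```python
# Toy sketch: infer an expertise estimate in [0, 1] from interaction events
# and switch the preference-elicitation interface accordingly.

def update_expertise(estimate, event, rate=0.2):
    """Nudge the expertise estimate after one (hypothetical) click event.
    Tweaking attribute weights hints at expertise; clicking through
    examples hints at a novice."""
    signal = {"weight_tweak": 1.0, "example_click": 0.0}[event]
    return (1 - rate) * estimate + rate * signal  # exponential moving average

def choose_interface(estimate, threshold=0.5):
    """Pick the elicitation style that suited this expertise level best."""
    return "weight-based" if estimate >= threshold else "example-based"

estimate = 0.5  # start undecided
for event in ["example_click", "example_click", "weight_tweak"]:
    estimate = update_expertise(estimate, event)
print(choose_interface(estimate))
```

The moving-average form keeps the estimate responsive without flipping the interface on every single click, which is the main design tension in this kind of on-the-fly adaptation.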

CHI paper on the usability of intelligent agents

Last week, half a year of work culminated in a six-page paper.

For my Research Project I’ve been working on the usability of intelligent agents. In short, my main hypothesis is that there are very capable intelligent agents as well as rather less capable ones, but that usability depends not only on system capabilities but also on the appearance of the system. Specifically, a capable agent should look “intelligent” so that users immediately understand that they can use a rich interaction, approaching the richness of human-human interaction. Otherwise, users may underestimate the system’s capabilities and not use its full functionality. A less capable agent, on the other hand, should not look too “intelligent”, because otherwise users may overestimate its capabilities and wonder why such a smart-looking system doesn’t understand their commands.

HCI or HTI people will see that I’m drawing a parallel here with Norman’s idea of feedforward (the appearance) and feedback (the actual system response) helping to establish a use image (the inferred intelligence). The trick is that users form a “human-like” use image, and therefore human-like cues can be used as feedforward. In fact, the more human-like the appearance of the system, the more intelligent the system is believed to be.

I used a trick to test this hypothesis: I built some systems in which cues and actual system capabilities matched, and some in which they didn’t. You’ll have to read my paper for detailed results, but one result was very clear: 22% of the participants who used a system with low capabilities and very human-like cues got so confused by the mismatch between feedforward and feedback that they simply quit the experiment after a few minutes!

As I said, I put all this in a six-page paper, which I submitted to the CHI 2008 Student Research Competition. I will hold off on putting the paper online until I hear more about that (but you can ask me for it by email if you’re interested). I’ll keep you updated!

Interaction Designer!

I have been too busy lately to update my blog. I was working on a paper for the CHI conference (will talk about that later), and finishing my classes for this semester.

Besides that, I got a job! Last year, I worked on Aduna AutoFocus for the course SAUI, and we made some improvements to the interactivity of the program. During my winter break in NL, I visited Aduna to show them our results. They were pretty enthusiastic about it, so I asked them to let me know if they had any part-time job opportunities.

About a year later, I’m working on AutoFocus as an Interaction Designer one day a week! I’m doing the standard procedures (Heuristic Evaluation, Concept Validation, Paper Prototyping, Functional Prototyping), but in a super-fast, lightweight manner. I’m focusing on usability and usefulness, and in a few months (never talk about real deadlines when you’re in IT) my contributions should be visible in AutoFocus 5.0.
