I’m currently working on a project about “human-like responses to computer systems”. Many interaction design specialists say that human-like software (that is, software that elicits and understands human-like behavior) is the most promising development in human-computer interaction. Although some people cleverly argue that human-like interfaces may be more incomprehensible than their “dumb” counterparts (Hofstadter in A Coffeehouse Conversation on the Turing Test), most designers agree that human-like interfaces are more learnable, since we as humans already know how to interact in the human way (as opposed to interacting with computers, which is something we invariably have to learn).
Now this is interesting: Shechtman & Horowitz report that in a computerized cooperative task, participants who believe they are dealing with a human rather than a computer (in both conditions they are actually dealing with a computer) are more inclined to use hostile statements.
What does this mean? People are more aggressive toward human-like interfaces. So if we believe that human-like interfaces are better, then being mad at your software may actually be a sign that you’re dealing with good software!
Sounds stupid? Well, I don’t know about you, but my most fruitful cooperative efforts have often been clear, direct and open conversations… and these sometimes take on a hostile character. That’s just the way it works.