Interview with Don Norman, by Bjoern Hartmann, October 2005
Don Norman wears
many hats – Professor of Computer Science, Psychology and Cognitive
Science at Northwestern University; Visiting Professor at Stanford
University; principal of the Nielsen Norman user experience consulting
group; and columnist for both ACM Interactions and Interior Motives
magazines. Bjoern Hartmann caught up with Don at his home in downtown Palo
Alto to find out how he ended up at Stanford and what he is working on at
the moment.
Don: I spent most of my academic
career at UC San Diego, then 5 years at Apple, then Hewlett-Packard, then
my own company. I followed the lure of a startup to Chicago. The startup
folded; I went to Northwestern half-time so I could continue consulting
and writing. But one day I was here in Palo Alto having lunch with Tom
Kelley (of IDEO) when my wife called to tell me about some interesting
real estate - an hour later I had bought a condo. I went back to
Northwestern and said, "To my great surprise, we bought a condo in Palo
Alto, so I guess I'm resigning."
But we worked out an agreement
where I teach in the fall and the spring at Northwestern. Last winter I also
taught a course with Terry Winograd at Stanford.
My goal is to bring
structure to the field of design. Design has no real theoretical structure
and I'm trying to find one. This is the theme of my past books - "User
Centered System Design," "The Design of Everyday Things," "Things
That Make Us Smart," and "Emotional Design." Now I am working on a
new, important topic, the "design of intelligent everyday
things" or the "psychology of machines." In theory I am
writing a book on this topic - in practice it is still a mess. I love to
teach courses on topics I don't understand, then afterwards, once I have
figured things out, write the books. The best critics are graduate
students whose job is to rip apart whatever the professor says.
Nothing keeps a professor more honest, more up-to-date. More humble.
But once I finish a topic, I become bored with it, so off I go to some new
area that I do not (yet) understand.
Bjoern: You have been focusing on technologies in the automobile – which I think of
primarily as a space. The car surrounds and contains me. How does that
make interaction different from a mobile device that I hold in my
hand?
Don: That is what's fascinating about the
automobile - it has so many facets. It is a space for transportation; but
there's also the driving space, which to the driver is sometimes a chore
or a burden, but sometimes the purpose of the whole trip. Then there are
the passengers – the car can be a social interaction space, an
entertainment space, or a cooperative workspace. All of these functions
are known to the automobile designers, who are adding more and more
technology to support them. There are 50 to 100 processors and 6 or 7
networks within the automobile, there are ad-hoc networks being studied so
cars can communicate with other cars or road infrastructure. It is getting
very complex and I am concerned that the people designing this technology
do not fully understand the ramifications from the point of view of
interaction. Years ago I studied aviation safety and the role that
automation played in aviation, and I fear that the same errors are being
repeated.
Bjoern: In previous books you wrote
about errors as applied to consumer products; you also mentioned errors in
aviation; it seems like the car occupies a middle space.
Don: There was a big effort to look at human error
after Three Mile Island which is when I got into the business. There have
been fewer studies of other industries. Aviation has been most effective
and most of the work comes out of NASA Ames right here [in Mountain View].
They funded me for a long time.
Recently, the medical profession
has started to pay attention, but they are still reluctant. Most of us in
the error business believe that most errors are system errors - that
system designs encourage errors and prevent users from understanding what
is going on. The medical system is still heavily involved in a blame
regime where you want to find the people who made the error and punish
them; but people should freely admit their errors without fear of
punishment, because it's the only way we can improve the system. But let
me make a different point: people do make errors, but it is because our
machinery asks us to behave in ways that are not appropriate. On
top of that, when we automate, we tend to automate the easy things. But
when it gets difficult, the automation gives up, which is the opposite of
what you want.
The question here is: what is attention? What are
attentional resources?
The old model of attentional resources was
CPU cycles - if I am idling, I am only using a small percentage of cycles
and the rest is available anytime I need it. It's a model I helped develop
in the 1970s. There is some evidence though that this is not appropriate.
Maybe a better model would not be a fixed CPU but a mesh computer, where
multiple CPUs are scattered around in a community. If I think [the current
task] could be an easy job, I get just enough CPUs to do the job. Now if I
suddenly need more, I'm screwed. In the first model, I was not using the
remaining capacity but it was available for me. But in the second model my
attentive resources diminish and when I suddenly need them, they are not
there. Some researchers in England are applying this model of
“underload” to driving, which I find very intriguing.
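Don's contrast between the two models can be caricatured in a few lines of Python (a purely illustrative sketch; the class names and capacity numbers are invented for illustration, not cognitive-science parameters):

    # Caricature of the two models of attentional resources.
    # All numbers are arbitrary; this is an analogy, not a cognitive model.

    class FixedCpuModel:
        """Old model: one CPU whose idle cycles stay available on demand."""
        def __init__(self, capacity=100):
            self.capacity = capacity

        def can_handle(self, demand):
            # Spare capacity is always on tap, up to the fixed ceiling.
            return demand <= self.capacity

    class MeshModel:
        """Alternative: CPUs are recruited to match the *expected* load,
        so a sudden spike in demand finds no reserve."""
        def __init__(self, expected_load):
            self.allocated = expected_load  # only what the task seems to need

        def can_handle(self, demand):
            return demand <= self.allocated  # nothing extra to recruit

    # An "easy" drive that suddenly turns hard:
    print(FixedCpuModel(capacity=100).can_handle(80))  # True: idle cycles were there
    print(MeshModel(expected_load=20).can_handle(80))  # False: caught short

Under the mesh model, a driver who expected an easy trip has recruited too little attention, which is exactly the "underload" concern.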
The development of automated automobiles is progressing far more rapidly
than I would have thought possible. We have automated large amounts of the
mechanical things
in the engine, including shifting, and now the braking and the stability
of the automobile, and the speed at which you're going - we have cruise
control and adaptive cruise control.
When do you keep these systems
on and when do you keep them off? I've heard of several accidents when
people thought they were off when they were on and vice versa. Does my
rental car have anti-skid brakes or not? Do I hit the brakes hard or not?
Does the car have stability control? When is your cruise control on or
off? You don't have any way of knowing. There is a light that goes on, but
the light does not tell you whether the automation is controlling the car;
it tells you whether the cruise control is armed or not, which is
about as stupid as I can possibly imagine.
Now take the instance of
lane keeping control. Honda has a lane keeping control that, when it feels
you drifting out of the lane, applies 80% of the torque required to get
you back in. Why 80%? Because they want the driver to stay in the loop. Is
that really the correct way of doing it? We don't know. What happens when
the driver doesn't apply any force? Well the Honda will then warn the
driver. What happens when the driver still doesn't respond? The Honda will
then disconnect its lane keeping control. Which seems bizarre - isn't that
when you might need it most? Then again, if it didn't, the driver could
just continue to ignore it. These are very complex issues.
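The escalation Don describes reads like a small decision routine. Here is a hypothetical sketch (the 80% assist figure is from the interview; the function name and the warning and disengagement timings are invented for illustration):

    # Hypothetical sketch of the lane-keeping escalation described above.
    # The 0.80 assist fraction is from the interview; the time thresholds
    # are invented for illustration.

    ASSIST_FRACTION = 0.80     # apply 80% of the torque needed to re-center
    WARN_AFTER_S = 2.0         # seconds of driver inaction before warning
    DISENGAGE_AFTER_S = 5.0    # seconds of inaction before giving up

    def lane_keeping_step(required_torque, driver_torque, inactive_seconds):
        """Return (assist_torque, action) for one control step."""
        if driver_torque > 0:
            return ASSIST_FRACTION * required_torque, "assist"  # driver in the loop
        if inactive_seconds < WARN_AFTER_S:
            return ASSIST_FRACTION * required_torque, "assist"
        if inactive_seconds < DISENGAGE_AFTER_S:
            return ASSIST_FRACTION * required_torque, "warn driver"
        return 0.0, "disengage"  # the bizarre part: it quits when needed most

    # A driver who ignores the drift entirely:
    for t in (1.0, 3.0, 6.0):
        print(t, lane_keeping_step(10.0, 0.0, t))

The last branch is the design question Don raises: the system hands back control precisely when the driver has stopped responding.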
Bjoern: You are generally advocating taking a larger
systems perspective - how do these automations change driving for
other people? Through repetition I have built up a fairly reliable model
of how other human drivers will behave in certain situations. Is that going
to change? Will I now have to keep different models in my head and judge
whether the car in front of me is controlled by a human driver, by
automation, or some hybrid?
Don: The most
dangerous situations are those with mixed systems, where only some cars
have automation capabilities. You can imagine a person gaming an adaptive
cruise control system - let's say there is a big block of cars ahead
that he wants to get through, so the driver just accelerates right through
the middle of the pack, trusting that the automatic systems of the other
cars will make them move out of the way - unless one of them turns out not
to be automated.
Bjoern: This heterogeneous
situation seems to be unavoidable.
Don: So we better have really good performance
standards. But suppose that driving were indeed completely automated - you
wouldn't need lanes, stop signs, traffic lights; you might not need speed
limits.
Bjoern: You mentioned in your columns on
automobiles and home entertainment systems that these industries are
ignorant of lessons learned in other industries before them. One of
the positive counter-examples you brought up was user interfaces of
personal computers. What were the major lessons learned there and what was
the impetus to bring improvement to desktop computing? Can we hope that
it's enough to point car and home entertainment industries to these
achievements or will things get worse before they get better?
Don: Oh things always get worse before they get
better. It's amazing how few people understand these issues. In
the computer industry there were two major players - Microsoft and Apple.
Both took usability very seriously - Bill Gates has a clear personal
interest in this - and Apple, of course, led the industry in insisting
upon easy-to-use, simple products. This emphasis was throughout the
company, from the CEO down. I joined Apple under John Sculley, who really
cared about these things. And Steve Jobs today really cares about it.
Although just today I had a debate with my wife about how to turn the iPod on
or off. My explanation for this difficulty was "bad design." And
then there's my iPod shuffle, which has a really badly designed on-off/mode
switch: not only can I not tell which position the switch is in, but
it is also physically difficult to move. I know it's appropriate to
praise Apple and damn Microsoft, but if you actually look at how both
work, you'll find that Microsoft spends a lot more effort and time to try
and make their systems better and I think they're actually doing quite
well given the horrible constraints they face.
The military also has historically been concerned with usability, especially
the Air Force, because of a large number of aviation accidents due to
human error. The cockpit is one of the best designed places. It looks
complex, but quite often the thing that looks the simplest is the hardest
to use and the thing that looks most complicated is the easiest.
Complexity comes with new technology - the engineers or the original
creators are so proud that it works at all. For the first automobile, you
needed a mechanic to keep it running. Many people drove with the mechanic
as passenger. But with the modern computer, despite all its failings, we
never had to have the IT person sitting by our side. (Hmm. Maybe we did!)
We as consumers make things worse by saying "Wow, I really love
it, but why can't it do this other thing?" People who like a
technology often request that it do even more. Eager companies rush to
respond – they listen to their customers. The result is the bewildering,
confusing state of many mature products: they suffer from extreme
creeping featurism.
But these are not technical problems, nor problems
of design theory. They are problems of marketing, of the political
struggle between competing companies and industries, and of market
pressures. So although these have huge impact upon product design,
they are not where I can make the most powerful contribution.
I'm only really interested in working on intellectual
challenges where I can make a unique difference because I can bring a
different point of view or bring together things people haven't thought
about. I want to forge ahead. I want to think hard about how we should
approach these problems. That's why my current interest is in the design
issues of the 21st century: smart, intelligent devices in the automobile
and in the home.
One of my standard approaches to a problem is to
take the fundamental, unexamined axioms and reverse them. I was just
interviewed by a reporter doing a story on the dangers of interruption and
distraction. My take was to answer that we have evolved to be
interruptible and distractible. I think of it not as distraction, but as
attention to change. The problem isn’t that we are continually
interrupted, but rather that the technology of interruptions has far too
much overhead. The overhead is often far more onerous than the content. So
let’s make the interruptions have more content – make it easier to
resume afterwards. Someone just asked me how to prevent graffiti. My
response is: why? Encourage it. It is wonderful folk art. The problem is
that it takes place in inappropriate places, so let us make designs that
encourage graffiti where it can be appreciated.
I also want to
reexamine whatever you think the fundamental axioms are - I like to
contradict them because you often make great progress that way. Hence my recent
paper "Human-Centered Design Considered Harmful."
Bjoern: So where is the intellectual frontier in home
automation?
Don: The frontier is in the use of
automation. I discovered an old literature on autonomous agents - how do
you instruct them, how do you trust them? Autonomous agents are now a hot
topic again in artificial intelligence. And it is happening in the home -
one of my old students, Michael Mozer, has automated his home with a
neural network. He has pointed out that it's sometimes very disconcerting
when the home misreads him. Also, Mike may be working late some
night and say "Oh, my house is expecting me" and feel somehow
compelled to get home because of his automated house, which is an
interesting twist.
Bjoern: How far should we
take this analogy with human behavior? What is the proper role for
projecting personality onto technology or designing technology to behave
as if it had certain personality traits?
Don: Let us think about what a personality trait is. A personality trait
describes the way people behave in a given situation. So we
are automatically designing personalities into our machines, even if we
don't realize it. As soon as we design anything that behaves in a
particular way, it has a personality. Even if we never thought about it
that way. What is the personality of a machine? Is your car a relaxed car?
Or is it very tense? The same with my kitchen - how does it respond to my
actions?
Bjoern: So personality is inescapable. We
cannot but give personality to the things we create.
Don: Yes, exactly. Don't forget that we are very good
at interpreting the actions of animate objects and we anthropomorphize
what we see - we assign personality and emotion to devices even if they
don't have them.
If you think of the emotional system in humans as
the information processing system that makes value judgments, well, then a
machine should have that too if it's autonomous. Now, these emotions don't
have to be at all like human ones, but if the machine is to interact with
humans, there have to be some commonalities. We display our emotions in
our bodies. Over the many years of evolution, we've evolved to use body
signs such as facial expressions, signs of muscle tenseness or relaxation,
posture, and signs of approach or avoidance as communicative
devices - facial expressions in particular are a rich communication channel. There is
no reason for machines to have similar facial expressions or body
expressions - except that these might help communicate with people.
So if my vacuum cleaner is having trouble, why not communicate that by
something akin to facial expressions?
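One could imagine that signaling as little more than a lookup from internal state to an expressive display (a toy sketch; the states and expressions here are invented, not any product's actual behavior):

    # Toy sketch: mapping a vacuum cleaner's internal states to
    # expression-like signals. States and expressions are invented.

    EXPRESSIONS = {
        "ok":           "contented hum, soft steady light",
        "bag_full":     "labored sigh, slow pulsing light",
        "brush_jammed": "distressed whine, rapid blinking",
        "lost":         "hesitant beeps, light sweeping side to side",
    }

    def express(state):
        """Choose a human-readable signal for the machine's current state."""
        return EXPRESSIONS.get(state, "neutral silence")

    print(express("brush_jammed"))  # distressed whine, rapid blinking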
At the moment there is a
small robotics community - the best work in my opinion is from Cynthia
Breazeal at MIT - where people are building real architectures based on
emotion.
Bjoern: Where do you see the
difference between having a robot enact emotions in physical space versus
seeing a simulation of the robot or a virtual person on a computer
screen?
Don: I think physicality offers a
tremendous amount to us. I think what happened during the computer
revolution - and this was true of the design world as well – is that
people got carried away by virtuality. People were thinking that a
computer could do everything we want it to; that a screen
could display everything we need; that we wouldn’t need physical
knobs and sliders and buttons anymore - we would just draw them on the
screen and people would touch them. But it's not the same thing. You
cannot touch these without looking. You miss all the powerful haptic and
proprioceptive cues. That direction of design was a serious mistake.
Bjoern: I agree with you that we got carried
away by the promise of virtuality and convergence. Do you feel we are
swinging back towards realizing the importance of the physical? Is the
emphasis on virtuality waning?
Don: I think it is
part of the normal pendulum swing that we are becoming more physical now. But there
are now 6 billion people on the earth and while we are swinging back –
while the community I interact with is swinging back - there are ever more
new people joining the field completely unaware of the history. They are
starting over and they are all going to repeat all the errors. Fields
rarely think of looking at other fields - the aviation industry refused to
learn from the nuclear power industry, the automobile industry doesn't
look at the aviation industry, the home automation industry doesn't look
at the others.
If people ask me, "Are things better designed
now or worse?" my answer is yes. Better and worse. Even though we've
made great progress, new people are building new things that are
worse than ever before.
I believe that there's a new era of design
happening, which is about intelligent devices. So I think it's time to
examine how we interact with autonomous agents, what happens
when they fail, how we instruct them, and how we trust them.
Mozer, M. C. (2005). Lessons from an adaptive house. In D. Cook &
R. Das (Eds.), Smart environments: Technologies, protocols, and
applications (pp. 273-294). Hoboken, NJ: J. Wiley & Sons. ftp://ftp.cs.colorado.edu/users/mozer/papers/smart_environments.pdf
Copyright 2005 Ambidextrous Magazine, Inc.