Recently, at a gas station convenience store where I had stepped in to get some regular coffee, I found myself terminally confused. For speed and convenience, they have touch-screen ordering stations that presumably let you print a ticket for your order and be on your way in a couple of minutes without needing to talk to anyone. The navigation felt like a crossword puzzle without any clues.
My friend A and I stood pecking at the screen, trying to order two small coffees and a small box of doughnut holes. Fortunately for us, the place was relatively empty, so our incompetence was not being publicly broadcast. We were not holding up the progress of the nation as we struggled mightily with such a simple task. After a good fifteen minutes we managed to get our order number. When we went to collect the order, mine turned out to be steamed milk, not coffee at all. The woman at the counter gave me a withering look when I described my problem and suggested I go add some coffee to the milk that I had, apparently, actually ordered.
The experience made me wonder about usability and human factors testing. This is not the first time that I have been completely stumped by a touch-screen interface driven by an abundance of visuals. Each time that has happened, I have sneaked a peek at how others around me are faring with their orders. Generally, no one is zipping through the process, though some may be doing better than me. There is something isolating and deflating about the experience. No one asks a stranger for help, so we all struggle along alone.
The large number of visual cues meant to help with navigation creates a subliminal suggestion that the system was designed for the lowest common denominator: if you have a pulse, you should be able to figure it out. And yet many of us don't. A and I joked about it as we failed together. If I had been alone there, it would not have been as funny. There has to be a way to help most of us order coffee and doughnuts by way of a touch screen. We all use elevators, drive cars, check our email, throw out the trash, ride public transportation - without requiring assistance. So it must be possible.