The lightbulb came on for me today as to why the SmartThings app is so difficult to use with a screen reader: it doesn't follow the most basic UI design principles for voice use.
Let’s say there are two actionable elements, like two device tiles on a screen.
What I expect to hear is:
the name of the actionable element,
its current status information,
the type of element it is, like "header," "button," or "switch button."
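On iOS, those three pieces map onto the standard accessibility properties (accessibilityLabel, accessibilityValue, and accessibilityTraits), and VoiceOver reads them in that order. Here's a minimal sketch of the expected announcement; the AccessibleTile type and the sample names are made up for illustration, not any app's actual code:

```swift
// Sketch of the announcement order a screen-reader user expects:
// unique name first, then current status, then element type.
// In UIKit these pieces correspond roughly to accessibilityLabel,
// accessibilityValue, and accessibilityTraits.
struct AccessibleTile {
    let name: String    // unique name of the actionable element
    let status: String  // current status information
    let kind: String    // element type, e.g. "Button" or "Switch button"

    // Compose the spoken description in the expected order.
    var spoken: String { "\(name). \(status). \(kind)." }
}

let lamp = AccessibleTile(name: "Bedroom Lamp", status: "Off", kind: "Switch button")
print(lamp.spoken) // Bedroom Lamp. Off. Switch button.
```

A listener who hears the name first can decide immediately whether to keep listening or swipe on, which is why the order matters.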
But the SmartThings app frequently puts two actionable elements into a single tile. That's visually clear, but really confusing when you are listening, particularly since one of the elements is often given only its generic name, like "action power on" or "scene."
Also in some cases you hear the type of element before you hear the current status information, which is also confusing.
Here is an example. There is a tile for the Bedtime scene. Apparently there are two actionable elements: the first lets you edit the scene, and the second lets you run it. I don't know how clear that is to someone just looking at it, but it's totally confusing using a screen reader. All you hear is "bedtime" for the first element and a generic "scene" for the second.
A person using a screen reader has no idea that that "scene" is the Bedtime scene, nor that choosing "bedtime" takes you to an edit screen and doesn't actually run the scene, nor that this particular "scene" is in any way different from the other seven identical "scene" buttons on the same page.
Here’s the image.
In some ways it's even more confusing for devices, because each device has an "action power on" generic control, but you can also tap the name. So this is what you hear:
"Atomic. Off. Button."
"Action power on. Button."
Someone using a screen reader has no way of knowing that that action button goes with the Atomic device. In fact, they have no way of knowing that the device is actually off, because part of the button's name is "power on." Seriously, that's the generic name that is used a zillion times on the page.
Here’s the image:
It just seems really clear that they designed the UI for sighted people and then ran some auto tool to add VoiceOver controls. But they didn't design it for use by someone with a screen reader, and they definitely didn't test it for accessibility.
A PRETTY GOOD DESIGN
In contrast, let's take a look at the Hue app. It's by no means a perfect VoiceOver implementation, but it's a solid B, and miles better than SmartThings.
Here's a very similar kind of tile with two actionable elements. If you select just the name, it opens the detail screen; or you can use the switch button as a quick control. Here's the picture.
What makes this better? The unique name and the status are repeated for the switch button. So what you hear is:
"Family Room. All lights on."
"Family Room. All lights on. Switch button."
And if you select the switch button, you will hear a status update: "All lights off."
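The Hue pattern is easy to state: every actionable element repeats the tile's unique name and current status, and the status updates when you toggle. A sketch of that idea, assuming a hypothetical RoomTile type (this is not the Hue app's actual code):

```swift
// Sketch of Hue-style labeling: both elements in a tile repeat the
// tile's unique name and current status, so no two buttons on the
// page sound alike. "RoomTile" is hypothetical, for illustration.
struct RoomTile {
    let name: String   // unique tile name, e.g. "Family Room"
    var lightsOn: Bool

    var status: String { lightsOn ? "All lights on" : "All lights off" }

    // Spoken descriptions for the two actionable elements in the tile.
    var nameElement: String   { "\(name). \(status)." }
    var switchElement: String { "\(name). \(status). Switch button." }

    // Toggling the switch updates the status the screen reader announces.
    mutating func toggle() { lightsOn.toggle() }
}

var room = RoomTile(name: "Family Room", lightsOn: true)
print(room.switchElement) // Family Room. All lights on. Switch button.
room.toggle()
print(room.nameElement)   // Family Room. All lights off.
```

Because the name and status are baked into both elements, a listener always knows which room a switch controls and what state it is in before deciding to activate it.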
Like I said, it's not perfect: you do have to make a second gesture to discover that you can tap the first element. But at least you aren't confused by the second element, and you get all the information you need to decide whether to activate it, as well as a status update if you do. And every element on the page has a unique spoken name.