VoxLens: Adding a line of code can make interactive visualizations accessible to screen reader users

image: Researchers at the University of Washington worked with screen reader users to design VoxLens, a JavaScript plug-in that, with an extra line of code, lets users interact with visualizations. Researchers evaluated VoxLens by recruiting 22 fully or partially blind screen reader users. Participants learned how to use VoxLens, then completed nine tasks (one of which is shown here), each involving answering questions about a visualization. Each task was divided into three pages. Page 1 (marked with “a”) presented the question a participant had to answer, page 2 (b) displayed the question and visualization, and page 3 (c) displayed the question with four multiple-choice answers.

Credit: Sharif et al./CHI 2022

Interactive visualizations have changed the way we understand our lives. For example, they can present the number of coronavirus infections in every state.

But these graphics are often not accessible to people who use screen readers, software that scans the content of a computer screen and makes the content available through synthesized speech or Braille. Millions of Americans use screen readers for a variety of reasons, including blindness or partial blindness, learning disabilities, or motion sensitivity.

Researchers at the University of Washington worked with screen reader users to design VoxLens, a JavaScript plug-in that, with an extra line of code, lets users interact with visualizations. VoxLens users can get a high-level summary of the information described in a graph, listen to a graph translated into sound, or use voice commands to ask specific questions about the data, such as average or minimum value.

The team presented this project May 3 at CHI 2022 in New Orleans.

“If I look at a chart, I can pull out all the information I’m interested in, maybe it’s the general trend or maybe it’s the maximum,” said lead author Ather Sharif, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “Currently, screen reader users get very little or no information about online visualizations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life and death. The goal of our project is to give screen reader users a platform where they can extract as much or as little information as they want.”

Screen readers can tell users about the text displayed on a screen because text is what researchers call “one-dimensional information.”

“There’s a beginning and an end to a sentence and everything else comes in between,” said co-senior author Jacob O. Wobbrock, a UW professor in the Information School. “But as soon as you move things into two-dimensional spaces, like visualizations, there’s no clear beginning and end. It’s just not structured the same way, which means there is no obvious entry point or sequencing for screen readers.”

The team began the project by working with five screen reader users who were partially or completely blind to understand how a potential tool might work.

“In the area of accessibility, it’s really important to follow the principle of ‘nothing about us without us,’” Sharif said. “We’re not going to build something and then see how it works. We’re going to build it based on user feedback. We want to build what they need.”

To implement VoxLens, visualization designers only need to add a single line of code.

“We didn’t want people jumping from one visualization to another and getting inconsistent information,” Sharif said. “We’ve made VoxLens a public library, which means you’ll hear the same type of summary for all visualizations. Designers can just add that line of code and we do the rest.”
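As a rough illustration of what that one-line integration could look like, the sketch below renders a small bar chart with D3 and then passes the chart element and its data to the plug-in. The `voxlens(...)` call, its argument order, the option names, the sample data and the `#chart` container are assumptions made for this example, not the plug-in's documented API; the point is that the charting code itself stays unchanged and the accessibility layer is added with the final call.

```javascript
// Rough sketch of a one-line VoxLens integration (assumed API, see note above).
import * as d3 from "d3";
import voxlens from "voxlens"; // package name and default export are assumptions

// Hypothetical data: coronavirus case counts by state.
const data = [
  { state: "WA", cases: 120000 },
  { state: "OR", cases: 80000 },
  { state: "ID", cases: 45000 },
];

// Ordinary D3 rendering code into a placeholder <div id="chart"> element.
const svg = d3
  .select("#chart")
  .append("svg")
  .attr("width", 420)
  .attr("height", 220);

svg
  .selectAll("rect")
  .data(data)
  .join("rect")
  .attr("x", (d, i) => i * 140 + 10)
  .attr("y", (d) => 220 - d.cases / 1000)
  .attr("width", 120)
  .attr("height", (d) => d.cases / 1000);

// The single extra line: hand the chart and its data to the plug-in so screen
// reader users can request a summary, a sonified version, or answers to voice
// queries such as the average or maximum. Argument order and option names here
// are illustrative assumptions only.
voxlens("d3", svg.node(), data, { x: "state", y: "cases", title: "COVID-19 cases by state" });
```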

Researchers evaluated VoxLens by recruiting 22 fully or partially blind screen reader users. Participants learned how to use VoxLens and then completed nine tasks, each involving answering questions about a visualization.

Compared with participants in a previous study who did not have access to the tool, VoxLens users completed the tasks with 122% more accuracy and 36% less interaction time.

“We want people to interact with a graph as much as they want, but we also don’t want them to spend an hour trying to find the maximum,” Sharif said. “In our study, interaction time refers to the time it takes to extract information, and that’s why reducing it is a good thing.”

The team also interviewed six participants about their experiences.

“We wanted to make sure that those accuracy and interaction time numbers we saw were reflected in how participants felt about VoxLens,” Sharif said. “We received very positive feedback. Someone told us that he had been trying to access visualizations for 12 years and this was the first time he could do it easily.”

Currently, VoxLens works only for visualizations created using JavaScript libraries, such as D3, Chart.js or Google Sheets, but the team is working on expanding support to other popular visualization platforms. The researchers also acknowledged that the voice-recognition system can be frustrating to use.

“This work is part of a much larger agenda for us – eliminating bias in design,” said co-senior author Katharina Reinecke, UW associate professor in the Allen School. “When we build technology, we tend to think of people who look like us and have the same abilities as us. For example, D3 has really revolutionized access to online visualizations and improved the way people can understand information. But there are values ingrained in it and people are being left behind. It’s really important that we start thinking more about how to make technology useful to everyone.”

The other co-authors on this paper are Olivia Wang, a UW undergraduate student in the Allen School, and Alida Muongchan, a UW undergraduate student studying human centered design and engineering. This research was funded by the Mani Charitable Foundation, the University of Washington Center for an Informed Public, and the University of Washington Center for Research and Education on Accessible Technology and Experiences.

###

For more information, contact Sharif at [email protected], Wobbrock at [email protected], and Reinecke at [email protected].


