Adding a line of code can make some interactive visualizations accessible to screen reader users

Interactive visualizations have changed the way we understand our lives. For example, they can present the number of coronavirus infections in each state.

But these graphics are often not accessible to people who use screen readers, software that scans the content of a computer screen and makes the content available through synthesized speech or Braille. Millions of Americans use screen readers for a variety of reasons, including blindness or partial blindness, learning disabilities, or motion sensitivity.

Researchers at the University of Washington worked with screen reader users to design VoxLens, a JavaScript plugin that, with an extra line of code, lets users interact with visualizations. VoxLens users can get a high-level summary of the information described in a graph, listen to a graph translated into sound, or use voice commands to ask specific questions about the data, such as average or minimum value.

The team presented this project on May 3 at CHI 2022 in New Orleans.

“If I look at a graph, I can pull out all the information I’m interested in, maybe it’s the general trend or maybe it’s the maximum,” said lead author Ather Sharif, a doctoral student in the Paul G. Allen School of Computer Science & Engineering. “Currently, screen reader users get very little or no information about online visualizations, which in light of the COVID-19 pandemic can sometimes be a matter of life or death. The goal of our project is to give screen reader users a platform where they can extract as much or as little information as they want.”

Screen readers can read the text displayed on a screen to users because text is what researchers call “one-dimensional information.”

“There’s a beginning and an end of a sentence and everything else is in between,” said co-lead author Jacob O. Wobbrock, a UW professor in the Information School. “But as soon as you move things into two-dimensional spaces, like visualizations, there’s no clear beginning and end. It’s just not structured the same way, which means there is no obvious entry point or sequencing for screen readers.”

The team began the project by working with five screen reader users who were partially or completely blind to understand how a potential tool might work.

“In the area of accessibility, it’s really important to follow the principle of ‘nothing about us without us,'” Sharif said. “We’re not going to build something and then see how it works. We’re going to build it based on user feedback. We want to build what they need.”

To implement VoxLens, visualization designers only need to add a single line of code.

“We didn’t want people jumping from one visualization to another and getting inconsistent information,” Sharif said. “We’ve made VoxLens a public library, which means you’ll hear the same type of summary for all visualizations. Designers can just add that line of code and we do the rest.”
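For illustration, that integration might look roughly like the sketch below. The call shape, option names, and element ID here are assumptions made for this example rather than the documented API; the plugin’s GitHub repository (linked at the end of this article) has the authoritative usage.

// Rough sketch of adding VoxLens to a D3-rendered chart. Option names and the
// element ID are illustrative assumptions; see the VoxLens repository for the
// exact, documented API.
import voxlens from 'voxlens';

// Example data: coronavirus case counts per state.
const data = [
  { state: 'WA', cases: 12000 },
  { state: 'OR', cases: 8000 },
  { state: 'ID', cases: 5000 },
];

// ...render the chart into this element with D3 as usual...
const container = document.getElementById('cases-chart');

// The extra line the researchers describe: pass VoxLens the rendering library,
// the chart's DOM element, the underlying data, and title/axis metadata so it
// can generate the spoken summary, the sonified version, and answers to
// voice-command queries such as the average or minimum value.
voxlens('d3', container, data, {
  x: 'state',
  y: 'cases',
  title: 'Coronavirus cases by state',
});

Because the summary, sonification, and voice-command handling live inside the library rather than in each chart, the designer supplies only the data and labels, and every VoxLens-enabled visualization responds in the same way, which is the consistency Sharif describes above.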

Researchers evaluated VoxLens by recruiting 22 fully or partially blind screen reader users. Participants learned how to use VoxLens and then completed nine tasks, each involving answering questions about a visualization.

Compared to participants in a previous study who did not have access to this tool, VoxLens users performed tasks with 122% increased accuracy and 36% reduced interaction time.

“We want people to interact with a graph as much as they want, but we also don’t want them to spend an hour trying to find the maximum,” Sharif said. “In our study, interaction time refers to the time it takes to extract information, and that’s why reducing it is a good thing.”

The team also interviewed six participants about their experiences.

“We wanted to make sure that those accuracy and interaction time numbers we saw were reflected in how participants felt about VoxLens,” Sharif said. “We received very positive feedback. Someone told us that he had been trying to access visualizations for 12 years and this was the first time he could do it easily.”

Currently, VoxLens only works for visualizations created with JavaScript libraries such as D3, Chart.js, or Google Charts, but the team is working on expanding support to other popular visualization platforms. The researchers also acknowledged that the voice recognition system can be frustrating to use at times.

“This work is part of a much larger agenda for us — eliminating bias in design,” said co-lead author Katharina Reinecke, UW associate professor in the Allen School. “When we build technology, we tend to think of people who look like us and have the same abilities as us. For example, D3 has really revolutionized access to online visualizations and improved the way people can understand information. But there are ingrained values and people are being left behind. It’s really important that we start thinking more about how to make technology useful to everyone.”

Additional co-authors of this paper are Olivia Wang, a UW undergraduate student in the Allen School, and Alida Muongchan, a UW undergraduate student studying human centered design and engineering. This research was funded by the Mani Charitable Foundation, the University of Washington Center for an Informed Public, and the University of Washington Center for Research and Education on Accessible Technology and Experiences.

The code is available on GitHub: https://github.com/athersharif/voxlens