
Reviews Of The New HomePod Reveal The Tech Media Has Work To Do In Appreciating Accessibility

The advent of the second-generation HomePod brings with it yet another opportunity to acknowledge the smart speaker’s accessibility to people with disabilities. Besides ecosystem-centric amenities like Handoff, Apple supports a bevy of accessibility features in the device, including VoiceOver, Touch Accommodations, and more. This is an important distinction to point out, as I’ve done in this space before. This column is precisely the forum for it.

It’s important to mention because, quite frankly, most reviewers fail to do so.

As a lifelong stutterer who has always felt digital assistants—and by extension, smart speakers—are exclusionary due to their voice-first interface paradigm, it disheartens me to see my peers in the reviewer racket continually undervalue the actual speech component of using these devices. It’s understandable—it’s difficult, if not downright impossible, to consider a perspective you cannot fully comprehend. Still, there is room for empathy—and really, empathy is ultimately what earnest DEI initiatives are meant to reflect—with regard to how privileged it is for the majority of journalists (and their readers) to effortlessly shout into the ether and have Alexa or Siri or the Google Assistant quickly spring into action.

Look no further than the embargoed HomePod 2 reviews that dropped earlier this week ahead of the product’s general availability starting on Friday. Every single one of them, whether in print or on YouTube, focuses solely on sound quality. While that focus is perfectly sensible, it’s cringeworthy to watch everyone utter not a single word about the speaker’s accessibility features or how verbally accessible Siri may be to someone with a speech delay. Again, expertise is hard—but empathy is not. Put another way, there are very real and very important characteristics of Apple’s new smart speaker that largely go ignored because it’s presumed (albeit rightly so, given how language models are typically trained) that a person is able to competently communicate with the thing. The elephant in the room is that there’s far more to the HomePod’s story. It’s counterintuitive to most, but it isn’t all about sound quality or smarts or computational audio or ecosystem.

Of course, the responsibility rests not on the tech press alone. Smart speaker makers such as Apple, Amazon, Google, Sonos, and others all have to do their part on a technical level so that using a device like the HomePod is a more accessible experience for those with speech impairments. Back in early October, I reported on tech heavyweights Amazon, Apple, Google, Meta, and Microsoft coming together “in a way that would make Voltron blush” on an initiative with the University of Illinois to help make voice-centric products more accessible to people with speech disabilities. The project, called the Speech Accessibility Project, is described as “a new research initiative to make voice recognition technology more useful for people with a range of diverse speech patterns and disabilities.” The essential idea is that current speech models favor typical speech, which makes sense for the masses but critically leaves out those who speak with atypical speech patterns. Thus, it’s imperative for engineers to make the technology as inclusive as possible by training it on the most diverse speech data they can.
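To make that point concrete, here is a minimal, purely hypothetical sketch in Python (with invented transcripts; this is not code from the Speech Accessibility Project or any of the companies involved) of how engineers might surface the gap the project is chasing: computing word error rate separately for typical and atypical speakers rather than relying on a single aggregate number.

```python
# A toy illustration of why aggregate accuracy hides bias: word error rate (WER)
# computed per speaker group on invented transcripts. All data below is made up.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER via word-level edit distance (substitutions + insertions + deletions)."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical results: (speaker group, what was said, what the assistant heard)
samples = [
    ("typical",  "turn on the living room lights", "turn on the living room lights"),
    ("typical",  "set a timer for ten minutes",    "set a timer for ten minutes"),
    ("atypical", "turn on the living room lights", "turn on the living room"),
    ("atypical", "set a timer for ten minutes",    "set a time for two minutes"),
]

by_group = {}
for group, said, heard in samples:
    by_group.setdefault(group, []).append(word_error_rate(said, heard))

for group, errors in by_group.items():
    print(f"{group:8s} WER: {sum(errors) / len(errors):.2f}")
```

On the made-up samples above, the assistant looks flawless for typical speech but misses roughly a quarter of the words from atypical speakers, which is exactly the kind of disparity a more diverse training dataset is meant to close.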

“There are millions of Americans who have speech differences or disabilities. Most of us interact with digital assistants fairly seamlessly, but for folks with less intelligible speech, there can be a barrier to access,” Clarion Mendes, a clinical professor in speech and hearing science and a speech-language pathologist, told me in an interview ahead of my report from October. “This initiative [the Speech Accessibility Project] lessens the digital divide for individuals with disabilities. Increasing access and breaking down barriers means improved quality of life and increased independence. As we embark on this project, the voices and needs of folks in the disability community will be paramount as they share their feedback.”

Astute readers will note what Mendes ultimately expresses: empathy!

It should be stressed that the thrust of this piece is not to throw my colleagues and friends under the bus and denigrate their work. They aren’t unfeeling people. The thrust here is simply that, as a stutterer, I feel extremely marginalized and underrepresented when I watch, say, MKBHD hurl rapid-fire commands at Siri or another assistant without trouble. By and large, the smart speaker category has long felt exclusionary to me for the speech issue alone. The uneasiness doesn’t go away just because Apple’s HomePod line sounds great and fits nicely with my use of HomeKit. These are issues Apple (and its contemporaries) must reckon with in the long term to create the most well-rounded digital assistant experience possible. A software tool like Siri Pause Time, a feature new to iOS 16 that lets users tell Siri how long to wait for a person to finish speaking before responding, is limited in its true effectiveness: it sidesteps the problem rather than meeting it at the source. It puts a band-aid on something that requires more intricate treatment.

All told, what the new HomePod reviews illustrate so well is that the technology media still has a ways to go—despite making big strides in recent times—in truly embracing accessibility as a core component of everyday coverage. The expectation shouldn’t be that mainstream reviewers suddenly become experts in assistive technologies in order to assess products; that’s unrealistic. What is highly realistic, however, is to expect editors and writers to seek out the knowledge they don’t have. It’s conceptually (and practically) no different than an outlet investing in other social justice reporting—coverage of the AAPI and Black communities, for example, which is especially important given recent events.

If reviewers can endlessly lament the perceived idiocy of Siri, it isn’t a stretch to also acknowledge how ungracefully Siri parses atypical speech. Moreover, it shouldn’t be akin to pulling teeth to ask newspeople to regularly run more nuanced takes on products alongside the broader overviews. The disability viewpoint is not esoteric; it matters. It’s long past time disability inclusion (and disabled reporters) figured prominently at the tech desks of newsrooms the world over. Accessibility deserves a seat at the table too.
