Speech recognition proving its worth
June 20, 2014 in Medical Technology
While wary clinicians remain a big hurdle, nine out of 10 hospitals plan to expand their use of front-end speech recognition, according to a new KLAS report.
The study, “Front-End Speech 2014: Functionality Doesn’t Trump Physician Resistance,” found that 50 percent of providers polled cited skeptical end-users as one of the biggest barriers to more successful uptake of speech recognition.
Nonetheless, the ROI from the technology was clear for these hospitals, according to KLAS. Facilities interviewed saw a higher impact in nearly every category measured in the report: reduced transcription costs, reduced documentation time and more complete patient narratives.
“Physicians are resistant to changes in their workflow,” says report author Boyd Stewart, in a statement. “While hospital leadership sees the value of FES, many end users are frustrated that they are now being asked to do the work of transcriptionists.”
Speech recognition can enhance clinical documentation in many ways — especially nowadays, as the demand for more documentation of every encounter is on the rise, and there aren’t enough experienced medical transcriptionists to meet current and future demands, according to a practice brief published by AHIMA.
Front-end speech recognition refers to the process where the dictator, or end user, speaks into a microphone or headset attached to a PC, according to the brief. “The recognized words are displayed as they are recognized, and the dictator is expected to correct misrecognitions.”
The upside is that “the dictator is in control of the entire process: The document is dictated, corrected and authenticated all in one sitting,” the report points out. “When dictation is done, the document is ready for distribution.”
Proponents say front-end speech is the most effective way to interface voice recognition with an EHR, allowing clinicians to respond to prompts from the EHR for more complete and accurate documentation.
The downside, however, is that speech recognition “may affect a dictator’s billable activities,” AHIMA points out. “Training the speech recognition engine is a time-consuming process that takes time away from patient care.”
Indeed, assessing “the readiness of the medical staff in terms of their receptiveness to a transition of this magnitude,” is essential to a successful deployment, according to the practice brief. “If they are proponents of full application of the technology, which means a commitment of learning to use the system and allocating resources to apply this in practical applications, ROI can be structured around an objective analysis of both the benefits and the risks.”
As part of its study, KLAS reviewed three of the biggest vendors in the speech recognition field: Dolbey, M*Modal and Nuance. The latter, with a market capitalization of about $5.5 billion, continues to lead the sector “by an extensive margin,” although M*Modal and Dolbey have gained ground in recent years.
Earlier this week, the Wall Street Journal reported that Burlington, Mass.-based Nuance — whose technology powers the Siri app on Apple’s iPhone — has “held discussions with potential suitors regarding a sale of the company.”
Chief among the potential buyers were Samsung Electronics and private-equity firms, according to the June 16 article, which noted that “it isn’t clear where sale talks, some of which happened earlier this year, currently stand or if they will lead to a deal.”