Clinical Notes
Voice recognition software versus a traditional transcription service for physician charting in the ED☆,☆☆,★,★★,♢
Methods
A total of 47 charts were dictated by 2 ED physicians at a suburban level 1 trauma center with an annual census of 45,000. One physician was an "advanced" user, with several years of experience with the software and several hundred charts dictated. The second physician was a "basic" user, with approximately 2 weeks' experience and approximately 20 charts dictated with the software.
Dragon NaturallySpeaking Medical suite version 4 was installed onto a 450 MHz Pentium II
Results
Our data comparing the voice recognition program with the traditional transcription service with regard to accuracy, average number of errors per chart, average turnaround time, and words-per-minute dictation rate are listed in Table 1.

Table 1
                           Voice Recognition (95% CI)   Transcription (95% CI)   Difference (95% CI)
Accuracy (%)               98.5 (98.2-98.9)             99.7 (99.6-99.8)         1.2 (0.8-1.5)
Average no. errors/chart   2.5 (2-3)                    1.2 (0.9-1.5)            1.3 (0.67-1.88)
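The reported difference in accuracy (1.2%, 95% CI 0.8-1.5) is consistent with a standard comparison of two proportions. The sketch below is illustrative only: it uses a normal-approximation confidence interval for a difference of proportions, and the word-count denominators are hypothetical (the paper does not report them here), so the numbers will not reproduce the published interval exactly.

```python
import math

def accuracy(errors, words):
    """Word-level accuracy as a percentage: 100 * (1 - errors/words)."""
    return 100.0 * (1 - errors / words)

def diff_ci(p1, n1, p2, n2, z=1.96):
    """Normal-approximation 95% CI for the difference of two proportions p1 - p2."""
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d - z * se, d + z * se

# Illustrative: transcription 99.7% vs voice recognition 98.5% accurate,
# over a hypothetical 8,000 words per arm (not a figure from the paper).
low, high = diff_ci(0.997, 8000, 0.985, 8000)
print(f"difference = {0.997 - 0.985:.3f}, 95% CI ({low:.4f}, {high:.4f})")
```

With these assumed denominators the interval brackets the 1.2-percentage-point difference; the true interval depends on the actual word counts per arm.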
Discussion
There are several ways to create ED records: handwritten charts, handwritten templates, traditional dictation services, and now computer-generated voice recognition systems. Handwritten charts are time-consuming, fatiguing, and often difficult to read. Marill et al found that handwritten template charts (ie, the T system) are associated with higher gross billing and physician satisfaction, but with no significant decrease in emergency physician total evaluation time.1 Traditionally transcribed
Conclusion
Computer voice recognition transcription using real-time voice recognition software is an economical and accurate way to generate ED records. The software is nearly as accurate as traditional transcription, has a much shorter turnaround time, and is less expensive. We recommend its use as a tool for physician charting in the ED.
Acknowledgements
Special thanks to Douglas Propp, MD, for obtaining the hardware for this study, and to Myrna Greenfield and Dragon Systems, Inc., for donating the software used in this study.
References (4)
- Marill et al. Prospective randomized trial of template-assisted versus undirected written recording of physician records in the emergency department. Ann Emerg Med (1999).
- et al. Status of VoiceType dictation for Windows for the emergency physician. J Emerg Med (1996).
Cited by (58)

- Automatic documentation of professional health interactions: A systematic review. Artificial Intelligence in Medicine (2023).
- Physician use of speech recognition versus typing in clinical documentation: A controlled observational study. International Journal of Medical Informatics (2020). Citation excerpt: "However, the research remains largely heterogenous and often uses different evaluation metrics, making it challenging to compare findings over time and across studies. Many observational studies were conducted over a decade ago and as such may have limited applicability to modern SR software, which shows improved speed and accuracy over earlier systems [18–24]. Among more recent studies, findings regarding the impact of SR on accuracy, efficiency and provider satisfaction remain mixed [25–29]."
- A clinician survey of using speech recognition for clinical documentation in the electronic health record. International Journal of Medical Informatics (2019). Citation excerpt: "In this study, most users estimated seeing fewer than 10 errors per document, with less than 25% of those being clinically significant. These estimates are lower than the error rates reported in many formal evaluations of SR accuracy [28–30]. In previous studies, SR error rates (i.e., number of errors divided by the total number of words in the document) have been found to be as low as 0.3% with traditional dictation (i.e., with editing/revision by professional transcriptionists) but as high as 23–52% when clinicians use front-end SR systems (i.e., with editing/revision by clinicians) [23,24]."
- Incidence of speech recognition errors in the emergency department. International Journal of Medical Informatics (2016). Citation excerpt: "Despite the advantages of SR technology, high error rates ranging from 10 to 23% have been observed in clinical documents generated by this technology [2], raising concern about the number of errors being entered into the permanent medical record, their impact on quality of care and the medical liability that may arise. To date, there have been few studies published on the use of SR in the ED [3–5]. A recent study by Zick et al. evaluated the accuracy and cost savings of traditional voice dictation as compared to a real-time SR software and observed high accuracies of 99.7% and 98.5% respectively [5]."
- How artificial intelligence is changing health and health care. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril (2019), cited 2023.
- ☆ Returned November 26, 2000.
- ☆☆ Supported in part by software donated by Dragon Systems, Inc., Newton, MA. Hardware for the study was donated by Lutheran General Hospital's Department of Emergency Medicine.
- ★ Address reprint requests to Robert G. Zick, MD, MBA, 355 Ridge Ave, Evanston, IL 60202.
- ★★ Am J Emerg Med 2001;19:295-298. Copyright © 2001 by W.B. Saunders Company.
- ♢ 0735-6757/01/1904-0011$35.00/0