BY Fast Company Contributor

For doctors, taking notes and inputting them into electronic medical records is so cumbersome that they often have to use human medical scribes to do it for them. That’s changing as more hospital systems turn to artificial intelligence-based transcription tools.

However, some doctors feel the tools available today are just not accurate enough. “If there were a really smart voice transcription service that was 99% accurate, I would definitely use it,” says Bon Ku, an emergency room doctor at Thomas Jefferson University Hospital and director of the university’s Health Design Lab. “A lot of times, I feel like I’m a data-entry clerk.”

For the last several years, big tech companies have been jockeying to be the one that finally delivers the kinds of tools doctors have been craving.

This week, Google launched cloud-based machine learning software to help doctors make sense of patient medical records. The platform is composed of two programs. The first, an API for healthcare-focused natural language processing, scans medical documents for key information about a patient’s journey, puts it into a standard format, and summarizes it for the doctor. It can pull from multiple sources, such as medical records and transcribed doctors’ notes. The goal is to give doctors an easy way to review a patient’s past care. The second, called AutoML Entity Extraction for Healthcare, is a low-code tool kit that lets doctors pull specific data out of a patient’s record, such as information about a genetic mutation. Both tools will be available for free to doctors, insurers, and biomedical companies until December 10, 2020.
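To make the workflow concrete, here is a minimal sketch of calling such an entity-extraction service over REST in Python. The project ID, region, and clinical text are placeholders, and the request and response shapes follow Google’s published Healthcare Natural Language API at the time of launch; treat the details as illustrative rather than definitive.

```python
# A minimal sketch of calling the Healthcare Natural Language API over REST.
# PROJECT_ID, REGION, and the sample note are placeholders; the request and
# response shapes follow Google's published API and may have since changed.
import subprocess

import requests

PROJECT_ID = "my-project"   # placeholder
REGION = "us-central1"      # placeholder

# Fetch an OAuth access token via the gcloud CLI (assumes gcloud is
# installed and authenticated).
token = subprocess.check_output(
    ["gcloud", "auth", "print-access-token"], text=True
).strip()

url = (
    f"https://healthcare.googleapis.com/v1/projects/{PROJECT_ID}"
    f"/locations/{REGION}/services/nlp:analyzeEntities"
)

# An invented snippet of clinical text to analyze.
note = "Patient reports chest pain; prescribed 81 mg aspirin daily."

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"documentContent": note},
)
resp.raise_for_status()

# The response lists entity mentions (conditions, medications, dosages)
# in a standard, machine-readable format the doctor's tools can reuse.
for mention in resp.json().get("entityMentions", []):
    print(mention["type"], "->", mention["text"]["content"])
```

Run against the note above, a call like this would tag “chest pain” as a problem and “81 mg aspirin” as a medication with a dosage, which is the kind of structured summary the API is meant to surface from free-text records.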

Much of Big Tech’s enthusiasm for medicine is focused on building a better way for doctors to record their interactions with patients without having to type into a computer. Amazon, Microsoft, and Google have all built software to this effect and are increasingly developing tools for healthcare settings, likely in a quest for new sources of recurring revenue.

Even Nvidia, which has traditionally focused more on its medical imaging technology, has started offering medical transcription. Earlier this year, Nvidia launched BioMegatron, a language model built to recognize conversational medical speech. The model was trained on a data set of over six billion medical terms and is 92% accurate. There is also a host of smaller players, including Nuance (maker of Dragon), MModal, Suki, and Saykara, providing transcription for doctors.

AI-powered transcription is the latest push toward automating medical processes. Much of doctors’ work is already electronic: many use a computer system to pull up patient data. A 2013 paper found that emergency room doctors made 4,000 clicks over the course of a busy 10-hour shift. Doctors who use the Epic electronic health record system also have a feature called “dot phrases” that makes it faster to write notes and pull in information about patients (Epic also has an AI transcription module). The trouble some doctors have with dot phrases is that inserting quick, pre-written entries about an ailment or symptom into patient records works fine for medical billing but leaves the records overly generic. As a result, doctors reviewing a patient’s history often don’t get the context surrounding the patient’s last visit.
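For readers unfamiliar with the mechanism, a dot phrase behaves like a text-expansion macro: typing a short trigger such as “.chestpain” swaps in a canned block of note text. The sketch below illustrates the idea; the triggers and templates are invented, and real systems such as Epic’s are far richer, with variables, defaults, and pick lists.

```python
# A toy illustration of dot-phrase-style expansion. The triggers and
# template text here are invented; this only shows the general mechanism.
import re

DOT_PHRASES = {
    ".normalexam": "Physical exam: alert and oriented, no acute distress.",
    ".chestpain": "Patient presents with chest pain, onset ***, quality ***.",
}

def expand(note: str) -> str:
    # Replace each ".trigger" token with its canned template, leaving
    # unrecognized tokens untouched.
    return re.sub(
        r"\.\w+",
        lambda m: DOT_PHRASES.get(m.group(), m.group()),
        note,
    )

print(expand("45yo male. .chestpain .normalexam"))
```

The “***” placeholders mimic the blanks clinicians are supposed to fill in by hand; when they don’t, the record ends up as the kind of generic, templated text Ku complains about below.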

“Most of patient records are garbage—they’re full of templates,” says Ku. “Ninety percent of our diagnoses come from the interview; it doesn’t come from diagnostic imaging or lab tests. It’s about me being able to get the story from my patient—but that becomes hindered because there’s this insane pressure to enter data into a computer.”

Doctors also spend an enormous amount of time entering data into the electronic health record. Ku says that doctors have what’s called “pajama time,” the hours they spend at home entering patient information into the system. This is why doctors would love a notetaking experience more akin to talking to Alexa. A system that extracts patient data from a conversation, or the ability to order tests by voice, would be a game changer, says Ku. He’d be able to spend a lot more time with patients. First, though, the technology needs to become accurate enough that doctors don’t end up spending their time correcting what the AI got wrong.

“There has to be some safety mechanism,” says Ku.

____________________________________________________________________________________

Article originally published on fastcompany.com.