A new study from Stanford Medicine describes a language processing model built to analyze medical charts in depth.
This tool focuses on a critical aspect of pediatric care: ensuring that children diagnosed with Attention Deficit Hyperactivity Disorder (ADHD) receive proper follow-up care after being prescribed new medications.
By mining electronic medical records (EMRs), the model can comb through thousands of physician notes to identify patterns that could improve patient outcomes.
Streamlining the Process
Traditionally, medical professionals have spent countless hours manually examining lengthy charts to answer questions about patient care.
The findings of this study suggest that large language models can streamline that daunting process and surface vital insights for healthcare providers.
For instance, such a model could spot potentially harmful drug interactions or help identify which patients are likely to respond positively or negatively to specific treatments.
The results were published in the journal Pediatrics on December 19.
The objective was clear: to determine whether children with ADHD received adequate follow-up after starting new medications, using data from their medical records.
AI Model Development
Dr. Yair Bannett, the study's lead author and an assistant professor of pediatrics, pointed out the model's potential to highlight gaps in ADHD management.
The research team used the AI's output to identify strategies for improving how healthcare professionals interact with ADHD patients and their families.
Bannett believes that the application of such AI tools could have far-reaching benefits across multiple areas of healthcare.
While structured data such as lab results and vital signs are readily available in electronic medical records, an estimated 80% of the valuable information sits in unstructured notes written by healthcare providers.
This text, though rich in insights, presents significant hurdles for large-scale analysis.
Traditionally, extracting information from these freeform notes has required labor-intensive manual review by professionals searching for specific details.
The new study explored how AI could simplify this process.
To conduct the research, the team analyzed the medical records of 1,201 children aged 6 to 11, all of whom had been prescribed ADHD medication, at 11 pediatric primary care practices within a single healthcare network.
This scrutiny matters because these medications can cause significant side effects, such as appetite suppression, so clinicians are expected to ask about them regularly during the first months of treatment.
The researchers trained an established language model to sift through doctors’ notes, specifically searching for mentions of conversations about side effects within the first three months of starting medication.
The model was fine-tuned on a set of 501 notes reviewed by the team, some of which mentioned side effects and some of which did not, so that it could assess whether follow-up care was documented.
Of those, 411 notes were used for training and 90 for validating the model's accuracy.
A manual review of an additional 363 notes confirmed the model’s classification accuracy at approximately 90%.
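The paper's described workflow, fine-tuning an established pretrained model on a small labeled set of notes and holding some out for validation, maps onto a standard text-classification setup. Below is a minimal sketch of that step, assuming a generic Hugging Face transformer (DistilBERT here) and hypothetical placeholder notes; it is not the authors' actual pipeline or code.

```python
# Minimal sketch of fine-tuning a note classifier; illustrative only,
# not the study's actual pipeline. Label 1 = note documents a
# side-effect discussion, label 0 = it does not.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    # Clinical notes can be long; truncate to the model's input limit.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

# Placeholder examples; the study used 411 manually labeled notes for
# training and 90 for validation.
train_ds = Dataset.from_dict({
    "text": ["Discussed decreased appetite since starting stimulant; monitoring weight.",
             "Well-child visit; immunizations up to date."],
    "label": [1, 0],
}).map(tokenize, batched=True)
val_ds = Dataset.from_dict({
    "text": ["Parent reports trouble sleeping since the new medication."],
    "label": [1],
}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adhd-note-classifier",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=train_ds,
    eval_dataset=val_ds,
    tokenizer=tokenizer,
)
trainer.train()
print(trainer.evaluate())                    # loss on the held-out validation notes
trainer.save_model("adhd-note-classifier")   # reused in the sketch further below
```

In practice, the labeled notes would be de-identified and loaded from an EMR export rather than hard-coded, and accuracy would be checked against further manually reviewed notes, as the researchers did with their additional set of 363.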
Once the model displayed reliable performance, it was applied to analyze all 15,628 notes tied to the patients’ records.
This task, which would have required more than seven months of full-time manual effort, surfaced insights that might otherwise have gone unnoticed.
For instance, the AI revealed variations in how often different pediatric practices engaged parents in discussions about medication side effects during phone calls.
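As a rough sketch of how that scale-up and comparison might look in code, the validated classifier can be run over the full corpus of notes and its outputs grouped by practice. The `practice_id` field, the example notes, and the model path below are hypothetical assumptions; the study does not publish its analysis code.

```python
# Hypothetical sketch: run the fine-tuned classifier over every note and
# aggregate by practice to compare how often side-effect discussions are
# documented. Field names and example notes are illustrative.
import pandas as pd
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="adhd-note-classifier",  # saved in the sketch above
                      truncation=True)

# In the study this would be all 15,628 notes; two stand-ins shown here.
notes = [
    {"practice_id": "practice_A",
     "text": "Phone call with parent: discussed appetite loss on stimulant."},
    {"practice_id": "practice_B",
     "text": "Routine follow-up; no medication concerns raised."},
]

preds = classifier([n["text"] for n in notes])
df = pd.DataFrame({
    "practice": [n["practice_id"] for n in notes],
    # Default label names for a 2-class head: LABEL_1 = side-effect discussion.
    "side_effects_discussed": [p["label"] == "LABEL_1" for p in preds],
})

# Share of notes per practice that document a side-effect conversation.
print(df.groupby("practice")["side_effects_discussed"].mean())
```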
Looking Ahead
According to Bannett, these findings are crucial; they indicate patterns that human reviewers might miss given the sheer volume of notes involved.
Additionally, the AI pointed out that pediatricians tended to inquire more frequently about side effects related to stimulant medications compared to non-stimulant medications.
However, Bannett cautioned against overreliance on AI for drawing conclusions, noting that while the model can identify relevant data patterns, it lacks the capacity to explain the reasons behind these trends.
Discussions with pediatricians suggested that their greater familiarity with managing stimulant side effects likely explains these patterns.
The researchers acknowledged that some pertinent inquiries regarding medication side effects might not have been documented in the electronic medical records reviewed, especially those involving care from other healthcare providers like psychiatrists.
They also noted that the AI occasionally misclassified notes about side effects unrelated to ADHD treatment.
As the development of AI tools progresses in the realm of medical research, a nuanced understanding of their capabilities and limitations is vital, according to Bannett.
While AI excels at processing extensive medical records, the complexities of ethical considerations in healthcare still require human oversight.
Bannett and his colleagues addressed potential biases in AI training and application in an editorial published in Hospital Pediatrics.
Bannett expressed serious concern regarding the data underpinning AI models, highlighting a pressing need to confront disparities in healthcare.
He urged researchers to actively minimize biases as they create and implement AI technologies to maximize their potential benefits.
With the right safeguards in place, Bannett is optimistic that AI can bolster clinical decision-making by giving healthcare providers access to data from both broad populations and individual patients.
In the future, AI tools built on natural language processing may help clinicians anticipate potential side effects based on patient characteristics such as age, race, genetics, and pre-existing conditions, paving the way for more tailored approaches to healthcare management.
Source: ScienceDaily