<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<!--<?xml-stylesheet type="text/xsl" href="article.xsl"?>-->
<article article-type="research-article" dtd-version="1.2" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id journal-id-type="issn">2694-1473</journal-id>
<journal-title-group>
<journal-title>Journal of the Society for Clinical Data Management</journal-title>
</journal-title-group>
<issn pub-type="epub">2694-1473</issn>
<publisher>
<publisher-name>Society for Clinical Data Management</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.47912/jscdm.438</article-id>
<article-categories>
<subj-group>
<subject>Opinion paper</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Leveraging Large Language Models to Streamline Clinical Trial Data Administration: Balancing Efficiency with Ethical Responsibility</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-8865-8215</contrib-id>
<name>
<surname>Ghosh</surname>
<given-names>Amrita</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
<xref ref-type="aff" rid="aff-2">*</xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-5620-303X</contrib-id>
<name>
<surname>Huang</surname>
<given-names>Caroline J.</given-names>
</name>
<email>carolineh999@gmail.com</email>
<xref ref-type="aff" rid="aff-1">1</xref>
<xref ref-type="aff" rid="aff-2">*</xref>
</contrib>
</contrib-group>
<aff id="aff-1"><label>1</label>Department of Radiology, Stanford University, Stanford, CA, US</aff>
<aff id="aff-2"><label>*</label>These authors contributed equally as co-first authors.</aff>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2025-10-31">
<day>31</day>
<month>10</month>
<year>2025</year>
</pub-date>
<pub-date pub-type="collection">
<year>2025</year>
</pub-date>
<volume>5</volume>
<issue>1</issue>
<elocation-id>18</elocation-id>
<history>
<date date-type="received" iso-8601-date="2025-06-27">
<day>27</day>
<month>06</month>
<year>2025</year>
</date>
<date date-type="accepted" iso-8601-date="2025-09-24">
<day>24</day>
<month>09</month>
<year>2025</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright: &#x00A9; 2025 The Author(s)</copyright-statement>
<copyright-year>2025</copyright-year>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc-sa/4.0/">
<license-p>SCDM publishes JSCDM content in an open access manner under an Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license. This license lets others remix, adapt, and build upon the work non-commercially, as long as they credit SCDM and the author and license their new creations under identical terms. See <uri xlink:href="https://creativecommons.org/licenses/by-nc-sa/4.0/">https://creativecommons.org/licenses/by-nc-sa/4.0/</uri>.</license-p>
</license>
</permissions>
<self-uri xlink:href="https://www.jscdm.org/articles/10.47912/jscdm.438/"/>
<abstract>
<p>Clinical trials are critical for advancing medical knowledge and developing new treatments, yet they often involve substantial administrative burdens that can impede progress and reduce efficiency. Effective clinical trial data management requires seamless participant engagement, accurate data collection, strict regulatory compliance, and coordinated efforts among multidisciplinary teams. These demands can strain resources and lead to workflow inefficiencies.</p>
<p>This paper examines the potential of artificial intelligence (AI)-based tools, such as large language models (LLMs), to enhance clinical coordination by optimizing administrative processes in clinical trials. We assess LLM applications in streamlining standard operating procedures (SOPs) and clinical data management, automating documentation, and supporting regulatory compliance. To ensure responsible implementation, we also examine key challenges related to ethical considerations, data biases, and safety concerns, while proposing strategies for mitigating these risks.</p>
<p>Our findings indicate that integrating AI-driven solutions like LLMs can significantly improve operational efficiency, reduce administrative workloads, and allow clinical trial teams to dedicate more time to patient care and scientific inquiry. By leveraging AI responsibly, we can make clinical research more agile, adaptive, and focused on advancing medical innovation.</p>
</abstract>
<kwd-group>
<kwd>Artificial Intelligence</kwd>
<kwd>Clinical Administrative Burden</kwd>
<kwd>Large Language Models</kwd>
<kwd>Health AI</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec>
<title>1. Introduction</title>
<p>Clinical trials are essential for advancing medical science, yet they impose considerable administrative burdens on clinical coordination efforts. Managing participant engagement, regulatory compliance, data collection, and documentation requires seamless collaboration among clinical research staff, investigators, and regulatory teams. These challenges often result in delays, increased errors, and operational inefficiencies, ultimately impacting trial timelines and data integrity.</p>
<p>The integration of artificial intelligence (AI) tools and large language models (LLMs) presents an opportunity to streamline administrative tasks, optimize workflows, and enhance overall clinical coordination. A PubMed search returned approximately 900 articles mentioning &#8220;ChatGPT,&#8221; most of them published in the past two years, reflecting the scientific community&#8217;s strong interest in using LLMs for research purposes.<sup><xref ref-type="bibr" rid="B1">1</xref>,<xref ref-type="bibr" rid="B2">2</xref></sup> In this paper, we examine methods of using LLMs to assist in drafting study documents, automating data management, and supporting regulatory compliance, thereby allowing clinical teams to focus on patient safety, trial oversight, and scientific rigor.</p>
</sec>
<sec>
<title>2. Administrative Challenges in Clinical Trials</title>
<p>The complexity of modern clinical trials places significant pressure on clinical coordination research teams. A 2022 study reported that approximately 61% of clinical researchers experienced signs of burnout,<sup><xref ref-type="bibr" rid="B3">3</xref></sup> which is defined by the World Health Organization as exhaustion, increased mental distance from one&#8217;s job, and reduced professional efficacy.<sup><xref ref-type="bibr" rid="B4">4</xref></sup> In the same study, 66% of clinical researchers believed that investing in technology could mitigate administrative burdens.<sup><xref ref-type="bibr" rid="B3">3</xref></sup> Administrative duties, which include drafting informed consent forms, managing regulatory documentation, developing training materials, and preparing grant proposals, require significant time and effort.</p>
<p>In 2023, a systematic review examined ChatGPT as an example of an LLM in the context of healthcare education and research.<sup><xref ref-type="bibr" rid="B5">5</xref></sup> The review included 60 records, 85% of which cited benefits of using ChatGPT in healthcare and research, including improved scientific writing, enhanced research equity, utility in healthcare research, and value in healthcare education.<sup><xref ref-type="bibr" rid="B5">5</xref></sup> When these responsibilities are managed inefficiently, trial completion, data quality, and participant retention may be compromised. Effective clinical coordination is critical to ensuring seamless communication among stakeholders, compliance with ethical and regulatory requirements, and efficient data processing. Given these challenges, leveraging LLMs to streamline administrative processes is a logical step toward improving the efficiency of clinical trials.</p>
</sec>
<sec>
<title>3. Applications of LLMs in Clinical Trials</title>
<p>LLMs are refined using reinforcement learning from human feedback, enabling them to generate structured, context-specific, and user-friendly responses.<sup><xref ref-type="bibr" rid="B6">6</xref></sup> Unlike traditional search engines that deliver vast amounts of information based on keyword searches, LLMs provide tailored answers in the desired format, making them particularly useful for managing time-intensive administrative tasks in clinical trials. Traditional clinical workflow tools such as Electronic Data Capture (EDC) platforms and Clinical Trial Management Systems (CTMS) efficiently manage various aspects of clinical trials but focus on oversight and data collection; LLMs offer capabilities beyond these functions that can further reduce administrative burden.</p>
<p>LLMs are not substitutes for professional expertise, and their outputs must be verified against authoritative sources. Nevertheless, they can serve as valuable assistants in clinical coordination by reducing administrative workload and improving efficiency in key trial processes. <xref ref-type="table" rid="T1">Table 1</xref> compares LLM features that help reduce researchers&#8217; workload with the corresponding capabilities of traditional clinical workflow tools.</p>
<table-wrap id="T1">
<label>Table 1</label>
<caption>
<p>LLMs vs. Traditional Workflow Solutions: Reducing Clinical Coordination Burden.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top" rowspan="2"><bold>Feature</bold></td>
<td align="left" valign="top" rowspan="2"><bold>LLMs</bold></td>
<td align="left" valign="top" colspan="2"><bold>Traditional Workflow Solutions</bold></td>
</tr>
<tr>
<td align="left" valign="top"><bold>Electronic Data Capture (EDC) platforms</bold></td>
<td align="left" valign="top"><bold>Clinical Trial Management Systems (CTMS)</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top"><bold>Real-time natural language interaction</bold></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Human-like responses, customizable for different workflows</p></list-item>
</list></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>No conversational capability</p></list-item>
</list></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>No conversational capability</p></list-item>
</list></td>
</tr>
<tr>
<td align="left" valign="top"><bold>Document drafting</bold></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Fast, context-aware drafting; output requires human review</p></list-item>
</list></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Not a core feature</p></list-item>
<list-item><p>Mostly requires manual data entry</p></list-item>
</list></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Requires manual data entry</p></list-item>
</list></td>
</tr>
<tr>
<td align="left" valign="top"><bold>Data validation</bold></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Requires prompt customization for structured validation</p></list-item>
<list-item><p>Needs to be validated by a human</p></list-item>
</list></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Built-in validation rules and logic</p></list-item>
</list></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>May have advanced data validation workflows</p></list-item>
</list></td>
</tr>
<tr>
<td align="left" valign="top"><bold>Plain language integration (conversion of medical jargon)</bold></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Converts medical jargon into patient-friendly language when paired with the right prompt</p></list-item>
</list></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Typically heavy with medical terms</p></list-item>
</list></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Limited plain language outputs</p></list-item>
</list></td>
</tr>
<tr>
<td align="left" valign="top"><bold>Set-up and training</bold></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>No setup and training needed*</p></list-item>
</list></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Requires user onboarding and role-based training</p></list-item>
</list></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Requires technical training and customization</p></list-item>
</list></td>
</tr>
<tr>
<td align="left" valign="top"><bold>Cost</bold></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Free version available</p></list-item>
</list></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Must be purchased and provided by the institution</p></list-item>
</list></td>
<td align="left" valign="top"><list list-type="bullet"><list-item><p>Must be purchased and provided by the institution</p></list-item>
</list></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p>*Prompt training may be needed, but is optional.</p></fn>
</table-wrap-foot>
</table-wrap>
<sec>
<title>3.1 Key Areas of Clinical Coordination Supported by LLMs</title>
<sec>
<title>Documentation and Consent Forms</title>
<p>Documentation is the backbone of clinical trials, with consent forms being among the most crucial documents in the clinical research process. Researchers at LifeSpan Healthcare System (LHS) analyzed 798 federally funded clinical trial consent forms and found that dropout rates increased by almost 16% due to the high reading level of the documents. To address this issue, the research team used ChatGPT-4, as an example of an LLM, to transform a surgical consent form from a 12.6 Flesch-Kincaid reading level to a 6.7 reading level.<sup><xref ref-type="bibr" rid="B7">7</xref></sup> This example illustrates how the ability of LLMs to simplify complex medical terminology and generate clear, patient-friendly consent forms can make clinical information more accessible and promote informed decision-making.</p>
<p>Beyond consent forms, LLMs can support various documentation tasks, including summarizing data from research articles, clinical records, and patient records. For instance, a cardiovascular study demonstrated the potential of LLMs to generate structured data from unstructured text, showcasing their capability to enhance data management processes.<sup><xref ref-type="bibr" rid="B8">8</xref></sup> By reframing technical jargon and making medical information more understandable, LLMs help reduce barriers and facilitate better patient engagement.</p>
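<p>The reading-level figures cited above come from the standard Flesch-Kincaid grade formula, which depends only on word, sentence, and syllable counts. The following sketch shows how a team might spot-check the grade level of an LLM-simplified consent form draft; the syllable counter is a rough heuristic of our own, not a validated readability tool.</p>
<preformat>
```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups; at least one per word."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # treat a final 'e' as silent
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Standard Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)
```
</preformat>
<p>Short, common words yield a low grade, while dense medical terminology drives the grade sharply upward, which is what a coordinator would look for when comparing a simplified draft against a target reading level such as grade 6&#8211;7.</p>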
</sec>
<sec>
<title>Regulatory Compliance</title>
<p>LLMs can significantly support regulatory compliance by assisting in drafting and reviewing documents, as well as summarizing guidelines from governmental agencies such as the US Food and Drug Administration (FDA). In this context, LLMs can generate detailed descriptions of medical devices, risk management plans, general safety and performance requirements checklists, and technical reports.<sup><xref ref-type="bibr" rid="B9">9</xref></sup></p>
<p>One of the strengths of LLMs lies in generating standardized templates for Institutional Review Board (IRB) submissions, minimizing redundant efforts, and saving time. Furthermore, with constantly evolving regulations, LLMs can help keep documentation up to date by scanning files for compliance and flagging the specific regulatory guidelines they address. For instance, when drafting sections on drug risks and side effects, specific LLM prompts can identify areas that may require clearer explanations or more prominent disclosures, thereby promoting patient safety.<sup><xref ref-type="bibr" rid="B10">10</xref></sup></p>
</sec>
<sec>
<title>Recruitment Strategies</title>
<p>Recruitment tasks in clinical trials often involve crafting targeted materials that resonate with specific demographics while being culturally sensitive and engaging. LLMs can help reduce the recruitment burden by generating such materials efficiently.</p>
<p>A notable example is the National Institutes of Health&#8217;s development of TrialGPT, an AI algorithm designed to match potential volunteers to clinical trials by identifying relevant trials for which individuals are eligible. This technology enhances outreach efforts and has the potential to improve participant enrollment rates by making recruitment strategies more targeted and accessible.<sup><xref ref-type="bibr" rid="B11">11</xref></sup></p>
</sec>
<sec>
<title>Grant Writing and Funding Applications</title>
<p>Grant writing is a critical yet time-consuming aspect of clinical research. LLMs can expedite the development of grant proposals by structuring content, aligning it with funding agency requirements, and refining scientific arguments. This allows researchers to concentrate on core study design aspects.</p>
<p>LLMs can also assist in conducting literature reviews by automating the extraction and summarization of relevant information from extensive collections of scientific articles and publications.<sup><xref ref-type="bibr" rid="B12">12</xref></sup> Additionally, when developing grant budgets, LLMs can help identify potential expenses, distinguish between direct and indirect costs, and suggest appropriate buffers for unforeseen expenditures.<sup><xref ref-type="bibr" rid="B13">13</xref></sup> By supporting researchers in articulating evaluation methods and sustainability strategies, LLMs can make grant proposals more robust and comprehensive.</p>
</sec>
<sec>
<title>Training and Onboarding Support</title>
<p>Training and onboarding clinical staff efficiently are essential for maintaining high standards in clinical trials. LLMs can generate training materials, including FAQs, compliance checklists, and interactive learning tools. Studies have shown that using LLMs as a learning assistant can enhance the acquisition of medical terminology and provide adaptive, personalized training experiences.<sup><xref ref-type="bibr" rid="B14">14</xref></sup></p>
<p>By generating diverse scenarios quickly and cost-effectively, LLMs accommodate specific learning objectives and skill development needs. These training approaches are particularly suitable for individual, group, or remote training sessions. Furthermore, standardized training resources help maintain consistency across study sites, while LLM-based simulations provide a more affordable and scalable alternative to traditional training methods.<sup><xref ref-type="bibr" rid="B15">15</xref></sup></p>
</sec>
</sec>
<sec>
<title>3.2 Prompt Generation</title>
<p>The effectiveness of LLMs largely depends on how prompts are written and structured. Prompts can be questions, statements, or combinations of both that are designed to generate information, reframe data, or analyze content. LLM responses are based on language patterns learned during pre-training and user interactions.</p>
<p>However, LLMs are sensitive to subtle changes in prompt phrasing, which can result in different outputs.<sup><xref ref-type="bibr" rid="B6">6</xref></sup> To generate accurate and useful responses&#8212;especially for tasks like regulatory guidance, protocol summaries, or document templates&#8212;prompts must be clear and precise. <xref ref-type="table" rid="T2">Table 2</xref> provides examples of optimized prompts tailored for clinical coordination tasks.</p>
<table-wrap id="T2">
<label>Table 2</label>
<caption>
<p>LLM Prompt Generation Guide for Clinical Studies.</p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold>Prompt Requirement</bold></td>
<td align="left" valign="top"><bold>Prompt Example</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Define the research question and objective</td>
<td align="left" valign="top">Example: &#8220;Explain how a randomized controlled trial (RCT) can be used to compare two COVID-19 vaccines. Focus on safety outcomes like adverse events and include trial design considerations.&#8221;</td>
</tr>
<tr>
<td align="left" valign="top">Define the area and agencies the information focuses on</td>
<td align="left" valign="top">Example: &#8220;What FDA regulatory guidelines should be followed when designing a Phase III clinical trial for a new antiviral drug?&#8221;</td>
</tr>
<tr>
<td align="left" valign="top">Define boundaries</td>
<td align="left" valign="top">Example: &#8220;Summarize the safety data from COVID-19 vaccine clinical trials conducted between January 2020 and December 2021, focusing on Phase III trials in adults aged 18&#8211;65. Limit the summary to studies conducted in North America and Europe.&#8221;</td>
</tr>
<tr>
<td align="left" valign="top">Define writing style</td>
<td align="left" valign="top">Example: &#8220;Explain the safety profile of the Pfizer COVID-19 vaccine to a non-medical audience in bullet points.&#8221;</td>
</tr>
<tr>
<td align="left" valign="top">Do not enter protected health information (PHI)</td>
<td align="left" valign="top">Example: &#8220;Provide an example of a treatment plan for an adult male in his 40s diagnosed with Type 2 diabetes in 2015, considering general treatment guidelines.&#8221;</td>
</tr>
<tr>
<td align="left" valign="top">Keep it focused but flexible</td>
<td align="left" valign="top">Example: &#8220;Discuss the differences in adverse event profiles between mRNA and adenovirus-based COVID-19 vaccines, focusing on the general adult population.&#8221;</td>
</tr>
</tbody>
</table>
</table-wrap>
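<p>The requirements in <xref ref-type="table" rid="T2">Table 2</xref> can also be enforced programmatically before a prompt is ever sent to a model. The sketch below is illustrative only; the field names and the <italic>build_prompt</italic> helper are our own and are not part of any LLM vendor&#8217;s API.</p>
<preformat>
```python
# Illustrative prompt assembler following the Table 2 checklist.
# Field names and build_prompt are hypothetical, not a vendor API.

def build_prompt(objective: str, scope: str, boundaries: str, style: str) -> str:
    """Combine the checklist items into one LLM prompt, rejecting blanks."""
    parts = {"objective": objective, "scope": scope,
             "boundaries": boundaries, "style": style}
    missing = [name for name, value in parts.items() if not value.strip()]
    if missing:
        raise ValueError(f"Incomplete prompt; missing: {', '.join(missing)}")
    return (f"{objective} {scope} {boundaries} "
            f"Write the answer {style}. "
            "Do not include any patient-identifiable information.")
```
</preformat>
<p>A coordination team could maintain a small library of vetted values for each field so that every prompt issued for a given task carries the same objective, scope, and boundary language, which also supports checks of response consistency across iterations.</p>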
<p>By incorporating LLMs into these processes, clinical coordination teams can improve workflow efficiency, reduce administrative errors, and allocate more time to patient care and trial oversight.</p>
</sec>
</sec>
<sec>
<title>4. Ethical Challenges of LLMs</title>
<p>While LLMs offer promising benefits for clinical coordination, their implementation raises several ethical and practical concerns. Ensuring data integrity, regulatory compliance, and ethical AI use is critical for maintaining trust among trial stakeholders.</p>
<sec>
<title>4.1 Key Challenges and Mitigation Strategies for Ethical Considerations of LLMs</title>
<sec>
<title>Accuracy and Reliability</title>
<p>LLMs generate responses based on their training data, which can occasionally lead to &#8220;hallucinations,&#8221; that is, factually incorrect statements.<sup><xref ref-type="bibr" rid="B16">16</xref></sup> Hallucinations can cause LLMs to generate non-existent or incorrect academic references, a particularly concerning issue in scientific and medical contexts.<sup><xref ref-type="bibr" rid="B16">16</xref></sup> For example, in a study examining urinary tract infection diagnosis and management, an LLM fabricated 12 of 24 provided citations, creating entirely fictional medical references that appeared credible.<sup><xref ref-type="bibr" rid="B17">17</xref></sup></p>
<p>Other studies evaluating LLMs&#8217; responses to common medical questions found numerous instances where the AI provided incorrect drug information or failed to mention important contraindications.<sup><xref ref-type="bibr" rid="B18">18</xref></sup> Users should be wary of information that contradicts common knowledge or is unsupported by citations or source material.<sup><xref ref-type="bibr" rid="B19">19</xref></sup> Thus, human oversight is essential to validate AI-generated regulatory guidance and clinical documentation.</p>
</sec>
<sec>
<title>Data Privacy and Security</title>
<p>LLMs are not inherently compliant with privacy regulations such as the Health Insurance Portability and Accountability Act (HIPAA) or the General Data Protection Regulation (GDPR). LLMs are also susceptible to jailbreaking, which occurs when the model&#8217;s built-in safeguards are bypassed.<sup><xref ref-type="bibr" rid="B20">20</xref></sup> In medical research, jailbreaking can lead to confidentiality breaches.<sup><xref ref-type="bibr" rid="B20">20</xref></sup> Organizations must therefore establish protocols to anonymize sensitive data before using AI tools in clinical workflows.</p>
</sec>
<sec>
<title>Bias in AI Output</title>
<p>Since LLMs learn from a diverse range of data sources, inherent biases may be present. Because LLMs are trained on generalized sources, they may not provide accurate region- and population-specific information. Moreover, there is a &#8220;black box&#8221; issue associated with LLMs like ChatGPT, in which the decision-making process is opaque and the logic that produces an output is hidden.<sup><xref ref-type="bibr" rid="B21">21</xref></sup> The combination of sampling and exclusion biases in LLMs has significant implications for clinical administration, potentially impacting resource allocation, clinical guideline development, and healthcare access.<sup><xref ref-type="bibr" rid="B22">22</xref></sup> Periodic audits and validation processes can help identify and mitigate these biases.<sup><xref ref-type="bibr" rid="B6">6</xref></sup></p>
</sec>
<sec>
<title>Consistency in AI Responses</title>
<p>LLM output can vary with prompt phrasing: different prompts to the same model can yield different responses. Erroneous outputs may misuse medical terms, contain errors in diagnosis, treatment, or management, or include irrelevant information. For example, although there is evidence that the ChatGPT 4.0 model processes information better than the 3.5 version, variability in outputs can lead to errors in clinical contexts, risking patient safety and disrupting clinical tasks through unreliable recommendations.<sup><xref ref-type="bibr" rid="B23">23</xref></sup> Standardizing prompts and evaluating response consistency across iterations can improve reliability.</p>
</sec>
<sec>
<title>Ethical Use and Oversight</title>
<p>AI tools should complement, not replace, human expertise. Their roles should be limited to providing support for routine tasks, with all critical decisions remaining the responsibility of qualified professionals. Establishing Standard Operating Procedures (SOPs) for AI use in clinical trials can help define appropriate applications and boundaries while ensuring alignment with ethical and regulatory standards.</p>
<p>Organizations integrating LLMs into clinical coordination processes must implement risk assessment frameworks to monitor AI use and mitigate potential issues. <xref ref-type="fig" rid="F1">Figure 1</xref> outlines the recommended steps for risk assessment, mitigation, and monitoring.</p>
<fig id="F1">
<label>Figure 1</label>
<caption>
<p>Risk Analysis Framework for LLMs in Clinical Trials.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="jscdm-5-1-438-g1.png"/>
</fig>
</sec>
</sec>
</sec>
<sec>
<title>5. Risk Analysis of LLMs</title>
<p>The integration of LLMs into clinical study tasks presents both opportunities and challenges. While the potential for enhancing efficiency and reducing administrative burdens is promising, the use of AI-driven tools also carries inherent risks that must be carefully managed. This section outlines a comprehensive risk analysis framework, addressing key considerations such as human oversight, data integrity, regulatory compliance, and continuous monitoring to safeguard patient welfare and clinical accuracy (<xref ref-type="fig" rid="F1">Figure 1</xref>).</p>
<sec>
<title>Risk Assessment</title>
<list list-type="bullet">
<list-item><p><underline>Nature of the risk:</underline></p>
<p>Analyze the nature of each identified risk and assess how frequently they are likely to occur when using LLMs for different tasks.</p>
<list list-type="bullet">
<list-item><p>Example: When generating a report or summary from medical data, the primary risks may involve inconsistencies and hallucinations. Similarly, when using LLMs to suggest patient communication, the risk lies in using technical jargon or an inappropriate tone. Adjusting the prompt can help mitigate these risks.<sup><xref ref-type="bibr" rid="B24">24</xref></sup></p></list-item></list></list-item>
<list-item><p><underline>Impact of the risk:</underline></p>
<p>Evaluate the potential consequences if these risks materialize and establish procedures to address such a situation if it arises.</p>
<list list-type="bullet">
<list-item><p>Example: The impact of a data privacy breach can be severe, resulting in legal repercussions and reputational damage.</p></list-item></list></list-item>
<list-item><p><underline>Risk-sensitive prompts:</underline></p>
<p>Identify the most sensitive prompts within clinical studies.</p>
<list list-type="bullet">
<list-item><p>Example: Interpreting study results requires precision, as errors can have significant consequences. In these cases, physician oversight is essential.</p></list-item></list></list-item>
</list>
</sec>
<sec>
<title>Risk Mitigation</title>
<list list-type="bullet">
<list-item><p><underline>Human oversight:</underline></p>
<p>LLMs should be considered a tool to assist clinical research coordinators, not a replacement for medical professionals. Qualified professionals must review any LLM-generated content, as the success of a clinical study depends on sound clinical decision-making and accurate data interpretation.</p>
<list list-type="bullet">
<list-item><p>Context understanding: AI tools like LLMs may not fully comprehend the scientific context or the intricate medical complexities involved.<sup><xref ref-type="bibr" rid="B24">24</xref></sup> Human oversight ensures that the content addresses the intended audience accurately.</p></list-item>
<list-item><p>Accuracy assurance: As noted in the challenges section of this paper, ChatGPT can produce inaccurate or nonsensical data due to hallucinations.<sup><xref ref-type="bibr" rid="B19">19</xref></sup> Human reviewers are needed to validate data credibility and relevance.</p></list-item>
<list-item><p>Structural integrity: While prompt engineering can control language, tone, and text format to an extent, human oversight ensures the response meets regulatory standards and specific requirements.</p></list-item></list></list-item>
<list-item><p><underline>Data anonymization:</underline></p>
<p>Before inputting data into LLMs, remove all patient-identifiable information to comply with privacy regulations. The responsibility for this lies with the individual structuring and inputting the prompt.<sup><xref ref-type="bibr" rid="B11">11</xref></sup></p></list-item>
<list-item><p><underline>Validation:</underline></p>
<p>Implement mechanisms to validate the accuracy of LLM outputs by comparing them to established clinical guidelines, literature, or expert opinions. This task should be performed by data management personnel or clinical managers.</p>
<list list-type="bullet">
<list-item><p>Prompt evaluation: Develop a scoring guideline from test prompts to categorize responses as <italic>pass, fail, inappropriate</italic>, or <italic>need human assistance</italic>.</p></list-item>
<list-item><p>Bias audits: Regularly audit the model&#8217;s outputs to check for biases, especially when working with diverse patient populations or varied study data.<sup><xref ref-type="bibr" rid="B20">20</xref></sup></p></list-item></list></list-item>
<list-item><p><underline>User access controls:</underline></p>
<p>Restrict access to LLMs by assigning specific roles to users and monitoring usage to prevent unauthorized data input or sharing.<sup><xref ref-type="bibr" rid="B20">20</xref></sup></p></list-item>
<list-item><p><underline>Adherence to regulations:</underline></p>
<p>Ensure that LLM usage aligns with clinical research regulatory frameworks. Engage with regulatory bodies such as the FDA and IRB to clarify acceptable AI usage in clinical trials.<sup><xref ref-type="bibr" rid="B9">9</xref></sup></p></list-item>
<list-item><p><underline>Training and guidelines:</underline></p>
<p>Provide clear guidelines and SOPs for clinical coordinators on the appropriate use of LLMs. Clearly specify which tasks LLMs can assist with and which require human intervention.</p></list-item></list>
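<p>As a concrete illustration of the data anonymization step above, the following minimal sketch applies rule-based redaction to a prompt before it is submitted to an LLM. The patterns, placeholder tokens, and helper name (<monospace>redact_prompt</monospace>) are illustrative assumptions, not a complete PHI rule set; production de-identification must cover all HIPAA identifier categories and remains the responsibility of the individual structuring the prompt.</p>

```python
import re

# Illustrative, incomplete PHI patterns (assumptions for this sketch only);
# a real rule set must cover all HIPAA identifier categories.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # US Social Security numbers
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),             # dates such as 04/21/1987
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),      # US phone numbers
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE), "[MRN]"),  # medical record numbers
]

def redact_prompt(text: str) -> str:
    """Replace recognizable identifiers with placeholder tokens."""
    for pattern, token in PHI_PATTERNS:
        text = pattern.sub(token, text)
    return text

prompt = "Summarize the visit on 04/21/1987 for MRN: 443211, callback 650-555-0199."
print(redact_prompt(prompt))
# -> Summarize the visit on [DATE] for [MRN], callback [PHONE].
```

<p>Rule-based redaction alone cannot guarantee compliance; it complements, rather than replaces, the human review described above.</p>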
</sec>
<sec>
<title>Risk Monitoring</title>
<list list-type="bullet">
<list-item><p><underline>Incident reporting:</underline></p>
<p>Establish a clear Corrective and Preventive Action (CAPA) process for reporting any errors, inaccuracies, or data breaches related to LLM involvement in clinical study tasks.<sup><xref ref-type="bibr" rid="B20">20</xref></sup></p></list-item>
<list-item><p><underline>Audits:</underline></p>
<p>Conduct periodic audits, led by data management or clinical research managers, to ensure LLM usage complies with both internal policies and external regulations.</p></list-item>
<list-item><p><underline>Feedback loop:</underline></p>
<p>Collect feedback from clinical research coordinators on their experiences using LLMs, including any errors or concerns. Use this feedback to update SOPs and improve guidelines regularly.</p></list-item></list>
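<p>The prompt-evaluation scoring and feedback-loop steps described above could be tracked with a simple tally, sketched below. The four category labels come from the scoring guideline in this paper; the <monospace>tally_reviews</monospace> and <monospace>needs_escalation</monospace> helpers and the 20% escalation threshold are assumptions for illustration only, not an established policy.</p>

```python
from collections import Counter

# The four verdict labels from the scoring guideline above.
CATEGORIES = {"pass", "fail", "inappropriate", "need human assistance"}

def tally_reviews(reviews):
    """Count reviewer verdicts per category, rejecting unknown labels."""
    counts = Counter()
    for verdict in reviews:
        if verdict not in CATEGORIES:
            raise ValueError(f"unknown category: {verdict!r}")
        counts[verdict] += 1
    return counts

def needs_escalation(counts, threshold=0.2):
    """Flag a batch for human follow-up when the share of failed or
    inappropriate responses reaches the (assumed) policy threshold."""
    total = sum(counts.values())
    flagged = counts["fail"] + counts["inappropriate"]
    return total > 0 and flagged / total >= threshold

reviews = ["pass", "pass", "fail", "pass", "need human assistance"]
print(needs_escalation(tally_reviews(reviews)))  # 1 of 5 flagged = 0.2 -> True
```

<p>Feeding such tallies into the CAPA process would give auditors a quantitative trigger for updating SOPs.</p>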
</sec>
</sec>
<sec>
<title>6. An Organizational Framework for Implementation of LLMs</title>
<p>To maximize the benefits of LLMs while ensuring responsible use, organizations should adopt a structured approach:</p>
<sec>
<title>Develop SOPs</title>
<p>SOPs should define appropriate AI applications and establish validation protocols for maintaining data privacy and regulatory compliance. Organizations should develop policies to ensure patient privacy when using LLMs, which may require going beyond existing regulatory requirements, such as HIPAA.</p>
</sec>
<sec>
<title>Train Clinical Staff</title>
<p>Clinical teams should be trained to utilize AI-based tools and to identify tasks that require human oversight. Organizations should engage healthcare educators and professionals in the development and review process to enhance the quality and accuracy of AI-generated training content.<sup><xref ref-type="bibr" rid="B15">15</xref></sup></p>
</sec>
<sec>
<title>Implement Continuous Monitoring and Audits</title>
<p>Regular performance audits and feedback loops should be established to refine the integration of LLMs or AI tools into clinical workflows.</p>
</sec>
<sec>
<title>Engage with Regulatory Bodies</title>
<p>Researchers must acknowledge any use of AI-assisted technologies, including LLMs, in research and manuscript preparation, typically in the Methods or Acknowledgments section.<sup><xref ref-type="bibr" rid="B12">12</xref>, <xref ref-type="bibr" rid="B25">25</xref></sup> This acknowledgment will enable agencies like the FDA and NIH to clarify best practices for AI use in clinical research and ensure compliance with industry regulations. For example, scientific journals such as <italic>Nature</italic> have added LLM-use rules to their author guidelines, stipulating that no LLM will be accepted as a credited author on a research paper and that researchers using LLMs should document this use in the Methods or Acknowledgments section.<sup><xref ref-type="bibr" rid="B26">26</xref></sup></p>
</sec>
</sec>
<sec>
<title>7. The Future of AI in Clinical Research</title>
<p>The adoption of LLMs in clinical trials reflects a broader trend toward integrating AI into healthcare. Numerous studies have demonstrated that models like ChatGPT-4 exhibit remarkable proficiency in medical knowledge tasks.<sup><xref ref-type="bibr" rid="B27">27</xref></sup> These models have shown varying levels of accuracy across diverse topics, suggesting potential applications in specific administrative domains where their knowledge base is most robust.<sup><xref ref-type="bibr" rid="B27">27</xref></sup> Additionally, LLMs have proven highly effective as a tool for academic research, prompting institutions such as the University of Michigan, Harvard University, Washington University, the University of California-Irvine, New York University, and Stanford to develop their own versions of LLMs.<sup><xref ref-type="bibr" rid="B28">28</xref></sup> This trend is likely to continue as AI technologies advance.</p>
<p>However, in recent years, safety concerns regarding LLMs have led to significant legal challenges, most notably the temporary ban of ChatGPT in Italy in 2023, when the Italian Data Protection Authority accused OpenAI of violating EU data protection rules.<sup><xref ref-type="bibr" rid="B29">29</xref></sup> Despite these challenges, the future points toward the global inclusion of ChatGPT and other LLMs within regulatory frameworks. As AI continues to evolve in healthcare settings, the development of robust frameworks for responsible use will become increasingly vital across administrative functions.</p>
<p>Several regulations have been established to address the rapid growth of AI while ensuring stringent controls on its use, including the European Union&#8217;s GDPR,<sup><xref ref-type="bibr" rid="B30">30</xref></sup> California&#8217;s Health Care Services: Artificial Intelligence Act,<sup><xref ref-type="bibr" rid="B31">31</xref></sup> and China&#8217;s Personal Information Protection Law.<sup><xref ref-type="bibr" rid="B32">32</xref></sup> These frameworks aim to strike a balance between innovation and safeguarding data privacy. As Robert M. Califf, former Commissioner of the US FDA, emphasized, regulating large language models is crucial to harnessing their potential while mitigating risks.<sup><xref ref-type="bibr" rid="B33">33</xref></sup> Furthermore, the European Medicines Regulatory Network envisions AI systems functioning as personal assistants to support users with daily workplace tasks, significantly enhancing productivity while maintaining compliance with data protection legislation.<sup><xref ref-type="bibr" rid="B34">34</xref></sup></p>
<p>The ability of AI-based tools, such as LLMs, to improve clinical coordination marks a critical step toward more advanced applications, including AI-assisted patient monitoring and personalized medicine. As the field progresses, responsible innovation and regulatory vigilance will be paramount to realizing the full potential of AI in healthcare.</p>
</sec>
<sec>
<title>8. Conclusion</title>
<p>LLMs present a promising solution for addressing administrative burdens in clinical coordination, including streamlining documentation, regulatory compliance, and patient engagement. An example of this is an exploratory case study that assessed the quality of radiology reports simplified using ChatGPT.<sup><xref ref-type="bibr" rid="B35">35</xref></sup> In the study, ChatGPT was prompted with the instruction: &#8220;<italic>Explain this medical report to a child using simple language</italic>.&#8221; The simplified radiology reports were subsequently evaluated by 15 radiologists, who rated their quality based on factual correctness, completeness, and the potential for patient harm. The results indicated that most reports were complete and factually accurate, with no identified potential for patient harm. However, there were instances in which potential harm could be caused by the reports&#8217; inclusion of incorrect data or the omission of pertinent medical information.<sup><xref ref-type="bibr" rid="B35">35</xref></sup></p>
<p>These findings underscore the need for careful planning, robust oversight, and strict adherence to ethical standards when integrating LLMs into clinical administration. Healthcare organizations must strike a balance between leveraging the capabilities of AI and implementing appropriate safeguards to prevent biases from undermining administrative decision-making, compromising patient care, or exacerbating health disparities.<sup><xref ref-type="bibr" rid="B36">36</xref></sup></p>
<p>By implementing AI responsibly, clinical teams can enhance efficiency, reduce errors, and improve trial outcomes. As AI technologies continue to evolve, LLMs will undoubtedly play an increasingly pivotal role in shaping the future of clinical research and reducing administrative burdens.</p>
</sec>
</body>
<back>
<sec>
<title>Acknowledgements</title>
<p>The authors acknowledge the use of LLMs, such as ChatGPT, to assist in refining research prompts. All content was subsequently reviewed, verified, and edited by the authors for accuracy and appropriateness.</p>
</sec>
<sec>
<title>Competing Interests</title>
<p>The authors have no competing interests to declare.</p>
</sec>
<sec>
<title>Author contributions</title>
<p>The authors contributed equally as co-first authors.</p>
</sec>
<ref-list>
<ref id="B1"><mixed-citation publication-type="journal"><label>1.&#160;</label><string-name><surname>Grillo</surname> <given-names>R</given-names></string-name>. <article-title>The rising tide of artificial intelligence in scientific journals: A profound shift in research landscape</article-title>. <source>European Journal of Therapeutics</source>. <year>2023</year>;<volume>29</volume>(<issue>3</issue>):<fpage>686</fpage>&#8211;<lpage>688</lpage>. DOI: <pub-id pub-id-type="doi">10.58600/eurjther1735</pub-id></mixed-citation></ref>
<ref id="B2"><mixed-citation publication-type="journal"><label>2.&#160;</label><string-name><surname>Khalifa</surname> <given-names>AA</given-names></string-name>, <string-name><surname>Ibrahim</surname> <given-names>M</given-names></string-name>. <article-title>Artificial intelligence (AI) and ChatGPT involvement in scientific and medical writing: A new concern for researchers. A scoping review</article-title>. <source>Arab Gulf Journal of Scientific Research</source>. <year>2024</year>;<volume>42</volume>(<issue>4</issue>):<fpage>1770</fpage>&#8211;<lpage>1787</lpage>. DOI: <pub-id pub-id-type="doi">10.1108/AGJSR-09-2023-0423</pub-id></mixed-citation></ref>
<ref id="B3"><mixed-citation publication-type="webpage"><label>3.&#160;</label><collab>OpenClinica</collab>. <article-title>Clinical trial researchers are burned out, too</article-title>. <source>OpenClinica Blog</source>; <year>2022</year>, <month>April</month> <day>28</day>. <uri>https://www.openclinica.com/blog/clinical-trial-researchers-are-burned-out-too/</uri> Accessed September 26, 2024.</mixed-citation></ref>
<ref id="B4"><mixed-citation publication-type="webpage"><label>4.&#160;</label><collab>World Health Organization</collab>. <article-title>Burn-out an &#8220;occupational phenomenon&#8221;: International Classification of Diseases</article-title>. <source>WHO Newsroom</source>; <year>2019</year>, <month>May</month> <day>28</day>. <uri>https://www.who.int/news/item/28-05-2019-burn-out-an-occupational-phenomenon-international-classification-of-diseases</uri> Accessed September 26, 2024.</mixed-citation></ref>
<ref id="B5"><mixed-citation publication-type="journal"><label>5.&#160;</label><string-name><surname>Sallam</surname> <given-names>M</given-names></string-name>. <article-title>ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns</article-title>. <source>Healthcare</source>. <year>2023</year>;<volume>11</volume>(<issue>6</issue>):<elocation-id>887</elocation-id>. DOI: <pub-id pub-id-type="doi">10.3390/healthcare11060887</pub-id></mixed-citation></ref>
<ref id="B6"><mixed-citation publication-type="webpage"><label>6.&#160;</label><collab>OpenAI</collab>. <source>ChatGPT</source>. <uri>https://openai.com/index/chatgpt/</uri> Accessed September 26, 2024.</mixed-citation></ref>
<ref id="B7"><mixed-citation publication-type="journal"><label>7.&#160;</label><string-name><surname>Mirza</surname> <given-names>FN</given-names></string-name>, <string-name><surname>Tang</surname> <given-names>OY</given-names></string-name>, <string-name><surname>Connolly</surname> <given-names>ID</given-names></string-name>, et al. <article-title>Using ChatGPT to facilitate truly informed medical consent</article-title>. <source>NEJM AI</source>. <year>2024</year>;<volume>1</volume>(<issue>2</issue>). DOI: <pub-id pub-id-type="doi">10.1056/AIcs2300145</pub-id></mixed-citation></ref>
<ref id="B8"><mixed-citation publication-type="journal"><label>8.&#160;</label><string-name><surname>Moons</surname> <given-names>P</given-names></string-name>, <string-name><surname>Van Bulck</surname> <given-names>L</given-names></string-name>. <article-title>ChatGPT: Can artificial intelligence language models be of value for cardiovascular nurses and allied health professionals</article-title>. <source>European Journal of Cardiovascular Nursing</source>. <year>2023</year>;<volume>22</volume>(<issue>7</issue>):<fpage>e55</fpage>&#8211;<lpage>e59</lpage>. DOI: <pub-id pub-id-type="doi">10.1093/eurjcn/zvad022</pub-id></mixed-citation></ref>
<ref id="B9"><mixed-citation publication-type="journal"><label>9.&#160;</label><string-name><surname>Di Bello</surname> <given-names>F</given-names></string-name>, <string-name><surname>Russo</surname> <given-names>E</given-names></string-name>, <string-name><surname>Sartori</surname> <given-names>M</given-names></string-name>. <article-title>Enhancing regulatory affairs in the market placing of new medical devices: How LLMs like ChatGPT may support and simplify processes</article-title>. <source>AboutOpen</source>. <year>2024</year>;<volume>11</volume>(<issue>1</issue>):<fpage>77</fpage>&#8211;<lpage>88</lpage>. DOI: <pub-id pub-id-type="doi">10.33393/ao.2024.3302</pub-id></mixed-citation></ref>
<ref id="B10"><mixed-citation publication-type="webpage"><label>10.&#160;</label><collab>GlobalVision</collab>. <article-title>10 ChatGPT prompts to enhance pharmaceutical proofreading</article-title>. <source>GlobalVision Blog</source>; <year>2025</year>, <month>March</month> <day>25</day>. <uri>https://www.globalvision.co/blog/10-chatgpt-prompts-to-enhance-pharmaceutical-proofreading</uri> Accessed March 30, 2025.</mixed-citation></ref>
<ref id="B11"><mixed-citation publication-type="journal"><label>11.&#160;</label><string-name><surname>Jin</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Floudas</surname> <given-names>CS</given-names></string-name>, et al. <article-title>Matching patients to clinical trials with large language models</article-title>. <source>Nature Communications</source>. <year>2024</year>;<volume>15</volume>:<elocation-id>9074</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1038/s41467-024-53081-z</pub-id></mixed-citation></ref>
<ref id="B12"><mixed-citation publication-type="journal"><label>12.&#160;</label><string-name><surname>Lee</surname> <given-names>PY</given-names></string-name>, <string-name><surname>Salim</surname> <given-names>H</given-names></string-name>, <string-name><surname>Abdullah</surname> <given-names>A</given-names></string-name>, <string-name><surname>Teo</surname> <given-names>CH</given-names></string-name>. <article-title>Use of ChatGPT in medical research and scientific writing</article-title>. <source>Malaysian Family Physician</source>. <year>2023</year>;<volume>18</volume>:<elocation-id>58</elocation-id>. DOI: <pub-id pub-id-type="doi">10.51866/cm0006</pub-id></mixed-citation></ref>
<ref id="B13"><mixed-citation publication-type="webpage"><label>13.&#160;</label><string-name><surname>Morrow</surname> <given-names>SP</given-names></string-name>. <article-title>3 examples of grant budgets that will win over funders (with examples)</article-title>. <source>Instrumentl Blog</source>; <year>2024</year>. <uri>https://www.instrumentl.com/blog/grant-budget-examples</uri> Accessed March 30, 2025.</mixed-citation></ref>
<ref id="B14"><mixed-citation publication-type="journal"><label>14.&#160;</label><string-name><surname>Hsu</surname> <given-names>M-H</given-names></string-name>. <article-title>Mastering medical terminology with ChatGPT and Termbot</article-title>. <source>Health Education Journal</source>. <year>2024</year>;<volume>83</volume>(<issue>4</issue>):<fpage>352</fpage>&#8211;<lpage>358</lpage>. DOI: <pub-id pub-id-type="doi">10.1177/00178969231197371</pub-id></mixed-citation></ref>
<ref id="B15"><mixed-citation publication-type="webpage"><label>15.&#160;</label><string-name><surname>Tully</surname> <given-names>MJ</given-names></string-name>. <article-title>Revolutionizing healthcare simulation: ChatGPT&#8217;s impact and potential</article-title>. <source>Healthy Simulation</source>; <year>2023</year>. <uri>https://www.healthysimulation.com/healthcare-simulation-chatgpt/</uri> Accessed March 15, 2025.</mixed-citation></ref>
<ref id="B16"><mixed-citation publication-type="journal"><label>16.&#160;</label><string-name><surname>Masters</surname> <given-names>K</given-names></string-name>. <article-title>Medical Teacher&#8217;s first ChatGPT&#8217;s referencing hallucinations: Lessons for editors, reviewers, and teachers</article-title>. <source>Medical Teacher</source>. <year>2023</year>;<volume>45</volume>(<issue>7</issue>):<fpage>673</fpage>&#8211;<lpage>675</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/0142159X.2023.2208731</pub-id></mixed-citation></ref>
<ref id="B17"><mixed-citation publication-type="journal"><label>17.&#160;</label><string-name><surname>Gupta</surname> <given-names>K</given-names></string-name>, <string-name><surname>O&#8217;Brien</surname> <given-names>W</given-names></string-name>, <string-name><surname>Strymish</surname> <given-names>J</given-names></string-name>. <article-title>104. Accuracy of ChatGPT for UTI diagnosis and management questions</article-title>. <source>Open Forum Infectious Diseases</source>. <year>2023</year>;<volume>10</volume>(<issue>Suppl 2</issue>):<elocation-id>ofad500.020</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1093/ofid/ofad500.020</pub-id></mixed-citation></ref>
<ref id="B18"><mixed-citation publication-type="journal"><label>18.&#160;</label><string-name><surname>Gravina</surname> <given-names>AG</given-names></string-name>, <string-name><surname>Pellegrino</surname> <given-names>R</given-names></string-name>, <string-name><surname>Cipullo</surname> <given-names>M</given-names></string-name>, et al. <article-title>May ChatGPT be a tool producing medical information for common inflammatory bowel disease patients&#8217; questions? An evidence-controlled analysis</article-title>. <source>World Journal of Gastroenterology</source>. <year>2024</year>;<volume>30</volume>(<issue>1</issue>):<fpage>17</fpage>&#8211;<lpage>33</lpage>. DOI: <pub-id pub-id-type="doi">10.3748/wjg.v30.i1.17</pub-id></mixed-citation></ref>
<ref id="B19"><mixed-citation publication-type="journal"><label>19.&#160;</label><string-name><surname>Alkaissi</surname> <given-names>H</given-names></string-name>, <string-name><surname>McFarlane</surname> <given-names>SI</given-names></string-name>. <article-title>Artificial hallucinations in ChatGPT: Implications in scientific writing</article-title>. <source>Cureus</source>. <year>2023</year>;<volume>15</volume>(<issue>2</issue>):<elocation-id>e35179</elocation-id>. DOI: <pub-id pub-id-type="doi">10.7759/cureus.35179</pub-id></mixed-citation></ref>
<ref id="B20"><mixed-citation publication-type="webpage"><label>20.&#160;</label><string-name><surname>Zhang</surname> <given-names>H</given-names></string-name>, <string-name><surname>Lou</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Y</given-names></string-name>. <article-title>Towards safe AI clinicians: A comprehensive study on large language model jailbreaking in healthcare</article-title>. <source>arXiv Preprint</source>; <year>2025</year>. <uri>https://arxiv.org/abs/2501.18632</uri> Accessed April 20, 2025.</mixed-citation></ref>
<ref id="B21"><mixed-citation publication-type="journal"><label>21.&#160;</label><string-name><surname>Schopow</surname> <given-names>N</given-names></string-name>, <string-name><surname>Osterhoff</surname> <given-names>G</given-names></string-name>, <string-name><surname>Baur</surname> <given-names>D</given-names></string-name>. <article-title>Applications of the natural language processing tool ChatGPT in clinical practice: Comparative study and augmented systematic review</article-title>. <source>JMIR Medical Informatics</source>. <year>2023</year>;<volume>11</volume>: <elocation-id>e48933</elocation-id>. DOI: <pub-id pub-id-type="doi">10.2196/48933</pub-id></mixed-citation></ref>
<ref id="B22"><mixed-citation publication-type="journal"><label>22.&#160;</label><string-name><surname>Adam</surname> <given-names>H</given-names></string-name>, <string-name><surname>Balagopalan</surname> <given-names>A</given-names></string-name>, <string-name><surname>Alsentzer</surname> <given-names>E</given-names></string-name>, et al. <article-title>Mitigating the impact of biased artificial intelligence in emergency decision-making</article-title>. <source>Communications Medicine</source>. <year>2022</year>;<volume>2</volume>:<elocation-id>149</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1038/s43856-022-00214-4</pub-id></mixed-citation></ref>
<ref id="B23"><mixed-citation publication-type="journal"><label>23.&#160;</label><string-name><surname>Mu</surname> <given-names>L J</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>TT</given-names></string-name>, <string-name><surname>Miao</surname> <given-names>YD</given-names></string-name>. <article-title>Advancements in AI-driven oncology: Assessing ChatGPT&#8217;s impact from GPT-3.5 to GPT-4o</article-title>. <source>International Journal of Surgery</source>. <year>2025</year>;<volume>111</volume>(<issue>1</issue>):<fpage>1669</fpage>&#8211;<lpage>1670</lpage>. DOI: <pub-id pub-id-type="doi">10.1097/JS9.0000000000001989</pub-id></mixed-citation></ref>
<ref id="B24"><mixed-citation publication-type="journal"><label>24.&#160;</label><string-name><surname>Biassoni</surname> <given-names>F</given-names></string-name>, <string-name><surname>Gnerre</surname> <given-names>M</given-names></string-name>. <article-title>Exploring ChatGPT&#8217;s communication behaviour in healthcare interactions: A psycholinguistic perspective</article-title>. <source>Patient Education and Counseling</source>. <year>2025</year>;<volume>134</volume>:<elocation-id>108663</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1016/j.pec.2025.108663</pub-id></mixed-citation></ref>
<ref id="B25"><mixed-citation publication-type="journal"><label>25.&#160;</label><string-name><surname>Ganjavi</surname> <given-names>C</given-names></string-name>, <string-name><surname>Eppler</surname> <given-names>MB</given-names></string-name>, <string-name><surname>Pekcan</surname> <given-names>A</given-names></string-name>, <string-name><surname>Biedermann</surname> <given-names>B</given-names></string-name>, <string-name><surname>Abreu</surname> <given-names>A</given-names></string-name>, <string-name><surname>Collins</surname> <given-names>GS</given-names></string-name>, <string-name><surname>Gill</surname> <given-names>IS</given-names></string-name>, <string-name><surname>Cacciamani</surname> <given-names>GE</given-names></string-name>. <article-title>Publishers&#8217; and journals&#8217; instructions to authors on use of generative artificial intelligence in academic and scientific publishing: Bibliometric analysis</article-title>. <source>BMJ</source>. <year>2024</year>;<volume>384</volume>:<elocation-id>e077192</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1136/bmj-2023-077192</pub-id></mixed-citation></ref>
<ref id="B26"><mixed-citation publication-type="journal"><label>26.&#160;</label><collab>Nature Editorial</collab>. <article-title>Tools such as ChatGPT threaten transparent science; Here are our ground rules for their use</article-title>. <source>Nature</source>. <year>2023</year>;<volume>613</volume>(<issue>7945</issue>):<elocation-id>612</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1038/d41586-023-00191-1</pub-id></mixed-citation></ref>
<ref id="B27"><mixed-citation publication-type="journal"><label>27.&#160;</label><string-name><surname>Angel</surname> <given-names>MC</given-names></string-name>, <string-name><surname>Rinehart</surname> <given-names>JB</given-names></string-name>, <string-name><surname>Cannesson</surname> <given-names>MP</given-names></string-name>, <string-name><surname>Baldi</surname> <given-names>P</given-names></string-name>. <article-title>Clinical knowledge and reasoning abilities of AI large language models in anesthesiology: A comparative study on the American Board of Anesthesiology examination</article-title>. <source>Anesthesia &amp; Analgesia</source>. <year>2024</year>;<volume>139</volume>(<issue>2</issue>):<fpage>349</fpage>&#8211;<lpage>356</lpage>. DOI: <pub-id pub-id-type="doi">10.1213/ANE.0000000000006892</pub-id></mixed-citation></ref>
<ref id="B28"><mixed-citation publication-type="webpage"><label>28.&#160;</label><string-name><surname>Coffey</surname> <given-names>L</given-names></string-name>. <article-title>Universities build their own ChatGPT-like AI tools</article-title>. <source>Inside Higher Ed</source>; <year>2024</year>, <month>March</month> <day>21</day>. <uri>https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/03/21/universities-build-their-own-chatgpt-ai</uri> Accessed April 20, 2025.</mixed-citation></ref>
<ref id="B29"><mixed-citation publication-type="webpage"><label>29.&#160;</label><string-name><surname>Vincent</surname> <given-names>J</given-names></string-name>. <article-title>OpenAI&#8217;s regulatory troubles are just beginning</article-title>. <source>The Verge</source>; <year>2023</year>, <month>May</month> <day>5</day>. <uri>https://www.theverge.com/2023/5/5/23709833/openai-chatgpt-gdpr-ai-regulation-europe-eu-italy</uri> Accessed April 20, 2025.</mixed-citation></ref>
<ref id="B30"><mixed-citation publication-type="webpage"><label>30.&#160;</label><collab>European Union</collab>. <source>General Data Protection Regulation (GDPR)</source>. EUR-Lex. <uri>https://eur-lex.europa.eu/EN/legal-content/summary/general-data-protection-regulation-gdpr.html</uri> Accessed April 20, 2025.</mixed-citation></ref>
<ref id="B31"><mixed-citation publication-type="webpage"><label>31.&#160;</label><collab>California Legislative Information</collab>. <source>AB-3030 Health care services: Artificial intelligence</source>. <uri>https://leginfo.legislature.ca.gov/</uri> Accessed April 20, 2025.</mixed-citation></ref>
<ref id="B32"><mixed-citation publication-type="webpage"><label>32.&#160;</label><collab>Personal Information Protection Law</collab>. <source>Personal Information Protection Law (PIPL)</source>. <uri>https://personalinformationprotectionlaw.com/</uri> Accessed April 20, 2025.</mixed-citation></ref>
<ref id="B33"><mixed-citation publication-type="webpage"><label>33.&#160;</label><collab>U.S. Food and Drug Administration</collab>. <source>Speech by Robert M. Califf, M.D. at the National Health Council&#8217;s 2023 Science of Patient Engagement Symposium: The patient at the center</source>; <year>2023</year>, <month>May</month> <day>9</day>. <uri>https://www.fda.gov/news-events/speeches-fda-officials/speech-robert-m-califf-md-national-health-councils-2023-science-patient-engagement-symposium-patient</uri> Accessed September 26, 2024.</mixed-citation></ref>
<ref id="B34"><mixed-citation publication-type="webpage"><label>34.&#160;</label><collab>European Medicines Agency</collab>. <article-title>Artificial intelligence: Workplan to guide use of AI in medicines regulation</article-title>. <source>EMA News</source>; <year>2023</year>, <month>December</month> <day>18</day>. <uri>https://www.ema.europa.eu/en/news/artificial-intelligence-workplan-guide-use-ai-medicines-regulation</uri> Accessed April 20, 2025.</mixed-citation></ref>
<ref id="B35"><mixed-citation publication-type="journal"><label>35.&#160;</label><string-name><surname>Jeblick</surname> <given-names>K</given-names></string-name>, <string-name><surname>Schachtner</surname> <given-names>B</given-names></string-name>, <string-name><surname>Dexl</surname> <given-names>J</given-names></string-name>, et al. <article-title>ChatGPT makes medicine easy to swallow: An exploratory case study on simplified radiology reports</article-title>. <source>Eur Radiol</source>. <year>2024</year>;<volume>34</volume>:<fpage>2817</fpage>&#8211;<lpage>2825</lpage>. DOI: <pub-id pub-id-type="doi">10.1007/s00330-023-10213-1</pub-id></mixed-citation></ref>
<ref id="B36"><mixed-citation publication-type="journal"><label>36.&#160;</label><string-name><surname>Nair</surname> <given-names>A</given-names></string-name>, <string-name><surname>Dsouza</surname> <given-names>KD</given-names></string-name>, <string-name><surname>Reshmi</surname> <given-names>B</given-names></string-name>. <article-title>Artificial intelligence &amp; sustainable development goals in health care</article-title>. <source>Indian Journal of Community Medicine</source>. <year>2024</year>;<volume>49</volume>(<issue>Suppl 1</issue>):<elocation-id>S121</elocation-id>. DOI: <pub-id pub-id-type="doi">10.4103/ijcm.ijcm_abstract422</pub-id></mixed-citation></ref>
</ref-list>
</back>
</article>