Education and professional development

The implementation of a data learning series focused on clinical development teams in a contract research organization

Authors: Weil, S. A., Crumpler, A., Medendorp, S. V.

Abstract

Effective management of a clinical trial requires real-time access to information that provides useful insights into trial progress and lends itself to collaborative decision making. Data visualizations that draw on data from the multiple source systems employed during the conduct of a clinical trial have recently become an essential tool supporting collaborative decision making by project teams. The ability to access, analyze, read, work with, and present data in support of an argument is an important skill set that ensures data visualizations fulfill their purpose in clinical trial management. Members of the clinical trial team are expected either to possess or to develop the data literacy skills necessary to collaborate on the successful execution of a clinical drug development trial. Here we describe the development of a Data Learning Series program targeted at increasing data literacy skills within a Contract Research Organization in support of the digital evolution of the drug development industry.

Keywords: Collect data, Abstract Data, Measure / Observe, Record Data, Data Collection Structure, Measurement Method, Identify Data, Measure Data Quality, Data Entry, Export, Integrate (reconcile), Transform

How to Cite: Weil, S. A., Crumpler, A. & Medendorp, S. V. (2022) “The implementation of a data learning series focused on clinical development teams in a contract research organization”, Journal of the Society for Clinical Data Management. 2(1). doi: https://doi.org/10.47912/jscdm.39

Introduction

Clinical trials have grown in operational complexity, or “the aspects of a clinical trial that may be difficult to implement according to the timeline or procedures outlined.”1,2,3,4 The digitization of healthcare has contributed to this complexity due to the exponential growth in data available during a clinical trial.5 With this increased complexity comes increased expense. While key cost drivers of clinical trials in the United States vary between therapeutic areas, they can include clinical procedure costs, administrative staff costs, and study monitoring.6 Containing costs through effective management of study initiation, patient recruitment, data capture, and safety monitoring is a key role of clinical trial sponsors and their contract research organization (CRO) partners. Clinical trial management has been found to benefit from tools designed to enhance timely decision making focused on outcomes, quality, and patient safety.7,8

Enhanced data capabilities, such as real time access to information that provides useful insights into trial progress, may increase clinical trial success as measured by time, cost, and quality. Dashboards have been developed and implemented for decision making with varying results in both effectiveness and adoption.9,10 Optimal dashboard design that includes graphical visualizations provides support for teams by enabling them to make decisions with a high level of accuracy and confidence.11 An advantage of graphical visualizations is that, in general, the human brain is better at pattern recognition than it is at comprehending complex data and statistical models.12 While limits exist in human information processing capacity, following good design principles for visualizations can improve the user experience. Experts in the field of information visualization describe the appropriate use of pre-attentive attributes in data display design, including form, position, and color.13,14 Other experts in the field propose ways that visualizations can amplify cognition, including reducing the search for information and using visual representations to enhance the detection of patterns.15,16

In clinical trials, decision making has been studied from the trial team perspective.17,18,19 Delays in effective decision making can negatively affect the time, cost, and quality of a clinical trial.20,6,21 In current practice, more than half of all clinical trial sponsors rely on spreadsheets to assess progress, compared with one third of CROs.21 The proliferation of spreadsheets as the primary tool for study management makes the aggregation and visualization of data across programs difficult and results in missed opportunities to detect signals and trends that would be evident in dashboard visualizations. The static nature of a spreadsheet means that individual team members must forage for information that is already outdated by the time they have extracted, entered, and examined it.22 Frequent use of spreadsheets also invites data quality issues from transcription errors or inadvertent, uncontrolled changes. For these reasons, improving the provision of information to teams has become a focus of the industry. In comparison to spreadsheets, data visualizations also increase situation awareness and offer an opportunity to advance analytic proficiency gradually through the seamless introduction of more advanced analytics.

However, the effectiveness of providing information through visualization techniques depends on the user’s level of data and graphical literacy. While the clinical research data management profession has continued to examine and refine its competencies and professional certification,23 data-related competencies for the broader clinical trial management team do not include basic data literacy or the use of data for trial management decision making, both of which are required for success in their roles.24 Data literacy includes the ability to access, analyze, interrogate, and communicate with data. Graphical literacy is the ability to understand, produce, and use relevant images and objects to promote actions.25 With recent movements toward reconfiguring and renaming existing trial support teams into data science teams,1 there is a greater need for both a general understanding of the lifecycle of clinical trial data points collected during a trial and a more comprehensive approach to addressing data competencies. This paper describes the development, implementation, and initial evaluation of a data learning series for clinical trial teams.

Background

Evidence exists demonstrating the benefit of data training, whether it is specific to data analytics or data literacy in general.26,27,28,29 A large volume of available information and recommendations also exists on data literacy curricula. Searching the key terms “data literacy” and “curriculum recommendations” from 2017 to the present yields 20,000 results on Google Scholar and 116 results in a more refined search in the PubMed healthcare domain. Searching for universities that offer healthcare analytics, data analytics, or health data science programs results in a robust list that grows larger when including certificate programs. However, there is minimal evidence specific to the clinical trial domain, where employees are often expected to already have data knowledge or to have gained it somewhere other than a formal, organization-supported program. While opportunities for formal upskilling in the public domain are plentiful, in the drug development industry it can be difficult for organizations to consider investing in a company-sponsored program to enhance the skills of their staff in a rapidly changing environment. Tying the program to the organization’s data maturity model goals and setting expectations for future benefits is critical to gaining support.

In the healthcare sector in general, it has been recognized that data literacy and analytics enable the use of multiple forms of organizational data and enhance quality and risk management scoring within those organizations.30,31,32 Recent work in healthcare data analytics shows that employees enrolled in a data analytics training program demonstrated improved skill over time, which suggests that organizations striving to successfully implement a data analytics infrastructure could benefit from a well-designed training program.27 Collaborating across seven institutions, Kreuter and colleagues developed and implemented a training program focused on teaching public sector staff data analytics.28 Their findings suggest that the program accelerates the technical and analytical development of the trainees.28 The researchers point out that collaboration with outside consultants cannot replace investment in an internal program as data continues to become more complex and specific to particular domains.28 Kreuter and colleagues’ program appears well designed and, as described in their paper summarizing the program, included both academic and hands-on working sessions, like the program described here.28 The measurements described in their paper were also similar, including applicability to work roles and overall satisfaction. Their summary indicates that tracking long-term effects over time is important, which matches our program’s model.

Description of the program

The need for a data learning program intervention stemmed from our organization’s Data Governance Council (the Council) (Figure 1), in concert with the Council’s continuous monitoring of our organization’s data maturity through use of a formal index. The Council includes the leaders most closely aligned with the organization’s master data. The organization discussed here is a mid-size CRO focused primarily on assisting biotechnology clients with the development of their assets. Key to advancing our maturity was a combination of organizing and understanding our data assets and the meaning of the information generated from those assets. The change in cultural expectations related to information as an asset, in combination with organizational leadership support focused on the changing industry expectations of data expertise, drove us to develop and implement our data learning series.

Figure 1

Data Governance Council Structure.

Note: Gray boxes denote DLS team members.

A companion instruction strategy incorporated standard data-related materials from a concept and theory perspective and then organized the materials to best relate them to the work being done within a CRO. Current circumstances, mainly related to the global pandemic, dictated that the program be built as a virtual offering.

To establish a baseline for data literacy within the organization, the first class of participants was randomly chosen by function in proportion to the size of each department. The participant list was prepared by the organization’s human resources department using a funnel technique that initially targeted all global employees who were members of a function supporting clinical development services. The list was then narrowed to a proposed class size of 100 participants, distributed equally between regions while keeping the number of participants proportional to function size. No other criteria, such as time in organization, gender, level of education, or hierarchy, were considered. The selection methodology produced a sample of participants that mirrored the organization’s structure and composition. Classes after the pilot session were designed to be filled by a nomination process.
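
The paper does not describe the tooling used for this funnel; as an illustration only, the sketch below shows one way such a proportional, region-balanced draw could be scripted. The roster file, column names, and seat-allocation details are assumptions, not the organization’s actual HR process.

```python
# Minimal sketch (hypothetical roster and columns) of selecting a pilot class
# in proportion to function size while spreading seats across regions.
import pandas as pd

TARGET_CLASS_SIZE = 100

# Hypothetical roster of global employees supporting clinical development services.
roster = pd.read_csv("employee_roster.csv")  # columns: employee_id, function, region

# Seats allotted to each function, proportional to its head count.
seats = (roster["function"].value_counts(normalize=True) * TARGET_CLASS_SIZE).round().astype(int)

selected = []
for function, n_seats in seats.items():
    pool = roster[roster["function"] == function]
    per_region = max(1, n_seats // pool["region"].nunique())
    # Draw each function's seats as evenly as possible from every region.
    picks = pool.groupby("region", group_keys=False).apply(
        lambda g: g.sample(min(len(g), per_region), random_state=42)
    )
    selected.append(picks)

pilot_class = pd.concat(selected).head(TARGET_CLASS_SIZE)
print(pilot_class.groupby(["function", "region"]).size())
```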

The pilot program emerged as a six-week, six-module virtual offering (Table 1) consisting of mainly self-directed materials. The modules were supported with live interactive application sessions that participants could choose from based on their schedules, and recordings were available if their schedules did not permit live attendance. The self-directed materials included a combination of short videos, pre-recorded presentations, and activity assignments. Each module was followed by a knowledge check to capture the effectiveness of the materials as well as participants’ progress in the course. The presentations delivered industry-related data concepts, while the application sessions focused on using data within case study scenarios to achieve an outcome. The case study discussions were focused on the use of the situation, background, assessment, and recommendation (SBAR) tool, which evolved across the modules as more information was presented.33,34 The SBAR facilitates quick, concise, and effective transmission of information between team members. Gaining proficiency with the method requires practice, including role play, which lends itself to a program focused on data literacy. A final case study activity was collected after the last module to support effective assessment of the program. Additional program evaluation, along with participant progress, was completed through the administration of a pre- and post-course assessment that included both objective and subjective measures further discussed in the results section below. The company’s learning management system (Cornerstone OnDemand) was utilized to administer the eLearning modules, schedule instructor-led sessions, facilitate knowledge checks, and report learning data. The program also included technical, administrative, and learning support provided through a dedicated SharePoint™ site and a mentor program designed to augment the formal materials.

Table 1

Data Learning Series Module Summary.

Module Objectives
Module 1: Data and Organizational Value
  • Understanding the drivers of clinical transformation

  • Measuring value from the customer and patient perspective

  • How we measure quality

  • How analytics can transform the industry

Module 2: The Data Driven Organization
  • Data and knowledge

  • Information as an asset

  • Effective analysis of performance

  • Attributes of quality

Module 3: Understanding the Mechanics of Data
  • Contextual properties of data

  • Data types

  • Introductory statistical measures

  • Numerical measures

  • Graphical representations of data important to determining our success

Module 4: Systems and Technology Supporting a Data Infrastructure
  • Data and systems terminology

  • Data attributes

  • The process of data analysis

  • Databases and data types common in the clinical research environment

  • The data lifecycle in a Contract Research Organization

  • Data visualizations to evaluate study progress

Module 5: Using Data to Measure Effectiveness
  • Key Performance Indicators

  • Benchmarking

  • Measuring process cycle times in clinical research

  • Process methodologies common in the industry

Module 6: Communicating with Data to Achieve Intended Outcomes
  • Effective data displays

  • Achieving outcomes using data

  • Communication for different audiences using data

Methods

The team used the analyze, design, develop, implement, and evaluate (ADDIE) model, traditionally used by instructional designers and training developers, to build the program.35 The team first identified the goal and objectives of the program. Its goal was to raise the bar on organizational data literacy. Its objectives included the following:

Evaluation and assessment of our program effectiveness was performed using the Kirkpatrick model (Figure 2).36 The first two levels of a program’s effectiveness are easier to measure than levels three and four. Although there can be some challenges with response rates for levels one and two, the measurements are relatively easy to collect in a timely manner and do not require special training for those involved. Most organizations seek a larger return on a formal training investment, such as significant changes in culture or positive organizational outcomes resulting from application of the learned information as assessed in Kirkpatrick levels three and four.

Figure 2

Measurement of Success. Figure adapted from the Kirkpatrick Model of Evaluation.36

Level three involves behaviour change, which is largely dependent on targeted manager feedback or observations, such as team members using data to support recommendations. Level four involves organizational measures that are indicative of a broader change and are associated with a positive impact on culture and specific business goals. An example of level four results comes from Brynjolfsson and colleagues, who report that organizations employing data-driven decision making demonstrate productivity 5–6% higher than other firms.30 The authors further state that their results demonstrate that data-driven decision making enhances measures related to asset utilization, return on equity, and market value.30 Another example of a high-level result comes from Trkman and colleagues, who found a relationship between analytical capabilities and performance across 310 companies from different industries in five countries, including the US.32

Careful consideration must go into linking program goals to organizational outcomes, along with how those will be measured and compared to the organization’s business goals. A formal return on investment (ROI) analysis was completed, which enabled the development team to focus on which measurements would be monitored over a specific period to assess the impact of the program. The evaluation of the program is continuous and in sync with the completion of each class. Classes are expected to be delivered once per quarter, leaving time for evaluation and adjustment if necessary.

Results

During the pilot period, 276 participants across 3 classes were enrolled in the 6-week course. Of those, 172 (62%) completed all 6 weeks, with 143 (52%) completing their final activity. Each module of the course built on the previous one and included a synchronous application session. If participants missed sessions due to vacation or work requirements, it was difficult to make up that portion and to continue in the course. The original class (randomly chosen) was representative of departmental proportions in the company, but classes two and three (nominated) had strong representation from the clinical and biometrics departments based on the requirements of those positions. Class representation was balanced across age groups and years of experience, and proportional to the size of each regional office (North America and Europe) (Figure 3).
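
The completion figures quoted above follow from simple proportions of the 276 enrollees; the short calculation below reproduces them (totals only, since the per-class breakdown is not reported here).

```python
# Reproducing the reported pilot-period completion rates (totals only;
# the per-class breakdown is not given in the text).
enrolled = 276
completed_all_weeks = 172
completed_final_activity = 143

print(f"Completed all 6 weeks:    {completed_all_weeks / enrolled:.0%}")       # ~62%
print(f"Completed final activity: {completed_final_activity / enrolled:.0%}")  # ~52%
```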

Figure 3

Number of Program Participants by Age Category, Region, and Years of Experience.

To capture level one (reaction) outcomes, a survey (Figure 4) was sent electronically to the participants asking them to describe in free text what they most appreciated about the program as well as what could be improved. The free-text responses were examined initially using a word cloud (Figure 5, Figure 6) and then discussed in more detail. The survey also captured ratings across 6 domains using a 5-level Likert scale (Figure 7): strongly disagree = 1, disagree = 2, neutral = 3, agree = 4, and strongly agree = 5. Results were generally positive, although compliance with completing the survey was not as robust as planned. Participants received one reminder to complete the survey. The weekly frequency of the surveys may have been disruptive to workflow and led some participants to ignore them. Future classes will include one survey at the end of the program.
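
The paper does not state which tool generated the word clouds or summarized the Likert ratings. A minimal sketch, assuming the Python wordcloud package and a hypothetical export of the survey responses, shows one way both summaries could be produced.

```python
# Minimal sketch of the level-one survey analysis (hypothetical CSV export;
# the paper does not specify the tooling used).
import pandas as pd
import matplotlib.pyplot as plt
from wordcloud import WordCloud

responses = pd.read_csv("level1_survey.csv")  # hypothetical columns: module, domain, rating, liked_most

# Word cloud of free-text "what did you like most" responses (cf. Figures 5-6).
text = " ".join(responses["liked_most"].dropna())
cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()

# Average Likert rating (1 = strongly disagree ... 5 = strongly agree)
# by module and domain, as summarized in Figure 7.
avg_ratings = responses.groupby(["module", "domain"])["rating"].mean().unstack()
print(avg_ratings.round(2))
```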

Figure 4

Participant Survey-Data Learning Program.

Figure 5

Word Cloud Data Analysis Illustrating Participant Opinion of Most Liked Course Elements. (Larger type represents most mentioned topics. Color is added for visual interest).

Figure 6

Word Cloud Data Analysis Illustrating Participant Opinion on Course Improvement. (Larger type represents most mentioned topics. Color is added for visual interest).

Figure 7

Average Scoring by Category by Module.

Level two (learning) evaluation of the program consisted of the six end-of-module knowledge checks, the pre- and post-course assessment, and the final case study written activity. The pre- and post-course assessment included a validated numeracy skill measurement from the health literature, covering both objective and subjective perspectives.37,38 Neither the knowledge checks nor the pre- and post-course assessment were proctored or timed. Further, participants were given multiple chances to reach a scoring threshold on the knowledge checks. In general, the participants scored high on both the pre- and post-course numeracy assessment, leaving little room for demonstrated improvement. Evidence in the literature suggests that the subjective assessment alone is a good substitute for the objective assessment.37 For the pilot, our interest was in knowing whether the participants’ perception of their ability matched the objective assessment, which proved true (Figure 8). For subsequent classes, we will pilot an alternative instrument that is more sensitive to visualization literacy than to numeracy. While it would be useful to use both instruments, workload and time constraints may be prohibitive.
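
Figure 8 examines whether participants’ subjective perception of their numeracy tracked their objective scores; the statistic used is not specified in the paper. A minimal sketch, assuming hypothetical score columns and a rank correlation, follows.

```python
# Minimal sketch of checking the subjective-vs-objective relationship (Figure 8).
# The file, column names, and choice of statistic are assumptions.
import pandas as pd

scores = pd.read_csv("numeracy_scores.csv")  # hypothetical: participant_id, subjective, objective

# Spearman rank correlation tolerates the bounded, ordinal scale scores.
rho = scores["subjective"].corr(scores["objective"], method="spearman")
print(f"Subjective vs. objective numeracy, Spearman rho = {rho:.2f}")
```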

Figure 8

Relationship Between Subjective and Objective Scores.

Level three (behaviour) evaluation relied on line manager participation. Engagement was accomplished through an initial meeting between the program owners and line managers to review course details. Weekly guided learning agendas were provided detailing the objectives for each module along with examples and suggested activities to reinforce learning. Example questions for managers to use included the following: “What was your biggest takeaway from this week’s module?” “What is new learning for you from this module?” “How are you applying your learning?” Included in the agendas were knowledge check questions for line managers to check their own knowledge. The managers were also provided access to the data learning series SharePoint site where they could review materials, engage in discussion groups with participants, or take advantage of the private manager area on the site that contained tools and materials relevant to supporting the participants in their learning journey. Line manager feedback was captured through both a periodic survey and a post-course focus group interview. The survey asked about behaviour change, such as using data to make recommendations or to solve problems. The focus group feedback centred on the program participants’ desire to have content learning relate directly to on-the-job application. Additional feedback included praise for the program and its value to individuals and the organization. Variation in manager engagement was not examined but would be an interesting outcome to measure against improvement in course participant scores or career progress within the organization.

Finally, level four (results) evaluation is based on measures included in the formal ROI. In general, level four is captured over a period that exceeds the time frame during which the written description of the project was completed. Level four measures are also difficult to isolate as responses to one single intervention. The measures we will periodically examine include a reduction in employee turnover, improvement in employee data presentation skills at key meetings related to business outcomes, dashboard user metrics, and quality deliverable measurements, such as project key performance indicators (KPIs).

Discussion and Recommendations

The pilot program has prompted some revisions based on timing, participant feedback, workload, and effectiveness of the materials as determined by the module knowledge checks and assessments. For example, due to time off conflicts, summer is typically not the best time for employees to enrol in a six-week program. Additionally, module content has been re-evaluated and either reduced or expanded depending on course outcomes and to better accommodate participants’ work schedules. It is expected that after several iterations the course will still be evaluated but less frequently and with fewer adjustments, depending on the direction of the industry in general. Regular course adjustments also mean each class can only be evaluated in isolation. Once the course is standardized and not experiencing measurable change, we anticipate being able to combine class evaluations into a larger data set. We also anticipate future development of advanced coursework based on employee requests and job performance.

Completion rates reflect prioritization of workload

Given the nature and necessity of delivering a self-directed, online program, completion rates for this learning program were consistent with our team’s combined years of industry learning and development experience and can be expected to hover around 60%. Based on workload or other competing priorities, both the randomly selected pilot class and the subsequent self-selected or nominated participants struggled in some cases to keep up with the program or to complete it as designed. While a self-paced, on-demand program may prove to be the most reasonable future direction, the trade-off lies in the rapidly evolving digitization of the industry and employees’ ability to keep up with that change and remain relevant in their roles. We found that participants varied in how they valued the program: some declined to participate or dropped out early, while others kept up by completing modules outside of work hours, including during time off. Value perception did not align clearly with any demographic or role, with most functions represented in all non-completion reason categories (Figure 9).

Figure 9

Data Learning Program Functional Group Non-completers by Reason as Percentage of Whole.

Establish a robust evaluation process to prove effectiveness

Using preliminary data captured based on the Kirkpatrick model, we found good acceptance at level one (reaction), as shown in Figure 4. Each module was rated by the participants on a scale from 1 (lowest) to 5 (highest) in the following categories: relevance to their position and the company, knowledge transfer, motivating potential, course satisfaction, and whether they would recommend it to others. The average ratings across all modules and categories were consistently between 4 and 4.5. The highest rated module (with average ratings across categories from 4.5 to 4.75) was the last module, which outlined how to tell a story with data. The most challenging module (with average ratings between 3.5 and 4) was the module that discussed data as an organizational asset (Figure 7). A key limitation was the number of participants who chose to complete the module evaluations and to respond with both ratings and free-text comments. Workload priorities and time management were noted as challenges to completion rates.

Level two (learning) data clarified that the pre- and post-course assessment measured data literacy incompletely because participants scored high from the outset. The final activity, which required both synthesis of information and communication skills, indicated an opportunity for future course work and suggested a more relevant pre- and post-course assessment was required.

Level three (behaviour) data has been somewhat sparse and difficult to obtain at this stage in the program. Less than one third of the managers surveyed or invited to the focus groups participated. Although we did establish a process for measuring behaviour change (explained in the Methods section), we do not yet have enough data to reach a reliable conclusion beyond a weak positive trend based on manager comments. It is unknown whether the lack of robust feedback was due to increasing demands on managers’ time or to managers who were not invested in the coursework. We have had a substantial increase in nominations and requests for additional classes. As part of our future direction, we will evaluate our manager engagement process to improve outcomes data collection. Another important consideration for improving feedback on participant behaviours would be making program administration a higher priority for line managers. Tying the program to increases in data maturity index levels would also demonstrate a positive effect; for example, capturing awareness of the program and increasing the number of managers and leaders who enroll in and complete the program would contribute to specific areas of the maturity index. Our intention was to balance the time required to attend the training against the organization-wide priority of increasing data skills, but we may have overestimated either the level of data expertise of those we depended on for program support or their ability to support an evolution of skill sets within the industry.

Level four (results) will require significantly more time as we monitor the business goals and organizational outcomes tied to the program’s ROI. It is expected that career growth and development opportunities could increase employee satisfaction and engagement and thus reduce turnover. The cost of the training program will be compared to an expected reduction in turnover, offset against the costs of onboarding and training a new employee. Participants in the program will be tracked over time for both tenure and promotion opportunities. Other level four measurements are related to (1) the ability of project teams to communicate key study progress using analytics and (2) the cost of hiring an experienced data analyst versus training existing personnel, including the cost of recruitment, disruption in services, and continuity of the team’s mission. More intangible effects include culture change and the increase in client opportunities related to data experience and expertise.
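
No ROI figures are reported here, but the turnover comparison described above reduces to simple arithmetic. The sketch below uses placeholder values purely to show the shape of that calculation; every number is hypothetical.

```python
# Minimal sketch of the level-four cost comparison described above.
# All figures are placeholders; the paper reports no actual ROI numbers.
program_cost = 100_000              # hypothetical annual cost of running the series
cost_per_replacement = 40_000       # hypothetical recruiting + onboarding + lost productivity
baseline_departures = 30            # hypothetical annual departures in the target population
expected_turnover_reduction = 0.15  # hypothetical relative reduction attributed to the program

avoided_departures = baseline_departures * expected_turnover_reduction
savings = avoided_departures * cost_per_replacement
net_benefit = savings - program_cost

print(f"Avoided departures: {avoided_departures:.1f}")
print(f"Estimated savings:  ${savings:,.0f}")
print(f"Net benefit:        ${net_benefit:,.0f}")
```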

Support for a diverse audience

The program must support all employees globally, whether through adjustments or compromises in working hours or through alternative means, such as recorded sessions with support available during working hours (Figure 10). A global organization is also likely to see greater variance in existing skill sets and expectations based on cultural differences, language, and available educational opportunities. While we did not observe significant differences in the pre- and post-course assessment within or across our global workforce, we did observe variation in completion of the scenario work, which focused on synthesizing data sets and communicating in relation to expected outcomes. An optional feedback request was available to participants, with the majority asking to receive feedback on their written responses from a content perspective.

Figure 10

Participant Location by Enrollment.

In addition to representing different geographies, program participants came from different functions, each bringing different perspectives to the series. The interactive application sessions enabled cross-functional discussion and debate that enriched project team interactions. Special attention and planning went into the session activities to help participants connect the material with everyday tasks. For example, one of the first activities involved understanding how value is generated through the work being done within a CRO, along with how that value is measured. The participants were provided with maps of the drug development process and an oncology patient journey and were asked to consider their contribution to each of these processes from the perspective of the customer and the patient. The high-value points in each process were then considered from a data generation and measurement standpoint, with subsequent discussion on how teams could improve that value for the customer and the patient. As the modules progressed, activities became more granular, such as determining which type of data visualization is most useful for which type of data, before circling back to looking at data sets in the aggregate and making a decision or recommendation, such as whether to incorporate telemedicine into the clinical trial process.

Leadership

Achieving the desired outcomes from a data learning initiative can be challenging no matter how well thought out the process is. A critical success factor is executive support that serves as a champion for the project. The champion must have a clear understanding of the investment and return expected and be willing to bear both the costs of the program and the time participants spend on the learning activities in exchange for establishing a data driven culture. When soliciting the support of a senior executive, communication and timing are key. Having a clear understanding of the organization’s goals, strategy, and other demands on the executive’s time are important in tailoring a message.

Internal expertise

Beyond the program champion, internal expertise is necessary from a data, information, and technology perspective. There are multiple touchpoints within a CRO that involve the data lifecycle, and it is to the advantage of the program to both incorporate those concepts and be supported by those involved to promote a data driven culture. Perhaps most important, expertise from a learning and development, leadership, and communication perspective is necessary to ensure the program is both promoted appropriately and measured and monitored for continuous improvement.

Conclusion

In summary, while the industry continues to evolve rapidly, with new technology and data offerings coming from non-traditional sectors, we have seen early success with our data learning series program, which could help fill a gap in data knowledge and support the changing work structure emerging across our industry. The program is meeting the goals and objectives outlined in its business proposal. We anticipate seeing additional value in the upcoming results of a recent formal organizational engagement survey. While we have made every attempt to develop this program using established principles for instructional design, including outcomes measures, it is important to obtain agreement and funding for this approach prior to embarking on the task. Time commitment is a critical priority. Future considerations include offering the program externally or offering collaborative training across organizations or through a consortium to support standardization. Limitations of this approach would be reaching consensus on training content and securing resources for program development. Scalability relative to need is an additional point to consider. While the program assessed and served participants from only one organization, it is not uncommon for resources to transition across organizations, indicating that the need is likely more widespread.

Acknowledgements

The authors express deepest gratitude to the following people for their work in development and implementation of the data learning series program: LaRae Bennet, Debra Jendrasek, Priscilla Pierre, Brianna Brewer, William Griffin, Megan Peters, and Diana Ritchie. The authors also express appreciation for the organization’s leadership, particularly Michael O. Wilkinson, for his advice and support of the program.

Competing Interests

The authors are currently employed at the organization of interest in senior leadership roles within Clinical Development.

References

1. Society for Clinical Data Management. The evolution of CDM to Clinical data Science: A reflection paper on the impact of the clinical research industry trend on clinical data management. 2019. Accessed November 6, 2020. https://scdm.org/white-paper/

2. Harper B, Wilkinson M, Indupuri R, Rocchio S, Getz K. Evolving Clinical Data Strategies and Tactics in Response to Digital Transformation. Therapeutic Innovation & Regulatory Science. 2020: 1–10. DOI:  http://doi.org/10.1007/s43441-020-00213-4

3. Getz K, Campo R. New benchmarks characterizing growth in protocol design complexity. Therapeutic innovation & regulatory science. 2018; 52(1): 22–28. DOI:  http://doi.org/10.1177/2168479017713039

4. Smith S, Siegel G, Kennedy A. Assessing the operational complexity of a clinical trial: The experience of the National Institute of Mental Health. Clinical Researcher. 2020; 34(3). Accessed July 15, 2020. https://acrpnet.org/2020/03/10/assessing-the-operational-complexity-of-a-clinical-trial-the-experience-of-the-national-institute-of-mental-health/

5. Atasoy H, Greenwood B, McCullough JS. The digitization of patient care: A review of the effects of electronic health records on health care quality and utilization. Annual review of public health. 2019; 40: 487–500. DOI:  http://doi.org/10.1146/annurev-publhealth-040218-044206

6. Sertkaya A, Wong H, Jessup A, Beleche T. Key cost drivers of pharmaceutical clinical trials in the United States. Clinical Trials. 2016; 13(2): 117–126. DOI:  http://doi.org/10.1177/1740774515625964

7. Farnum M, Lobanov V, Brennan M, et al. Clinical case: Enhancing medical monitoring with visualization and analytics. In: 2012 IEEE International Conference on Bioinformatics and Biomedicine. IEEE; 2012: 1–1. DOI:  http://doi.org/10.1109/BIBM.2012.6392639

8. Yang E, O’Donovan C, Phillips J, Atkinson L, Krishnendu G, Agrafiotis D. Quantifying and visualizing site performance in clinical trials. Contemporary clinical trials communications. 2018; 9: 108–114. DOI:  http://doi.org/10.1016/j.conctc.2018.01.005

9. Concannon D, Herbst K, Manley E. Developing a data dashboard framework for population health surveillance: Widening access to clinical trial findings. JMIR formative research. 2019; 3(2): e11342. DOI:  http://doi.org/10.2196/11342

10. Toddenroth D, Sivagnanasundaram J, Prokosch H-U, Ganslandt T. Concept and implementation of a study dashboard module for a continuous monitoring of trial recruitment and documentation. Journal of Biomedical Informatics. 2016; 64: 222–231. DOI:  http://doi.org/10.1016/j.jbi.2016.10.010

11. Sim LL, Ban KH, Tan T, Sethi S, Loh T. Development of a clinical decision support system for diabetes care: A pilot study. PloS one. 2017; 12(2): e0173021. DOI:  http://doi.org/10.1371/journal.pone.0173021

12. Rasmussen J, Vicente K. Coping with human errors through system design: Implications for ecological interface design. International Journal of Man-Machine Studies. 1989; 31(5): 517–534. DOI:  http://doi.org/10.1016/0020-7373(89)90014-X

13. Few S. Information dashboard design: The effective visual communication of data. Vol. 2. Sebastopol, CA: O’Reilly; 2006.

14. Tufte E. The visual display of quantitative information. 2nd ed. Cheshire, CT: Graphics Press; 2001.

15. Card M. Readings in information visualization: Using vision to think. Morgan Kaufmann; 1999.

16. Shneiderman B. The eyes have it: A task by data type taxonomy for information visualizations. In The craft of information visualization. Morgan Kaufmann; 2003: 364–371. DOI:  http://doi.org/10.1016/B978-155860915-0/50046-9

17. Juraskova I, Butow P, Bonner C, et al. Improving decision making about clinical trial participation–a randomised controlled trial of a decision aid for women considering participation in the IBIS-II breast cancer prevention trial. British journal of cancer. 2014; 111(1): 1–7. DOI:  http://doi.org/10.1038/bjc.2014.144

18. Barakat L, Schwartz L, Reilly A, Deatrick J, Balis F. A qualitative study of phase III cancer clinical trial enrollment decision-making: Perspectives from adolescents, young adults, caregivers, and providers. Journal of Adolescent and Young Adult Oncology. 2014; 3(1): 3–11. DOI:  http://doi.org/10.1089/jayao.2013.0011

19. Chorpita B, Bernstein A, Daleiden E, Research Network on Youth Mental Health. Driving with roadmaps and dashboards: Using information resources to structure the decision models in service organizations. Administration and Policy in Mental Health and Mental Health Services Research. 2008; 35(1–2): 114–123. DOI:  http://doi.org/10.1007/s10488-007-0151-x

20. Harrison R. Phase II and phase III failures: 2013–2015. Nature Reviews Drug Discovery. 2016; 15: 817. DOI:  http://doi.org/10.1038/nrd.2016.184

21. Centerwatch. Tufts CSDD: Clinical Trial Startup Process Takes Longer Than 10 Years. 2018. Accessed November 2020. https://www.centerwatch.com/cwweekly/2018/03/12/tufts-csdd-clinical-trial-startup-process-takes-longer-10-years-ago.

22. Weil S. Improving decision making using data visualizations in the complex domain of clinical trial management. University of Texas Health, School of Biomedical Informatics, BMI 7351; 2021. Accessed December 30, 2022.

23. Zozus M, Lazarov A, Smith L, et al. Analysis of professional competencies for the clinical research data management profession: implications for training and professional certification. Journal of the American Medical Informatics Association. 2017; 24(4): 737–745. DOI:  http://doi.org/10.1093/jamia/ocw179

24. Sonstein S, Jones C. Joint task force for clinical trial competency and clinical research professional workforce development. Frontiers in pharmacology. 2018; 9: 1148. DOI:  http://doi.org/10.3389/fphar.2018.01148

25. Felten P. Visual literacy. Change: The magazine of higher learning. 2008; 40(6): 60–64. DOI:  http://doi.org/10.3200/CHNG.40.6.60-64

26. Carlson J, Nelson M, Johnston L, Koshoffer A. Developing data literacy programs: Working with faculty, graduate students and undergraduates. Bulletin of the Association for Information Science and Technology. 2015; 41(6): 14–17. DOI:  http://doi.org/10.1002/bult.2015.1720410608

27. Kim BJ, Tomprou M. The Effect of Healthcare Data Analytics Training on Knowledge Management: A Quasi-Experimental Field Study. Journal of Open Innovation: Technology, Market, and Complexity. 2021; 7(1): 60. DOI:  http://doi.org/10.3390/joitmc7010060

28. Kreuter F, Ghani R, Lane J. Change Through Data: A Data Analytics Training Program for Government Employees. Harvard Data Science Review; 2019. DOI:  http://doi.org/10.1162/99608f92.ed353ae3

29. McEligot AJ, Cuajungco M, Behseta S, et al. Big Data Science Training Program at a Minority Serving Institution: Processes and Initial Outcomes. Californian journal of health promotion. 2018; 16(1): 1. DOI:  http://doi.org/10.32398/cjhp.v16i1.2118

30. Brynjolfsson E, Hitt L, Kim HH. Strength in numbers: How does data-driven decisionmaking affect firm performance? 2011. Available at SSRN 1819486. DOI:  http://doi.org/10.2139/ssrn.1819486

31. Elbashir M, Collier P, Davern M. Measuring the effects of business intelligence systems: The relationship between business process and organizational performance. International journal of accounting information systems. 2008; 9(3): 135–153. DOI:  http://doi.org/10.1016/j.accinf.2008.03.001

32. Trkman P, McCormack K, Valadares de Oliveira M, Ladeira M. The impact of business analytics on supply chain performance. Decision Support Systems. 2010; 49(3): 318–327. DOI:  http://doi.org/10.1016/j.dss.2010.03.007

33. Pope BB, Rodzen L, Spross G. Raising the SBAR: How better communication improves patient outcomes. Nursing. 2008; 38(3): 41–43. PMID 18418189. DOI:  http://doi.org/10.1097/01.NURSE.0000312625.74434.e8

34. Institute for Healthcare Improvement. SBAR Tool: Situation-Background-Assessment-Recommendation. 2021. Accessed June 4, 2021. http://www.ihi.org/resources/Pages/Tools/SBARToolkit.aspx

35. Peterson C. Bringing ADDIE to life: Instructional design at its best. Journal of Educational Multimedia and Hypermedia. 2003; 12(3): 227–241.

36. Kirkpatrick J, Kirkpatrick W. Kirkpatrick’s four levels of training evaluation. Association for Talent Development, 2016.

37. Fagerlin A, Zikmund-Fisher B, Ubel P, Jankovic A, Derry H, Smith D. Measuring numeracy without a math test: Development of the Subjective Numeracy Scale. Medical Decision Making. 2007; 27(5): 672–680. DOI:  http://doi.org/10.1177/0272989X07304449

38. Schapira M, Walker C, Cappaert K, et al. The numeracy understanding in medicine instrument: A measure of health numeracy developed using item response theory. Medical Decision Making. 2012; 32(6): 851–865. DOI:  http://doi.org/10.1177/0272989X12447239