By creating efficiencies with technology, CROs have an opportunity to build clinical trials of the future.
For many years, artificial intelligence (AI), machine learning (ML) and natural language processing (NLP) had a reputation in the biopharmaceutical industry as flashy buzzwords, with little concrete evidence to back up their promise. Over time, that gap between promise and proof created drawn-out anticipation, leaving many skeptical about the true value of these tools.
As the potential of this technology continues to evolve, however, we are seeing dramatic shifts in adoption. Today, organizations in most sectors are developing (sometimes sizable) AI research and implementation groups. Within life sciences especially, major research investments are driving the entire industry forward, improving and automating processes to create efficiencies across our healthcare system. These applications are no longer just an idea or a buzzword — they are demonstrating real value.
To ensure this progress continues, clinical research organizations (CROs) must have a firm grasp on the dynamic capabilities of this new technology to adopt it in ways that enhance customer partnerships and better aid patients.
At the scale clinical trials demand, managing data requires technology.
In the field of clinical research, it’s one thing to manage the massively expanding volumes of data we’re responsible for, and another to do so efficiently. AI is enabling scientists and engineers to use their time as productively as possible by automating aspects of the clinical trial process so that far greater amounts of data can be handled at scale.
Imagine the difficulties of managing this data without technology and it’s immediately obvious why this matters. Today, pharmacovigilance (PV) workflows remain highly manual, requiring human assessment of large volumes of evidence. But as volume increases, so does complexity. Already available applications of AI/ML can streamline these high-repetition, labor-intensive tasks that humans are known to be poor at, freeing experts to focus on actions that require human judgment — the unusual, the difficult, the never-seen-before.
The automation of this data comes in several forms. The most basic is robotic process automation (RPA) — rules-based software designed to repeat tasks precisely. More complex automations increasingly rely on AI/ML to provide decision support, in which recommendations are generated and presented to a human operator who determines the next steps.
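To make that division of labor concrete, here is a minimal sketch of the decision-support pattern described above. It is illustrative only: the case data is invented and a keyword heuristic stands in for a trained model. The point is the workflow — the software scores a case and suggests a queue, but a human reviewer always makes the final call.

```python
# Hypothetical sketch of AI/ML decision support in a PV-style workflow:
# a model scores an adverse event narrative, and its recommendation is
# always routed to a human reviewer for the final decision.
from dataclasses import dataclass

@dataclass
class CaseRecommendation:
    case_id: str
    predicted_serious: bool
    confidence: float

def score_case(case_id: str, narrative: str) -> CaseRecommendation:
    """Stand-in for a trained classifier; a trivial keyword heuristic here."""
    serious_terms = ("hospitalization", "life-threatening", "death", "disability")
    hits = sum(term in narrative.lower() for term in serious_terms)
    return CaseRecommendation(
        case_id=case_id,
        predicted_serious=hits > 0,
        confidence=min(0.5 + 0.15 * hits, 0.95),
    )

def route_for_review(rec: CaseRecommendation) -> str:
    """The model only recommends; a human operator decides the next steps."""
    queue = "priority-review" if rec.predicted_serious else "standard-review"
    print(f"Case {rec.case_id}: suggest {queue} (confidence {rec.confidence:.2f})")
    return queue

if __name__ == "__main__":
    rec = score_case("AE-0042", "Patient reported dizziness followed by hospitalization.")
    route_for_review(rec)
```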
There are also many opportunities for automation beyond PV that are already beginning to emerge across clinical trials. In study feasibility, site selection, oversight of study conduct and data monitoring, for instance, ML-driven predictive analytics are increasingly important for producing forecasts grounded in data and experience. There are also significant opportunities in biometrics workflows, where a variety of downstream objects (documents, table shells, etc.) can be auto-generated from a source protocol.
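As a rough illustration of what such a forecast might look like, the short sketch below projects site enrollment from invented historical accrual figures. The site names and numbers are made up, and a real feasibility model would draw on far richer data than a simple mean rate.

```python
# Illustrative only: project future site enrollment from historical
# monthly accrual, the kind of prediction that can inform site selection.
from statistics import mean

historical_monthly_accrual = {
    "Site A": [4, 5, 6, 5],   # invented figures
    "Site B": [1, 2, 1, 2],
    "Site C": [7, 6, 8, 7],
}

def forecast_enrollment(monthly_accrual, months_remaining: int) -> float:
    """Naive forecast: project the historical mean accrual rate forward."""
    return mean(monthly_accrual) * months_remaining

ranked = sorted(
    historical_monthly_accrual.items(),
    key=lambda kv: forecast_enrollment(kv[1], months_remaining=6),
    reverse=True,
)
for site, accrual in ranked:
    print(f"{site}: projected {forecast_enrollment(accrual, 6):.0f} patients over 6 months")
```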
The growing use of sensors and wearables in clinical trials enables passive, reliable data collection, while companion tools allow patients to provide direct, subjective, in-the-moment feedback. This is already beginning to transform our understanding of what it means for the patient to experience disease and treatment. Collectively, these technologies are beginning to deliver on the promise to help us see the forest for the trees, enabling human decision-makers to analyze far more data than could ever be handled manually and to improve the quality of outcomes.
Despite progress, opportunities for innovation lie ahead.
While much progress has been made, there is still more work to do if we are to move past prior skepticism and embrace the true potential of these tools. To mention just a couple, by way of illustration:
- More investment is needed in real-time data extraction from electronic medical records (EMRs). Commercial data aggregators have done much to enable healthcare providers to optimize operations and patient care, in turn selling that data to life science research organizations, including CROs. The extraction and abstraction tools needed to parse the information embedded in EMRs are available, and in some cases quite advanced (a simplified illustration follows this list), but the pipelines that let that data flow smoothly from source to use case remain a significant challenge.
- Elsewhere, it is clear that no single institution can hope to own more than a small fraction of the relevant data available to us today — the data needed to power the AI/ML tools of the future. It is therefore imperative that we find better ways to share access to data so that all can benefit.
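To make the first point above more tangible, here is a deliberately simplified extraction example. The note is invented, and regex patterns stand in for the trained NLP models that real extraction and abstraction tools rely on; the sketch only shows the idea of turning free text into structured fields.

```python
# Hypothetical sketch: pulling structured fields out of a free-text EMR note.
# Real tools use trained NLP models; regexes stand in here purely to illustrate.
import re
from typing import Optional

NOTE = (
    "Patient is a 58-year-old male with type 2 diabetes. "
    "Current medication: metformin 500 mg twice daily."
)

def extract_age(note: str) -> Optional[int]:
    """Return the patient's age if the note states it in a common pattern."""
    match = re.search(r"(\d{1,3})-year-old", note)
    return int(match.group(1)) if match else None

def extract_medication(note: str) -> Optional[str]:
    """Return the text following 'Current medication:' if present."""
    match = re.search(r"Current medication:\s*([^.]+)", note)
    return match.group(1).strip() if match else None

if __name__ == "__main__":
    print({"age": extract_age(NOTE), "medication": extract_medication(NOTE)})
```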
Our industry is moving fast, and AI, ML and NLP technology is beginning to live up to its promise. While progress is never easy, CROs have an opportunity to implement these tools in a way that facilitates efficiencies across our industry and helps enable tomorrow’s clinical trials. After all, ultimately this is about the one thing that most powerfully unites us all — improving patient care.
About Stephen Pyke
Stephen Pyke is Chief Digital Data Officer and EVP, Clinical Data and Digital Services at Parexel. In his current role, Mr. Pyke is principally responsible for leading and directing the strategy, operational execution, and development of all facets of Parexel’s enterprise patient data strategy.
Mr. Pyke trained as a statistician and began his career in academia in London, where he held various research and teaching positions. He subsequently joined the pharmaceutical industry, where he took on a number of global leadership roles at Pfizer and GSK, including: Head Research Statistics, Head Clinical Statistics, Head Quantitative Sciences, Head Clinical Operations, and Head Development DDA (Digital, Data & Analytics).