This blog is the first in a two-part series on the growing variety of clinical trial data types and sources. This first installment describes the increasing variability of data sources, types, and formats; the second will discuss the impact on the people, processes, and technology involved in clinical trials.
Emerging technologies are helping clinical research teams keep pace with the speed at which large volumes of varied data must be evaluated in clinical trials, as myriad decentralized sources deliver increasingly complex data to them. The digital transformation accelerated by the COVID-19 pandemic underscores the need for effective strategies to access reliable, accurate data from decentralized trials. To stay on course with trial timelines, the clinical trials industry must leverage technology and implement automated solutions that incorporate artificial intelligence and machine learning.
Considerations Based on the Type of Data and Its Attributes
Structured and Unstructured Data
Structured data conforms to a pre-defined data model, like a spreadsheet with defined rows and columns. Unstructured data is variable and has no consistent underlying data model; it includes free-text content such as emails, documents, some survey results, presentations, and even social media posts. As more data comes directly from patients, clinical research teams will need the capability to manage greater volumes of both structured and unstructured data, along with the essential metadata that helps them understand and access it.
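To make the distinction concrete, the minimal Python sketch below (the file contents, column names, and free-text note are hypothetical) contrasts querying a structured lab-results table with extracting a single value from an unstructured visit note.

```python
import csv
import io
import re

# Structured data: rows conform to a pre-defined schema (hypothetical columns).
structured = io.StringIO(
    "subject_id,visit,glucose_mg_dl\n"
    "1001,BASELINE,118\n"
    "1002,BASELINE,132\n"
)
rows = list(csv.DictReader(structured))
high_glucose = [r for r in rows if float(r["glucose_mg_dl"]) > 125]

# Unstructured data: a free-text note has no fixed schema, so even a simple
# question requires text processing (or NLP for anything non-trivial).
note = "Patient reports mild headache; home glucose was 132 mg/dL this morning."
match = re.search(r"glucose was (\d+)\s*mg/dL", note, re.IGNORECASE)
home_glucose = int(match.group(1)) if match else None

print(len(high_glucose), home_glucose)  # one out-of-range row; 132 from the note
```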
Batch, Stream, Micro-batch Processing
Digitization enables new data to be operationalized quickly and prepared for analysis. Data can be processed as it arrives, in real time, or it can accumulate before being processed.
Batch processing runs on a set schedule, allowing data to accrue or reach a specific threshold before it is processed.
Stream processing handles data as soon as it arrives, which can be almost as soon as (or milliseconds after) it is generated if aggregation happens in real time. When data is generated in a continuous stream, stream processing is the best option.
In micro-batch processing, processes are run on accumulations of data – typically a minute’s worth or less. This makes data available in near real-time.
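The three modes differ mainly in when accumulated records are handed off for processing. The short Python sketch below is illustrative only; the record source, batch size, and window length are assumptions, not a reference implementation.

```python
import time

BATCH_SIZE = 100        # flush once this many records have accumulated...
WINDOW_SECONDS = 60     # ...or once the micro-batch window (about a minute) elapses

def process(batch):
    # Placeholder for downstream work: validation, transformation, loading.
    print(f"processing {len(batch)} records")

def micro_batch_loop(record_source):
    batch, window_start = [], time.monotonic()
    for record in record_source:                 # records arrive continuously
        batch.append(record)
        window_elapsed = time.monotonic() - window_start >= WINDOW_SECONDS
        if len(batch) >= BATCH_SIZE or window_elapsed:
            process(batch)                       # near real-time: at most one window behind
            batch, window_start = [], time.monotonic()
    if batch:
        process(batch)                           # flush whatever remains when the stream ends

# Simulate a small stream of incoming records.
micro_batch_loop({"id": i} for i in range(250))
```

With a true streaming approach, each record would be handed to `process` individually as it arrives instead of being collected into a window.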
Legacy Data
Drug sponsors have a growing need to convert legacy data into CDISC-compliant formats prior to submission, since regulatory agencies require standardized data for submissions to be accepted.
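As a simplified illustration of what such a conversion involves, the sketch below maps a hypothetical legacy demographics extract to SDTM-like variable names. The column names and recoding rules are invented for the example; a real conversion follows a documented mapping specification and CDISC controlled terminology.

```python
import pandas as pd

# Hypothetical legacy demographics extract with sponsor-specific column names.
legacy = pd.DataFrame({
    "PATIENT":   ["1001", "1002"],
    "SEX_CODE":  ["1", "2"],
    "BIRTHDATE": ["1980-04-12", "1975-11-03"],
})

# Assumed mapping to SDTM-like DM variables; an actual conversion would follow
# a documented mapping specification and controlled terminology.
dm = legacy.rename(columns={"PATIENT": "USUBJID", "BIRTHDATE": "BRTHDTC"})
dm["SEX"] = dm.pop("SEX_CODE").map({"1": "M", "2": "F"})
dm["DOMAIN"] = "DM"

print(dm[["DOMAIN", "USUBJID", "SEX", "BRTHDTC"]])
```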
Data Quality & Cleansing
As noted previously, clinical data can arrive from disparate sources as a recordset, table, or database, as well as in numerous unstructured formats. It is important to have a processing and cleansing strategy for each data type to boost its consistency, reliability, and value. These processes vary by data type but must consistently produce accurate, high-quality data. By detecting and correcting corrupt, incomplete, irrelevant, and inaccurate records early, clinical trial teams gain quicker access to higher-quality data. This principle forms an essential foundation for all data management strategies.
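As one illustrative example of an early cleansing step (the column names and validation rules here are assumptions), records that are duplicated, incomplete, or out of plausible range can be flagged for query rather than passed silently into analysis:

```python
import pandas as pd

# Hypothetical vital-signs records arriving from multiple sources.
vitals = pd.DataFrame({
    "subject_id": ["1001", "1002", "1002", None],
    "visit":      ["WEEK1", "WEEK1", "WEEK1", "WEEK2"],
    "systolic":   [118, 121, 121, 400],   # 400 mmHg is implausible
})

# Remove exact duplicates, then flag records that fail basic checks so they
# can be queried and corrected rather than silently dropped.
vitals = vitals.drop_duplicates()
vitals["issue"] = None
vitals.loc[vitals["subject_id"].isna(), "issue"] = "missing subject_id"
vitals.loc[~vitals["systolic"].between(60, 260), "issue"] = "systolic out of range"

clean = vitals[vitals["issue"].isna()]
queries = vitals[vitals["issue"].notna()]
print(len(clean), "records ready for analysis,", len(queries), "flagged for review")
```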
Results
Gone are the days when clinical research teams relied on Excel sheets and SAS alone to collect and integrate clinical data. Clinical researchers are now taking full advantage of the data available to them, using sophisticated, tech-enabled tools. As automated processes ingest data from disparate sources as soon as it is generated, research teams can access large volumes of high-quality data for analysis and insight. This calls for technology that can capture, organize, analyze, and report clinical trial data, and that uses new and improved visualizations to make better sense of the data, enabling research teams to derive greater insights than ever before.
About MaxisIT
At MaxisIT, we clearly understand strategic priorities within clinical R&D, as we implement solutions for improving Clinical Development Portfolios via an integrated platform-based approach. For over 17 years, MaxisIT’s Clinical Trial Oversight System (CTOS) has been synonymous with timely access to study-specific, standardized, and aggregated operational, trial, and patient data, enabling efficient trial oversight. MaxisIT’s platform is a purpose-built solution that helps the Life Sciences industry by empowering business stakeholders. Our solution optimizes the clinical ecosystem and enables in-time decision support, continuous monitoring of regulatory compliance, and measurable gains in operational efficiency.
Fill out the form or email info@maxisIT.com to speak with an Expert