Sequential monitoring of time-to-event safety endpoints in clinical trials

  • Division of Biostatistics, Medical College of Wisconsin, Milwaukee, WI, USA.

Summary

This summary is machine-generated.

This study introduces time-to-event methods for clinical trial safety monitoring, which outperform traditional binary approaches by reducing expected toxicities by up to 20%. The accompanying R package "stoppingrule" supports the design and evaluation of safety stopping rules.

Area Of Science

  • Clinical Trials
  • Biostatistics
  • Pharmacovigilance

Background

  • Safety monitoring is critical in Phase II and III clinical trials to protect patients from treatment toxicity.
  • Current stopping rules often use binary toxicity data, potentially missing opportunities for more efficient risk identification.
  • Time-to-event data, when available, may offer increased statistical power and reduced trial duration for detecting excess risks.

Purpose Of The Study

  • To investigate and compare statistical methods for safety monitoring using time-to-event data in clinical trials.
  • To develop and illustrate an R software package, "stoppingrule", for designing and evaluating these novel stopping rules.
  • To assess the performance of time-to-event based stopping rules against traditional binary approaches.

Main Methods

  • Evaluated performance metrics of safety stopping rules based on Wang-Tsiatis tests, Bayesian Gamma-Poisson models, and sequential probability ratio tests.
  • Developed and utilized the R package "stoppingrule" for designing and assessing time-to-event based stopping rules.
  • Applied these methods to a Phase II clinical trial scenario (NCT01998633) for bone marrow transplant.
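The Bayesian Gamma-Poisson approach above can be illustrated with a minimal sketch: toxicity events are modeled as a Poisson process in patient follow-up time, a Gamma prior on the toxicity rate yields a conjugate Gamma posterior, and monitoring stops when the posterior probability of an excessive rate crosses a threshold. This is a hand-written Python illustration of the general idea, not the "stoppingrule" package's actual calibration; the function name, prior parameters, and threshold are all hypothetical choices.

```python
# Illustrative Gamma-Poisson safety stopping check (NOT the stoppingrule
# package's implementation; all parameter choices here are hypothetical).
from scipy.stats import gamma

def gamma_poisson_stop(events, follow_up, rate_null, a=0.1, b=0.1, threshold=0.95):
    """Stop if the posterior probability that the toxicity rate exceeds
    rate_null (events per unit of follow-up time) passes the threshold.

    Prior: rate ~ Gamma(shape=a, rate=b); small a, b give a weak prior.
    Posterior after `events` toxicities in `follow_up` time:
        rate | data ~ Gamma(a + events, b + follow_up)
    """
    post_shape = a + events
    post_rate = b + follow_up
    # P(rate > rate_null) is the posterior survival function at rate_null
    p_excess = gamma.sf(rate_null, post_shape, scale=1.0 / post_rate)
    return p_excess > threshold, p_excess

# Example: 5 toxicities in 10 patient-years against a null rate of 0.2/year
stop, p = gamma_poisson_stop(events=5, follow_up=10.0, rate_null=0.2)
```

Because the posterior updates continuously with accumulated follow-up time rather than waiting for a binary outcome window to close, this kind of rule can react to excess risk earlier, which is the intuition behind the efficiency gains reported below.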

Main Results

  • Time-to-event methods demonstrated meaningful reductions in expected toxicities, up to 20%, compared to binary toxicity approaches.
  • Aggressive early monitoring methods (e.g., Gamma-Poisson with weak priors) generally yielded lower toxicity counts, at the cost of lower power.
  • Certain tests (Pocock, maximized sequential probability ratio test) exhibited high early stopping but reduced overall power and increased toxicities.

Conclusions

  • Time-to-event based safety monitoring offers superior performance in minimizing patient toxicity exposure.
  • The study recommends the Gamma-Poisson model with weak priors or the truncated sequential probability ratio test for constructing robust safety stopping rules.
  • The "stoppingrule" R package provides a valuable tool for researchers to design and assess advanced safety monitoring procedures in clinical trials.

Related Concept Videos

Comparing the Survival Analysis of Two or More Groups 01:20

Survival analysis is a cornerstone of medical research, used to evaluate the time until an event of interest occurs, such as death, disease recurrence, or recovery. Unlike standard statistical methods, survival analysis is particularly adept at handling censored data—instances where the event has not occurred for some participants by the end of the study or remains unobserved. To address these unique challenges, specialized techniques like the Kaplan-Meier estimator, log-rank test, and...
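The log-rank test mentioned above compares event rates between groups across all distinct event times. As a sketch of the textbook definition (a hand-rolled illustration; a real analysis would use a validated library), at each event time it tallies observed minus expected events in one group under the null of equal hazards, then forms a chi-square statistic:

```python
# Minimal two-sample log-rank test, written from the textbook definition
# (illustrative only; not tied to any specific package).
import numpy as np
from scipy.stats import chi2

def logrank_test(time1, event1, time2, event2):
    """time*: event/censoring times; event*: 1 if the event occurred, 0 if censored.
    Returns (chi-square statistic, p-value) on 1 degree of freedom."""
    times = np.concatenate([time1, time2])
    events = np.concatenate([event1, event2])
    groups = np.concatenate([np.zeros(len(time1)), np.ones(len(time2))])

    o_minus_e = 0.0   # observed minus expected events in group 1
    var = 0.0         # variance of that difference
    for t in np.unique(times[events == 1]):   # distinct event times
        at_risk = times >= t
        n = at_risk.sum()
        n1 = (at_risk & (groups == 0)).sum()
        d = ((times == t) & (events == 1)).sum()
        d1 = ((times == t) & (events == 1) & (groups == 0)).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = o_minus_e**2 / var
    return stat, chi2.sf(stat, df=1)

# Identical groups should show no difference (statistic near 0, p near 1):
stat, p = logrank_test([1, 2, 3], [1, 1, 1], [1, 2, 3], [1, 1, 1])
```

Because only subjects still at risk at each event time enter the tallies, censored subjects contribute information up to their censoring time and then drop out, which is exactly how the test accommodates incomplete follow-up.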

Assumptions of Survival Analysis 01:15

Survival models analyze the time until one or more events occur, such as death in biological organisms or failure in mechanical systems. These models are widely used across fields like medicine, biology, engineering, and public health to study time-to-event phenomena. To ensure accurate results, survival analysis relies on key assumptions and careful study design.

Survival Times Are Positively Skewed
Survival times often exhibit positive skewness, unlike the normal distribution assumed...

Introduction To Survival Analysis 01:18

Survival analysis is a statistical method used to study time-to-event data, where the "event" might represent outcomes like death, disease relapse, system failure, or recovery. A unique feature of survival data is censoring, which occurs when the event of interest has not been observed for some individuals during the study period. This requires specialized techniques to handle incomplete data effectively.
The primary goal of survival analysis is to estimate survival time—the time...

Clinical Trials 01:16

Clinical trials are prospective experimental studies conducted on humans to determine the safety and efficacy of treatments, drugs, diet methods, and medical devices. Using statistics in clinical trials enables researchers to derive reasonable and accurate conclusions from the collected data, allowing them to make wise decisions in uncertain situations. In medical research, statistical methods are crucial for preventing errors and bias.
There are four phases in a clinical trial. A phase one...

Censoring Survival Data 01:09

Survival analysis is a statistical method used to analyze time-to-event data, often employed in fields such as medicine, engineering, and social sciences. One of the key challenges in survival analysis is dealing with incomplete data, a phenomenon known as "censoring." Censoring occurs when the event of interest (such as death, relapse, or system failure) has not occurred for some individuals by the end of the study period or is otherwise unobservable, and it might have many different...
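A simple way to see how censoring enters estimation is the exponential model: every subject contributes their follow-up time to the denominator, but only observed events count in the numerator, so censored subjects still carry information. This is a minimal sketch of that standard result, not tied to any particular package:

```python
# Right-censoring in the simplest (exponential) survival model:
# MLE of the hazard rate = (number of events) / (total follow-up time).
def exp_rate_mle(times, events):
    """times: follow-up times; events: 1 = event observed, 0 = censored."""
    total_time = sum(times)   # censored subjects still add follow-up time
    n_events = sum(events)    # but only true events count here
    return n_events / total_time

# Three events and two censored subjects over 20 units of follow-up:
rate = exp_rate_mle([2.0, 3.0, 5.0, 4.0, 6.0], [1, 1, 1, 0, 0])  # 3 / 20 = 0.15
```

Ignoring the censored subjects entirely (dropping their 10 units of follow-up) would inflate the estimated rate to 3/10, which is why discarding censored observations biases survival estimates.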

Kaplan-Meier Approach 01:24

The Kaplan-Meier estimator is a non-parametric method used to estimate the survival function from time-to-event data. In medical research, it is frequently employed to measure the proportion of patients surviving for a certain period after treatment. This estimator is fundamental in analyzing time-to-event data, making it indispensable in clinical trials, epidemiological studies, and reliability engineering. By estimating survival probabilities, researchers can evaluate treatment effectiveness,...
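The product-limit construction behind the Kaplan-Meier estimator is compact enough to sketch directly: at each distinct event time, the running survival probability is multiplied by the fraction of at-risk subjects who survive that time. This is a hand-written illustration of the definition; clinical analyses would of course use a validated implementation.

```python
# Compact Kaplan-Meier (product-limit) estimator, for illustration only.
import numpy as np

def kaplan_meier(times, events):
    """times: event/censoring times; events: 1 = event, 0 = censored.
    Returns (distinct event times, survival probabilities at those times)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    t_event = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in t_event:
        n_at_risk = (times >= t).sum()         # subjects still under observation
        d = ((times == t) & (events == 1)).sum()
        s *= 1.0 - d / n_at_risk               # product-limit step
        surv.append(s)
    return t_event, np.array(surv)

# Five subjects, one censored at t=3:
t, s = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 1])
```

The censored subject at t=3 never triggers a downward step, but it is counted among those at risk at t=1 and t=2, which is how the estimator uses partial follow-up without treating censoring as an event.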