Found 300
Randomization in Pre‐Clinical Studies: When Evolution Theory Meets Statistics
Weigle S., Sargsyan D., Cabrera J., Diya L., Sendecki J., Lubomirski M.
Q1
Wiley
Pharmaceutical Statistics, 2025, citations: 0, doi.org, Abstract
ABSTRACT: Randomization is a statistical procedure used to allocate study subjects randomly into experimental groups while balancing continuous variables. This paper presents an alternative to random allocation for creating homogeneous groups by balancing experimental factors. The proposed algorithms, inspired by the Theory of Evolution, enhance the benefits of randomization through partitioning. The methodology employs a genetic algorithm that minimizes the Irini criterion to partition datasets into balanced subgroups. The algorithm's performance is evaluated through simulations and dataset examples, comparing it to random allocation via exhaustive search. Results indicate that the experimental groups created by Irini are more homogeneous than those generated by exhaustive search. Furthermore, the Irini algorithm is computationally more efficient, outperforming exhaustive search by more than three orders of magnitude.
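The abstract does not define the Irini criterion, so the sketch below substitutes a generic imbalance measure (variance of group means, summed over covariates) and a mutation-only genetic algorithm whose swap moves preserve group sizes. All names and parameters are illustrative, not the paper's method.

```python
import random
import statistics

def imbalance(groups, data):
    """Sum over covariates of the variance of group means.
    Stand-in balance criterion; the paper's Irini criterion is not
    specified in the abstract."""
    n_cov = len(data[0])
    total = 0.0
    for j in range(n_cov):
        means = [statistics.mean(data[i][j] for i in g) for g in groups]
        total += statistics.pvariance(means)
    return total

def to_groups(assignment, k):
    groups = [[] for _ in range(k)]
    for i, g in enumerate(assignment):
        groups[g].append(i)
    return groups

def ga_partition(data, k=2, pop_size=20, generations=150, seed=3):
    """Toy genetic algorithm: evolve subject-to-group assignments that
    minimize the imbalance criterion. Mutation swaps two subjects, so
    equal group sizes are preserved; crossover is omitted for simplicity."""
    rng = random.Random(seed)
    n = len(data)
    base = [i % k for i in range(n)]            # equal group sizes
    pop = []
    for _ in range(pop_size):
        a = base[:]
        rng.shuffle(a)
        pop.append(a)
    for _ in range(generations):
        pop.sort(key=lambda a: imbalance(to_groups(a, k), data))
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n), 2)      # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda a: imbalance(to_groups(a, k), data))
    return to_groups(best, k)
```

On a one-covariate example this reliably beats a naive first-half/second-half split on the balance criterion.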
A Tipping Point Method to Evaluate Sensitivity to Potential Violations in Missing Data Assumptions
Torres C., Levin G., Rubin D., Koh W., Chiu R., Permutt T.
Q1
Wiley
Pharmaceutical Statistics, 2025, citations: 0, doi.org, Abstract
ABSTRACT: It is critical to evaluate the sensitivity of conclusions from a clinical trial to potential violations in the missing data assumptions of the statistical analysis. Sensitivity analyses should not consist of a few methods that might have been reasonable alternatives to the chosen analysis method, nor should they explore only a limited space of violations in the assumptions of the analysis. Instead, sensitivity analyses should target the same estimand as that targeted in the main analysis, and they should systematically and comprehensively explore the space of possible assumptions to evaluate whether the key conclusions hold up under all plausible scenarios. In a randomized, controlled trial, this can be achieved by tipping point analyses that vary assumptions about missing outcomes on the experimental and control arms to identify and discuss the plausibility of scenarios under which there is no longer evidence of a treatment effect. We introduce a simple, novel tipping point approach in which, for a variable that is quantitative or can be analyzed as if it is quantitative, inference on the treatment effect is based on the observed data and two sensitivity parameters, with minimal assumptions and no need for imputation. The sensitivity parameters to be varied are the mean differences between outcomes in dropouts and outcomes in completers on each of the two treatment arms. We derive the asymptotic properties of the proposed statistic and illustrate the utility of such an approach with two examples of drug reviews in which the methodology was utilized to inform regulatory decision‐making.
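A minimal sketch of the tipping-point idea for a quantitative endpoint, under simplifying assumptions: the two sensitivity parameters shift the dropout means on each arm, and a fixed standard error stands in for the asymptotic variance the paper derives. All names are hypothetical.

```python
def arm_mean(mean_completers, n_completers, n_dropouts, delta):
    """Arm mean when dropouts are assumed to differ from completers
    by `delta` on average (the sensitivity parameter for that arm)."""
    n = n_completers + n_dropouts
    return mean_completers + (n_dropouts / n) * delta

def tipping_point_grid(trt, ctl, deltas, se):
    """Scan (delta_t, delta_c) pairs and collect those where the 95% CI
    for the treatment effect no longer excludes zero. `se` is a fixed
    standard error for the effect -- a simplification of the asymptotic
    variance derived in the paper."""
    tipped = []
    for dt in deltas:
        for dc in deltas:
            eff = arm_mean(*trt, dt) - arm_mean(*ctl, dc)
            lo, hi = eff - 1.96 * se, eff + 1.96 * se
            if lo <= 0.0 <= hi:
                tipped.append((dt, dc))
    return tipped
```

With a 1-point effect among completers and 20% dropout on the active arm, the result tips once dropouts on that arm are assumed about 3 points worse than completers; the analyst then judges whether such a scenario is plausible.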
Bayesian Sample Size Calculation in Small n, Sequential Multiple Assignment Randomized Trials (snSMART)
Fang F., Tamura R.N., Braun T.M., Kidwell K.M.
Q1
Wiley
Pharmaceutical Statistics, 2025, citations: 0, doi.org, Abstract
ABSTRACT: A recent study design for clinical trials with small sample sizes is the small n, sequential, multiple assignment, randomized trial (snSMART). An snSMART design has been previously proposed to compare the efficacy of two dose levels versus placebo. In such a trial, participants are initially randomized to receive either low dose, high dose or placebo in stage 1. In stage 2, participants are re‐randomized to either dose level depending on their initial treatment and a dichotomous response. A Bayesian analytic approach borrowing information from both stages was proposed and shown to improve the efficiency of estimation. In this paper, we propose two sample size determination (SSD) methods for the proposed snSMART comparing two dose levels with placebo. Both methods adopt the average coverage criterion (ACC) approach. In the first approach, the sample size is calculated in one step, taking advantage of the explicit posterior variance of the treatment effect. In the other, two‐step approach, we update the sample size needed for a single‐stage parallel design with a proposed adjustment factor (AF). Through simulations, we demonstrate that the required sample sizes calculated using the two SSD approaches both provide the desired power. We also provide an applet to allow for convenient and fast sample size calculation in this snSMART setting.
Taylor Series Approximation for Accurate Generalized Confidence Intervals of Ratios of Log‐Normal Standard Deviations for Meta‐Analysis Using Means and Standard Deviations in Time Scale
Chen P., Dexter F.
Q1
Wiley
Pharmaceutical Statistics, 2025, citations: 0, doi.org, Abstract
ABSTRACT: With contemporary anesthetic drugs, the efficacy of general anesthesia is assured. Health‐economic and clinical objectives are related to reductions in the variability in dosing, variability in recovery, etc. Consequently, meta‐analyses for anesthesiology research would benefit from quantification of ratios of standard deviations of log‐normally distributed variables (e.g., surgical duration). Generalized confidence intervals can be used, once sample means and standard deviations in the raw time scale, for each study and group, have been used to estimate the mean and standard deviation of the logarithms of the times (i.e., the “log‐scale”). We examine the matching of the first two moments versus also using higher‐order terms, following Higgins et al. 2008 and Friedrich et al. 2012. Monte Carlo simulations revealed that, using the first two moments, 95% confidence intervals had coverage of 92%–95%, with small bias. Use of higher‐order moments worsened confidence interval coverage for the log ratios, especially for coefficients of variation in the time scale of 50% and for larger sample sizes per group, resulting in 88% coverage. We recommend that, when calculating confidence intervals for ratios of standard deviations based on generalized pivotal quantities and log‐normal distributions and relying on transformation of sample statistics from the time scale to the log scale, the first two moments be used, not the higher‐order terms.
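The first-two-moments transformation from time scale to log scale is standard for log-normal data and can be written down directly; the helper names below are ours, not the paper's.

```python
import math

def lognormal_params_from_time_scale(m, s):
    """Match the first two moments: convert a sample mean `m` and SD `s`
    observed in the raw time scale to the log-scale mean and SD of a
    log-normal distribution."""
    sigma2 = math.log(1.0 + (s / m) ** 2)
    mu = math.log(m) - sigma2 / 2.0
    return mu, math.sqrt(sigma2)

def sd_ratio(m1, s1, m2, s2):
    """Point estimate of the ratio of log-scale SDs for two groups,
    the quantity whose generalized CI the paper studies."""
    _, sd1 = lognormal_params_from_time_scale(m1, s1)
    _, sd2 = lognormal_params_from_time_scale(m2, s2)
    return sd1 / sd2
```

The transformation is exact for a true log-normal: feeding in the theoretical mean and SD recovers the log-scale parameters.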
A Commensurate Prior Model With Random Effects for Survival and Competing Risk Outcomes to Accommodate Historical Controls
Khanal M., Logan B.R., Banerjee A., Fang X., Ahn K.W.
Q1
Wiley
Pharmaceutical Statistics, 2025, citations: 0, doi.org, Abstract
ABSTRACT: Clinical trials (CTs) often suffer from small sample sizes due to limited budgets and patient enrollment challenges. Using historical data for the CT data analysis may boost statistical power and reduce the required sample size. Existing methods on borrowing information from historical data with right‐censored outcomes did not consider matching between historical data and CT data to reduce the heterogeneity. In addition, they studied the survival outcome only, not competing risk outcomes. Therefore, we propose a clustering‐based commensurate prior model with random effects for both survival and competing risk outcomes that effectively borrows information based on the degree of comparability between historical and CT data. Simulation results show that the proposed method controls type I errors better and has a lower bias than some competing methods. We apply our method to a phase III CT which compares the effectiveness of bone marrow donated from family members with only partially matched bone marrow versus two partially matched cord blood units to treat leukemia and lymphoma.
WATCH: A Workflow to Assess Treatment Effect Heterogeneity in Drug Development for Clinical Trial Sponsors
Sechidis K., Sun S., Chen Y., Lu J., Zhang C., Baillie M., Ohlssen D., Vandemeulebroecke M., Hemmings R., Ruberg S., Bornkamp B.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: This article proposes a Workflow for Assessing Treatment effeCt Heterogeneity (WATCH) in clinical drug development targeted at clinical trial sponsors. WATCH is designed to address the challenges of investigating treatment effect heterogeneity (TEH) in randomized clinical trials, where sample size and multiplicity limit the reliability of findings. The proposed workflow includes four steps: analysis planning, initial data analysis and analysis dataset creation, TEH exploration, and multidisciplinary assessment. The workflow offers a general overview of how treatment effects vary by baseline covariates in the observed data and guides the interpretation of the observed findings based on external evidence and the best scientific understanding. The workflow is exploratory and not inferential/confirmatory in nature but should be preplanned before database lock and analysis start. It is focused on providing a general overview rather than a single specific finding or subgroup with a differential effect.
A Phase I Dose‐Finding Design Incorporating Intra‐Patient Dose Escalation
Guo B., Liu S.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: Conventional Phase I trial designs assign a single dose to each patient, necessitating a minimum number of patients per dose to reliably identify the maximum tolerated dose (MTD). However, in many clinical trials, such as those involving pediatric patients or patients with rare cancers, recruiting an adequate number of patients can pose challenges, limiting the applicability of standard trial designs. To address this challenge, we propose a new Phase I dose‐finding design, denoted as IP‐CRM, that integrates intra‐patient dose escalation with the continual reassessment method (CRM). In the IP‐CRM design, intra‐patient dose escalation is allowed, guided by both individual patients' toxicity outcomes and accumulated data across patients, and the starting dose for each cohort of patients is adaptively updated. We further extend the IP‐CRM design to address carryover effects and/or intra‐patient correlations. Due to the potential for each patient to contribute multiple data points at varying doses owing to intra‐patient dose escalation, the IP‐CRM design offers the advantage of determining the MTD with a considerably reduced sample size compared to standard Phase I dose‐finding designs. Simulation studies show that our IP‐CRM design can efficiently reduce sample size while concurrently enhancing the probability of identifying the MTD when compared with standard CRM designs and the 3 + 3 design.
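As background for the CRM machinery that IP-CRM builds on, here is a bare-bones continual reassessment method with the common one-parameter power model (not necessarily the exact working model of the IP-CRM paper): the skeleton, prior SD, and grid posterior are illustrative choices.

```python
import math

def crm_recommend(skeleton, doses_tried, tox, n_pts, target=0.25, sigma=1.34):
    """One-parameter power-model CRM sketch: p_d = skeleton[d] ** exp(a),
    a ~ N(0, sigma^2). The posterior over `a` is computed on a grid, and
    the dose whose posterior mean toxicity is closest to `target` is
    recommended."""
    grid = [-4.0 + 8.0 * i / 400 for i in range(401)]
    post = []
    for a in grid:
        loglik = -a * a / (2 * sigma * sigma)      # normal prior kernel
        for d, y, n in zip(doses_tried, tox, n_pts):
            p = skeleton[d] ** math.exp(a)
            loglik += y * math.log(p) + (n - y) * math.log(1 - p)
        post.append(math.exp(loglik))
    z = sum(post)
    w = [p / z for p in post]
    # posterior mean toxicity at each dose, then pick the closest to target
    est = [sum(wi * skeleton[d] ** math.exp(a) for wi, a in zip(w, grid))
           for d in range(len(skeleton))]
    return min(range(len(skeleton)), key=lambda d: abs(est[d] - target))
```

With 3/6 DLTs observed at the third dose the sketch de-escalates; with 0/6 at the lowest dose it escalates, which is the qualitative behavior any CRM variant should show.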
A Likelihood Perspective on Dose‐Finding Study Designs in Oncology
Zhang Z.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: Dose‐finding studies in oncology often include an up‐and‐down dose transition rule that assigns a dose to each cohort of patients based on accumulating data on dose‐limiting toxicity (DLT) events. In making a dose transition decision, a key scientific question is whether the true DLT rate of the current dose exceeds the target DLT rate, and the statistical question is how to evaluate the statistical evidence in the available DLT data with respect to that scientific question. This article introduces generalized likelihood ratios (GLRs) that can be used to measure statistical evidence and support dose transition decisions. Applying this approach to a single‐dose likelihood leads to a GLR‐based interval design with three parameters: the target DLT rate and two GLR cut‐points representing the levels of evidence required for dose escalation and de‐escalation. This design gives a likelihood interpretation to each existing interval design and provides a unified framework for comparing different interval designs in terms of how much evidence is required for escalation and de‐escalation. A GLR‐based comparison of commonly used interval designs reveals important differences and motivates alternative designs that reduce over‐treatment while maintaining MTD estimation accuracy. The GLR‐based approach can also be applied to a joint likelihood based on a nonparametric (e.g., isotonic regression) model or a parametric model. Simulation results indicate that the isotonic GLR performs similarly to the single‐dose GLR but the GLR based on a parsimonious model can improve MTD estimation when the underlying model is correct.
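One hedged reading of a GLR for dose-transition decisions: compare the binomial likelihood maximized over DLT rates at or below the target with the likelihood maximized over rates above it. This sketches the general evidential idea, not the paper's exact design or cut-points.

```python
import math

def binom_loglik(p, y, n):
    """Binomial log-likelihood kernel for y DLTs in n patients."""
    if p <= 0.0:
        return 0.0 if y == 0 else float("-inf")
    if p >= 1.0:
        return 0.0 if y == n else float("-inf")
    return y * math.log(p) + (n - y) * math.log(1 - p)

def glr_escalation_evidence(y, n, target):
    """Generalized likelihood ratio of 'rate <= target' vs 'rate > target':
    sup over p <= target of L(p), divided by sup over p > target of L(p).
    Values well above 1 favor escalation; values well below 1 favor
    de-escalation. Each sup is attained at the MLE clipped to the
    hypothesis region (the boundary for the open region)."""
    phat = y / n
    p_low = min(phat, target)
    p_high = max(phat, target)
    return math.exp(binom_loglik(p_low, y, n) - binom_loglik(p_high, y, n))
```

An interval design would compare this ratio to two cut-points to decide escalation versus de-escalation versus staying at the current dose.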
Real Effect or Bias? Good Practices for Evaluating the Robustness of Evidence From Comparative Observational Studies Through Quantitative Sensitivity Analysis for Unmeasured Confounding
Faries D., Gao C., Zhang X., Hazlett C., Stamey J., Yang S., Ding P., Shan M., Sheffield K., Dreyer N.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: The assumption of “no unmeasured confounders” is a critical but unverifiable assumption required for causal inference, yet quantitative sensitivity analyses to assess the robustness of real‐world evidence remain under‐utilized. The lack of use is likely due in part to the complexity of implementation and the often specific and restrictive data requirements for applying each method. With the advent of methods that are broadly applicable in that they do not require identification of a specific unmeasured confounder—along with publicly available code for implementation—roadblocks toward broader use of sensitivity analyses are decreasing. To spur greater application, here we offer good practice guidance to address the potential for unmeasured confounding at both the design and analysis stages, including framing questions and an analytic toolbox for researchers. The questions at the design stage guide the researcher through steps evaluating the potential robustness of the design while encouraging gathering of additional data to reduce uncertainty due to potential confounding. At the analysis stage, the questions guide quantifying the robustness of the observed result, providing researchers with a clearer indication of the strength of their conclusions. We demonstrate the application of this guidance using simulated data based on an observational fibromyalgia study, applying multiple methods from our analytic toolbox for illustration purposes.
Pre‐Posterior Distributions in Drug Development and Their Properties
Grieve A.P.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: The topic of this article is pre‐posterior distributions of success or failure. These distributions, determined before a study is run and based on all our assumptions, are what we should believe about the treatment effect if we are told only that the study has been successful, or unsuccessful. I show how the pre‐posterior distributions of success and failure can be used during the planning phase of a study to investigate whether the study is able to discriminate between effective and ineffective treatments. I show how these distributions are linked to the probability of success (PoS), or failure, and how they can be determined from simulations if standard asymptotic normality assumptions are inappropriate. I show the link to the concept of the conditional introduced by Temple and Robertson in the context of the planning of multiple studies. Finally, I show that they can also be constructed regardless of whether the analysis of the study is frequentist or fully Bayesian.
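The simulation route mentioned in the abstract can be sketched directly: draw the true effect from the prior, simulate the study, and condition on success. The prior, standard error, and one-sided success rule below are assumptions for illustration, not the article's worked example.

```python
import random
import statistics

def preposterior_of_success(prior_mean, prior_sd, se, n_sims=20000, seed=7):
    """Monte Carlo pre-posterior of success: draw the true effect from the
    prior, simulate the trial estimate, and keep the true effects from
    'successful' trials (one-sided z > 1.96). Returns the probability of
    success (PoS) and the retained effect draws, whose empirical
    distribution is the pre-posterior of success."""
    rng = random.Random(seed)
    crit = 1.96 * se
    kept = []
    for _ in range(n_sims):
        theta = rng.gauss(prior_mean, prior_sd)      # true effect
        estimate = rng.gauss(theta, se)              # trial result
        if estimate > crit:                          # study "successful"
            kept.append(theta)
    return len(kept) / n_sims, kept
```

Conditioning on success shifts the effect distribution upward relative to the prior, which is exactly what makes the pre-posterior useful for judging whether a planned study can discriminate effective from ineffective treatments.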
Bayesian Solutions for Assessing Differential Effects in Biomarker Positive and Negative Subgroups
Jackson D., Zhang F., Burman C., Sharples L.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: The number of clinical trials that include a binary biomarker in design and analysis has risen due to the advent of personalised medicine. This presents challenges for medical decision makers because a drug may confer a stronger effect in the biomarker positive group, and so be approved either in this subgroup alone or in the all‐comer population. We develop and evaluate Bayesian methods that can be used to assess this. All our methods are based on the same statistical model for the observed data but we propose different prior specifications to express differing degrees of knowledge about the extent to which the treatment may be more effective in one subgroup than the other. We illustrate our methods using some real examples. We also show how our methodology is useful when designing trials where the size of the biomarker negative subgroup is to be determined. We conclude that our Bayesian framework is a natural tool for making decisions, for example, whether to recommend using the treatment in the biomarker negative subgroup where the treatment is less likely to be efficacious, or determining the number of biomarker positive and negative patients to include when designing a trial.
Subgroup Identification Based on Quantitative Objectives
Sun Y., Hedayat A.S.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: Precision medicine is the future of drug development, and subgroup identification plays a critical role in achieving the goal. In this paper, we propose a powerful end‐to‐end solution squant (available on CRAN) that explores a sequence of quantitative objectives. The method converts the original study to an artificial 1:1 randomized trial, and features a flexible objective function, a stable signature with good interpretability, and an embedded false discovery rate (FDR) control. We demonstrate its performance through simulation and provide a real data example.
A Model‐Based Trial Design With a Randomization Scheme Considering Pharmacokinetics Exposure for Dose Optimization in Oncology
Zhang J., Takeda K., Takeuchi M., Komatsu K., Zhu J., Yamaguchi Y.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: The primary purpose of an oncology dose‐finding trial for novel anticancer agents has been shifting from determining the maximum tolerated dose to identifying an optimal dose (OD) that is tolerable and therapeutically beneficial for subjects in subsequent clinical trials. In 2022, the FDA Oncology Center of Excellence initiated Project Optimus to reform the paradigm of dose optimization and dose selection in oncology drug development and issued a draft guidance. The guidance suggests that dose‐finding trials include randomized dose–response cohorts of multiple doses and incorporate information on pharmacokinetics (PK) in addition to safety and efficacy data to select the OD. Furthermore, PK information could be a quick alternative to efficacy data to predict the minimum efficacious dose and decide the dose assignment. This article proposes a model‐based trial design for dose optimization with a randomization scheme based on PK outcomes in oncology. A simulation study shows that the proposed design has advantages compared to the other designs in the percentage of correct OD selection and the average number of patients assigned to OD in various realistic settings.
A Bayesian Dynamic Model‐Based Adaptive Design for Oncology Dose Optimization in Phase I/II Clinical Trials
Qiu Y., Li M.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: With the development of targeted therapy, immunotherapy, and antibody‐drug conjugates (ADCs), there is growing concern over the “more is better” paradigm developed decades ago for chemotherapy, prompting the US Food and Drug Administration (FDA) to initiate Project Optimus to reform dose optimization and selection in oncology drug development. For early‐phase oncology trials, given the high variability from sparse data and the rigidity of parametric model specifications, we use Bayesian dynamic models to borrow information across doses with only vague order constraints. Our proposed adaptive design simultaneously incorporates toxicity and efficacy outcomes to select the optimal dose (OD) in Phase I/II clinical trials, utilizing Bayesian model averaging to address the uncertainty of dose–response relationships and enhance the robustness of the design. Additionally, we extend the proposed design to handle delayed toxicity and efficacy outcomes. We conduct extensive simulation studies to evaluate the operating characteristics of the proposed method under various practical scenarios. The results demonstrate that the proposed designs have desirable operating characteristics. A trial example is presented to demonstrate the practical implementation of the proposed designs.
Optimizing Sample Size Determinations for Phase 3 Clinical Trials in Type 2 Diabetes
Cambon A., Travis J., Sun L., Idokogi J., Kettermann A.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: An informed estimate of subject‐level variance is a key determinant of the required sample size for clinical trials. Evaluating completed adult type 2 diabetes studies submitted to the FDA for the accuracy of the variance estimate at the planning stage provides insights to inform the sample size requirements for future studies. From the U.S. Food and Drug Administration (FDA) database of new drug applications, containing 14,106 subjects from 26 phase 3 randomized studies submitted in support of adult type 2 diabetes drug approvals reviewed between 2013 and 2017, we obtained estimates of subject‐level variance for the primary endpoint—change in glycated hemoglobin (HbA1c) from baseline to 6 months. In addition, we used nine additional studies to examine the impact of clinically meaningful covariates on residual standard deviation and sample size re‐estimation. Our analyses show that reduced sample sizes can be used without interfering with the validity of efficacy results for adult type 2 diabetes drug trials. This finding has implications for future research involving the adult type 2 diabetes population, including the potential to reduce recruitment period length and improve the timeliness of results. Furthermore, our findings could be utilized in the design of future endocrinology clinical trials.
PKBOIN‐12: A Bayesian Optimal Interval Phase I/II Design Incorporating Pharmacokinetics Outcomes to Find the Optimal Biological Dose
Sun H., Tu J.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: Immunotherapies and targeted therapies have gained popularity due to their promising therapeutic effects across multiple treatment areas. The focus of early phase dose‐finding clinical trials has shifted from finding the maximum tolerated dose (MTD) to identifying the optimal biological dose (OBD), which aims to balance the toxicity and efficacy outcomes, thus optimizing the risk–benefit trade‐off. These trials often collect multiple pharmacokinetics (PK) outcomes to assess drug exposure, which has shown correlations with toxicity and efficacy outcomes but has not been utilized in the current dose‐finding designs for OBD selection. Moreover, PK outcomes are usually available within days after initial treatment, much faster than toxicity and efficacy outcomes. To bridge this gap, we introduce the innovative model‐assisted PKBOIN‐12 design, which enhances BOIN12 by integrating PK information into both the dose‐finding algorithm and the final OBD determination process. We further extend PKBOIN‐12 to TITE‐PKBOIN‐12 to address the challenges of late‐onset toxicity and efficacy outcomes. Simulation results demonstrate that PKBOIN‐12 more effectively identifies the OBD and allocates a greater number of patients to it than BOIN12. Additionally, PKBOIN‐12 decreases the probability of selecting inefficacious doses as the OBD by excluding those with low drug exposure. Comprehensive simulation studies and sensitivity analysis confirm the robustness of both PKBOIN‐12 and TITE‐PKBOIN‐12 in various scenarios.
Bayesian Response Adaptive Randomization for Randomized Clinical Trials With Continuous Outcomes: The Role of Covariate Adjustment
Aslanyan V., Pickering T., Nuño M., Renfro L., Pa J., Mack W.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: Study designs incorporate interim analyses to allow for modifications to the trial design. These analyses may aid decisions regarding sample size, futility, and safety. Furthermore, they may provide evidence about potential differences between treatment arms. Bayesian response adaptive randomization (RAR) skews allocation proportions such that fewer participants are assigned to the inferior treatments. However, these allocation changes may introduce covariate imbalances. We discuss two versions of Bayesian RAR (with and without covariate adjustment for a binary covariate) for continuous outcomes analyzed using change scores and repeated measures, while considering either regression or mixed models for interim analysis modeling. Through simulation studies, we show that RAR (both versions) allocates more participants to better treatments compared to equal randomization, while reducing potential covariate imbalances. We also show that dynamic allocation using mixed models for repeated measures yields a smaller allocation proportion variance while having a similar covariate imbalance as regression models. Additionally, covariate imbalance was smallest for methods using covariate‐adjusted RAR (CARA) in scenarios with small sample sizes and covariate prevalence less than 0.3. Covariate imbalance did not differ between RAR and CARA in simulations with larger sample sizes and higher covariate prevalence. We thus recommend a CARA approach for small pilot/exploratory studies for the identification of candidate treatments for further confirmatory studies.
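A toy version of Bayesian RAR without covariate adjustment, assuming normal outcomes with known variance and a conjugate normal prior: allocation is skewed toward the posterior probability that each arm has the highest mean. The CARA variant discussed in the paper would additionally model the binary covariate; priors and draw counts here are illustrative.

```python
import random

def posterior(outcomes, prior_mean=0.0, prior_var=100.0, obs_var=1.0):
    """Conjugate normal posterior (mean, variance) for an arm mean,
    with known outcome variance."""
    n = len(outcomes)
    if n == 0:
        return prior_mean, prior_var
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + sum(outcomes) / obs_var)
    return post_mean, post_var

def rar_allocation_probs(arms, n_draws=20000, seed=11):
    """Bayesian RAR sketch: allocation probability for each arm equals the
    Monte Carlo posterior probability that its mean is the largest."""
    rng = random.Random(seed)
    posts = [posterior(a) for a in arms]
    wins = [0] * len(arms)
    for _ in range(n_draws):
        draws = [rng.gauss(m, v ** 0.5) for m, v in posts]
        wins[draws.index(max(draws))] += 1
    return [w / n_draws for w in wins]
```

At an interim analysis, the next cohort would be randomized with these (possibly clipped or tempered) probabilities, so fewer participants land on the apparently inferior arm.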
An Adaptive Three‐Arm Comparative Clinical Endpoint Bioequivalence Study Design With Unblinded Sample Size Re‐Estimation and Optimized Allocation Ratio
Hinds D., Sun W.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 1, doi.org, Abstract
ABSTRACT: A three‐arm comparative clinical endpoint bioequivalence (BE) study is often used to establish BE between a locally acting generic drug (T) and reference drug (R), where superiority needs to be established for T and R over Placebo (P) and equivalence needs to be established for T vs. R. Sometimes, when study design parameters are uncertain, a fixed design study may be under‐ or over‐powered and result in study failure or unnecessary cost. In this paper, we propose a two‐stage adaptive clinical endpoint BE study with unblinded sample size re‐estimation, standard or maximum combination method, optimized allocation ratio, optional re‐estimation of the effect size based on likelihood estimation, and optional re‐estimation of the R and P treatment means at interim analysis, which have not been done previously. Our proposed method guarantees control of the Type 1 error rate analytically. It helps to reduce the average sample size when the original fixed design is overpowered and increases the sample size and power when the original study and group sequential design are under‐powered. Our proposed adaptive design can help generic drug sponsors cut costs and improve success rates, making clinical endpoint BE studies more affordable and more generic drugs accessible to the public.
Generalizing Treatment Effect to a Target Population Without Individual Patient Data in a Real‐World Setting
Quan H., Li T., Chen X., Li G.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: The innovative use of real‐world data (RWD) can answer questions that cannot be addressed using data from randomized clinical trials (RCTs). While the sponsors of RCTs have a central database containing all individual patient data (IPD) collected from trials, analysts of RWD face a challenge: regulations on patient privacy make access to IPD from all regions logistically prohibitive. In this research, we propose a double inverse probability weighting (DIPW) approach for the analysis sponsor to estimate the population average treatment effect (PATE) for a target population without the need to access IPD. One probability weighting is for achieving comparable distributions in confounders across treatment groups; another probability weighting is for generalizing the result from a subpopulation of patients who have data on the endpoint to the whole target population. The likelihood expressions for propensity scores and the DIPW estimator of the PATE can be written to only rely on regional summary statistics that do not require IPD. Our approach hinges upon the positivity and conditional independency assumptions, prerequisites to most RWD analysis approaches. Simulations are conducted to compare the performances of the proposed method against a modified meta‐analysis and a regular meta‐analysis.
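The double weighting can be illustrated on individual records with known propensities. The paper's contribution is precisely that it avoids IPD by working with regional summary statistics, so this sketch only shows the estimand's logic; record layout and names are ours.

```python
def dipw_ate(records):
    """Double inverse probability weighting sketch. Each record is
    (y, treated, e, r) where e = P(treated | confounders) and
    r = P(endpoint observed | covariates). The 1/e (or 1/(1-e)) factor
    balances confounders across arms; the 1/r factor generalizes
    completers to the whole target population. Propensities are taken
    as known here; the paper estimates them without pooling IPD."""
    num_t = den_t = num_c = den_c = 0.0
    for y, treated, e, r in records:
        if y is None:                  # endpoint missing: no contribution
            continue
        if treated:
            w = 1.0 / (e * r)
            num_t += w * y
            den_t += w
        else:
            w = 1.0 / ((1.0 - e) * r)
            num_c += w * y
            den_c += w
    return num_t / den_t - num_c / den_c
```

When observed outcomes are constant within arm, the weighted means reduce to those constants, so the estimator returns the arm difference exactly, which is a convenient sanity check.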
Comparative Analyses of Bioequivalence Assessment Methods for In Vitro Permeation Test Data
Leon S., Rantou E., Kim J., Choi S., Choi N.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: For topical, dermatological drug products, an in vitro option to determine bioequivalence (BE) between test and reference products is recommended. In particular, in vitro permeation test (IVPT) data analysis uses a reference‐scaled approach for two primary endpoints, cumulative penetration amount (AMT) and maximum flux (Jmax), which takes the within‐donor variability into consideration. In 2022, the Food and Drug Administration (FDA) published a draft IVPT guidance that includes statistical analysis methods for both balanced and unbalanced cases of IVPT study data. This work presents a comprehensive evaluation of various methodologies used to estimate critical parameters essential in assessing BE. Specifically, we investigate the performance of the FDA draft IVPT guidance approach alongside alternative empirical and model‐based methods utilizing mixed‐effects models. Our analyses include both simulated scenarios and real‐world studies. In simulated scenarios, empirical formulas consistently demonstrate robustness in approximating the true model, particularly in effectively addressing treatment–donor interactions. Conversely, the effectiveness of model‐based approaches heavily relies on precise model selection, which significantly influences their results. The research emphasizes the importance of accurate model selection in model‐based BE assessment methodologies. It sheds light on the advantages of empirical formulas, highlighting their reliability compared to model‐based approaches, and offers valuable implications for BE assessments. Our findings underscore the significance of robust methodologies and provide essential insights to advance their understanding and application in the assessment of BE in IVPT data analysis.
Sample Size Reestimation in Stochastic Curtailment Tests With Time‐to‐Events Outcome in the Case of Nonproportional Hazards Utilizing Two Weibull Distributions With Unknown Shape Parameters
Sharma P., Phadnis M.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT: Stochastic curtailment tests for Phase II two‐arm trials with time‐to‐event end points are traditionally performed using the log‐rank test. Recent advances in designing time‐to‐event trials have utilized the Weibull distribution with a known shape parameter estimated from historical studies. As sample size calculations depend on the value of this shape parameter, these methods either cannot be used or likely underperform/overperform when the natural variation around the point estimate is ignored. We demonstrate that when the magnitude of the Weibull shape parameters changes, unblinded interim information on the shape of the survival curves can be useful to enrich the final analysis for reestimation of the sample size. For such scenarios, we propose two Bayesian solutions to estimate the natural variations of the Weibull shape parameter. We implement these approaches under the framework of the newly proposed relative time method that allows nonproportional hazards and nonproportional time. We also demonstrate the sample size reestimation for the relative time method using three different approaches (internal pilot study approach, conditional power, and predictive power approach) at the interim stage of the trial. We demonstrate our methods using a hypothetical example and provide insights regarding the practical constraints for the proposed methods.
Bayesian Methods for Quality Tolerance Limit (QTL) Monitoring
Poythress J., Lee J., Takeda K., Liu J.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 1, doi.org, Abstract
ABSTRACT In alignment with the ICH guideline for Good Clinical Practice [ICH E6(R2)], quality tolerance limit (QTL) monitoring has become a standard component of risk‐based monitoring of clinical trials by sponsor companies. Parameters that are candidates for QTL monitoring are critical to participant safety and quality of trial results. Breaching the QTL of a given parameter could indicate systematic issues with the trial that could impact participant safety or compromise the reliability of trial results. Methods for QTL monitoring should detect potential QTL breaches as early as possible while limiting the rate of false alarms. Early detection allows for the implementation of remedial actions that can prevent a QTL breach at the end of the trial. We demonstrate that statistically based methods that account for the expected value and variability of the data generating process outperform simple methods based on fixed thresholds with respect to important operating characteristics. We also propose a Bayesian method for QTL monitoring and an extension that allows for the incorporation of partial information, demonstrating its potential to outperform frequentist methods originating from the statistical process control literature.
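A Bayesian QTL monitor of the kind described can be sketched with a conjugate Beta-Binomial model: flag a potential breach when the posterior probability that the true rate exceeds the QTL is high. All numbers below (QTL of 10%, prior, interim counts, 0.8 decision threshold) are hypothetical illustrations, not the paper's proposed method.

```python
import random

random.seed(7)

# Hypothetical QTL example: monitor the premature-discontinuation rate,
# with the QTL set at 10%. The Beta(1, 9) prior encodes an expected rate
# near 10% with little weight (about 10 participants' worth).
qtl, a0, b0 = 0.10, 1.0, 9.0

def breach_probability(events, n, draws=100_000):
    """Posterior P(true rate > QTL) under a conjugate Beta-Binomial model."""
    a, b = a0 + events, b0 + (n - events)
    hits = sum(random.betavariate(a, b) > qtl for _ in range(draws))
    return hits / draws

# Interim snapshot: 18 discontinuations among 120 randomized participants.
p_breach = breach_probability(events=18, n=120)
signal = p_breach > 0.8  # flag for remedial action when exceedance prob. is high
print(f"P(rate > QTL) = {p_breach:.3f}; signal = {signal}")
```

Because the posterior is exactly Beta, the Monte Carlo step could be replaced by a closed-form tail probability; sampling keeps the sketch dependency-free.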
A Personalized Dose‐Finding Algorithm Based on Adaptive Gaussian Process Regression
Park Y., Chang W.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT Dose‐finding studies play a crucial role in drug development by identifying the optimal dose(s) for later studies while considering tolerability. This not only saves time and effort in proceeding with Phase III trials but also improves efficacy. In an era of precision medicine, it is not ideal to assume patient homogeneity in dose‐finding studies as patients may respond differently to the drug. To address this, we propose a personalized dose‐finding algorithm that assigns patients to individualized optimal biological doses. Our design follows a two‐stage approach. Initially, patients are enrolled under broad eligibility criteria. Based on the Stage 1 data, we fit a regression model of toxicity and efficacy outcomes on dose and biomarkers to characterize treatment‐sensitive patients. In the second stage, we restrict the trial population to sensitive patients, apply a personalized dose allocation algorithm, and choose the recommended dose at the end of the trial. A simulation study shows that the proposed design reliably enriches the trial population, minimizes the number of failures, and yields superior operating characteristics compared to several existing dose‐finding designs in terms of both the percentage of correct selection and the number of patients treated at target dose(s).
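The Gaussian-process-regression backbone of such a design can be sketched compactly: fit a GP to Stage 1 dose-efficacy data and recommend the dose maximizing the posterior mean. This stdlib-Python sketch (hypothetical doses, a made-up true dose-response curve, RBF kernel with unit lengthscale) illustrates plain GP regression only; the paper's adaptive, biomarker-driven allocation and toxicity modeling are not reproduced here.

```python
import math
import random

random.seed(3)

def rbf(x1, x2, ls=1.0):
    # Squared-exponential (RBF) kernel with lengthscale ls.
    return math.exp(-0.5 * ((x1 - x2) / ls) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (stdlib-only linear solve).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical Stage 1 data: five doses with noisy efficacy scores from a
# made-up dose-response curve peaking near dose 3.5.
doses = [1.0, 2.0, 3.0, 4.0, 5.0]
true_eff = lambda d: 1.0 - (d - 3.5) ** 2 / 10.0
y = [true_eff(d) + random.gauss(0, 0.05) for d in doses]

# GP posterior mean: alpha = (K + sigma^2 I)^{-1} y, then mean(c) = k(c)·alpha.
K = [[rbf(a, b_) + (0.05 ** 2 if a == b_ else 0.0) for b_ in doses] for a in doses]
alpha = solve(K, y)
candidates = [1.0 + 0.1 * i for i in range(41)]  # grid from 1.0 to 5.0
post_mean = [sum(rbf(c, d) * a for d, a in zip(doses, alpha)) for c in candidates]
best = candidates[max(range(len(candidates)), key=lambda i: post_mean[i])]
print(f"recommended dose (posterior-mean maximizer): {best:.1f}")
```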
Strategy for Designing In Vivo Dose–Response Comparison Studies
Novick S., Zhang T.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 0, doi.org, Abstract
ABSTRACT In preclinical drug discovery, at the step of lead optimization of a compound, in vivo experimentation can differentiate several compounds in terms of efficacy and potency in a biological system of whole living organisms. For the lead optimization study, it may be desirable to implement a dose–response design so that compound comparisons can be made from nonlinear curves fitted to the data. A dose–response design requires more thought relative to a simpler study design, needing parameters for the number of doses, the dose values, and the sample size per dose. This tutorial illustrates how to calculate statistical power, choose doses, and determine sample size per dose for a comparison of two or more dose–response curves for a future in vivo study.
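The design calculation described in the tutorial can be approximated by Monte Carlo: pick doses and a per-dose sample size, simulate trials under an assumed pair of curves, and record how often the comparison rejects. This stdlib-Python sketch uses hypothetical Emax curves with a 2-fold potency shift and, for brevity, a crude mean-response z-test in place of fitting and comparing nonlinear curves.

```python
import math
import random
import statistics

random.seed(11)

def emax(dose, e0=0.0, emax_=1.0, ed50=1.0):
    # Hyperbolic Emax dose-response model.
    return e0 + emax_ * dose / (ed50 + dose)

# Hypothetical design inputs: doses, n per dose, residual SD, and a 2-fold
# potency shift between compounds A (ED50 = 1) and B (ED50 = 2).
doses, n_per_dose, sd, n_sim = [0.5, 1.0, 2.0, 4.0], 8, 0.3, 2000

def one_trial():
    # Stand-in analysis: two-sample z-test on responses pooled over doses
    # (a real analysis would fit and compare the Emax curves themselves).
    a = [emax(d, ed50=1.0) + random.gauss(0, sd) for d in doses for _ in range(n_per_dose)]
    b = [emax(d, ed50=2.0) + random.gauss(0, sd) for d in doses for _ in range(n_per_dose)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return abs(diff / se) > 2.0  # approx. two-sided 5% test

power = sum(one_trial() for _ in range(n_sim)) / n_sim
print(f"simulated power: {power:.2f}")
```

Re-running the loop over candidate values of `n_per_dose` or alternative dose grids turns this into the sample-size and dose-placement search the tutorial describes.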
Bayesian Hierarchical Models for Subgroup Analysis
Wang Y., Tu W., Koh W., Travis J., Abugov R., Hamilton K., Zheng M., Crackel R., Bonangelino P., Rothmann M.
Q1
Wiley
Pharmaceutical Statistics, 2024, citations: 2, doi.org, Abstract
ABSTRACT In conventional subgroup analyses, subgroup treatment effects are estimated using data from each subgroup separately without considering data from other subgroups in the same study. The subgroup treatment effects estimated this way may be heterogeneous, with high variability due to small sample sizes in some subgroups, and may differ substantially from the treatment effect in the overall population. A Bayesian hierarchical model (BHM) can be used to derive more precise, and less heterogeneous, estimates of subgroup treatment effects that are closer to the treatment effect in the overall population. BHM assumes exchangeability in treatment effect across subgroups after adjusting for effect modifiers and other relevant covariates. In this article, we will discuss the technical details for applying one‐way and multi‐way BHM using summary‐level statistics, and patient‐level data for subgroup analysis. Four case studies based on four new drug applications are used to illustrate the application of these models in subgroup analyses for continuous, dichotomous, time‐to‐event, and count endpoints.
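The shrinkage behavior of a one-way hierarchical model on summary-level statistics can be shown in a few lines. This stdlib-Python sketch uses invented subgroup estimates and standard errors, estimates the between-subgroup variance by the DerSimonian-Laird method of moments, and shrinks each subgroup toward the precision-weighted overall mean; it is an empirical-Bayes stand-in for the fully Bayesian model discussed in the article.

```python
# Hypothetical summary-level subgroup data: estimated treatment effects
# and their standard errors in four subgroups (illustrative numbers only).
effects = [0.60, 0.05, 0.70, 0.20]
ses = [0.15, 0.20, 0.25, 0.18]

# Precision weights and the fixed-effect (precision-weighted) overall mean.
w = [1 / s ** 2 for s in ses]
mu = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)

# Between-subgroup variance tau^2 via the DerSimonian-Laird moment estimator.
q = sum(wi * (ei - mu) ** 2 for wi, ei in zip(w, effects))  # Cochran's Q
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Shrunken subgroup effects: each raw estimate is pulled toward mu with
# strength set by its own precision relative to 1/tau^2.
shrunk = [
    (e / s ** 2 + mu / tau2) / (1 / s ** 2 + 1 / tau2) if tau2 > 0 else mu
    for e, s in zip(effects, ses)
]
print(f"overall mean = {mu:.3f}, tau^2 = {tau2:.4f}")
print("shrunken subgroup effects:", [round(x, 3) for x in shrunk])
```

Each shrunken estimate lies between its raw value and the overall mean, which is exactly the "less heterogeneous, closer to the overall effect" behavior the abstract describes.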