Presentation Information
[SS09-05]Machine Learning-Driven Risk Stratification for Early Graft Loss in Living Donor Liver Transplantation
*Raiki Yoshimura1, Takeru Matsuura1, Shingo Iwami1 (1. Nagoya University (Japan))
Keywords:
Liver Transplantation, Graft Loss, Machine Learning, Prognosis
Liver transplantation remains the primary life-saving intervention for patients with end-stage liver disease. In recent years, living donor liver transplantation (LDLT) has gained prominence due to its advantages over deceased donor liver transplantation (DDLT), including shorter waiting times and improved graft quality. However, graft failure continues to pose a critical challenge, as some recipients receive grafts that are either suboptimal in quality or inadequately sized for their physiological needs. Addressing this issue requires a deeper understanding of the factors contributing to graft loss.
While various predictive models have been developed, most focus on DDLT, leaving a gap in knowledge regarding LDLT outcomes. In this study, we retrospectively analyzed clinical data from 748 LDLT recipients and leveraged machine learning techniques to enhance the prediction of early graft loss (occurring within 180 days postoperatively). Compared to conventional models, our approach demonstrated superior predictive performance.
Our machine learning model enabled the stratification of a highly heterogeneous patient cohort into five distinct groups. To further refine our analysis, we categorized patients based on survival time into three overarching groups: G1, G2, and G3+G4+G5. Notably, we identified G2 as a unique subpopulation that, despite similarities to the early graft loss group (G1), exhibited distinct survival trajectories.
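The abstract does not specify which clustering method or clinical features produced the five groups; purely as an illustration, a stratification step of this kind could resemble a k-means-style grouping over postoperative features. All feature names and data below are hypothetical placeholders, not the study's actual variables:

```python
import random

random.seed(0)

def kmeans(points, k, iters=50):
    """Minimal k-means over n-D feature vectors (illustrative sketch only)."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid by squared distance
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # Recompute centroids as cluster means (keep old centroid if empty)
        centroids = [
            tuple(sum(vals) / len(c) for vals in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical recipient features, e.g. (bilirubin trend, graft-to-recipient
# weight ratio) -- synthetic stand-ins for the cohort of 748 LDLT recipients
patients = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(748)]
centroids, groups = kmeans(patients, k=5)
print([len(g) for g in groups])  # sizes of the five groups G1..G5
```

The real model presumably uses richer clinical inputs and a tuned algorithm; this sketch only shows the shape of a five-way stratification.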
To enhance clinical applicability, we introduced a hierarchical prediction framework that allows for early identification of high-risk patients using data available within the first 30 days postoperatively. This methodology provides a novel means of distinguishing between at-risk populations, particularly those in the high-risk (G1) and intermediate-risk (G2) categories. By facilitating early intervention strategies—such as reconsideration of DDLT, identifying alternative living donors, or preparing for re-transplantation—our findings contribute to a data-driven, model-informed approach to improving liver transplant success rates and optimizing patient care.
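A hierarchical framework of the kind described above can be pictured as a two-stage screen: a first stage flags at-risk recipients (G1 or G2) from 30-day data, and a second stage separates high-risk G1 from intermediate-risk G2. The rules, thresholds, and feature names below are invented for illustration and are not the study's actual model:

```python
def stage1_at_risk(day30):
    """Stage 1: flag recipients at risk (G1 or G2) from 30-day data.
    Feature name and cutoff are hypothetical placeholders."""
    return day30["bilirubin_day30"] > 5.0

def stage2_high_risk(day30):
    """Stage 2: among at-risk recipients, separate high-risk (G1)
    from intermediate-risk (G2). Again a hypothetical rule."""
    return day30["inr_day30"] > 2.0

def classify(day30):
    """Apply the two stages in order to yield a risk category."""
    if not stage1_at_risk(day30):
        return "G3+"  # lower-risk survival trajectories
    return "G1" if stage2_high_risk(day30) else "G2"

print(classify({"bilirubin_day30": 8.2, "inr_day30": 2.4}))  # G1
print(classify({"bilirubin_day30": 6.0, "inr_day30": 1.1}))  # G2
print(classify({"bilirubin_day30": 1.0, "inr_day30": 1.0}))  # G3+
```

In practice each stage would be a trained classifier rather than a single threshold, but the cascade structure is what allows early separation of G1 from G2 using only data available by day 30.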