Trajectory Inspection: A Method for Iterative Clinician-Driven Design of Reinforcement Learning Studies

Abstract

Reinforcement learning (RL) has the potential to significantly improve clinical decision making. However, treatment policies learned via RL from observational data are sensitive to subtle choices in study design. We highlight a simple approach, trajectory inspection, to bring clinicians into an iterative design process for model-based RL studies. We identify where the model recommends unexpectedly aggressive treatments or expects surprisingly positive outcomes from its recommendations. Then, we examine clinical trajectories simulated with the learned model and policy alongside the actual hospital course. Applying this approach to recent work on RL for sepsis management, we uncover a model bias towards discharge, a preference for high vasopressor doses that may be linked to small sample sizes, and clinically implausible expectations of discharge without weaning off vasopressors. We hope that iterations of detecting and addressing the issues unearthed by our method will result in RL policies that inspire more confidence in deployment.
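The inspection loop the abstract describes can be sketched in a few lines: flag states where the learned policy is markedly more aggressive than observed clinician behavior, then roll out the learned model under that policy so clinicians can compare the simulated course against the actual one. The sketch below uses a toy tabular MDP with dose-ordered actions; all names, sizes, and the aggressiveness threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned components (hypothetical, for illustration):
# a small discrete-state MDP whose actions are ordered by dose aggressiveness.
N_STATES, N_ACTIONS = 5, 3

# Hypothetical learned transition model: P[s, a] is a distribution over next states.
P = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))
# Hypothetical learned RL policy and observed clinician policy (action per state).
rl_policy = np.array([2, 0, 2, 1, 2])
clinician_policy = np.array([0, 0, 1, 1, 0])

def flag_aggressive_states(rl, clin, gap=2):
    """Flag states where the RL policy is far more aggressive than clinicians."""
    return [s for s in range(len(rl)) if rl[s] - clin[s] >= gap]

def simulate_trajectory(model, policy, s0, horizon=10, rng=rng):
    """Roll out the learned model under the learned policy from a flagged state."""
    states = [s0]
    for _ in range(horizon):
        a = policy[states[-1]]
        states.append(int(rng.choice(len(policy), p=model[states[-1], a])))
    return states

# Inspect: simulate from each flagged state, then review alongside real records.
for s in flag_aggressive_states(rl_policy, clinician_policy):
    print(f"state {s}: simulated course {simulate_trajectory(P, rl_policy, s)}")
```

In practice the flagged trajectories would be rendered next to the patient's actual hospital course so that clinicians can judge whether the model's expectations (e.g., discharge without weaning off vasopressors) are plausible.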

Publication
AMIA 2021 Virtual Informatics Summit
Christina X Ji
PhD Student

Christina is interested in applying machine learning to healthcare, detecting distribution shift and developing transfer learning algorithms, and evaluating treatments and reinforcement learning policies with causal inference.

Michael Oberst
PhD Student

Postdoc at CMU; incoming Assistant Professor at Johns Hopkins

Sanjat Kanjilal
Clinical Fellow

Lecturer, Harvard Pilgrim Health Care Institute

David Sontag
Professor of EECS

My research focuses on advancing machine learning and artificial intelligence, and using these to transform health care.
