Characterizing Human Reward-based Decision-making Behavior with Reinforcement Learning Models
Presented by Xingche Guo, Columbia University
Tuesday, February 6, 2024
3:30 PM-4:30 PM ET
AUST 105
Webex Meeting Link
Coffee will be served at 3:00 PM in the Noether Lounge (AUST 326)
Major depressive disorder (MDD) is one of the leading causes of disability-adjusted life years. Emerging evidence indicates that reward processing abnormalities may serve as a behavioral marker for MDD. To measure reward processing, patients perform computer-based behavioral tasks that involve making choices based on different stimuli, such as rewards and penalties.
Reinforcement learning (RL) models are widely used to characterize how patients make decisions in reward-based behavioral tasks. To account for the nonlinearity of the decision process, we propose a semiparametric RL (Semi-RL) approach that models the RL parameters with nonparametric functions. Between-subject heterogeneity is accommodated by incorporating random effects. We provide a computationally efficient solution to the challenges posed by the nonparametric functions and random effects, along with theoretical results establishing the consistency and asymptotic normality of the estimated parameters of interest. In the real data analysis, we show that individuals with MDD exhibit lower reward sensitivity than healthy subjects, and that reward sensitivity takes a nonlinear form with floor and ceiling effects. This finding suggests that decision-making switches among multiple learning strategies.
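For readers unfamiliar with this class of models, the following is a minimal Python sketch of the standard parametric RL building block for a two-option reward task: a Q-learning value update combined with a softmax choice rule whose inverse temperature plays the role of reward sensitivity. The function name, parameter values, and task structure here are illustrative assumptions, not the speaker's implementation; the Semi-RL approach described in the talk replaces the fixed sensitivity with a nonparametric function and adds subject-level random effects.

```python
# Illustrative sketch only: a generic Q-learning + softmax model of a
# two-choice probabilistic reward task (not the speaker's Semi-RL method).
import numpy as np

rng = np.random.default_rng(0)

def simulate_subject(n_trials=200, alpha=0.2, beta=3.0, reward_prob=(0.3, 0.7)):
    """Simulate choices from a Q-learning agent with a softmax choice rule.

    alpha : learning rate in the value update
    beta  : reward sensitivity (inverse temperature) in the softmax
    """
    q = np.zeros(2)                                # action values for the two options
    choices, rewards = [], []
    for _ in range(n_trials):
        # softmax probability of choosing option 1, scaled by reward sensitivity
        p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))
        a = int(rng.random() < p1)                 # sampled choice
        r = float(rng.random() < reward_prob[a])   # probabilistic binary reward
        q[a] += alpha * (r - q[a])                 # prediction-error update
        choices.append(a)
        rewards.append(r)
    return np.array(choices), np.array(rewards)

choices, rewards = simulate_subject()
print("proportion choosing the richer option:", choices.mean())
```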
In my recent work, I propose an RL with hidden Markov models (RL-HMM) framework, which allows switching between two distinct learning strategies: the RL model and random choices. A computational algorithm based on the EM algorithm is briefly introduced. Using the RL-HMM, we demonstrate that individuals with MDD have greater difficulty concentrating during tasks than healthy subjects. Finally, a brief overview of brain-behavior associations is provided, exploring the potential for integrating behavioral data with neuroimaging modalities such as EEG and fMRI.
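To give a flavor of the RL-HMM idea, the sketch below implements a forward recursion for a two-state hidden Markov model in which each trial's choice is generated either by an engaged RL/softmax policy or by random (50/50) choice. The transition matrix, initial distribution, and function name are hypothetical placeholders rather than the speaker's specification, and a full EM fit would add a backward pass and M-step parameter updates.

```python
# Illustrative sketch only: forward recursion for a two-state HMM over
# "engaged" (RL policy) vs. "random choice" trials.
import numpy as np

def engaged_probabilities(choice_probs_rl,
                          trans=np.array([[0.95, 0.05],
                                          [0.10, 0.90]]),
                          init=np.array([0.9, 0.1])):
    """choice_probs_rl: length-T array of the RL model's probability of the
    observed choice on each trial. State 0 = engaged (RL), state 1 = random."""
    T = len(choice_probs_rl)
    # emission likelihoods: RL likelihood if engaged, 0.5 if choosing at random
    emit = np.column_stack([choice_probs_rl, np.full(T, 0.5)])
    alpha = np.zeros((T, 2))               # scaled forward probabilities
    alpha[0] = init * emit[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ trans) * emit[t]
        alpha[t] /= alpha[t].sum()         # normalize to avoid underflow
    return alpha[:, 0]                     # filtered P(engaged) at each trial

# Example: mostly-accurate RL predictions with a noisier stretch in the middle
probs = np.concatenate([np.full(50, 0.8), np.full(20, 0.5), np.full(50, 0.8)])
print(engaged_probabilities(probs)[45:55])
```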
Speaker Bio:
Xingche Guo is a postdoctoral researcher in the Department of Biostatistics at Columbia University. He obtained his PhD in Statistics from Iowa State University. Guo's research interests lie in the development of innovative and interpretable statistical/machine learning methods and theory, equipped with computationally efficient algorithms, to tackle real-world challenges in areas including mental health, neuroscience, agriculture, and environmental health. He is actively engaged in developing a broad range of methods, including semiparametric reinforcement learning.