Publications 2021

  • Model-based and model-free control predicts alcohol consumption developmental trajectory in young adults - a three-year prospective study
  • Chen H, Belanger MJ, Mojtahedzadeh N, Nebe S, Kuitunen-Paul S, Sebold M, Garbusow M, Huys QJM, Heinz A, Rapp MA and Smolka MN
  • Biological Psychiatry (2021) In Press
  • Background: A shift from goal-directed toward habitual control has been associated with alcohol dependence. Whether such a shift predisposes individuals to pathological drinking is not yet clear. We investigated how goal-directed and habitual control at age 18 predict alcohol use trajectories over the course of three years. Methods: Goal-directed and habitual control, as informed by model-based and model-free learning, were assessed with a two-step sequential decision-making task during fMRI in 146 healthy 18-year-old males. Two key drinking variables were used to model the three-year alcohol use developmental trajectory: a consumption score from the Alcohol Use Disorders Identification Test (AUDIT-C; assessed every six months) and a binge drinking score (grams of alcohol per occasion; assessed every year). We applied a latent growth curve model to examine how model-based and model-free control predicted the drinking trajectory. Results: Drinking behavior was best characterized by a linear trajectory. Model-based behavioral control was negatively associated with the development of the binge drinking score; model-free reward prediction error (RPE) BOLD signals in the ventromedial prefrontal cortex and the ventral striatum predicted a higher starting point and a steeper increase of the consumption score over time, respectively. Conclusions: Model-based behavioral control was associated with the binge drinking trajectory, while the model-free RPE signal was closely linked to the development of the consumption score. These findings support the idea that a shift from model-based to model-free control may be an important individual vulnerability predisposing people to hazardous drinking behavior.

Key Publications

  • Computational mechanisms of effort and reward decisions in depression and their relationship to relapse after antidepressant discontinuation
  • Berwian IM, Wenzel J, Collins AGE, Seifritz E, Stephan KE, Walter H, Huys QJM
  • JAMA Psychiatry (2020) 77(5):513-522
  • IMPORTANCE Nearly 1 in 3 patients with major depressive disorder who respond to antidepressants relapse within 6 months of treatment discontinuation. No predictors of relapse exist to guide clinical decision-making in this scenario. OBJECTIVES To establish whether the decision to invest effort for rewards represents a persistent depression process after remission, predicts relapse after remission, and is affected by antidepressant discontinuation. DESIGN, SETTING, AND PARTICIPANTS This longitudinal randomized observational prognostic study in a Swiss and German university setting collected data from July 1, 2015, to January 31, 2019, from 66 healthy controls and 123 patients in remission from major depressive disorder in response to antidepressants prior to and after discontinuation. Study recruitment took place until January 2018. EXPOSURE Discontinuation of antidepressants. MAIN OUTCOMES AND MEASURES Relapse during the 6 months after discontinuation. Choice and decision times on a task requiring participants to choose how much effort to exert for various amounts of reward, and the mechanisms identified through parameters of a computational model. RESULTS A total of 123 patients (mean [SD] age, 34.5 [11.2] years; 94 women [76%]) and 66 healthy controls (mean [SD] age, 34.6 [11.0] years; 49 women [74%]) were recruited. In the main subsample, mean (SD) decision times were slower for patients (n = 74) compared with controls (n = 34) (1.77 [0.38] seconds vs 1.61 [0.37] seconds; Cohen d = 0.52; P = .02), particularly for those who later relapsed after discontinuation of antidepressants (n = 21) compared with those who did not relapse (n = 39) (1.95 [0.40] seconds vs 1.67 [0.34] seconds; Cohen d = 0.77; P < .001). This slower decision time predicted relapse (accuracy = 0.66; P = .007). Patients invested less effort than healthy controls for rewards (F(1,98) = 33.970; P < .001). Computational modeling identified a mean (SD) deviation from standard drift-diffusion models that was more prominent for patients than controls (patients, 0.67 [1.56]; controls, -0.71 [1.93]; Cohen d = 0.82; P < .001). Patients also showed higher mean (SD) effort sensitivity than controls (patients, 0.31 [0.92]; controls, -0.08 [1.03]; Cohen d = 0.51; P = .05). Relapsers differed from nonrelapsers in terms of the evidence required to make a decision for the low-effort choice (mean [SD]: relapsers, 1.36 [0.35]; nonrelapsers, 1.17 [0.26]; Cohen d = 0.65; P = .02). Group differences generally did not reach significance in the smaller replication sample (27 patients and 21 controls), but decision time prediction models from the main sample generalized to the replication sample (validation accuracy = 0.71; P = .03). CONCLUSIONS AND RELEVANCE This study found that the decision to invest effort was associated with prospective relapse risk after antidepressant discontinuation and may represent a persistent disease process in asymptomatic remitted major depressive disorder. Markers based on effort-related decision-making could potentially inform clinical decisions associated with antidepressant discontinuation.
  • Dissociating neural learning signals in human sign- and goal-trackers
  • Schad DJ, Rapp MA, Garbusow M, Nebe S, Sebold M, Obst E, Sommer C, Deserno L, Rabovsky M, Friedel E, Romanczuk-Seiferth N, Wittchen H-U, Zimmermann US, Walter H, Sterzer P, Smolka MN, Schlagenhauf F, Heinz A, Dayan P, Huys QJM
  • Nat. Hum. Behav. (2020) 4:201-214
  • Individuals differ in how they learn from experience. In Pavlovian conditioning paradigms, where cues predict reinforcer delivery at a different goal location, some animals (so-called sign-trackers) come to approach the cue, whereas others, called goal-trackers, approach the goal. In sign-trackers, model-free phasic dopaminergic reward prediction errors underlie learning, which renders stimuli 'wanted'. Goal-trackers do not rely on dopamine for learning and are thought to use model-based learning. We demonstrate this double dissociation in male humans using eye-tracking, pupillometry and fMRI informed by computational models. We show that only sign-trackers exhibit a neural reward prediction error signal. Only for them are gaze and pupil dilation guided by model-free value. Goal-trackers exhibit a stronger model-based neural state prediction error signal. Only for them do model-based constructs determine gaze and pupil dilation. As sign-tracking may be a vulnerability factor for impulsive and addictive behavior, these findings have implications for mental health.
  • A formal valuation framework for emotions and their control
  • Huys QJM and Renz D
  • Biological Psychiatry (2017) 82:413-420
  • Computational psychiatry attempts to apply mathematical and computational techniques to help improve psychiatric care. Here, we consider formal valuation accounts of emotion. The flexibility of emotional responses and the nature of appraisals suggest the need for a model-based framework for emotions. Resource limitations make exhaustive model-based valuation impossible and require strategies to apportion cognitive resources adaptively. We argue that emotions can implement such approximations by restricting the range of behaviours and states considered. We consider the processes that guide the deployment of these emotional approximations, distinguishing between innate, model-free, heuristic and model-based controllers. A focus on complex model-based decisions reveals the necessity of strategies for dealing with the complexity of the problems. Emotions may provide such approximations, and this framework may provide a principled approach to examining them.
  • A Roadmap for the Development of Applied Computational Psychiatry
  • Paulus MP, Huys QJM and Maia T
  • Biological Psychiatry: Cognitive Neuroscience and Neuroimaging (2016) 1(5):386-392
  • Background: Computational psychiatry is a burgeoning field that utilizes mathematical approaches to investigate psychiatric disorders, derive quantitative predictions, and integrate data across multiple levels of description. Computational psychiatry has already led to many new insights into the neurobehavioral mechanisms that underlie several psychiatric disorders, but its usefulness from a clinical standpoint is only now starting to be considered. Methods: Examples of computational psychiatry are highlighted, and a phase-based pipeline for the development of clinical computational-psychiatry applications is proposed, similar to the phase-based pipeline used in drug development. It is proposed that each phase has unique endpoints and deliverables, which will be important milestones to move tasks, procedures, computational models, and algorithms from the laboratory to clinical practice. Results: Application of computational approaches should be tested on healthy volunteers in Phase I, transitioned to target populations in Phase IB and Phase IIA, and thoroughly evaluated using randomized clinical trials in Phase IIB and Phase III. Successful completion of these phases should be the basis of determining whether computational models are useful tools for prognosis, diagnosis, or treatment of psychiatric patients. Conclusions: A new type of infrastructure will be necessary to implement the proposed pipeline. This infrastructure should consist of groups of investigators with diverse backgrounds collaborating to make computational psychiatry relevant for the clinic.

  • Computational psychiatry as a bridge from neuroscience to clinical applications
  • Huys QJM*, Maia T* and Frank MJ
  • Nature Neuroscience (2016) 19(3):404-413
    Commentary in Scientific American
  • Translating advances in neuroscience into benefits for patients with mental illness presents enormous challenges because it involves both the most complex organ, the brain, and its interaction with a similarly complex environment. Dealing with such complexities demands powerful techniques. Computational psychiatry combines multiple levels and types of computation with multiple types of data in an effort to improve understanding, prediction, and treatment of mental illness. Computational psychiatry, broadly defined, encompasses two complementary approaches: data-driven and theory-driven. Data-driven approaches apply machine-learning methods to high-dimensional data to improve classification of disease, predict treatment outcomes, or improve treatment selection. These approaches are generally agnostic as to the underlying mechanisms. Theory-driven approaches, in contrast, use models that instantiate prior knowledge of, or explicit hypotheses about, such mechanisms, possibly at multiple levels of analysis and abstraction. We review recent advances in both approaches, with an emphasis on clinical applications, and highlight the utility of combining them.

  • The interplay of approximate planning strategies
  • Huys QJM, Lally N, Faulkner P, Eshel N, Seifritz E, Gershman SJ, Dayan P and Roiser JP
  • PNAS (2015) 112(10):3098-3103
    Commentary by Daniel, Schuck & Niv
  • Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options."