TY - JOUR
T1 - Credit Assignment during Movement Reinforcement Learning
AU - Dam, Gregory
AU - Kording, Konrad
AU - Wei, Kunlin
N1 - Copyright:
Copyright 2013 Elsevier B.V., All rights reserved.
PY - 2013/2/8
Y1 - 2013/2/8
N2 - We often need to learn how to move based on a single performance measure that reflects the overall success of our movements. However, movements have many properties, such as their trajectories, speeds and timing of end-points; thus the brain needs to decide which properties of movements should be improved, i.e., it needs to solve the credit assignment problem. Currently, little is known about how humans solve credit assignment problems in the context of reinforcement learning. Here we tested how human participants solve such problems during a trajectory-learning task. Without an explicitly defined target movement, participants made hand reaches and received monetary rewards as feedback on a trial-by-trial basis. The curvature and direction of the attempted reach trajectories determined the monetary rewards received in a manner that could be manipulated experimentally. Based on the history of action-reward pairs, participants quickly solved the credit assignment problem and learned the implicit payoff function. A Bayesian credit-assignment model with built-in forgetting accurately predicts their trial-by-trial learning.
UR - http://www.scopus.com/inward/record.url?scp=84873620063&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84873620063&partnerID=8YFLogxK
U2 - 10.1371/journal.pone.0055352
DO - 10.1371/journal.pone.0055352
M3 - Article
C2 - 23408972
AN - SCOPUS:84873620063
SN - 1932-6203
VL - 8
JO - PLoS ONE
JF - PLoS ONE
IS - 2
M1 - e55352
ER -