Rewards make us come back for more. We need them for survival, we use them to guide behavioral choices that maximize them, and they make us feel good.
Public perception associates rewards primarily with happiness and special gratification, but behavioral research suggests wider functions. There are various definitions of reward, and this article suggests that reward has a wide spectrum of functions compatible with psychological animal learning theory and economic decision-making. Rewards are objects, events, situations or activities that acquire positive motivational properties from internal brain processes. They have the potential to (1) increase the probability and intensity of behavioral actions leading to such objects (learning, also called positive reinforcement), (2) generate approach and consummatory behavior and constitute outcomes of economic decision-making, and (3) induce subjective feelings of pleasure and hedonia. Rewards are of crucial importance for individual and gene survival and support such elementary processes as drinking, eating and reproduction. Largely similar behavioral processes are engaged by higher-order rewards such as money, novelty, and cognitive and social rewards. The basic reward objects are polysensory and do not engage specialized reward receptors; the brain extracts the reward information from visual, auditory, somatosensory, olfactory and other sensory inputs. The identification of higher-order rewards depends on additional cognitive processes. Thus rewards are defined not by the physics and chemistry of their inputs but by the behavioral reactions they induce. This article describes the key behavioral functions of rewards.
The laws of mechanics, optics, acoustics and biochemistry define the key functions of primary sensory systems in the brain. The dedicated physical and chemical receptors of these systems translate environmental energy and information into neural language. By contrast, there are no dedicated receptors for reward; the information enters the brain through the touch, taste, visual and auditory receptors of the primary sensory systems. The functions of rewards therefore cannot be derived entirely from the physics and chemistry of input events; they are based primarily on behavioral effects. Thus the investigation of reward functions requires behavioral theories that can conceptualize the different effects of rewards on behavior. Animal learning theory and economic decision theories provide coherent frameworks for the investigation of neural reward mechanisms. The central tenets of these theories are based on observable behavior and thus resemble, superficially, the behaviorist approach, although they also include mental states of representation and prediction.
Rewards induce changes in observable behavior. They serve as positive reinforcers by increasing the frequency of the behavior that results in reward.
Instrumental conditioning allows subjects to influence their environment and determine their rate of reward. However, there is general agreement that the stimuli used in instrumental conditioning have become reward predictors through Pavlovian learning.
Associative learning depends crucially on the discrepancy between the occurrence of a reward and its prediction. The importance of such prediction errors derives from Kamin's blocking effect (1969), which demonstrates that a fully predicted reward does not contribute to the learning of a stimulus or action, even when it has been repeatedly paired with that stimulus or action. This is conceptualized in the associative Rescorla-Wagner learning rule (Rescorla & Wagner 1972), according to which learning advances only to the extent that a reinforcer is unpredicted, and slows progressively as the reinforcer becomes more predicted (Figure 3). The omission of a predicted reinforcer reduces the strength of the conditioned stimulus and produces extinction of behavior. So-called attentional learning rules additionally relate the capacity to learn (associability) in certain situations to the degree of attention evoked by the conditioned stimulus or reward (Mackintosh 1975, Pearce & Hall 1980).
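To make the error-driven nature of this rule concrete, here is a minimal Python sketch (the function names and parameter values are illustrative assumptions, not from the original papers) in which all stimuli present on a trial share a single prediction error; pretraining one stimulus then blocks learning about a stimulus added later:

```python
# Minimal sketch of the Rescorla-Wagner rule: associative strengths are
# updated by a shared prediction error (reward received minus reward
# predicted by all stimuli present on the trial).

def rescorla_wagner_trial(V, present, rewarded, alpha=0.1, lam=1.0):
    """Run one conditioning trial, updating strengths in V in place.
    V: dict mapping stimulus name -> associative strength
    present: list of stimuli shown on this trial
    rewarded: whether the reinforcer is delivered (asymptote lam) or omitted
    alpha: learning rate; lam: asymptote of learning set by the reward
    """
    target = lam if rewarded else 0.0
    error = target - sum(V[s] for s in present)  # prediction error
    for s in present:
        V[s] += alpha * error                    # same error drives all stimuli
    return error

# Kamin's blocking effect: pretrain A until it fully predicts the reward,
# then pair the compound A+B with the same reward.
V = {"A": 0.0, "B": 0.0}
for _ in range(100):
    rescorla_wagner_trial(V, ["A"], rewarded=True)
for _ in range(100):
    rescorla_wagner_trial(V, ["A", "B"], rewarded=True)
print(V)  # A is near 1.0; B stays near 0.0 because the reward was predicted
```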
Rewards elicit approach and consummatory behavior and serve as incentives by attracting goal pursuit. This is due to the objects being labelled with appetitive value through innate mechanisms (primary rewards) or, in most cases, through classical conditioning, after which these objects constitute, strictly speaking, conditioned reinforcers (for learning) or incentives (for action) (Wise 2002). Nutritional rewards derive their value in addition from hunger and thirst states (drive); satiation of the animal reduces the value of these objects and consequently the behavioral reactions they evoke.
Conditioned, reward-predicting stimuli induce approach behavior towards the reward which is less specific for the reward object than the consummatory behavior. In Pavlovian or classical conditioning, subjects often direct towards the reward-predicting stimulus non-consummatory behavioral reactions that normally occur only after the primary reward itself. These reactions are not required to obtain the reward but may increase the chance of consuming it. Thus Pavlovian conditioning involves the transfer of a part of the behavioral response from the primary reward to the conditioned stimulus.
In instrumental conditioning, the action becomes associated with reward and thus acquires a value. Decision-making mechanisms can be based on the action values of the different options (Sutton & Barto 1998). Furthermore, a reward becomes a goal for instrumental action if, at the time of the action, there is a representation of the reward and of the contingency (dependency) of the reward on that action (Dickinson & Balleine 1994).
When more than one option is available, animals show preferences for specific options, expressed as the probability of choosing one option over all others. Choices are crucially based on predictions of outcomes. Without predictions agents can only guess, as they do not know what they will get when making the choice. This puts the Pavlovian learning of reward predictors into a key role for decision-making. The reward function in decision-making is thus indirect: rewards serve for predictive Pavlovian learning, and the acquired prediction directs the choice. Reward itself is the outcome of the overt choice, and its evaluation is used to update the prediction used for the next decision.
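A minimal sketch of this loop, assuming a simple two-option task (the softmax temperature and learning rate below are arbitrary illustrations), shows how learned predictions direct choices and how the experienced outcome then updates the prediction:

```python
import math
import random

def softmax_choice(values, temperature=0.2):
    """Pick an option with probability proportional to exp(value / T)."""
    weights = [math.exp(v / temperature) for v in values]
    r = random.uniform(0.0, sum(weights))
    for option, w in enumerate(weights):
        r -= w
        if r <= 0.0:
            return option
    return len(weights) - 1

reward_probability = [0.8, 0.2]  # hypothetical task: option 0 pays off more
values = [0.0, 0.0]              # learned reward predictions per option
alpha = 0.1                      # learning rate

for _ in range(1000):
    choice = softmax_choice(values)                      # prediction -> choice
    reward = 1.0 if random.random() < reward_probability[choice] else 0.0
    values[choice] += alpha * (reward - values[choice])  # outcome -> update

print(values)  # predictions approach the true payoff probabilities
```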
The possible differences between reward and positive reinforcement become apparent in approach behavior and decision-making. Once established, these behaviors are driven by the attractive properties of rewards but are not, strictly speaking, identical to the strengthening of behavior associated with reinforcement. The strengthening of behavior that is likely to take place in these situations (or the prevention of extinction) is a separate phenomenon that is adequately described by positive reinforcement.
Decision mechanisms need to maximize the outcome of choices by comparing the values of all available options and choosing the option with the highest value. Values of the same kind of reward can be compared on the basis of their probability distributions. The expected value (EV) of a probability distribution denotes the sum of each reward magnitude weighted by its probability and provides a single numeric value for the outcome. However, additional factors play important roles, and outcomes are measured by the utility they have for the individual decision maker. Utilities are assessed from preferences in overt choice behavior, resulting in the quantification of outcomes as a single scalar variable: \[ \mbox{Expected Utility} = \sum_i u(x_i)\, p_i \] where \(u\) is the utility function, \(x_i\) are the reward magnitudes and \(p_i\) their probabilities (Von Neumann & Morgenstern 1944).
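A short numerical sketch (the gamble and the concave utility function u(x) = sqrt(x) are illustrative assumptions) shows how expected utility can diverge from expected value:

```python
# Hypothetical gamble: 50% chance of 10 reward units, 50% chance of nothing.
outcomes = [(10.0, 0.5), (0.0, 0.5)]

expected_value = sum(x * p for x, p in outcomes)       # EV = 5.0

def u(x):
    return x ** 0.5   # assumed concave utility, a stand-in for risk aversion

expected_utility = sum(u(x) * p for x, p in outcomes)  # EU ~= 1.58

# A certain reward of 5 units has utility u(5) ~= 2.24, which exceeds the
# gamble's EU, so this chooser prefers the safe option despite equal EV.
print(expected_value, expected_utility, u(5.0))
```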
A major factor influencing utility is uncertainty. In risk-averse subjects, uncertainty reduces the utility of a reward (a reward has less value for me if I am afraid that I won't get it), whereas risk seekers find higher utility in an uncertain reward (I like the uncertainty and thus prefer risky rewards over safe ones), all other reward parameters being equal (Figure 1). The influence of uncertainty on outcome utility is conceptualized by the Taylor series expansion of expected utility (Huang & Litzenberger 1988).
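In outline, the expansion works as follows (a standard derivation, sketched here for a utility function \(u\) that is twice differentiable around the expected value \(\mu\), with outcome variance \(\sigma^2\)): \[ E[u(x)] \approx u(\mu) + u'(\mu)\,E[x-\mu] + \tfrac{1}{2}\,u''(\mu)\,E[(x-\mu)^2] = u(\mu) + \tfrac{1}{2}\,u''(\mu)\,\sigma^2 \] since \(E[x-\mu]=0\). For a concave (risk-averse) utility, \(u''(\mu)<0\), so the variance term lowers expected utility below that of a safe reward of equal expected value; for a convex (risk-seeking) utility it raises it.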
Another major factor influencing the valuation of rewards is the temporal delay of the reward following a reward-predicting stimulus or an instrumental action. Later rewards lose their utility in a hyperbolically or exponentially decaying fashion (temporal discounting; Loewenstein & Prelec 1992; Figure 2). Subjects usually prefer earlier over later rewards, and learning slows as reward is delayed. If rewards occur with variable delays, the temporal discounting of reward utility combines with uncertainty and usually produces risk seeking, because the hyperbolically or exponentially flattening discounting function is convex.
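A small sketch (the discount rate k = 0.2 and the delays are arbitrary illustrations) compares the two standard discounting forms and shows the convex flattening the argument relies on:

```python
import math

def hyperbolic(amount, delay, k=0.2):
    """Hyperbolic discounting, as discussed by Loewenstein & Prelec (1992)."""
    return amount / (1.0 + k * delay)

def exponential(amount, delay, k=0.2):
    """Exponential discounting with a constant rate per unit of delay."""
    return amount * math.exp(-k * delay)

for delay in (0, 1, 5, 10, 20):
    print(delay,
          round(hyperbolic(10.0, delay), 2),
          round(exponential(10.0, delay), 2))
# Both curves drop steeply at short delays and flatten at long ones (convex).
# With variable delays, Jensen's inequality then makes the average discounted
# value exceed the value at the average delay, favoring risk seeking.
```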
Subjective feelings of pleasure and the resulting positive emotion represent key functions of rewards. It is quite likely that the pleasure derived from an object, event, situation or activity is sufficient to produce a positive reinforcing effect on behavior (what I got makes me feel good, and therefore I will repeat the action that produced the pleasure). However, it is unclear to what extent pleasure is a necessary condition for objects to be reinforcing, and not all reward objects may induce noticeable pleasure. Indeed, recent theories propose a distinction between an unconscious 'wanting' irrespective of pleasure and a pleasurable 'liking' of rewards (Berridge & Robinson 2003). Alternatively, pleasure may simply be an epiphenomenon (my behavior gets reinforced and, in addition, I feel good because of the outcome). Animal experiments that assume subjective states related to reward run into obvious problems because of the lack of a common language. Other than for specific investigations of hedonic mechanisms, the issue may often be left aside when studying the neural mechanisms of reward in controlled behavioral neurophysiological experiments on animals.
Punishers have the opposite valence to rewards: they induce withdrawal behavior and act as negative reinforcers by increasing the behavior that results in decreasing the aversive outcome. Avoidance can be passive, when subjects increasingly refrain from doing something that is associated with a punisher, or active, when they increase an instrumental response that is likely to reduce the impact of a punisher. Punishers induce negative emotional states of anger, fear and panic.
Rewards share a number of stimulus components with other behaviorally relevant objects. Rewards come in different sensory submodalities and have specific sensory stimulus attributes, such as form, color and spatial position. Rewards induce general behavioral activation and alerting and stimulus-driven attentional reactions that are also produced by punishers and by physically salient stimuli such as novel, large or rapidly moving objects.
The behavioral reactions to sensory, activating, alerting or attentional stimulus attributes can be measured by preferences in free-choice behavior, movement responses in reaction time tasks, and vegetative responses such as skin conductance, heart rate, pupillary diameter and salivation. Reactions to these attributes can be distinguished from reactions to rewarding properties by testing non-rewarding stimuli with comparable sensory properties at different spatial positions. Such discriminations can be difficult when the objects also have rewarding components, as in the case of novel or physically salient objects. A good initial approximation is the distinction of rewards from punishers.
Information about rewards is processed in a number of brain structures. The dopamine neurons, named after the neurotransmitter they release with nerve impulses in their projection territories, are located in the midbrain structures substantia nigra (pars compacta) and the medially adjoining ventral tegmental area (VTA) (Figure 6). The axons of dopamine neurons project to the striatum (caudate nucleus, putamen and ventral striatum including nucleus accumbens), the dorsal and ventral prefrontal cortex, and a number of other structures.
Further reward signals are found in the neurons of the dopamine projection structures themselves, including the striatum, orbitofrontal cortex and amygdala (Figure 1, blue). Rewards influence the action-related activity of neurons in additional structures including striatum and prefrontal and parietal cortex (Figure 1, green). Additional brain structures influenced by reward include the supplementary motor area in the frontal lobe, the rhinal cortex in the temporal lobe, the pallidum and subthalamic nucleus in the basal ganglia, and a few others.
The past decade has brought an enormous wealth of knowledge on human reward processing from functional brain imaging. Figure 4 gives a brief overview of the substantial and reproducible involvement of both the dorsal and ventral striatum in a variety of basic and higher-order reward processes. The other main human reward structures, not shown here, largely overlap with those found in the neuronal studies on animals mentioned above, namely the midbrain dopamine groups, different regions of frontal cortex, and the amygdala.
The author acknowledges support by the Wellcome Trust, Swiss National Science Foundation, Human Frontiers Science Program and several other grant and fellowship agencies.
Internal references
Actor-Critic Method, Basal Ganglia, Conditioning, Neuroeconomics, Q-Learning, Reinforcement Learning, Rescorla-Wagner Learning Rule, Reward Signals, Temporal Difference Learning