Research
Why do we collectively donate billions to charity but pay relatively little attention to the impact of those dollars? Why do we abhor the thought of murdering a child, yet think little of buying lattes with money that could provide a child with lifesaving medicine? My research integrates evolutionary game theory with laboratory experiments to study the hidden incentives that shape our preferences, beliefs, and ideologies. My recent work has focused in particular on morality and altruism, including inefficient giving, the categorical nature of norms, and the role of plausible deniability in quirks such as the omission-commission distinction and strategic ignorance.
Inefficient Giving
Burum, B., Hoffman, M., & Nowak, M. (2020). An evolutionary explanation for inefficient giving. Nature Human Behaviour, 4, 1245–1257.
We donate billions to charities each year, yet much of our giving is ineffective. Why are we motivated to give, but not to give effectively? Building on evolutionary game theory models, we argue that donors evolved (biologically or culturally) to be insensitive to efficacy because efficacy is difficult to socially reward: social rewards can only condition on well-defined and highly observable behaviors. We present five experiments testing key predictions of this account, predictions that are difficult to reconcile with alternative accounts based on cognitive or emotional limitations. Namely, we show that donors are more sensitive to efficacy when the decision (i) is not prosocial or (ii) affects kin, and that social rewarders do not condition on (iii) efficacy or (iv, v) other difficult-to-observe behaviors, such as the amount donated.
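To make the selection logic concrete, here is a minimal replicator-dynamics sketch; it is an illustration under assumed payoffs, not the model from the paper. Because social rewards can only condition on the observable act of giving, effective and ineffective givers earn identical payoffs, so selection favors giving but exerts no pressure toward giving effectively.

    # Minimal illustrative sketch (assumed payoffs, not the paper's model).
    # Social rewards condition on the observable act of giving, not on efficacy,
    # so effective and ineffective givers earn identical payoffs.
    import numpy as np

    REWARD = 2.0   # reputational benefit for any observable act of giving (assumed)
    COST = 1.0     # personal cost of giving (assumed)
    strategies = ["no_give", "ineffective_give", "effective_give"]

    def payoff(strategy):
        return 0.0 if strategy == "no_give" else REWARD - COST

    def replicator_step(freqs, dt=0.1):
        fitness = np.array([payoff(s) for s in strategies])
        return freqs + dt * freqs * (fitness - freqs @ fitness)

    freqs = np.array([0.6, 0.2, 0.2])
    for _ in range(1000):
        freqs = replicator_step(freqs)

    # Giving spreads, but the ratio of effective to ineffective givers never changes:
    # nothing in the reward structure selects for efficacy.
    print(dict(zip(strategies, freqs.round(3))))

In this toy dynamic, any pressure toward efficacy would have to come from rewards that condition on it, which is precisely what the experiments suggest social rewarders do not do.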
Categorical Norms
(Under Review)
Why are norms unduly sensitive to categorical distinctions (e.g., whether a chemical weapon was used) compared to continuous variation (the number of deaths or the amount of undue suffering)? We present a stylized game theory model that helps clarify why it is easier for norms to condition on categorical distinctions than on continuous variation, explore the robustness of our results, and present evolutionary game theory simulations. Then, in a series of experiments, we verify that participants' moral intuitions and willingness to punish norm violations are influenced far more by categorical distinctions than by continuous variation; that participants anticipate categorical distinctions mattering relatively more when it comes to deterring future transgressions or avoiding the opprobrium of others enforcing the norm; and that the reliance on categorical distinctions weakens when norm enforcement plays less of a role. We end with a discussion of applications, including territorial disputes, human rights, inefficient altruism, institutionalized racism, revolutions, and collusion.
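As a rough illustration of the coordination logic, and not the paper's model, consider two observers who each want to punish only if the other punishes as well; the threshold, noise level, and trial count below are assumptions. A continuous severity measure is perceived with observer-specific error, so observers disagree near the threshold, whereas a categorical fact is either present or absent for everyone.

    # Illustrative simulation (assumed parameters, not the authors' model).
    import numpy as np

    rng = np.random.default_rng(0)
    THRESHOLD = 100.0   # deaths beyond which punishment is "warranted" (assumed)
    NOISE = 20.0        # observer-specific error in judging the continuous variable (assumed)
    TRIALS = 100_000

    disagreements = 0
    for _ in range(TRIALS):
        deaths = rng.uniform(0, 200)  # true continuous severity
        # Continuous rule: each observer judges severity with private noise.
        votes = [deaths + rng.normal(0, NOISE) > THRESHOLD for _ in range(2)]
        disagreements += votes[0] != votes[1]

    print(f"continuous rule, observers disagree: {disagreements / TRIALS:.1%}")
    # Categorical rule: "was a chemical weapon used?" is a shared, discrete fact,
    # so the two observers never disagree and coordinated punishment is easy.

When punishing alone is costly, this disagreement makes conditioning on the continuous variable unattractive, which is the kind of miscoordination the stylized model is meant to capture.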
Spin and Motivated Reasoning
Using game theory modeling and a series of experiments, we aim to shed light on the nature of "spin" and its connection to motivated reasoning. Specifically, we show that when trying to persuade, we benefit from sharing, and searching for, only the evidence in our favor, even when others adjust for this, and that this motive gets internalized, such that our personal beliefs respond more to favorable evidence than to equivalent unfavorable evidence.
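One standard way to see how selective evidence can pay even against a fully Bayesian audience is a Bayesian-persuasion style toy example; the prior, threshold, and test design below are illustrative assumptions, not the model from this project. The listener knows exactly how the evidence was generated and updates correctly, yet the persuader still gains because what matters is whether the posterior crosses a decision threshold.

    # Bayesian-persuasion style arithmetic (assumed numbers, illustration only).
    PRIOR = 0.3       # prior probability the persuader's position is correct (assumed)
    THRESHOLD = 0.5   # listener sides with the persuader iff posterior >= 0.5 (assumed)

    # Full disclosure: the listener learns the truth, so the persuader wins with prob PRIOR.
    win_full_disclosure = PRIOR

    # Selective evidence: a test that is always "favorable" when the position is correct,
    # and "favorable" with probability q even when it is not.
    q = 3 / 7
    p_favorable = PRIOR + (1 - PRIOR) * q
    posterior_if_favorable = PRIOR / p_favorable   # the listener fully adjusts for q

    win_selective = p_favorable if posterior_if_favorable >= THRESHOLD else 0.0

    print(f"win probability with full disclosure:  {win_full_disclosure:.2f}")   # 0.30
    print(f"win probability with selective search: {win_selective:.2f}")         # 0.60

The internalization claim is separate: the suggestion is that when selective search is persistently useful for persuasion, beliefs themselves come to respond asymmetrically to favorable and unfavorable evidence.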
Omission-Commission Distinction
Central to moral judgment is the difference between acts of commission (killing) and those of omission (letting die). Omissions are judged less harshly even when intentions and outcomes are identical. We propose an explanation for this difference rooted in evolutionary game theory: it is easier to coordinate punishment in response to commissions because they leave more evidence of intent. Across five studies we provide two kinds of evidence for this explanation: (i) among observers, the omission-commission distinction is stronger for judgments related to punishing (which benefit from coordination) than for those related to avoidance (which can be done unilaterally); and (ii) among (hypothetical) actors, the omission-commission distinction is stronger for actions involving strangers than for those involving family members (with whom the threat of punishment presumably plays a smaller role).
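A back-of-the-envelope version of the coordination argument, with assumed numbers rather than anything from the studies: punishing pays only if the other observer punishes too, commissions make intent publicly evident, and omissions leave each observer with only a private, fallible read on intent.

    # Illustrative expected-value arithmetic (all parameters assumed).
    B_TOGETHER = 3.0   # benefit of punishing when the other observer also punishes (assumed)
    C_ALONE = 5.0      # cost of punishing alone, e.g. retaliation (assumed)
    P_OTHER_SEES_INTENT = 0.6   # chance the other observer reads an omission as intentional (assumed)

    # Commission: intent is evident to everyone, so a punisher can count on company.
    value_punish_commission = B_TOGETHER

    # Omission: the other observer may not have inferred intent, leaving the punisher exposed.
    p = P_OTHER_SEES_INTENT
    value_punish_omission = p * B_TOGETHER - (1 - p) * C_ALONE

    print(f"expected value of punishing a commission: {value_punish_commission:+.1f}")  # +3.0
    print(f"expected value of punishing an omission:  {value_punish_omission:+.1f}")    # -0.2

Avoidance needs no such company, which is why the account predicts a weaker omission-commission gap for avoidance than for punishment.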
Strategic Ignorance
People often avoid finding out information that would lead them to be more prosocial. Building on evolutionary game theory models, we argue that there is a social incentive to be strategically ignorant because plausible deniability hinders coordinated punishment. We demonstrate the presence of this incentive in four studies: (i) in a dictator game, third parties punish selfish dictators more when the dictator knew the other player's payoffs than when the dictator opted not to find them out; (ii) third parties believe it is worse to pass on an STI after being tested than after avoiding testing; (iii) third parties believe it is worse to buy inhumanely raised chicken after finding out the details of the animals' treatment than after avoiding those details; and (iv) third parties believe it is worse to buy clothing made with child labor after finding out about the labor practices than after avoiding this information. We also test two additional predictions of our account: (v) third parties distinguish less between strategic ignorance and knowing harm when it comes to avoidance than when it comes to punishment, and (vi) the tendency toward strategic ignorance is stronger for choices that affect others than for those that affect the self.
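The incentive can be seen in a small payoff comparison; the probabilities and punishment levels below are illustrative assumptions, not estimates from the studies. A dictator who plans to take the selfish option anyway compares the expected punishment from looking up the recipient's payoff against the expected punishment from staying ignorant.

    # Illustrative payoff arithmetic (all numbers assumed).
    P_HARM = 0.5           # probability the selfish option actually hurts the recipient (assumed)
    SELFISH_GAIN = 5.0     # extra payoff from choosing selfishly (assumed)
    PUNISH_KNOWING = 4.0   # punishment for knowingly causing harm (assumed)
    PUNISH_IGNORANT = 1.0  # punishment for harm done under self-imposed ignorance (assumed)

    # Expected payoff of a selfish dictator who looks up the recipient's payoff first:
    informed = SELFISH_GAIN - P_HARM * PUNISH_KNOWING      # 5.0 - 2.0 = 3.0

    # Expected payoff of the same dictator who opts not to find out:
    ignorant = SELFISH_GAIN - P_HARM * PUNISH_IGNORANT     # 5.0 - 0.5 = 4.5

    print(f"look first, then act selfishly: {informed}")
    print(f"stay ignorant, act selfishly:   {ignorant}")

As long as observers punish knowing harm more than ignorant harm, as in study (i), deliberately not looking is the payoff-maximizing move, which is the social incentive for strategic ignorance the account points to.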
Malleability of Empathy
Empathy encourages us to help, but it is both biased and malleable. We argue that this is because empathy tracks the incentives to help. We show this in two ways: (i) empathy increases (along with giving) when giving is observable to a trust game partner, and (ii) empathy decreases (along with giving) when giving is more financially costly.