Morality (from Latin moralitas 'manner, character, proper behavior') is the categorization of intentions, decisions and actions into those that are proper, or right, and those that are improper, or wrong.[1] Morality can be a body of standards or principles derived from a code of conduct from a particular philosophy, religion or culture, or it can derive from a standard that is understood to be universal.[2] Morality may also be specifically synonymous with "goodness", "appropriateness" or "rightness".
Moral philosophy includes meta-ethics, which studies abstract issues such as moral ontology and moral epistemology, and normative ethics, which studies more concrete systems of moral decision-making such as deontological ethics and consequentialism. An example of normative ethical philosophy is the Golden Rule, which states: "One should treat others as one would like others to treat oneself."[3][4]
Immorality is the active opposition to morality (i.e., opposition to that which is good or right), while amorality is variously defined as an unawareness of, indifference toward, or disbelief in any particular set of moral standards or principles.[5][6][7]
Ethics (also known as moral philosophy) is the branch of philosophy which addresses questions of morality. The word "ethics" is "commonly used interchangeably with 'morality' ... and sometimes it is used more narrowly to mean the moral principles of a particular tradition, group, or individual."[8] Likewise, certain types of ethical theories, especially deontological ethics, sometimes distinguish between ethics and morality.
Philosopher Simon Blackburn writes that "Although the morality of people and their ethics amounts to the same thing, there is a usage that restricts morality to systems such as that of Immanuel Kant, based on notions such as duty, obligation, and principles of conduct, reserving ethics for the more Aristotelian approach to practical reasoning, based on the notion of a virtue, and generally avoiding the separation of 'moral' considerations from other practical considerations."[9]
In its descriptive sense, "morality" refers to personal or cultural values, codes of conduct or social mores that are observed to be accepted by a significant number of individuals (not necessarily all) in a society. It does not connote objective claims of right or wrong, but only refers to the claims of right and wrong that are observed to be made, and to the conflicts between those claims. Descriptive ethics is the branch of philosophy which studies morality in this sense.[10]
In its normative sense, "morality" refers to whatever (if anything) is actually right or wrong, which may be independent of the values or mores held by any particular peoples or cultures. Normative ethics is the branch of philosophy which studies morality in this sense.[10]
Philosophical theories on the nature and origins of morality (that is, theories of meta-ethics) are broadly divided into two classes: moral realism, which holds that there are objective moral facts, and moral anti-realism, which denies this.
Some forms of non-cognitivism and ethical subjectivism, while considered anti-realist in the robust sense used here, are considered realist in the sense synonymous with moral universalism. For example, universal prescriptivism is a universalist form of non-cognitivism which claims that morality is derived from reasoning about implied imperatives, and divine command theory and ideal observer theory are universalist forms of ethical subjectivism which claim that morality is derived from the edicts of a god or the hypothetical decrees of a perfectly rational being, respectively.
Practical reason is necessary for moral agency, but it is not a sufficient condition.[12] Real-life moral problems require both rationality and emotion to be resolved in a sufficiently moral way. One uses rationality as a pathway to the ultimate decision, but the environment, and one's emotions toward that environment at the moment, must also be factors if the result is to be truly moral, since morality is subject to culture: something can be morally acceptable only if the culture as a whole has accepted it to be true. Both practical reason and relevant emotional factors are acknowledged as significant in determining the morality of a decision.[13]
Celia Green made a distinction between tribal and territorial morality.[14] She characterizes the latter as predominantly negative and proscriptive: it defines a person's territory, including his or her property and dependents, which is not to be damaged or interfered with. Apart from these proscriptions, territorial morality is permissive, allowing the individual whatever behaviour does not interfere with the territory of another. By contrast, tribal morality is prescriptive, imposing the norms of the collective on the individual. These norms will be arbitrary, culturally dependent and 'flexible', whereas territorial morality aims at rules which are universal and absolute, such as Kant's 'categorical imperative' and Geisler's graded absolutism. Green relates the development of territorial morality to the rise of the concept of private property, and the ascendancy of contract over status.
Some observers hold that individuals apply distinct sets of moral rules to people depending on their membership of an "in-group" (the individual and those they believe to be of the same group) or an "out-group" (people not entitled to be treated according to the same rules). Some biologists, anthropologists and evolutionary psychologists believe this in-group/out-group discrimination has evolved because it enhances group survival. This belief has been confirmed by simple computational models of evolution.[15] In simulations this discrimination can result in both unexpected cooperation towards the in-group and irrational hostility towards the out-group.[16] Gary R. Johnson and V.S. Falger have argued that nationalism and patriotism are forms of this in-group/out-group boundary. Jonathan Haidt has noted[17] that experimental observation indicates an in-group criterion provides one moral foundation substantially used by conservatives, but far less so by liberals.
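The cited simulation studies are not reproduced here, but a minimal tag-based cooperation model, loosely in the spirit of Hammond and Axelrod's ethnocentrism simulations, sketches the kind of computational model used to study whether in-group favoritism can spread under selection; the payoff values, mutation rate, and reproduction rule below are illustrative assumptions rather than parameters from the cited work.

```python
import random

# Minimal sketch (not the cited studies' models): tag-based cooperation.
# Agents carry a group tag plus two strategy bits: cooperate with same-tag
# partners, and cooperate with different-tag partners. Cooperation costs the
# donor and benefits the recipient; fitter agents reproduce with mutation.

COST, BENEFIT = 0.01, 0.03           # assumed payoff parameters
MUTATION, ROUNDS, POP = 0.005, 500, 400
TAGS = [0, 1, 2, 3]

def new_agent():
    return {"tag": random.choice(TAGS),
            "coop_in": random.random() < 0.5,
            "coop_out": random.random() < 0.5,
            "fitness": 0.12}         # baseline reproduction probability

population = [new_agent() for _ in range(POP)]

for _ in range(ROUNDS):
    # Random pairwise interactions: donors pay COST, recipients gain BENEFIT.
    random.shuffle(population)
    for a, b in zip(population[::2], population[1::2]):
        for donor, recipient in ((a, b), (b, a)):
            same = donor["tag"] == recipient["tag"]
            if (same and donor["coop_in"]) or (not same and donor["coop_out"]):
                donor["fitness"] -= COST
                recipient["fitness"] += BENEFIT
    # Fitness-proportional reproduction with mutation, then random culling
    # back to a constant population size.
    offspring = []
    for agent in population:
        if random.random() < agent["fitness"]:
            child = dict(agent, fitness=0.12)
            if random.random() < MUTATION:
                child["coop_in"] = not child["coop_in"]
            if random.random() < MUTATION:
                child["coop_out"] = not child["coop_out"]
            offspring.append(child)
    population = random.sample(population + offspring, POP)

in_rate = sum(a["coop_in"] for a in population) / POP
out_rate = sum(a["coop_out"] for a in population) / POP
print(f"cooperate with in-group: {in_rate:.2f}, with out-group: {out_rate:.2f}")
```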
In-group preference is also helpful at the individual level for the passing on of one's genes. For example, a mother who favors her own children more highly than the children of other people will give greater resources to her children than to strangers', thus heightening her children's chances of survival and her own genes' chances of being perpetuated. Due to this, within a population there is substantial selection pressure exerted toward this kind of self-interest, such that eventually all parents wind up favoring their own children (the in-group) over other children (the out-group).
Peterson and Seligman[18] approach the anthropological view by looking across cultures, geo-cultural areas and millennia. They conclude that certain virtues have prevailed in all cultures they examined. The major virtues they identified include wisdom / knowledge; courage; humanity; justice; temperance; and transcendence. Each of these includes several divisions; for instance, humanity includes love, kindness, and social intelligence.
Still others theorize that morality is not always absolute, contending that moral issues often differ along cultural lines. A 2014 Pew Research Center study across several nations illuminates significant cultural differences on issues commonly related to morality, including divorce, extramarital affairs, homosexuality, gambling, abortion, alcohol use, contraceptive use, and premarital sex. For each of the 40 countries in the study, the survey reports the percentage of respondents who consider these common moral issues acceptable, unacceptable, or not moral issues at all; these percentages vary greatly with the culture in which the issue is presented.[19]
Advocates of a theory known as moral relativism subscribe to the notion that moral virtues are right or wrong only within the context of a certain standpoint (e.g., a cultural community). In other words, what is morally acceptable in one culture may be taboo in another. They further contend that no moral virtue can objectively be proven right or wrong.[20] Critics of moral relativism point to historical atrocities such as infanticide, slavery, or genocide as counterarguments, noting the difficulty in accepting these actions simply through cultural lenses.
Fons Trompenaars, author of Did the Pedestrian Die?, tested members of different cultures with various moral dilemmas. One of these was whether the driver of a car would have his friend, a passenger riding in the car, lie in order to protect the driver from the consequences of driving too fast and hitting a pedestrian. Trompenaars found that different cultures had quite different expectations, from none to definite.[21]
Anthropologists from Oxford's Institute of Cognitive & Evolutionary Anthropology (part of the School of Anthropology & Museum Ethnography) analysed ethnographic accounts of ethics from 60 societies, comprising over 600,000 words from over 600 sources and discovered what they believe to be seven universal moral rules: help your family, help your group, return favours, be brave, defer to superiors, divide resources fairly, and respect others' property.[22][23]
The development of modern morality is a process closely tied to sociocultural evolution. Some evolutionary biologists, particularly sociobiologists, believe that morality is a product of evolutionary forces acting at an individual level and also at the group level through group selection (although to what degree this actually occurs is a controversial topic in evolutionary theory). Some sociobiologists contend that the set of behaviors that constitute morality evolved largely because they provided possible survival or reproductive benefits (i.e. increased evolutionary success). Humans consequently evolved "pro-social" emotions, such as feelings of empathy or guilt, in response to these moral behaviors.
On this understanding, moralities are sets of self-perpetuating and biologically driven behaviors which encourage human cooperation. Biologists contend that all social animals, from ants to elephants, have modified their behaviors by restraining immediate selfishness in order to improve their evolutionary fitness. Human morality, although sophisticated and complex relative to the moralities of other animals, is essentially a natural phenomenon that evolved to restrict excessive individualism that could undermine a group's cohesion and thereby reduce the individuals' fitness.[24]
On this view, moral codes are ultimately founded on emotional instincts and intuitions that were selected for in the past because they aided survival and reproduction (inclusive fitness). Examples: the maternal bond is selected for because it improves the survival of offspring; the Westermarck effect, where close proximity during early years reduces mutual sexual attraction, underpins taboos against incest because it decreases the likelihood of genetically risky behaviour such as inbreeding.
The phenomenon of reciprocity in nature is seen by evolutionary biologists as one way to begin to understand human morality. Its function is typically to ensure a reliable supply of essential resources, especially for animals living in a habitat where food quantity or quality fluctuates unpredictably. For example, on a given night some vampire bats fail to feed on prey while others manage to consume a surplus. Bats that did eat will then regurgitate part of their blood meal to save a conspecific from starvation. Since these animals live in close-knit groups over many years, an individual can count on other group members to return the favor on nights when it goes hungry (Wilkinson, 1984).
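As a rough illustration of how such reciprocity can be modeled (not Wilkinson's actual analysis), the following sketch simulates food sharing in a small group in which bats preferentially help individuals that have helped them before; the group size, failure rate, and starvation threshold are assumptions chosen only for illustration.

```python
import random

# Illustrative sketch only; parameters are assumed, not from Wilkinson (1984).
# Each night some bats fail to feed. Bats that fed can regurgitate part of
# their meal for one hungry groupmate, preferring individuals that have
# helped them in the past. Bats hungry too many nights in a row starve.

random.seed(1)
N_BATS, NIGHTS, FAIL_RATE, STARVE_LIMIT = 20, 100, 0.3, 2

helped_by = {i: set() for i in range(N_BATS)}   # bat -> set of its past donors
hungry_streak = [0] * N_BATS
alive = [True] * N_BATS

for night in range(NIGHTS):
    fed = [alive[i] and random.random() > FAIL_RATE for i in range(N_BATS)]
    # Sharing pass: each fed bat donates to one hungry groupmate.
    for donor in range(N_BATS):
        if alive[donor] and fed[donor]:
            hungry = [j for j in range(N_BATS) if alive[j] and not fed[j]]
            past_helpers = [j for j in hungry if j in helped_by[donor]]
            if hungry:
                recipient = random.choice(past_helpers or hungry)
                fed[recipient] = True
                helped_by[recipient].add(donor)
    # Survival pass: starvation after too many consecutive hungry nights.
    for i in range(N_BATS):
        if alive[i]:
            hungry_streak[i] = 0 if fed[i] else hungry_streak[i] + 1
            if hungry_streak[i] > STARVE_LIMIT:
                alive[i] = False

print(f"bats alive after {NIGHTS} nights: {sum(alive)} / {N_BATS}")
```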
Marc Bekoff and Jessica Pierce (2009) have argued that morality is a suite of behavioral capacities likely shared by all mammals living in complex social groups (e.g., wolves, coyotes, elephants, dolphins, rats, chimpanzees). They define morality as "a suite of interrelated other-regarding behaviors that cultivate and regulate complex interactions within social groups." This suite of behaviors includes empathy, reciprocity, altruism, cooperation, and a sense of fairness.[25] In related work, it has been convincingly demonstrated that chimpanzees show empathy for each other in a wide variety of contexts.[26] They also possess the ability to engage in deception, and a level of social politics[27] prototypical of our own tendencies for gossip and reputation management.
Christopher Boehm (1982)[28] has hypothesized that the incremental development of moral complexity throughout hominid evolution was due to the increasing need to avoid disputes and injuries in moving to open savanna and developing stone weapons. Other theories are that increasing complexity was simply a correlate of increasing group size and brain size, and in particular the development of theory of mind abilities.
In modern moral psychology, morality is sometimes considered to change through personal development. Several psychologists have produced theories of moral development, usually describing a progression through distinct stages. Lawrence Kohlberg, Jean Piaget, and Elliot Turiel have cognitive-developmental approaches to moral development; to these theorists morality forms in a series of constructive stages or domains. In the ethics of care approach established by Carol Gilligan, moral development occurs in the context of caring, mutually responsive relationships which are based on interdependence, particularly in parenting but also in social relationships generally.[29] Social psychologists such as Martin Hoffman and Jonathan Haidt emphasize social and emotional development based on biology, such as empathy. Moral identity theorists, such as William Damon and Mordechai Nisan, see moral commitment as arising from the development of a self-identity that is defined by moral purposes: this moral self-identity leads to a sense of responsibility to pursue such purposes. Of historical interest in psychology are the theories of psychoanalysts such as Sigmund Freud, who believed that moral development is the product of aspects of the super-ego as guilt-shame avoidance. Theories of moral development therefore tend to regard development as progress: the higher stages are taken to be morally better, though this naturally involves a circular argument, in which the higher stages are better because they are higher, and higher because they are better.
As an alternative to viewing morality as an individual trait, some sociologists as well as social and discursive psychologists have taken it upon themselves to study the in-vivo aspects of morality by examining how persons conduct themselves in social interaction.[30][31][32][33]
A recent study analyses the common perception of a decline in morality in societies worldwide and throughout history. Adam M. Mastroianni and Daniel T. Gilbert present a series of studies indicating that the perception of moral decline is an illusion and easily produced, with implications for misallocation of resources, underuse of social support, and social influence. To begin with, the authors demonstrate that people in at least 60 nations hold the belief that morality is deteriorating continuously, and that this conviction has been present for the last 70 years. Subsequently, they indicate that people ascribe this decay both to the declining morality of individuals as they age and to that of succeeding generations. Thirdly, the authors demonstrate that people's evaluations of the morality of their peers have not decreased over time, indicating that the belief in moral decline is an illusion. Lastly, the authors describe a basic psychological mechanism in which two well-established phenomena (distorted exposure to information and distorted memory of information) produce the illusion of moral decline. The authors present studies that validate some of the predictions about the circumstances in which the perception of moral decline is attenuated, eliminated, or reversed (e.g., when participants are asked about the morality of people closest to them or of people who lived before they were born).[34]
Moral cognition refers to cognitive processes implicated in moral judgment and decision making, and in moral action. It consists of several domain-general cognitive processes, ranging from perception of a morally salient stimulus to reasoning when faced with a moral dilemma. While there is no single cognitive faculty dedicated exclusively to moral cognition,[35][36] characterizing the contributions of domain-general processes to moral behavior is a critical scientific endeavor for understanding how morality works and how it can be improved.[37]
Cognitive psychologists and neuroscientists investigate the inputs to these cognitive processes and their interactions, as well as how these contribute to moral behavior, by running controlled experiments.[38] In these experiments, putatively moral versus nonmoral stimuli are compared to each other while controlling for other variables such as content or working memory load. Often, the differential neural response to specifically moral statements or scenes is examined using functional neuroimaging experiments.
Critically, the specific cognitive processes that are involved depend on the prototypical situation that a person encounters.[39] For instance, while situations that require an active decision on a moral dilemma may require active reasoning, an immediate reaction to a shocking moral violation may involve quick, affect-laden processes. Nonetheless, certain cognitive skills, such as the ability to attribute mental states (beliefs, intents, desires, emotions) to oneself and to others, are a common feature of a broad range of prototypical situations. In line with this, a meta-analysis found overlapping activity between moral emotion and moral reasoning tasks, suggesting a shared neural network for both tasks.[40] The results of this meta-analysis, however, also demonstrated that the processing of moral input is affected by task demands.
Regarding morality in video games, some scholars believe that because players appear in video games as actors, they maintain an imaginative distance between their sense of self and their role in the game. Therefore, players' decision-making and moral behavior in the game do not necessarily represent their own moral dogma.[41]
It has recently been found that moral judgment consists in concurrent evaluations of three different components that align with precepts from three dominant moral theories (virtue ethics, deontology, and consequentialism): the character of a person (Agent-component, A); their actions (Deed-component, D); and the consequences brought about in the situation (Consequences-component, C).[42] This implies that various features of the situation a person encounters affect moral cognition.
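A minimal illustration of this idea (not the published Agent-Deed-Consequences model itself) is to treat a moral judgment as a combination of three component evaluations; the additive weighting and the numeric scales below are assumptions made purely for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch of the Agent-Deed-Consequences (ADC) idea: a moral
# judgment built from three concurrent component evaluations. The additive
# weighting below is an assumption for illustration, not the published model.

@dataclass
class Situation:
    agent_evaluation: float        # A: perceived character/intent, -1 (bad) to +1 (good)
    deed_evaluation: float         # D: the action itself against norms, -1 to +1
    consequence_evaluation: float  # C: outcomes brought about, -1 to +1

def moral_judgment(s: Situation, w_a=1.0, w_d=1.0, w_c=1.0) -> float:
    """Combine the three components into one evaluative score in [-1, 1]."""
    total_weight = w_a + w_d + w_c
    return (w_a * s.agent_evaluation
            + w_d * s.deed_evaluation
            + w_c * s.consequence_evaluation) / total_weight

# Example: well-intentioned agent, norm-violating deed, mildly bad outcome.
print(moral_judgment(Situation(0.8, -0.6, -0.2)))   # ~0.0, a mixed judgment
```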
Jonathan Haidt distinguishes between two types of moral cognition: moral intuition and moral reasoning. Moral intuition involves fast, automatic, and affective processes that result in an evaluative feeling of good-bad or like-dislike, without awareness of going through any steps. Conversely, moral reasoning involves conscious mental activity to reach a moral judgment; it is controlled and less affective than moral intuition. When making moral judgments, humans perform moral reasoning to support their initial intuitive feeling. However, there are three ways humans can override their immediate intuitive response. The first is conscious verbal reasoning (for example, examining costs and benefits). The second is reframing a situation to see a new perspective or consequence, which triggers a different intuition. Finally, one can talk to other people, which illuminates new arguments. In fact, interacting with other people is the cause of most moral change.[43]
The brain areas that are consistently involved when humans reason about moral issues have been investigated by multiple quantitative large-scale meta-analyses of the brain activity changes reported in the moral neuroscience literature.[44][40][45][46] The neural network underlying moral decisions overlaps with the network pertaining to representing others' intentions (i.e., theory of mind) and the network pertaining to representing others' (vicariously experienced) emotional states (i.e., empathy). This supports the notion that moral reasoning is related to both seeing things from other persons' points of view and to grasping others' feelings. These results provide evidence that the neural network underlying moral decisions is probably domain-global (i.e., there might be no such thing as a "moral module" in the human brain) and might be dissociable into cognitive and affective sub-systems.[44]
Cognitive neuroscientist Jean Decety thinks that the ability to recognize and vicariously experience what another individual is undergoing was a key step forward in the evolution of social behavior, and ultimately, morality.[47] The inability to feel empathy is one of the defining characteristics of psychopathy, and this would appear to lend support to Decety's view.[48][49] Recently, drawing on empirical research in evolutionary theory, developmental psychology, social neuroscience, and psychopathy, Jean Decety argued that empathy and morality are neither systematically opposed to one another, nor inevitably complementary.[50][51]
An essential, shared component of moral judgment involves the capacity to detect morally salient content within a given social context. Recent research has implicated the salience network in this initial detection of moral content.[52] The salience network responds to behaviorally salient events[53] and may be critical in modulating downstream default and frontal control network interactions in the service of complex moral reasoning and decision-making processes.
The explicit making of moral right and wrong judgments coincides with activation in the ventromedial prefrontal cortex (VMPC), a region involved in valuation, while intuitive reactions to situations containing implicit moral issues activates the temporoparietal junction area, a region that plays a key role in understanding intentions and beliefs.[54][52]
Disruption of the VMPC, whether by transcranial magnetic stimulation or by neurological lesion, has been shown to inhibit the ability of human subjects to take intent into account when forming a moral judgment. According to such investigations, TMS did not disrupt participants' ability to make any moral judgment. On the contrary, moral judgments of intentional harms and non-harms were unaffected by TMS to either the RTPJ or the control site; presumably, however, people typically make moral judgments of intentional harms by considering not only the action's harmful outcome but the agent's intentions and beliefs. So why were moral judgments of intentional harms not affected by TMS to the RTPJ? One possibility is that moral judgments typically reflect a weighted function of any morally relevant information that is available at the time. On this view, when information concerning the agent's belief is unavailable or degraded, the resulting moral judgment simply reflects a higher weighting of other morally relevant factors (e.g., outcome). Alternatively, following TMS to the RTPJ, moral judgments might be made via an abnormal processing route that does not take belief into account. On either account, when belief information is degraded or unavailable, moral judgments are shifted toward other morally relevant factors (e.g., outcome). For intentional harms and non-harms, however, the outcome suggests the same moral judgment as the intention does. Thus, the researchers suggest that TMS to the RTPJ disrupted the processing of negative beliefs for both intentional harms and attempted harms, but the current design allowed the investigators to detect this effect only in the case of attempted harms, in which the neutral outcomes did not afford harsh moral judgments on their own.[55]
Similarly, individuals with a lesion of the VMPC judge an action purely on its outcome and are unable to take into account the intent of that action.[56]
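The following is a small numerical sketch of the "weighted function" interpretation described above, under assumed weights that are not taken from the cited studies: degrading belief information shifts the judgment toward outcome information, which changes the verdict for attempted harms (harmful intent, neutral outcome) but not for intentional harms, and a judgment with no belief information at all rests purely on outcome, as in the lesion finding.

```python
# Illustrative only: the weights and the linear form are assumptions, not data.
def judged_wrongness(harmful_intent: float, harmful_outcome: float,
                     belief_reliability: float = 1.0,
                     w_intent: float = 0.7, w_outcome: float = 0.3) -> float:
    """All inputs in [0, 1]; returns a wrongness score in [0, 1].

    belief_reliability scales how much intent/belief information contributes;
    degrading it (e.g., hypothetically, under TMS to the RTPJ) shifts the
    judgment toward the remaining factor, the outcome.
    """
    effective_w_intent = w_intent * belief_reliability
    total = effective_w_intent + w_outcome
    return (effective_w_intent * harmful_intent
            + w_outcome * harmful_outcome) / total

# Attempted harm: harmful intent, neutral outcome.
print(judged_wrongness(1.0, 0.0, belief_reliability=1.0))  # ~0.70, judged harshly
print(judged_wrongness(1.0, 0.0, belief_reliability=0.2))  # ~0.32, more lenient
# Intentional harm: intent and outcome point the same way, so degraded belief
# information barely changes the judgment.
print(judged_wrongness(1.0, 1.0, belief_reliability=0.2))  # 1.0
# No belief information at all: the judgment rests purely on the outcome,
# as with the VMPC lesion patients described above.
print(judged_wrongness(1.0, 0.0, belief_reliability=0.0))  # 0.0
```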
Moral intuitions may have genetic bases. A 2022 study by Michael Zakharin and Timothy C. Bates, published in the European Journal of Personality, found that moral foundations have significant genetic bases.[57] Another study, conducted by Smith and Hatemi, similarly found significant evidence in support of moral heritability by comparing twins' answers to moral dilemmas.[58]
Genetics play a role in influencing prosocial behaviors and moral decision-making, contributing to the development and expression of certain traits and behaviors related to morality. However, while genetics shape certain aspects of moral behavior, morality itself is a multifaceted concept that also encompasses cultural, societal, and personal influences.
If morality is the answer to the question 'how ought we to live' at the individual level, politics can be seen as addressing the same question at the social level, though the political sphere raises additional problems and challenges.[59] It is therefore unsurprising that evidence has been found of a relationship between attitudes in morality and politics. Moral foundations theory, authored by Jonathan Haidt and colleagues,[60][61] has been used to study the differences between liberals and conservatives in this regard.[17][62] Haidt found that Americans who identified as liberals tended to value care and fairness more highly than loyalty, respect and purity, while self-identified conservative Americans valued care and fairness less and the remaining three values more. Both groups gave care the highest overall weighting, but conservatives valued fairness the lowest, whereas liberals valued purity the lowest. Haidt also hypothesizes that the origin of this division in the United States can be traced to geo-historical factors, with conservatism strongest in closely knit, ethnically homogeneous communities, in contrast to port cities, where the cultural mix is greater, thus requiring more liberalism.
Group morality develops from shared concepts and beliefs and is often codified to regulate behavior within a culture or community. Various defined actions come to be called moral or immoral. Individuals who choose moral action are popularly held to possess "moral fiber", whereas those who indulge in immoral behavior may be labeled as socially degenerate. The continued existence of a group may depend on widespread conformity to codes of morality; an inability to adjust moral codes in response to new challenges is sometimes credited with the demise of a community (a positive example would be the function of Cistercian reform in reviving monasticism; a negative example would be the role of the Dowager Empress in the subjugation of China to European interests). Within nationalist movements, there has been some tendency to feel that a nation will not survive or prosper without acknowledging one common morality, regardless of its content.
Political morality is also relevant to the international behavior of national governments, and to the support they receive from their host population. The Sentience Institute, co-founded by Jacy Reese Anthis, analyzes the trajectory of moral progress in society via the framework of an expanding moral circle.[63] Noam Chomsky states that
... if we adopt the principle of universality: if an action is right (or wrong) for others, it is right (or wrong) for us. Those who do not rise to the minimal moral level of applying to themselves the standards they apply to others—more stringent ones, in fact—plainly cannot be taken seriously when they speak of appropriateness of response; or of right and wrong, good and evil. In fact, one of them, maybe the most, elementary of moral principles is that of universality, that is, If something's right for me, it's right for you; if it's wrong for you, it's wrong for me. Any moral code that is even worth looking at has that at its core somehow.[64]
Religion and morality are not synonymous. Morality does not depend upon religion although for some this is "an almost automatic assumption".[65] According to The Westminster Dictionary of Christian Ethics, religion and morality "are to be defined differently and have no definitional connections with each other. Conceptually and in principle, morality and a religious value system are two distinct kinds of value systems or action guides."[66]
Within the wide range of moral traditions, religious value-systems co-exist with contemporary secular frameworks such as consequentialism, freethought, humanism, utilitarianism, and others. There are many types of religious value-systems. Modern monotheistic religions, such as Islam, Judaism, Christianity, and to a certain degree others such as Sikhism and Zoroastrianism, define right and wrong by the laws and rules set forth by their respective scriptures and as interpreted by religious leaders within each respective faith. Other religions, spanning the pantheistic to the nontheistic, tend to be less absolute. For example, within Buddhism, the intention of the individual and the circumstances should be accounted for in the form of merit, to determine if an action is termed right or wrong.[67] Barbara Stoler Miller points out a further disparity between the values of religious traditions, stating that in Hinduism, "practically, right and wrong are decided according to the categories of social rank, kinship, and stages of life. For modern Westerners, who have been raised on ideals of universality and egalitarianism, this relativity of values and obligations is the aspect of Hinduism most difficult to understand".[68]
Religions provide different ways of dealing with moral dilemmas. For example, Hinduism lacks any absolute prohibition on killing, recognizing that it "may be inevitable and indeed necessary" in certain circumstances.[69] Monotheistic traditions view certain acts—such as abortion or divorce—in more absolute terms.[a] Religion is not always positively associated with morality. Philosopher David Hume stated that "the greatest crimes have been found, in many instances, to be compatible with a superstitious piety and devotion; Hence it is justly regarded as unsafe to draw any inference in favor of a man's morals, from the fervor or strictness of his religious exercises, even though he himself believe them sincere."[70]
Religious value-systems can be used to justify acts that are contrary to general contemporary morality, such as massacres, misogyny and slavery. For example, Simon Blackburn states that "apologists for Hinduism defend or explain away its involvement with the caste system, and apologists for Islam defend or explain away its harsh penal code or its attitude to women and infidels".[71] In regard to Christianity, he states that the "Bible can be read as giving us a carte blanche for harsh attitudes to children, the mentally handicapped, animals, the environment, the divorced, unbelievers, people with various sexual habits, and elderly women",[72] and notes morally suspect themes in the Bible's New Testament as well.[73][e] Elizabeth Anderson likewise holds that "the Bible contains both good and evil teachings", and it is "morally inconsistent".[74] Christian apologists address Blackburn's viewpoints[75] and argue that Jewish laws in the Hebrew Bible showed the evolution of moral standards towards protecting the vulnerable, imposing a death penalty on those pursuing slavery and treating slaves as persons and not as property.[76] Humanists like Paul Kurtz believe that we can identify moral values across cultures, even if we do not appeal to a supernatural or universalist understanding of principles – values including integrity, trustworthiness, benevolence, and fairness. These values can be resources for finding common ground between believers and nonbelievers.[77]
Several studies have been conducted on the empirics of morality in various countries, and the overall relationship between faith and crime is unclear.[b] A 2001 review of studies on this topic found "The existing evidence surrounding the effect of religion on crime is varied, contested, and inconclusive, and currently, no persuasive answer exists as to the empirical relationship between religion and crime."[78] Phil Zuckerman's 2008 book, Society without God, based on studies conducted during 14 months in Scandinavia in 2005–2006, notes that Denmark and Sweden, "which are probably the least religious countries in the world, and possibly in the history of the world", enjoy "among the lowest violent crime rates in the world [and] the lowest levels of corruption in the world".[79][c]
Dozens of studies have been conducted on this topic since the twentieth century. A 2005 study by Gregory S. Paul published in the Journal of Religion and Society stated that, "In general, higher rates of belief in and worship of a creator correlate with higher rates of homicide, juvenile and early adult mortality, STD infection rates, teen pregnancy, and abortion in the prosperous democracies," and "In all secular developing democracies a centuries long-term trend has seen homicide rates drop to historical lows", with the exceptions being the United States (with a high religiosity level) and "theistic" Portugal.[80][d] In a response, Gary Jensen builds on and refines Paul's study.[81] He concludes that a "complex relationship" exists between religiosity and homicide, "with some dimensions of religiosity encouraging homicide and other dimensions discouraging it". In April 2012, the results of a study testing subjects' pro-social sentiments were published in Social Psychological and Personality Science; non-religious people had higher scores, showing that they were more motivated by their own compassion to perform pro-social behaviors, while religious people were found to be less motivated by compassion to be charitable than by an inner sense of moral obligation.[82][83]
Various expressions of the fundamental moral maxim 'Treat others how you wish to be treated' are to be found in the tenets of most religions and creeds through the ages, testifying to its universal applicability.