Medical ethics is an applied branch of ethics that analyzes the practice of clinical medicine and related scientific research.[1] Medical ethics is based on a set of values that professionals can refer to in the case of any confusion or conflict. These values include respect for autonomy, non-maleficence, beneficence, and justice.[2] Such tenets may allow doctors, care providers, and families to create a treatment plan and work towards the same common goal.[3] These four values are not ranked in order of importance or relevance, and they all encompass values pertaining to medical ethics.[4] However, a conflict may arise that leads to the need for hierarchy in an ethical system, such that some moral elements overrule others with the purpose of applying the best moral judgement to a difficult medical situation.[5] Medical ethics is particularly relevant in decisions regarding involuntary treatment and involuntary commitment.
There are several codes of conduct. The Hippocratic Oath discusses basic principles for medical professionals;[5] this document dates back to the fifth century BCE.[6] The Declaration of Helsinki (1964) and the Nuremberg Code (1947) are two other well-known and well-respected documents contributing to medical ethics. Other important milestones in the history of medical ethics include Roe v. Wade in 1973 and the development of hemodialysis in the 1960s. With hemodialysis now available, but only a limited number of dialysis machines to treat patients, an ethical question arose as to which patients to treat and which ones not to treat, and which factors to use in making such a decision.[7] More recently, new gene-editing techniques aimed at treating, preventing, and curing diseases are raising important moral questions about their applications in medicine and treatments, as well as their societal impacts on future generations,[8][9] yet remain controversial due to their association with eugenics.[10]
As this field continues to develop and change throughout history, the focus remains on fair, balanced, and moral thinking across all cultural and religious backgrounds around the world.[11][12] The field of medical ethics encompasses both practical application in clinical settings and scholarly work in philosophy, history, and sociology.
Medical ethics encompasses beneficence, autonomy, and justice as they relate to conflicts such as euthanasia, patient confidentiality, informed consent, and conflicts of interest in healthcare.[13][14][15] In addition, medical ethics and culture are interconnected as different cultures implement ethical values differently, sometimes placing more emphasis on family values and downplaying the importance of autonomy. This leads to an increasing need for culturally sensitive physicians and ethical committees in hospitals and other healthcare settings.[11][12][16]
The term medical ethics first dates back to 1803, when English author and physician Thomas Percival published a document describing the requirements and expectations of medical professionals within medical facilities. The Code of Ethics was then adapted in 1847, relying heavily on Percival's words.[17] Revisions were made to the original document in 1903, 1912, and 1947.[17] The practice of medical ethics is widely accepted and practiced throughout the world.[4]
Historically, Western medical ethics may be traced to guidelines on the duty of physicians in antiquity, such as the Hippocratic Oath, and early Christian teachings. The first code of medical ethics, Formula Comitis Archiatrorum, was published in the 5th century, during the reign of the Ostrogothic Christian king Theodoric the Great. In the medieval and early modern period, the field is indebted to Islamic scholarship such as Ishaq ibn Ali al-Ruhawi (who wrote the Conduct of a Physician, the first book dedicated to medical ethics), Avicenna's Canon of Medicine and Muhammad ibn Zakariya ar-Razi (known as Rhazes in the West), Jewish thinkers such as Maimonides, Roman Catholic scholastic thinkers such as Thomas Aquinas, and the case-oriented analysis (casuistry) of Catholic moral theology. These intellectual traditions continue in Catholic, Islamic and Jewish medical ethics.
By the 18th and 19th centuries, medical ethics emerged as a more self-conscious discourse. In England, Thomas Percival, a physician and author, crafted the first modern code of medical ethics. He drew up a pamphlet with the code in 1794 and wrote an expanded version in 1803, in which he coined the expressions "medical ethics" and "medical jurisprudence".[18] However, some see Percival's guidelines relating to physician consultations as being excessively protective of the home physician's reputation. Jeffrey Berlant is one such critic, who considers Percival's codes of physician consultations an early example of the anti-competitive, "guild"-like nature of the physician community.[19][20] In addition, from the mid-19th century into the 20th century, physician-patient relationships that had once been more familiar became less prominent and less intimate, sometimes leading to malpractice; this resulted in less public trust and a shift in decision-making power from the paternalistic physician model to today's emphasis on patient autonomy and self-determination.[21]
In 1815, the Apothecaries Act was passed by the Parliament of the United Kingdom. It introduced compulsory apprenticeship and formal qualifications for the apothecaries of the day under the license of the Society of Apothecaries. This was the beginning of regulation of the medical profession in the UK.
In 1847, the American Medical Association adopted its first code of ethics, with this being based in large part upon Percival's work.[22] While the secularized field borrowed largely from Catholic medical ethics, in the 20th century a distinctively liberal Protestant approach was articulated by thinkers such as Joseph Fletcher. In the 1960s and 1970s, building upon liberal theory and procedural justice, much of the discourse of medical ethics went through a dramatic shift and largely reconfigured itself into bioethics.[23]
Since the 1970s, the growing influence of ethics in contemporary medicine can be seen in the increasing use of Institutional Review Boards to evaluate experiments on human subjects, the establishment of hospital ethics committees, the expansion of the role of clinician ethicists, and the integration of ethics into many medical school curricula.[24]
In December 2019, COVID-19, the disease caused by the novel coronavirus SARS-CoV-2, emerged as a threat to worldwide public health and, over the following years, ignited novel inquiry into modern-age medical ethics. For example, since the first discovery of COVID-19 in Wuhan, China[25] and its subsequent global spread by mid-2020, calls for the adoption of Open Science principles dominated research communities.[26] Some academics believed that Open Science principles — like constant communication between research groups, rapid translation of study results into public policy, and transparency of scientific processes to the public — represented the only solutions to halt the impact of the virus. Others, however, cautioned that these interventions may lead to side-stepping safety in favor of speed, wasteful use of research capital, and creation of public confusion.[26] Drawbacks of these practices include resource-wasting and public confusion surrounding the use of hydroxychloroquine and azithromycin as treatment for COVID-19 — a combination which was later shown to have no impact on COVID-19 survivorship and carried notable cardiotoxic side-effects[27] — as well as a type of vaccine hesitancy arising specifically from the speed at which COVID-19 vaccines were created and made publicly available.[28] However, Open Science also allowed for the rapid implementation of life-saving public interventions like wearing masks and social distancing, the rapid development of multiple vaccines and monoclonal antibodies that have significantly lowered transmission and death rates, and increased public awareness about the severity of the pandemic as well as explanation of daily protective actions against COVID-19 infection, like hand washing.[26]
The ethics of COVID-19 spans many more areas of medicine and society than those discussed here; some of these issues may not become fully apparent until the end of the pandemic, which, as of September 12, 2022, was still ongoing.
A common framework used when analysing medical ethics is the "four principles" approach postulated by Tom Beauchamp and James Childress in their textbook Principles of Biomedical Ethics. It recognizes four basic moral principles, which are to be judged and weighed against each other, with attention given to the scope of their application. The four principles are respect for autonomy, beneficence, non-maleficence, and justice.[38]
The principle of autonomy, broken down into "autos" (self) and "nomos" (rule), refers to the right of an individual to self-determination.[21] This is rooted in society's respect for individuals' ability to make informed decisions about personal matters with freedom. Autonomy has become more important as social values have shifted to define medical quality in terms of outcomes that are important to the patient and their family rather than to medical professionals.[21] The increasing importance of autonomy can be seen as a social reaction against the "paternalistic" tradition within healthcare.[21][40] Some have questioned whether the backlash against historically excessive paternalism in favor of patient autonomy has inhibited the proper use of soft paternalism, to the detriment of outcomes for some patients.[41]
The definition of autonomy is the ability of an individual to make a rational, uninfluenced decision. Therefore, it can be said that autonomy is a general indicator of a healthy mind and body. The progression of many terminal diseases is characterized by loss of autonomy, in various manners and to various extents. For example, dementia, a chronic and progressive disease that attacks the brain and can induce memory loss and a decline in rational thinking, almost always results in the loss of autonomy.[42]
Psychiatrists and clinical psychologists are often asked to evaluate a patient's capacity for making life-and-death decisions at the end of life. Persons with a psychiatric condition such as delirium or clinical depression may lack the capacity to make end-of-life decisions. For these persons, a request to refuse treatment may be taken in the context of their condition. Unless there is a clear advance directive to the contrary, persons lacking mental capacity are treated according to their best interests. This involves an assessment, drawing on the views of people who know the person best, of what decisions the person would have made had they not lost capacity.[43] Persons with the mental capacity to make end-of-life decisions may refuse treatment with the understanding that it may shorten their life. Psychiatrists and psychologists may be involved to support decision making.[44]
The term beneficence refers to actions that promote the well-being of others. In the medical context, this means taking actions that serve the best interests of patients and their families.[2] However, uncertainty surrounds the precise definition of which practices do in fact help patients.
James Childress and Tom Beauchamp in Principles of Biomedical Ethics (1978) identify beneficence as one of the core values of healthcare ethics. Some scholars, such as Edmund Pellegrino, argue that beneficence is the only fundamental principle of medical ethics. They argue that healing should be the sole purpose of medicine, and that endeavors like cosmetic surgery and euthanasia are severely unethical and against the Hippocratic Oath.
The concept of non-maleficence is embodied by the phrase "first, do no harm", or the Latin primum non nocere. Many consider this to be the main or primary consideration (hence primum): that it is more important not to harm your patient than to do them good, an idea that is part of the Hippocratic oath that doctors take.[45] This is partly because enthusiastic practitioners are prone to using treatments that they believe will do good, without first having evaluated them adequately to ensure they do no harm to the patient. Much harm has been done to patients as a result, as in the saying, "The treatment was a success, but the patient died." It is not only more important to do no harm than to do good; it is also important to know how likely it is that a treatment will harm a patient. So a physician should go further than not prescribing medications they know to be harmful: they should not prescribe medications (or otherwise treat the patient) unless they know that the treatment is unlikely to be harmful, or at the very least that the patient understands the risks and benefits and that the likely benefits outweigh the likely risks.
In practice, however, many treatments carry some risk of harm. In some circumstances, e.g. in desperate situations where the outcome without treatment will be grave, risky treatments that stand a high chance of harming the patient will be justified, as the risk of not treating is also very likely to do harm. So the principle of non-maleficence is not absolute, and balances against the principle of beneficence (doing good), as the effects of the two principles together often give rise to a double effect (further described in next section). Even basic actions like taking a blood sample or an injection of a drug cause harm to the patient's body. Euthanasia also goes against the principle of beneficence because the patient dies as a result of the medical treatment by the doctor.
Double effect refers to two types of consequences that may be produced by a single action,[46] and in medical ethics it is usually regarded as the combined effect of beneficence and non-maleficence.[47]
A commonly cited example of this phenomenon is the use of morphine or another analgesic in the dying patient. Such use of morphine can have the beneficial effect of easing the pain and suffering of the patient while simultaneously having the maleficent effect of shortening the life of the patient through suppression of the respiratory system.[48]
The human rights era started with the formation of the United Nations in 1945, which was charged with the promotion of human rights. The Universal Declaration of Human Rights (1948) was the first major document to define human rights. Medical doctors have an ethical duty to protect the human rights and human dignity of the patient so the advent of a document that defines human rights has had its effect on medical ethics.[49] Most codes of medical ethics now require respect for the human rights of the patient.
The Council of Europe promotes the rule of law and observance of human rights in Europe. The Council of Europe adopted the European Convention on Human Rights and Biomedicine (1997) to create a uniform code of medical ethics for its 47 member-states. The Convention applies international human rights law to medical ethics. It provides special protection of physical integrity for those who are unable to consent, which includes children.
No organ or tissue removal may be carried out on a person who does not have the capacity to consent under Article 5.[50]
As of December 2013, the convention had been ratified or acceded to by twenty-nine member-states of the Council of Europe.[51]
The United Nations Educational, Scientific and Cultural Organization (UNESCO) also promotes the protection of human rights and human dignity. According to UNESCO, "Declarations are another means of defining norms, which are not subject to ratification. Like recommendations, they set forth universal principles to which the community of States wished to attribute the greatest possible authority and to afford the broadest possible support." UNESCO adopted the Universal Declaration on Bioethics and Human Rights (2005) to advance the application of international human rights law in medical ethics. The Declaration provides special protection of human rights for incompetent persons.
In applying and advancing scientific knowledge, medical practice and associated technologies, human vulnerability should be taken into account. Individuals and groups of special vulnerability should be protected and the personal integrity of such individuals respected.[52]
Individualistic standards of autonomy and personal human rights as they relate to social justice, seen in the Anglo-Saxon community, clash with, and can also supplement, the concept of solidarity, which stands closer to a European healthcare perspective focused on community, universal welfare, and the unselfish wish to provide healthcare equally for all.[53] In the United States, individualistic and self-interested healthcare norms are upheld, whereas in other countries, including European countries, a sense of respect for the community and personal support is more greatly upheld in relation to free healthcare.[53]
The concept of normality, that there is a human physiological standard contrasting with conditions of illness, abnormality and pain, leads to assumptions and bias that negatively affect health care practice.[54] It is important to realize that normality is ambiguous, and that accepting such ambiguity in healthcare is necessary in order to practice humbler medicine and understand complex, sometimes unusual medical cases.[54] Thus, society's views on central concepts in philosophy and clinical beneficence must be questioned and revisited, adopting ambiguity as a central player in medical practice.[54]
Beneficence can come into conflict with non-maleficence when healthcare professionals are deciding between a “first, do no harm” approach vs. a “first, do good” approach, such as when deciding whether or not to operate when the balance between the risk and benefit of the operation is not known and must be estimated. Healthcare professionals who place beneficence below other principles like non-maleficence may decide not to help a patient more than a limited amount if they feel they have met the standard of care and are not morally obligated to provide additional services. Young and Wagner argued that, in general, beneficence takes priority over non-maleficence (“first, do good,” not “first, do no harm”), both historically and philosophically.[1]
Autonomy can come into conflict with beneficence when patients disagree with recommendations that healthcare professionals believe are in the patient's best interest. When the patient's wishes conflict with the patient's welfare, different societies settle the conflict in a wide range of manners. In general, Western medicine defers to the wishes of a mentally competent patient to make their own decisions, even in cases where the medical team believes that they are not acting in their own best interests. However, many other societies prioritize beneficence over autonomy. People deemed not to be mentally competent, or having a mental disorder, may be treated involuntarily.
Examples include when a patient does not want a treatment because of, for example, religious or cultural views. In the case of euthanasia, the patient, or relatives of a patient, may want to end the life of the patient. Also, the patient may want an unnecessary treatment, as can be the case in hypochondria or with cosmetic surgery; here, the practitioner may be required to weigh the patient's desire for a medically unnecessary treatment, and its potential risks, against the patient's informed autonomy in the issue. A doctor may prefer to respect autonomy because refusal to respect the patient's self-determination would harm the doctor-patient relationship.
Organ donations can sometimes pose interesting scenarios, in which a patient is classified as a non-heart-beating donor (NHBD), where life support fails to restore the heartbeat and is now considered futile, but brain death has not occurred. Classifying a patient as an NHBD can qualify someone for non-therapeutic intensive care, in which treatment is only given to preserve the organs that will be donated and not to preserve the life of the donor. This can bring up ethical issues, as some may see respect for the donor's wishes to donate their healthy organs as respect for autonomy, while others may view the sustaining of futile treatment during a vegetative state as maleficence towards the patient and the patient's family. Some worry that making this process a worldwide customary measure may dehumanize and take away from the natural process of dying and what it brings along with it.
Individuals' capacity for informed decision-making may come into question during resolution of conflicts between autonomy and beneficence. The role of surrogate medical decision-makers is an extension of the principle of autonomy.
On the other hand, autonomy and beneficence/non-maleficence may also overlap. For example, a breach of patients' autonomy may decrease public confidence in medical services and subsequently lessen willingness to seek help, which in turn may undermine the ability to practice beneficence.
The principles of autonomy and beneficence/non-maleficence may also be expanded to include effects on the relatives of patients or even the medical practitioners, the overall population and economic issues when making medical decisions.
There is disagreement among American physicians as to whether the non-maleficence principle excludes the practice of euthanasia. Physician-assisted death is currently legal in Washington, D.C., and in the states of California, Colorado, Oregon, Vermont, and Washington.[55] Around the world, there are different organizations that campaign to change legislation about the issue of physician-assisted death, or PAD. Examples of such organizations are the Hemlock Society of the United States and the Dignity in Dying campaign in the United Kingdom.[56] These groups believe that doctors should be given the right to end a patient's life only if the patient is conscious enough to decide for themselves, is knowledgeable about the possibility of alternative care, and has willingly asked to end their life or requested access to the means to do so.
This argument is disputed in other parts of the world. For example, in the state of Louisiana, giving advice or supplying the means to end a person's life is considered a criminal act and can be charged as a felony.[57] In state courts, this crime is comparable to manslaughter.[58] The same laws apply in the states of Mississippi and Nebraska.[59]
Informed consent refers to a patient's right to receive information relevant to a recommended treatment, in order to be able to make a well-considered, voluntary decision about their care.[60] To give informed consent, a patient must be competent to make a decision regarding their treatment and be presented with relevant information regarding a treatment recommendation, including its nature and purpose, and the burdens, risks and potential benefits of all options and alternatives.[61] After receiving and understanding this information, the patient can then make a fully informed decision to either consent or refuse treatment.[62] In certain circumstances, there can be an exception to the need for informed consent, including, but not limited to, in cases of a medical emergency or patient incompetency.[63] The ethical concept of informed consent also applies in a clinical research setting; all human participants in research must voluntarily decide to participate in the study after being fully informed of all relevant aspects of the research trial necessary to decide whether to participate or not.[64] Informed consent is both an ethical and legal duty; if proper consent is not received prior to a procedure, treatment, or participation in research, providers can be held liable for battery and/or other torts.[65] In the United States, informed consent is governed by both federal and state law, and the specific requirements for obtaining informed consent vary state to state.[66]
Confidentiality is commonly applied to conversations between doctors and patients.[67] This concept is commonly known as patient-physician privilege. Legal protections prevent physicians from revealing their discussions with patients, even under oath in court.
Confidentiality is mandated in the United States by the Health Insurance Portability and Accountability Act of 1996 known as HIPAA,[68] specifically the Privacy Rule, and various state laws, some more rigorous than HIPAA. However, numerous exceptions to the rules have been carved out over the years. For example, many states require physicians to report gunshot wounds to the police and impaired drivers to the Department of Motor Vehicles. Confidentiality is also challenged in cases involving the diagnosis of a sexually transmitted disease in a patient who refuses to reveal the diagnosis to a spouse, and in the termination of a pregnancy in an underage patient, without the knowledge of the patient's parents. Many states in the U.S. have laws governing parental notification in underage abortion.[69][70] Those working in mental health have a duty to warn those who they deem to be at risk from their patients in some countries.[71]
Traditionally, medical ethics has viewed the duty of confidentiality as a relatively non-negotiable tenet of medical practice. More recently, critics like Jacob Appel have argued for a more nuanced approach to the duty that acknowledges the need for flexibility in many cases.[13]
Confidentiality is an important issue in primary care ethics, where physicians care for many patients from the same family and community, and where third parties often request information from the considerable medical database typically gathered in primary health care.
With increasing frequency, medical researchers are studying activities in online environments such as discussion boards and bulletin boards, and there is concern that the requirements of informed consent and privacy are not applied, although some guidelines do exist.[72]
One issue that has arisen, however, is the disclosure of information. While researchers wish to quote from the original source in order to argue a point, this can have repercussions when the identity of the patient is not kept confidential. The quotations and other information about the site can be used to identify the patient, and researchers have reported cases where members of the site, bloggers and others have used this information as 'clues' in a game in an attempt to identify the site.[73] Some researchers have employed various methods of "heavy disguise",[73] including discussing a different condition from that under study.[74][75]
Healthcare institutions' websites have a responsibility to ensure that the private medical records of their online visitors are secure from being marketed and monetized into the hands of drug companies, employers, and insurance companies. The delivery of diagnoses online can lead patients to believe that doctors in some parts of the country are at the direct service of drug companies, with diagnosis made as conveniently as determining which drug still has patent rights on it.[76] Physicians and drug companies have been found to compete for top-ten search engine rankings to lower the costs of selling these drugs, with little to no patient involvement.[77]
With the expansion of internet healthcare platforms, online practitioner legitimacy and privacy accountability face unique challenges such as e-paparazzi, online information brokers, industrial spies, and unlicensed information providers that work outside of traditional medical codes for profit. The American Medical Association (AMA) states that medical websites have the responsibility to ensure the health care privacy of online visitors and protect patient records from being marketed and monetized into the hands of insurance companies, employers, and marketers.[40] With the rapid unification of healthcare, business practices, computer science and e-commerce to create these online diagnostic websites, efforts to maintain the health care system's ethical confidentiality standard need to keep up as well. The Department of Health and Human Services has stated that over the next few years it will work towards lawfully protecting the online privacy and digital transfers of patient Electronic Medical Records (EMR) under the Health Insurance Portability and Accountability Act (HIPAA).[41] Looking forward, strong governance and accountability mechanisms will need to be considered with respect to digital health ecosystems, including potential metaverse healthcare platforms, to ensure that the highest ethical standards are upheld relating to medical confidentiality and patient data.[78]
In the UK, medical ethics forms part of the training of physicians and surgeons,[79] and disregard for ethical principles can result in doctors being barred from medical practice after a decision by the Medical Practitioners Tribunal Service.[80](p32)
To ensure that appropriate ethical values are being applied within hospitals, effective hospital accreditation requires that ethical considerations are taken into account, for example with respect to physician integrity, conflict of interest, research ethics and organ transplantation ethics.
There is much documentation of the history and necessity of the Declaration of Helsinki. The first code of conduct for research, including medical ethics, was the Nuremberg Code. This document had large ties to Nazi war crimes, as it was introduced in 1947, and it did not make much of a difference in terms of regulating practice. This issue called for the creation of the Declaration. There are some stark differences between the Nuremberg Code and the Declaration of Helsinki, including the way they are written. The Nuremberg Code was written in a very concise manner, with simple explanations. The Declaration of Helsinki is written with thorough explanation in mind and includes many specific commentaries.[81]
In the United Kingdom, the General Medical Council provides clear overall modern guidance in the form of its 'Good Medical Practice' statement.[82] Other organizations, such as the Medical Protection Society and a number of university departments, are often consulted by British doctors regarding issues relating to ethics.
Often, simple communication is not enough to resolve a conflict, and a hospital ethics committee must convene to decide a complex matter.
These bodies are composed primarily of healthcare professionals, but may also include philosophers, lay people, and clergy – indeed, in many parts of the world their presence is considered mandatory in order to provide balance.
With respect to the expected composition of such bodies in the US, Europe and Australia, the following applies.[83]
U.S. recommendations suggest that Research and Ethical Boards (REBs) should have five or more members, including at least one scientist, one non-scientist, and one person not affiliated with the institution.[84] The REB should include people knowledgeable in the law and standards of practice and professional conduct.[84] Special memberships are advocated for handicapped or disabled concerns, if required by the protocol under review.
The European Forum for Good Clinical Practice (EFGCP) suggests that REBs include two practicing physicians who share experience in biomedical research and are independent from the institution where the research is conducted; one lay person; one lawyer; and one paramedical professional, e.g. nurse or pharmacist. They recommend that a quorum include both sexes from a wide age range and reflect the cultural make-up of the local community.
The 1996 Australian Health Ethics Committee recommendations were entitled, "Membership Generally of Institutional Ethics Committees". They suggest a chairperson be preferably someone not employed or otherwise connected with the institution. Members should include a person with knowledge and experience in professional care, counseling or treatment of humans; a minister of religion or equivalent, e.g. Aboriginal elder; a layman; a laywoman; a lawyer and, in the case of a hospital-based ethics committee, a nurse.
The assignment of philosophers or religious clerics will reflect the importance attached by the society to the basic values involved. An example from Sweden, where the philosopher Torbjörn Tännsjö has served on a couple of such committees, indicates that secular trends are gaining influence.
Cultural differences can create difficult medical ethics problems. Some cultures have spiritual or magical theories about the origins and cause of disease, for example, and reconciling these beliefs with the tenets of Western medicine can be very difficult. As different cultures continue to intermingle and more cultures live alongside each other, the healthcare system, which tends to deal with important life events such as birth, death and suffering, increasingly experiences difficult dilemmas that can sometimes lead to cultural clashes and conflict. Efforts to respond in a culturally sensitive manner go hand in hand with a need to distinguish limits to cultural tolerance.[11]
As more people from different cultural and religious backgrounds move to other countries, among these the United States, it is becoming increasingly important to be culturally sensitive to all communities in order to provide the best health care for all people.[12] Lack of cultural knowledge can lead to misunderstandings and even inadequate care, which can lead to ethical problems. A common complaint patients have is feeling like they are not being heard, or perhaps, understood.[12] Escalating conflict can be prevented by seeking interpreters, noticing the body language and tone of both oneself and the patient, and attempting to understand the patient's perspective in order to reach an acceptable option.[12]
Some believe that most medical practitioners in the future will need to be, or will greatly benefit from being, bilingual. In addition to knowing the language, truly understanding culture is best for optimal care.[85] Recently, a practice called 'narrative medicine' has gained some interest, as it has potential for improving patient-physician communication and understanding of the patient's perspective. Interpreting a patient's stories or day-to-day activities, as opposed to standardizing and collecting patient data, may help in acquiring a better sense of what each patient needs, individually, with respect to their illness. Without this background information, many physicians are unable to properly understand the cultural differences that may set two different patients apart, and thus may diagnose or recommend treatments that are culturally insensitive or inappropriate. In short, patient narrative has the potential for uncovering patient information and preferences that may otherwise be overlooked.
In order to address the disparities in nutrition, housing, and healthcare affecting underserved and undereducated communities in much of the world today, some argue that we must fall back on ethical values in order to create a foundation for a reasonable understanding that encourages commitment and motivation to improve factors causing premature death as a goal in a global community.[14] Factors such as poverty, environment and education are said to be out of national or individual control, and so this commitment is by default a social and communal responsibility placed on global communities that are able to aid others in need.[14] This is based on the framework of 'provincial globalism', which seeks a world in which all people have the capability to be healthy.[14]
One concern regarding the intersection of medical ethics and humanitarian medical aid is that medical assistance can be as harmful as it is helpful to the community being served. One such example is how political forces may control how foreign humanitarian aid can be utilized in the region it is meant to be provided in; this can occur in situations where political strife leads to such aid being used in favor of one group over another. Another example of how foreign humanitarian aid can be misused in its intended community is the possibility of dissonance forming between a foreign humanitarian aid group and the community being served.[86] Examples of this could include the relationships perceived between aid workers and the community, styles of dress, or a lack of education regarding local culture and customs.[87]
Humanitarian practices in areas lacking optimal care can also pose interesting and difficult ethical dilemmas in terms of beneficence and non-maleficence. Humanitarian practices are based upon providing better medical equipment and care for communities whose country does not provide adequate healthcare.[88] One issue with providing healthcare to communities in need is that religious or cultural backgrounds may keep people from performing certain procedures or taking certain drugs. On the other hand, wanting certain procedures done in a specific manner due to religious or cultural belief systems may also occur. The ethical dilemma stems from differences in culture between communities helping those with medical disparities and the societies receiving aid. Women's rights, informed consent and education about health become controversial, as some treatments needed are against societal law, while some cultural traditions involve procedures against humanitarian efforts.[88] Examples of this are female genital mutilation (FGM), aiding in reinfibulation, providing sterile equipment in order to perform procedures such as FGM, as well as informing patients of their HIV-positive test results. The latter is controversial because certain communities have in the past outcast or killed HIV-positive individuals.[88]
Leading causes of death in the United States and around the world are more closely related to behavioral factors than to genetic or environmental factors.[89] This leads some to believe that true healthcare reform begins with reform of culture, habit and overall lifestyle.[89] Lifestyle, then, becomes the cause of many illnesses, and the illnesses themselves are the result or side-effect of a larger problem.[89] Some people believe this to be true and think that cultural change is needed in order for developing societies to cope with and avoid the negative effects of drugs, food and conventional modes of transportation available to them.[89] In 1990, tobacco use, diet, and exercise alone accounted for close to 80 percent of all premature deaths, and these factors continue to lead in this way through the 21st century.[89] Heart disease, stroke, dementia, and diabetes are some of the diseases that may be affected by habit-forming patterns throughout our lives.[89] Some believe that medical lifestyle counseling and building healthy habits around our daily lives is one way to tackle health care reform.[89]
Buddhist ethics and medicine are based on religious teachings of compassion and understanding[90] of suffering and cause and effect, and on the idea that there is no beginning or end to life, but that instead there are only rebirths in an endless cycle.[11] In this way, death is merely a phase in an indefinitely lengthy process of life, not an end. However, Buddhist teachings support living one's life to the fullest so that through all the suffering which encompasses a large part of what is life, there are no regrets. Buddhism accepts suffering as an inescapable experience, but values happiness and thus values life.[11] Because of this, suicide and euthanasia are prohibited. However, attempts to rid oneself of any physical or mental pain and suffering are seen as good acts. On the other hand, sedatives and drugs are thought to impair consciousness and awareness in the dying process, which is believed to be of great importance, as it is thought that one's dying consciousness remains and affects new life. Because of this, analgesics must not be part of the dying process, in order for the dying person to be present entirely and pass on their consciousness wholesomely. This can pose significant conflicts during end-of-life care in Western medical practice.[11]
In traditional Chinese philosophy, human life is believed to be connected to nature, which is thought of as the foundation and encompassing force sustaining all of life's phases.[11] The passing and coming of the seasons, life, birth and death are perceived as cyclic and perpetual occurrences that are believed to be regulated by the principles of yin and yang.[11] When one dies, the life-giving material force referred to as ch'i, encompassing both body and spirit, rejoins the material force of the universe and cycles on with respect to the rhythms set forth by yin and yang.[11]
Because many Chinese people believe that circulation of both physical and 'psychic energy' is important to staying healthy, procedures which require surgery, as well as donations and transplantations of organs, are seen as a loss of ch'i, resulting in the loss of someone's vital energy supporting their consciousness and purpose in their lives. Furthermore, a person is never seen as a single unit but rather as a source of relationship, interconnected in a social web.[11] Thus, it is believed that what makes a human one of us is relatedness and communication, and the family is seen as the basic unit of a community.[11][16] This can greatly affect the way medical decisions are made among family members, as diagnoses are not always expected to be announced to the dying or sick, the elderly are expected to be cared for and represented by their children, and physicians are expected to act in a paternalistic way.[11][16] In short, informed consent as well as patient privacy can be difficult to enforce when dealing with Confucian families.[11]
Furthermore, some Chinese people may be inclined to continue futile treatment in order to extend life and allow for fulfillment of the practice of benevolence and humanity.[11] In contrast, patients with strong Daoist beliefs may see death not as an obstacle but as a reunion with nature that should be accepted, and are therefore less likely to ask for treatment of an irreversible condition.[11]
Some believe that Islamic medical ethics and its framework remain poorly understood by many working in healthcare. It is important to recognize that for people of Islamic faith, Islam envelops and affects all aspects of life, not just medicine.[91] Because many believe it is faith and a supreme deity that hold the cure to illness, it is common that the physician is viewed merely as a helper or intermediary during the process of healing or medical care.[91]
In addition to Chinese culture's emphasis on family as the basic unit of a community intertwined and forming a greater social construct, Islamic traditional medicine also places importance on the values of family and the well-being of a community.[16][91] Many Islamic communities uphold paternalism as an acceptable part of medical care.[91] However, autonomy and self-rule are also valued and protected and, in Islamic medicine, they are particularly upheld in terms of providing and expecting privacy in the healthcare setting. An example of this is requesting same-gender providers in order to retain modesty.[91] Overall, Beauchamp's principles of beneficence, non-maleficence and justice[2] are promoted and upheld in the medical sphere with as much importance as in Western culture.[91] In contrast, autonomy is important but more nuanced. Furthermore, Islam also brings forth the principles of jurisprudence, Islamic law and legal maxims, which also allow for Islam to adapt to an ever-changing medical ethics framework.[91]
Physicians should not allow a conflict of interest to influence medical judgment. In some cases, conflicts are hard to avoid entirely, but doctors have a responsibility to avoid entering such situations. Research has shown that conflicts of interest are very common among both academic physicians[92] and physicians in practice.[93][94]
Doctors who receive income from referring patients for medical tests have been shown to refer more patients for medical tests.[95] This practice is proscribed by the American College of Physicians Ethics Manual.[96] Fee splitting and the payments of commissions to attract referrals of patients is considered unethical and unacceptable in most parts of the world.[citation needed]
Studies show that doctors can be influenced by drug company inducements, including gifts and food.[15] Industry-sponsored Continuing Medical Education (CME) programs influence prescribing patterns.[97] Many patients surveyed in one study agreed that physician gifts from drug companies influence prescribing practices.[98] A growing movement among physicians is attempting to diminish the influence of pharmaceutical industry marketing upon medical practice, as evidenced by Stanford University's ban on drug company-sponsored lunches and gifts. Other academic institutions that have banned pharmaceutical industry-sponsored gifts and food include the Johns Hopkins Medical Institutions, University of Michigan, University of Pennsylvania, and Yale University.[99][100]
The American Medical Association (AMA) states that "Physicians generally should not treat themselves or members of their immediate family".[101] This code seeks to protect patients and physicians because professional objectivity can be compromised when the physician is treating a loved one. Studies from multiple health organizations have illustrated that physician-family member relationships may cause an increase in diagnostic testing and costs.[102] Many doctors still treat their family members. Doctors who do so must be vigilant not to create conflicts of interest or treat inappropriately.[103][104] Physicians that treat family members need to be conscious of conflicting expectations and dilemmas when treating relatives, as established medical ethical principles may not be morally imperative when family members are confronted with serious illness.[102][105]
Sexual relationships between doctors and patients can create ethical conflicts, since sexual consent may conflict with the fiduciary responsibility of the physician.[106] Studies have been conducted across the many disciplines of current medicine in order to ascertain the occurrence of doctor-patient sexual misconduct. Results from those studies appear to indicate that certain disciplines are more likely than others to have offenders. Psychiatrists and obstetrician-gynecologists, for example, are two disciplines noted for having a higher rate of sexual misconduct.[107] The violation of ethical conduct between doctors and patients also has an association with the age and sex of doctor and patient. Male physicians aged 40–59 years have been found to be more likely to have been reported for sexual misconduct; women aged 20–39 have been found to make up a significant portion of reported victims of sexual misconduct.[108] Doctors who enter into sexual relationships with patients face the threats of losing their medical license and prosecution. In the early 1990s, it was estimated that 2–9% of doctors had violated this rule.[109] Sexual relationships between physicians and patients' relatives may also be prohibited in some jurisdictions, although this prohibition is highly controversial.[110]
In some hospitals, medical futility is defined as treatment that is unable to benefit the patient.[111] An important part of practicing good medical ethics is attempting to avoid futility by practicing non-maleficence.[111] What should be done if there is no chance that a patient will survive or benefit from a potential treatment but the family members insist on advanced care?[111] Previously, some articles defined futility as the patient having less than a one percent chance of surviving. Some of these cases are examined in court.
Advance directives include living wills and durable powers of attorney for health care. (See also Do Not Resuscitate and cardiopulmonary resuscitation.) In many cases, the "expressed wishes" of the patient are documented in these directives, and this provides a framework to guide family members and health care professionals in the decision-making process when the patient is incapacitated. Undocumented expressed wishes can also help guide decisions in the absence of advance directives, as in the Quinlan case in New Jersey.
"Substituted judgment" is the concept that a family member can give consent for treatment if the patient is unable (or unwilling) to give consent themselves. The key question for the decision-making surrogate is not, "What would you like to do?", but instead, "What do you think the patient would want in this situation?".
Courts have supported families' arbitrary definitions of futility to include simple biological survival, as in the Baby K case (in which the courts ordered a child born with only a brain stem instead of a complete brain to be kept on a ventilator based on the religious belief that all life must be preserved).
The Baby Doe Law establishes state protection for a disabled child's right to life, ensuring that this right is protected even over the wishes of parents or guardians in cases where they want to withhold treatment.
Original source: https://en.wikipedia.org/wiki/Medical_ethics