Effective altruism


One of the endless series of panel discussions on EA, this one discussing AI and existential risk. The leader of effective assholism[1] is in the center. Pictured, left to right: a moderator, Nick Bostrom, Elon Musk, Nate Soares, and Stuart J. Russell
#Bitcoin is Effective Altruism.
—Michael Saylor, Bitcoin billionaire who was charged with tax evasion in 2022[2]
I thought effective altruism is a bad word now.
—Changpeng Zhao, founder of the cryptocurrency exchange Binance[2]

Effective altruism (EA), and its umbrella concept longtermism, is a quasi-utilitarian[3] movement to change the world through making carefully-targeted charitable donations — not only through such donations, but they are the overwhelming focus. Philosopher Peter Singer popularised the idea, notably in a 2013 TED talk, and bought into it big time.[4] Effective altruism is also pushed by Bay Area technolibertarians and artificial intelligence existential risk groups, including MIRI. The latter, of course, consider themselves an obvious beneficiary — if not the obvious beneficiary (i.e., self-dealing). After EA began gaining traction, Singer became very wary of where it was heading, stating in 2021:[5]

The dangers of treating extinction risk as humanity’s overriding concern should be obvious. Viewing current problems through the lens of existential risk to our species can shrink those problems to almost nothing, while justifying almost anything that increases our odds of surviving long enough to spread beyond Earth.

The sales pitch is that, if you're going to try to make the world a better place for other people, you should try to do the best possible job you can. If you had the choice between helping a local community theater group put on a show or saving African children from malaria, the right thing to do is, of course, to save the children. People face dilemmas like this in real life whenever they donate money to charity: if you're not donating to the most cost-effective charities that you can, you fail at utilitarianism. (It's impossible not to fail at utilitarianism, but you can fail less hard.)

It is important to remember that EA invented neither the concept of charity, nor the concept of evaluating charities — though some EAs behave as though they did. Beware of EAs equivocating by responding to criticisms of the EA subculture's behaviours with advocacy of the value of charity or evaluating charities in general.

How EAs evaluate charities

In the ideal case, EAs leave actually evaluating how good charities are to dedicated organisations set up for that purpose — charity evaluators, such as GiveWell and Giving What We Can (GWWC). GiveWell and GWWC tend to rate charities in a quasi-utilitarian way, using a combination of the best available published evidence for the interventions and extensive questioning of the charities they rate, covering things like checking that the interventions actually work (auditing), room for more funding, and whether adding more funding would do the same amount of good, more good, or less good. Overhead is also considered; however, overhead is not regarded as a terrible thing if it improves the effectiveness of the work (monitoring programs being a notable example). These evaluators are preferred by EAs over existing charity evaluators such as Charity Navigator — Charity Navigator just looks at the percentage a charity spends on administrative and fundraising overheads and pays no attention to whether what the charity is doing is effective, or how effective it is.
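To make the "quasi-utilitarian" arithmetic concrete, here is a minimal sketch (in Python, with entirely invented figures — this is not GiveWell's actual methodology) of ranking charities by good done per marginal dollar, discounted by how much extra funding each can still usefully absorb:

  # Toy illustration of quasi-utilitarian charity ranking.
  # All figures are invented; this is not GiveWell's actual model.
  charities = {
      # name: (dollars per unit of good done, fraction of a marginal
      #        donation the charity can still usefully absorb)
      "Bed nets":      (5_000, 0.9),
      "Deworming":     (8_000, 0.7),
      "Local theatre": (500_000, 1.0),
  }

  def good_per_dollar(cost_per_outcome, usable_fraction):
      # Expected units of good per marginal dollar, discounted for
      # limited room for more funding.
      return usable_fraction / cost_per_outcome

  for name, (cost, frac) in sorted(charities.items(),
                                   key=lambda kv: good_per_dollar(*kv[1]),
                                   reverse=True):
      print(f"{name:14s} {good_per_dollar(cost, frac):.6f} units of good per $")

Real evaluators fold in far more than this (evidence quality, auditing, monitoring costs, and so on), but the basic move — compare outcomes per marginal dollar rather than overhead percentages — is the same.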

Of course, then there's donations to MIRI, but MIRI appear to be special-cased for subcultural reasons.

GiveWell

However, GiveWell has partnered with billionaire Facebook co-founder Dustin Moskovitz and his wife's charitable foundation in a joint initiative called the Open Philanthropy Project, and in this initiative they have been accused of casting aside analytical rigour and of bias.[6]

GiveWell has also recommended that people spam the Against Malaria Foundation (AMF) with all[note 1] the money they have set aside to donate, on the grounds that they think it's the best charity, even at the risk of exhausting the AMF's room for more funding, amongst other dubious decisions.[citation needed]

Effective altruists have criticized GiveWell for being too strict in their criteria, which leaves UNICEF off their list of recommended charities: UNICEF focuses on so many interventions that GiveWell finds it hard to evaluate their effectiveness.[7] This is despite the fact that UNICEF engages in many cost-effective interventions, such as providing vaccines.

Origins

Singer at an EA conference in 2015
Yudkowsky (left) at the Singularity Summit in 2007

The philosophical underpinnings mostly come from philosopher Peter Singer, particularly his 1972 essay Famine, Affluence, and Morality. He argues in this essay that affluent people are morally obligated to donate far more of their income to humanitarian causes than is considered normal in Western culture. This did not start the effective altruism subculture, but once it was going, he joined in enthusiastically.

The effective altruism subculture — as opposed to the concept of altruism that is effective — originated around LessWrong.[8] The earliest known use of the term was in the form "effectively altruistic" by user "Anand" in a 2003 edit on the wiki of the singularitarian Shock Level 4 mailing list, a predecessor of LessWrong run by Eliezer Yudkowsky.[9] Anand's article argued that donating to the Singularity Institute (now known as MIRI) is more effective than donating to prevent the spread of HIV/AIDS, even though the latter may be more emotionally compelling. Later, the term was used in the form "effective altruist" by Yudkowsky himself in his 2007 blog post Scope Insensitivity, arguing against sentimentality and for utilitarian calculation in charity:[10]

If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets real worked up about that poor struggling oil-soaked bird.

Other names were used, e.g. "efficient charity" in 2010,[11] but the movement eventually settled on the name "effective altruism" by 2012.[12][8]

Earning to give

People who call themselves effective altruists commonly endorse the "earning to give" approach, at least for those who have, or might be able to get, well-paid jobs. At its most hardcore, "earning to give" means getting the highest-paying job that one can and then donating as much of the pay as possible (up to some threshold, for sanity's sake). After all, you can get more done by paying a bunch of other people to solve problems for you than you can do all on your own, right?[13] One EA career guide originally gave little heed to the morality or potential for harm of the job itself, recommending "trading in quantitative hedge funds" in 2014-2015,[14] before adding a caveat in 2017:[13]

Don’t many high earning jobs cause harm? We don’t recommend taking a job that does a lot of harm in order to donate the money.

But it still retained "quantitative trading" as a career option.[13] This does not appear to be a harmless career choice at all, in light of the quantitative trading that helped bring on the Great Recession of 2008-2009, or the quantitative trading that led to the bankruptcy of the FTX cryptocurrency exchange in 2022.[15] Disregarding the potential harm that one's job causes for the purposes of philanthropy makes the philanthropy either a smokescreen, or the whole process an exercise in consequentialism (ends justifying the means).

In practice, people will not always take (or keep) the highest-paying job they can, for a variety of reasons including commute time, company culture, working hours, the employer's attitude to diversity, work-related stress, and whether the management are perceived to treat employees well or badly. However, 80,000 Hours, an organisation dedicated to giving career advice to wannabe effective altruists, published a blog post claiming that research showed that, depending on the type of stress, stress at work wasn't necessarily a big deal anyway and in some cases, people should consider just sucking it up and maybe "reframe stress as opportunity", in the interests of saving more children from malaria.[16]

Also, in practice nobody literally donates "as much as possible," an unrealistic standard which would presumably mean forgoing any kind of luxuries and a curtailed social life, at least after securing a long-term relationship — and which would still leave the awkward question of whether one's kids should be brought up in near-poverty. (The powerful human instinct towards protection of one's own offspring would tend to militate against such thinking when it came down to it — and if not, there's always social services.) One EA organisation, Giving What We Can, promotes a suggested amount of 10% of one's income, given over one's entire working lifetime. Although this is easily achievable by generously-compensated San Francisco Bay Area software engineers, and (as even Giving What We Can recognises) not achievable by students struggling to get by on student loans, some in the movement seem curiously blind to the fact that not everyone who has a job might be able to part with 10% of their entire income (what amounts to a voluntary flat tax — punitive for the poor and easy for the rich). Some — not all of them millionaires — even pledge to give much more than 10% of their income. It is unclear whether this behaviour is, on balance, inspirational, or whether it acts to drive away potential donors, activists and charity workers who might feel that this is a movement of exclusively privileged people that is remote from their lives and concerns.
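The "voluntary flat tax" point is easy to see with back-of-the-envelope arithmetic. The sketch below (Python, with entirely made-up figures) assumes fixed living costs that do not scale with income, which is the crux of the regressiveness:

  # Toy illustration of why a flat 10% pledge bites harder at low incomes.
  # All figures are invented; fixed living costs don't scale with income.
  FIXED_COSTS = 25_000   # hypothetical annual rent, food, transport, etc.

  for income in (30_000, 60_000, 300_000):
      pledge = 0.10 * income
      left_over = income - FIXED_COSTS - pledge
      print(f"income ${income:>7,}: pledge ${pledge:>7,.0f}, "
            f"left after fixed costs ${left_over:>8,.0f}")

On these invented numbers, the low earner keeps a couple of thousand dollars of discretionary money a year after pledging, while the high earner keeps a couple of hundred thousand.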

Compounding the problem, effective altruism is regularly conflated, even inside the movement, with:

  • Giving What We Can, even though not all people who identify as "effective altruists" have pledged to donate 10% of their income or are planning to do so
  • Utilitarianism, even though not all effective altruists are utilitarians
  • Supporting everything that everyone in the movement does, even though that would be arguably self-contradictory (see below)

EA organisations regularly conduct research into what brings people into the EA movement, but no formal research seems to have been done into what drives some people away from EA. The thinking of many EAs is that effective altruism is so obviously right, only people who were somehow in fundamental disagreement with EA values like doing nice things, and doing more and better things rather than fewer and worse things, would even consider not joining the movement...

Mosquito nets versus artificial intelligence risk

Karnofsky visiting the NGO Seva Mandir in Rajasthan, India in 2010
See the main articles on this topic: Artificial intelligence, Roko's basilisk, and Cybernetic revolt

The ideas have been around a while, but the current subculture that calls itself Effective Altruism got a big push from the Machine Intelligence Research Institute (MIRI) and its friends in the LessWrong community, many of whom considered MIRI obviously the most effective charity in the world.[note 2] Unfortunately for MIRI, EA charity guide GiveWell subsequently rated donations to MIRI as actually worse for MIRI's own project (addressing the threat to humanity posed by hypothetical future advanced artificial intelligence technology) than not donating, with GiveWell's Holden Karnofsky stating in 2012, "I do not believe that these objections constitute a sharp/tight case for the idea that SI's work has low/negative value; I believe, instead, that SI's own arguments are too vague for such a rebuttal to be possible." GiveWell, unlike LessWrong and MIRI, primarily promotes charities focused on improving health in the developing world. GiveWell's criticism of MIRI argued that MIRI's focus on supposedly trying to save the world and create "Friendly AI" amounted to a form of Pascal's Mugging — promising enormous benefits, even though the probability of actually receiving those benefits is tiny.[19][20]
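The structure of a Pascal's Mugging is just naive expected-value arithmetic in which a minuscule probability of an astronomical payoff swamps a near-certain, modest one. A toy calculation (all numbers invented, not anyone's actual estimates):

  # Toy expected-value comparison illustrating a Pascal's Mugging.
  # All numbers are invented for illustration only.
  certain = 1.0 * 1_000        # near-certain intervention helping ~1,000 people
  speculative = 1e-10 * 1e16   # 1-in-10-billion shot at "saving 10^16 future lives"

  print(f"certain intervention:     expected value {certain:,.0f}")
  print(f"speculative intervention: expected value {speculative:,.0f}")
  # The speculative option "wins" (1,000,000 vs 1,000) on naive expected value,
  # which is exactly why critics treat such arguments with suspicion.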

This is not the only example. Reducing animal suffering is an important cause for some of the movement,[21] but some people have unusual ideas on how to do this. One prominent effective altruist put up for discussion on his blog (since retracted) the idea of destroying nature in order to reduce wild animal suffering.[22] Was it satire or serious? One may never know. In fact, some members of the Effective Altruism movement identify as "negative utilitarians", meaning that preventing suffering is the only thing that matters. However, this philosophy seems to imply that we should be willing to destroy the entire world to prevent one person from suffering a pinprick,[23] or at the very least that we should embrace anti-natalism.

These examples represent internal tensions around a key concept of EA, longtermism, that was developed by William MacAskill and Toby Ord.[24] MacAskill defines longtermism as:

…the idea that positively influencing the longterm future is a key moral priority of our time. Longtermism is about taking seriously just how big the future could be and how high the stakes are in shaping it. If humanity survives to even a fraction of its potential life span, then, strange as it may seem, we are the ancients: we live at the very beginning of history, in the most distant past. What we do now will affect untold numbers of future people. We need to act wisely.[25]:4-5

This sounds wise and benevolent, but predicting the future is highly fraught, even a few years out. Without paying attention to immediate world problems, one can easily waste money and perhaps create unintended negative consequences.

Despite the many and varied differences of opinion within the EA movement, those that remain in the movement tend not to spend too much time arguing about fundamental "cause selection" issues (whether to donate to AI risk, global health, poverty or animal causes) — and even when they do, such discussions tend to remain relatively civil and non-rancorous. Part of the reason for this is that all EAs are in favour of "growing the pie" of EA supporters at this point in time, and most of them recognise that rancorous discussions would impede that goal. Although ideas about targeting growth differently have been mooted, such as focusing more on trying to recruit the rich (by hard-headed pragmatists) or women and ethnic minorities (by social justice people) or people who don't speak English (by people who think outside the English-speaking world), no-one is so pessimistic about their favoured EA cause area that they think that growing the pie won't gain their preferred cause area more EA recruits.

However, one EA has argued that this polite truce doesn't make sense, because if people think their cause is vastly better, they should be spending a lot of their time trying to persuade people of that.[26] Scott Alexander, an EA supporter,[27] has counter-argued, based on his extensive personal (and often unsuccessful) experience of arguing with people who are sceptical about AI risk as a cause, that repeated arguments of this kind at EA meetups would be tiring, repetitive, and unpleasant.[26] This is not to say that Alexander does not advocate for AI risk reduction — however, he prefers to write long blog posts where he can assemble his arguments and evidence and engage in an extensive, uninterrupted written monologue.[28]

Where "Effective Altruists" actually send their money[edit]

Wytham Abbey: effective self-dealing

According to William MacAskill of "The Effective Altruism Blog", effective altruists currently tend to think that the most important causes to focus on are global poverty, factory farming, and the long-term future of life on Earth.[29] In practice, this amounts to complaining when people try to solve local problems, feeling bad when people eat hamburgers,[note 3] and sending money to Eliezer Yudkowsky, respectively.

The effective-altruism.com 2014 Survey of Effective Altruists was self-selected and non-random, but includes a list of how many respondents said they donated to various organisations:[30][31]

  • Against Malaria Foundation: 211
  • Schistosomiasis Control Initiative: 114
  • GiveDirectly: 101
  • Machine Intelligence Research Institute: 77
  • GiveWell: 46
  • CFAR: 45[note 4]
  • Deworm the World: 43
  • Vegan Outreach: 27
  • The Humane League: 22
  • 80,000 Hours: 21[note 5]
  • Project Healthy Children: 16
  • Centre for Effective Altruism: 14[note 6]
  • Giving What We Can: 10
  • Animal Charity Evaluators: 10
  • Leverage Research: 7[note 7]

Several of the organisations listed above are directly linked to EA, i.e. founded by, staffed by, and/or would not exist without EA enthusiasts. When one is essentially self-dealing like this, it's hard to argue that it's actual altruism (i.e., "disinterested and selfless concern for the well-being of others"[35]).

Cultishness

EA has been accused of being a cult, and it does have a few of the hallmarks of one.[36][37]

The Economist quoted one unnamed effective altruist as saying, "Don’t ever say ‘People sometimes think EA is a cult, but it’s not.’"[37] The article went on to say that EA may not be a cult but rather a "kind of church – one that has become increasingly centralised and controlled over time."[37] An edited book on EA and religion has even been written.[42]

Risks

Robin Hanson points out that people accumulate knowledge and wisdom as they get older and may change their minds about important things as a result. He therefore advises effective altruists — those of them who are young, anyway, which is most of them — to do nothing now and save money for later, because they might change their mind about where to give that money. Hanson, who specialises in the study of allegedly hypocritical human behaviour ("X is not really about Y!") argues that effective altruists are prone to irrationally give now rather than later, to signal sincerity to their fellow effective altruists. When asked whether it is not better to give now while we still can, because our future selves might spend the money on e.g. putting our children through university, he responded "Maybe that's the right thing to do! Why do you distrust your future self so much?"[43]

Like activism and do-gooding generally, for high-scrupulosity people, going overboard with EA can be dangerous. It can lead to burnout from overwork and/or neglecting one's own needs and/or those of one's family. It's worth remembering that, "effective" as it may be to buy a bed net for a child in Africa, people close to you also have needs of various sorts, which can often most "effectively" be met by you.

A significant number of EAs advocate giving large portions (10%+) of one's income away on a continuous basis, but it is important to remember that one's circumstances may change — for example, one may lose one's job or encounter a health crisis — so it is worthwhile considering saving some money in case you need it. One can always give away that saved money later, or change one's mind if one decides that the money is really needed for oneself.

Excessive moralising about EA can also cause one to — like a kind of inverse Dale Carnegie — lose friends and fail to influence people. Arguably, persuading other people to give to good causes is best approached in an upbeat "look what you could achieve" way, rather than trying to guilt-trip people. (The latter is probably more likely to work on people who were already high-scrupulosity and thus more susceptible to EA ideas in the first place — so the value of "converting" such people to EA by guilt-tripping them could well be less than you might think, because they might have ended up being converted anyway.)

In the (unlikely) worst case scenario, you could lose all your non-EA friends through being seen as extremely preachy and arrogant, then later become financially ruined through a chance accident or illness leaving you unable to work, have no savings to fall back on — and then receive no help whatsoever from your EA friends despite all the past good you have done, because helping you is not an "effective" cause. This scenario is probably unlikely to pan out this way in practice though. Probably.

Exemplars

Sam Bankman-Fried

Bankman-Fried remotely attending the MIT Bitcoin Expo 2022
I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. … If you wrote a book, you fucked up, and it should have been a six-paragraph blog post.
—SBF, super-genius[44][45]

Sam Bankman-Fried (SBF) (1992–), owner of the cryptocurrency exchange FTX until its bankruptcy in November 2022, was a vocal (with a voice that could drive a monk to insanity, no less) proponent of EA and had pledged to give away his fortune to charity, signing the Giving Pledge.[46][47][note 8] It's unclear whether SBF ever gave money to charity, though he did make a lot of political contributions (likely to fend off crypto regulation)[47] allegedly illegally using other people's money.[48] SBF's crypto empire spectacularly came crashing down in November 2022, due to his self-dealing and high-risk (leveraged) gambles. He did claim, while working at Jane Street Capital, to have given a lot of moola to EA organisations founded by his mentor (so basically self-dealing).[49] After the crash, which left SBF with essentially nothing, it was revealed that he had been living a lavish and debauched lifestyle in the Bahamas[50] while hypothetically donating to charity at some point in the distant future. Trial testimony and his own statements indicated that SBF had an extreme appetite for risk, even to the point of risking other people's lives without their consent,[51][52] something that would seem to be antithetical to the idea of altruism. The bookkeeping for FTX, as it turned out, was close to non-existent[44] (was it a six-paragraph blog post?).

By number of Ponzi schemes there are way more in crypto, kinda per capita, than in other places. But by size of actual Ponzis, I’m not sure that it is particularly unusual. It’s just like a ton of extremely small ones.
—SBF in May 2022, foreshadowing the crash of FTX[53]

In November 2023, SBF was convicted of 2 counts of wire fraud, 4 counts of conspiracy to commit fraud and 1 count of conspiracy to commit money laundering in connection with FTX and the affiliated Alameda Research cryptocurrency trading fund.[54] In 2024, SBF was sentenced to 25 years in prison.[55]

Lest one think that SBF was somehow an aberration within EA, both of SBF's parents were involved with FTX. His parents are both law professors at Stanford University: his father, Joseph Bankman, specializes in tax law, and his mother, Barbara Fried, specializes in legal ethics and wrote a book that supports consequentialist philosophies![56] SBF grew up immersed in utilitarian thought, and fully embraced utilitarianism in a blog post.[57]:81-82

Both of SBF's parents have been sued by FTX creditors to recover funds, but it is unclear if they will also face criminal charges.[58]

Ben Delo

Ben Delo (1984–) was also courted by MacAskill to join the EA movement.[59][60] Delo became a member of MacAskill's Giving What We Can[61] and signed the Giving Pledge.[46] Subsequently, Delo was convicted of willfully failing to implement anti-money-laundering measures at the crypto exchange BitMEX, sentenced to 30 months of probation, and ordered to pay a $10 million fine.[62]

The poor and cryptocurrency

Given that EA has at times been about the rich helping the poor, it is worth looking at the cryptocurrency entanglements of EA and the effect of EA on the poor. Besides the crypto debacles of SBF and Delo, Elon Musk has promoted both cryptocurrency[63][64] and EA,[65] and MacAskill advised SBF before he created FTX. MacAskill also promoted the idea of intentionally working for an immoral organization so as to earn money to donate to charity. MacAskill gave an example to justify this: the Nazi Oskar Schindler, who saved Jews from the Holocaust.[66] But Schindler was a Nazi who later happened to rescue Jews, not someone who became a Nazi so that he could rescue Jews. What MacAskill is trying to justify is the latter, e.g. helping destroy the planet by working in the fossil fuel industry while promising to donate some money to an anti-climate-change charity.

Cryptocurrency has been directly and indirectly associated with various forms of impoverishment through criminality:

  • Pump-and-dump schemes[67]
  • Theft of poor people's crypto from the Axie Infinity game[68][57]:121-128
  • Human trafficking and enslavement of poor people, then forcing them to con others into fraudulent crypto investments that depend on Tether (text flirting followed by enticement into fraudulent crypto investments, a method known as "pig butchering").[57]:172-199

See also

For those of you in the mood, RationalWiki has a fun article about Effective altruism.

Notes

  1. except if they are billionaires, obviously
  2. They claim "8 lives saved per dollar donated".[17][18]
  3. EAs also disapprove of donating to train guide dogs (a.k.a. seeing-eye dogs in American English), although for completely different reasons than PETA. While PETA opposes the very concept of guide dogs because they believe that animals should not be owned, EAs note the high cost of training guide dogs compared to performing eye surgeries in poor countries to cure some types of blindness.
  4. CFAR is the Center for Applied Rationality, another LessWrong-subculture organisation;[32] at this time its mission was to promote rationality techniques, but it repivoted in late 2016 into being another AI risk organisation.
  5. 80,000 Hours is an EA career advice website.
  6. The Centre for Effective Altruism in 2021 purchased "Wytham Abbey, a palatial estate near Oxford, built in 1480",[33] reportedly for £15 million.[34]
  7. Leverage Research is a separate rationality organisation which has received funding from billionaire Peter Thiel, and is "dedicated to researching the human mind and group dynamics."
  8. The Giving Pledge is not formally associated with the effective altruism movement, having been founded by Bill Gates and Warren Buffett in 2010. It does function with a similar unstated goal: reputational whitewashing of billionaires. The Giving Pledge has many other notable signatories.[46]

References

  1. The effective assholism movement doesn’t need to hide its accomplishments by Rich Seymour (1:28 PM · Aug 10, 2022) Twitter (archived from 17 Feb 2023 03:13:34 UTC).
  2. 2.0 2.1 I thought effective altruism is a bad word now. by @cz_binance (5:39 AM · Nov 15, 2022) Twitter (archived from 18 Dec 2022 18:23:38 UTC).
  3. The failed philanthropy of Sam Bankman-Fried (October 16, 2023 at 7:47 p.m. EDT) The Washington Post.
  4. Peter Singer: The why and how of effective altruism (Filmed Mar 2013 • Posted May 2013) TED Talks (archived from December 8, 2013).
  5. The Hinge of History by Peter Singer (Oct 8, 2021) Project Syndicate.
  6. You have $8 billion. You want to do as much good as possible. What do you do? Inside the Open Philanthropy Project. by Dylan Matthews (Updated Oct 16, 2018, 10:05am EDT) Vox.
  7. Where I'm giving and why: Eric Friedman (21st Dec 2013) Effective Altruism Forum
  8. 8.0 8.1 The history of the term 'effective altruism' by William MacAskill (10th Mar 2014) Effective Altruism.
  9. History of EffectiveAltruism by Anand (October 4, 2003 1:19 am) SL4Wiki (archived from 14 Aug 2017 23:23:52 UTC).
  10. Scope Insensitivity by Eliezer Yudkowsky (13th May 2007) LessWrong.
  11. Efficient Charity by multifoliaterose (4th Dec 2010) LessWrong.
  12. A Name for a Movement? (March 12th, 2012) Jeff Kaufman.
  13. 13.0 13.1 13.2 Why and how to earn to give by Benjamin Todd (Published September 2014; Last updated April 2017) 80,000 Hours (archived from December 13, 2022).
  14. Why and how to earn to give 80,000 Hours (archived from July 22, 2015).
  15. Was FTX’s Sam Bankman-Fried Behind the World's Greatest Ponzi Scheme? The FTX collapse has had a catastrophic effect on crypto in general. Could this have been the biggest Ponzi scheme in history? by Bernard Zambonin (Nov 28, 2022 6:17 AM EST) The Street.
  16. Will high stress kill you, save your life, or neither? by Roman Duda (February 26th, 2016) 80,000 Hours.
  17. Rain comments on The $125,000 Summer Singularity Challenge - Less Wrong by Rain (29 July 2011 06:19:00PM) LessWrong (archived from November 24, 2011).
  18. Anna Salamon's 2nd Talk at Singularity Summit -- How Much it Matters to Know What Matters: A Back of the Envelope Calculation by Singularity Institute (c. 2009) Vimeo (archived from September 6, 2011).
  19. Thoughts on the Singularity Institute (SI) by Holden Karnofsky (10th May 2012) LessWrong.
  20. Pascal's mugging by Nick Bostrom (2009) Analysis 69(3):443-445. doi:10.1093/analys/anp062.
  21. Four focus areas of effective altruism. by Luke Muehlhauser (7th Jul 2013) Effective Altruism Forum.
  22. Why improve nature when destroying it is so much easier? (January 21, 2010) Robert Wiblin (archived from 30 Jun 2014 05:33:08 UTC).
  23. Why I'm Not a Negative Utilitarian (Posted 28 Feb 2013; Last updated 1 Mar 2013) Toby Ord.
  24. 'Longtermism' by William MacAskill (25th Jul 2019) Effective Altruism Forum.
  25. What We Owe the Future by William MacAskill (2022) Basic Books. ISBN 1541618629.
  26. 26.0 26.1 Concerning MIRI’s Place in the EA Movement by ozymandias (17 February 2016) Thing of Things.
  27. Slate Star Codex (archived from April 3, 2023).
  28. AI Researchers On AI Risk by Scott Alexander (May 22, 2015) Slate Star Codex.
  29. What is effective altruism? by William MacAskill (12th May 2013) Effective Altruism.
  30. The 2014 Survey of Effective Altruists: Results and Analysis by Peter Wildeford (16th Mar 2015) Effective Altruism Forum.
  31. The 2014 Survey of Effective Altruists: Results and Analysis analysis by Peter Hurford, Jacy Reese, David Moss, and Robert Krzyzanowski (May 2014) EA Hub (archived from March 15, 2016).
  32. More Rational Resolutions: To Reach Goals, Be More Logical and Take a Scientific View of Your Emotions by Angela Chen (Updated Jan. 1, 2014 9:47 p.m. ET) The Wall Street Journal.
  33. The Reluctant Prophet of Effective Altruism: William MacAskill’s movement set out to help the global poor. Now his followers fret about runaway A.I. Have they seen our threats clearly, or lost their way? by Gideon Lewis-Kraus (August 15, 2022) New Yorker.
  34. Looks like it was on sale last year for £15M by Rhiannon Dauster (11:00 PM · Dec 4, 2022) Twitter (archived from December 7, 2022).
  35. altruism, n., Oxford English Dictionary.
  36. Sam Altman and the cult of effective altruism: How did such a mad, apocalyptic ideology gain so much influence? by Andrew Orlowski (26th November 2023) spiked.
  37. 37.0 37.1 37.2 37.3 37.4 37.5 37.6 The good delusion: has effective altruism broken bad? A group of young idealists wanted to live the most ethical lives possible. Now some wonder whether the movement they joined has lost its moral compass by Linda Kinstler (Nov 15th 2022) The Economist.
  38. Strong Longtermism, Irrefutability, and Moral Progress by Ben Chugg (Dec 26 2020) Effective Altruism Forum.
  39. A Case Against Strong Longtermism: A response to Hilary Greaves and William MacAskill (part 1/4) by Vaden Masrani (Dec. 10, 2020) Vaden Masrani.
  40. The Poverty of Longtermism: Why longtermism isn't an idea that could save one hundred billion trillion lives (Part 4/4) by Vaden Masrani (July 3, 2021) Vaden Masrani.
  41. Poetry: FHI at Oxford (2018) Nick Bostrom.
  42. Effective Altruism and Religion: Synergies, Tensions, Dialogue, edited by Dominic Roser, Stefan Riedener & Markus Huppenbauer (2022) Nomos Verlag. ISBN 9783748925361. doi:10.5771/9783748925361.
  43. Robin Hanson, talk at King's College London entitled "Robin Hanson on: Effective Altruism, Betting, Robots & More", 20 March 2016
  44. 44.0 44.1 Sam Bankman-Fried doesn’t read. That tells us everything. by Molly Roberts (November 29, 2022 at 4:11 p.m. EST) The Washington Post.
  45. Sam Bankman-Fried Has a Savior Complex—And Maybe You Should Too by Adam Fisher (September 22, 2022) Sequoia Capital (archived from 10 Nov 2022 09:58:25 UTC).
  46. 46.0 46.1 46.2 Pledge Signatories Giving Pledge.
  47. 47.0 47.1 FTX Founder Sam Bankman-Fried Signs Billionaires’ Giving Pledge: The crypto billionaire has promised to give away the majority of his wealth to philanthropic causes. by Tracy Wang (Jun 1, 2022) CoinDesk.
  48. How Serious Are Sam Bankman-Fried’s Alleged Campaign-Finance Violations? He gave to Democrats, and claims that he also gave to Republicans through dark-money donations. But the money may have never been his to give. by Sheelah Kolhatkar (January 11, 2023) The New Yorker.
  49. Sam Bankman-Fried Has a Savior Complex—And Maybe You Should Too by Adam Fisher (September 22, 2022) Sequoia Capital (archived from 8 Nov 2022 21:02:30 UTC).
  50. FTX’s Bahamas crypto empire: Stimulants, subterfuge and a spectacular collapse: Crypto wunderkind Sam Bankman-Fried had promised the island paradise a path to financial glory. His meltdown has left some Bahamians worried about the ripple effects. by Tim Craig et al. (November 24, 2022 at 9:12 a.m. EST) The Washington Post.
  51. The coin flip that could convict Sam Bankman-Fried by Jason Willick (October 13, 2023 at 6:00 a.m. EDT) The Washington Post.
  52. Sam Bankman-Fried on Arbitrage and Altruism | Conversations with Tyler (Mar 9, 2022) YouTube.
  53. Crypto billionaire Sam Bankman-Fried: ‘I got involved with no clue what a blockchain was’ by Joshua Oliver (May 13, 2022) Financial Times (archived from 13 May 2022 11:39:29 UTC).
  54. Bankman-Fried convicted on all charges after weeks-long criminal trial: The co-founder of the FTX crypto exchange was accused of one of the largest financial frauds in history. by Eli Tan & Tory Newmyer (November 2, 2023) The Washington Post.
  55. ‘He knew it was wrong’: Sam Bankman-Fried sentenced to 25 years in prison over FTX fraud: Judge orders disgraced crypto mogul to forfeit $11bn in assets and says he showed no remorse for his crimes by Nick Robins-Early (28 Mar 2024 14.38 EDT) The Guardian.
  56. Facing Up to Scarcity: The Logic and Limits of Nonconsequentialist Thought by Barbara H. Fried (2020) Oxford University Press. ISBN 0198847874.
  57. 57.0 57.1 57.2 Number Go Up: Inside Crypto's Wild Rise and Staggering Fall by Zeke Faux (2023) Crown Currency. ISBN 0593443810.
  58. Bankman-Fried’s parents could face their own legal perils, experts say: Following their son’s conviction on fraud and conspiracy charges, the fate of the Stanford Law professors remains an open question by Tory Newmyer & Eli Tan (November 11, 2023 at 7:00 a.m. EST) The Washington Post.
  59. Will's outrage about the FTX situation is difficult to take seriously for three reasons: (1) SBF is not the first disgraced crypto billionaire that Will has vouched for (2) Will was longtime friends with SBF (3) Will was warned about SBF's unethical behavior as far back as 2018 by Kerry Vaughn (November 12, 2022) Twitter (archived from November 14, 2022).
  60. Pledge Letter by Ben Delo (15 April 2019) The Giving Pledge.
  61. Co-funding Partnership with Ben Delo by Holden Karnofsky (November 07, 2019) Open Philanthropy (archived from August 14, 2022).
  62. BitMEX Co-Founder Delo Gets 30 Months Probation, Avoids Jail Time: BitMEX co-founder Ben Delo will carry out his sentence of 30 months of probation without being confined at home by Sebastian Sinclair (June 15, 2022 06:18 pm) Blockworks.
  63. "Why Did Elon Musk Change the Twitter Logo to the Dogecoin Cryptocurrency Meme?" by Todd Spangler, Variety, 2023 April 3
  64. Keith Johnson v. Elon Musk, United States District Court Southern District of New York, Case No 1:22-cv-5037
  65. Is Elon Musk on Board With ‘Effective Altruism? by Nicholas G. Evans (April 21, 2023) The Chronicle of Philanthropy.
  66. Doing Harm Through Law? by William MacAskill (June 11, 2013) The Petrie-Flom Center Staff Health Law Policy, Harvard Law.
  67. Is Bitcoin Really Untethered? by John M. Griffin & Amin Shams (2020) The Journal of Finance 75(4):1913-1964.
  68. Earnings for Axie Infinity players drop below Philippines minimum wage by Lachlan Keller (16 November 2021) forkast.
  69. What’s Effective Altruism? What Does It Mean for AI? by Saritha Rai & Ellen Huet (November 22, 2023 at 9:17 PM UTC) Bloomberg.

Licensed under CC BY-SA 3.0 | Source: https://rationalwiki.org/wiki/Effective_altruism