Existential risk (sometimes abbreviated to X-risk) is the term for scientifically plausible risks that may cause the entire human race to become extinct.
Such risks are best studied so we can identify and avoid them. However, we must be careful not to overemphasize risks that really are implausible, at the expense of addressing other serious problems our civilization faces.
A global nuclear war would obviously be a really, really bad thing, but whether it would actually cause the extinction of every last human is debatable. More information can be found at Wikipedia's page on nuclear holocaust. Whether such a war occurs depends solely on human factors such as politics and diplomacy. The most effective course of action for individuals is to vote often, to vote for the party or candidate who will promote good international relations rather than national pride and machismo, and to persuade their governments to take these risks seriously, since defending the lives of its citizens is a fundamental duty of a state.
Although asteroid risks are relatively easy to get a handle on, it is impossible to come up with an accurate probability because the frequency of asteroid impacts is not accurately known. Nevertheless, the existential risk posed by asteroids is very, very tiny over human time-frames. This is among the best-understood and best-monitored of all existential risks (NASA carefully tracks all known NEOs large enough to cause the worst damage), although more research is needed on deflection techniques. Billionaire Elon Musk, among others, apparently has serious plans to set up a very large self-sustaining space colony on Mars, and to do it entirely with private money (well, if you ignore the shedloads of money his company SpaceX is getting from the US government, which is essentially subsidising his rocket R&D). Such an off-Earth colony might be a very effective way to prevent a civilisation-destroying large asteroid from making humanity go extinct, even if deflection attempts fail and the asteroid does strike the Earth. That said, establishing any colony on a possibly sterile Mars would be difficult enough, and creating a long-term self-sustaining Martian colony that is independent of Earth is probably at least an order of magnitude more difficult.
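To make "very, very tiny over human time-frames" concrete, here is a minimal back-of-the-envelope sketch. It assumes, purely for illustration, a recurrence interval of roughly 100 million years for a civilisation-threatening impact (the true rate is uncertain, as noted above) and treats impacts as a Poisson process:

```python
import math

# Assumed (illustrative) recurrence interval for a civilisation-threatening
# impact, in years. The real figure is uncertain, as the text notes.
RECURRENCE_YEARS = 100_000_000
rate_per_year = 1.0 / RECURRENCE_YEARS

def impact_probability(years: float) -> float:
    """Probability of at least one impact within `years`, modelling impacts
    as a Poisson process with the assumed constant rate."""
    return 1.0 - math.exp(-rate_per_year * years)

for window in (80, 1_000, 1_000_000, 100_000_000):
    print(f"P(impact within {window:>11,} years) ~ {impact_probability(window):.2e}")
```

Under that assumption, the chance over a human lifetime is of order one in a million, while over a hundred million years an impact becomes close to certain, which is why long-term mitigation (tracking, deflection research, off-Earth backups) can still make sense.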
It isn't clear exactly how damaging a nearby supernova would be, and predictions of the frequency of supernovae vary significantly. One estimate suggests that a supernova occurs within 10 parsecs (33 light years) of Earth, roughly the maximum distance at which one would be really harmful to the biosphere, about once every 240 million years; other estimates range from once every 100 million years to once every 20 billion years. (For the record, the closest known supernova candidate is currently 155 light years away.) Gamma ray bursts are extremely energetic but last only a short period of time, so the damage would be limited: one might strip half of the ozone layer, requiring a few years to build up again, but it would be unlikely to cause total extinction immediately. There is little clear evidence of such events causing major damage in Earth's past, although the Ordovician–Silurian extinction has been attributed to a supernova or gamma ray burst by some.[5][6] It would be virtually impossible to prevent such an event (we may be able to predict the behavior of well-studied stars, but an unobserved pair of white dwarfs or neutron stars, or an undiscovered binary system containing a white dwarf, could collide or go supernova close enough to harm us), and current technology would be helpless even with advance knowledge. Solutions to such problems would involve leaving the Earth, and it might be necessary to move a significant distance away from the entire solar system. Alternatively, we could sit tight, but without interstellar travel our options are very limited.
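Taking the recurrence estimates quoted above at face value, a rough calculation (again assuming the events occur independently at a constant rate) shows how the uncertainty in the recurrence time translates into the chance of a dangerous nearby supernova in any given century:

```python
import math

# Recurrence-time estimates for a supernova within ~10 parsecs, in years,
# taken from the range quoted in the text above.
estimates = {
    "optimistic (every 20 billion years)": 20e9,
    "central (every 240 million years)": 240e6,
    "pessimistic (every 100 million years)": 100e6,
}

CENTURY = 100  # years

for label, recurrence in estimates.items():
    p = 1.0 - math.exp(-CENTURY / recurrence)
    print(f"{label}: P(nearby supernova this century) ~ {p:.1e}")
```

Even the pessimistic end works out to roughly one in a million per century, which is why this risk, while essentially impossible to prevent, sits low on most lists.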
Aside from the inherent problems in any idea of an 'intelligence explosion', it seems likely that the so-called 'value alignment' problem will be solved well before AI systems reach general intelligence. For example, to be useful at all, a cleaning robot must be able to learn what humans consider dirt and garbage and what they consider valuable property. A psychopathic AI which cannot learn and internalize human goals is of no use and will not be developed. Only 8% of respondents to a survey of the 100 most-cited authors in the AI field considered AI to present an existential risk.[7] If you're still worried, you have folks like Elon Musk, a founder of OpenAI, who recently donated US$10 million towards AI safety research.[8] However, donating money to AI safety research may be wasteful or even counterproductive; for example, the effective altruist (EA) charity evaluator GiveWell has in fact recommended against giving to MIRI,[9] much to the consternation of some in the LessWrong camp who believed, and still believe, that the EA movement would be a useful Trojan horse for getting more donations sent MIRI's way.
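The cleaning-robot point can be made concrete with a toy sketch (entirely hypothetical; the names and numbers are made up and this is not any real system's design): an agent whose objective only rewards removing objects will happily "clean up" your passport, whereas one whose objective also includes a learned estimate of how much humans value each object will not.

```python
# Toy illustration of the 'value alignment' point: a useful agent's objective
# must incorporate learned human preferences. All names and values here are
# hypothetical, chosen only to make the contrast visible.

# Learned estimate of how much humans value each kind of object (0 = trash).
learned_value = {"dust": 0.0, "food_wrapper": 0.0, "passport": 100.0}

def naive_score(removed):
    """Rewards removing as many objects as possible: the 'psychopathic' cleaner."""
    return len(removed)

def aligned_score(removed):
    """Rewards tidiness but penalises destroying things humans value."""
    return len(removed) - sum(learned_value[obj] for obj in removed)

floor_objects = ["dust", "food_wrapper", "passport"]
print("naive cleaner, removes everything:   ", naive_score(floor_objects))
print("aligned cleaner, removes everything: ", aligned_score(floor_objects))
print("aligned cleaner, leaves the passport:", aligned_score(["dust", "food_wrapper"]))
```

The aligned objective prefers leaving the passport alone, which is the mundane point being made: preference learning is part of making the product work at all, not an optional extra bolted on afterwards.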
For much of human history, pandemics were the most plausible existential risk to mankind. However, our increased understanding of disease and sanitation has gone a significant way towards decreasing this risk. Knowledge of the basic mechanisms of disease spread, and of how to identify and counter them, goes a long way towards reducing the risk of large-scale outbreaks, while antibiotics and vaccines make treating bacterial infections and preventing viral ones far more feasible. Drastic measures to contain a pandemic would doubtless be employed by governments as soon as they became aware of the nature of the threat.[citation needed] After the 9/11 attacks, the US government temporarily shut down all air travel over the continental United States, and an X-risk pandemic would obviously be much more dangerous than a small group of airplane hijackers.
One common concern cited by those fearing pandemics is the over-use of antibiotics, which risks encouraging antibiotic-resistant strains to develop. It would be difficult for a single bacterium to develop resistance to all forms of antibiotics, owing to the large number of different ones available, but were a bacterium to become resistant to all known forms, humanity would be limited to 'old-fashioned' means of disease prevention, such as quarantine and mitigating the potential vectors that spread the disease. However, historical evidence suggests that naturally evolved pathogens, even ones no drug can treat, would be unlikely to come close to eradicating human life. The period between the adoption of proper sanitation and the development of antibiotics shows that sanitation alone significantly decreased the loss of life due to disease. With superior medical knowledge and resources available, modern humanity should be better placed to prevent the spread of even a hypothetical fully drug-resistant disease.
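The effect of 'old-fashioned' measures like quarantine and sanitation can be illustrated with a minimal SIR epidemic model (the textbook equations, with made-up parameter values used purely for illustration): cutting the transmission rate enough to push the reproduction number below 1 turns a mass outbreak into a fizzle, no drugs required.

```python
def sir_final_infected(beta, gamma, n=1_000_000, i0=10, days=730, dt=0.1):
    """Integrate the standard SIR model with forward Euler and return the
    fraction of the population ever infected. beta = transmissions per day,
    gamma = recoveries per day, so R0 = beta / gamma."""
    s, i, r = float(n - i0), float(i0), 0.0
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / n * dt
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
    return (n - s) / n

GAMMA = 0.1  # illustrative: recovery after ~10 days on average

for label, beta in [("no controls (R0 = 3.0)", 0.30),
                    ("partial controls (R0 = 1.5)", 0.15),
                    ("quarantine/sanitation (R0 = 0.8)", 0.08)]:
    print(f"{label}: ~{sir_final_infected(beta, GAMMA):.1%} of population ever infected")
```

Once R0 drops below 1 the outbreak simply dies out, which is why transmission-reducing measures matter even against a pathogen that no existing drug can touch.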
Arguably the most probable source of a humanity-threatening disease would be genetic engineering, which could in theory create a disease that is immune to all known countermeasures, or add features that increase its lethality or make it harder to contain, such as an extended dormant phase during which an individual is infectious but not obviously ill, features that are unlikely to arise through ordinary evolution.[note 1] Luckily for humanity, biology is far more complicated than many sci-fi fans assume, so engineering a pathogen is very, very difficult and out of reach for a small band of terrorists. Furthermore, most people capable of creating such an engineered pathogen are unlikely to wish to kill themselves, their loved ones, and all of humanity.[citation needed] There are also safeguards that can limit the potential spread of engineered organisms, which most experienced genetic engineers already use (or at least should) precisely to prevent this sort of scenario.
Climate change is currently not thought to be an existential risk, at least within the next 100 years, although more research is needed on worst-case climate scenarios. The non-existential reality is bad enough, though. Hypothetically, a runaway greenhouse effect is a situation where a planet gets hotter and hotter through a positive feedback loop until all the oceans boil off and there is no possibility of sustaining life, as happened on Venus. However, this is considered virtually impossible on the Earth.[10]
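The difference between ordinary amplifying feedbacks and a true runaway can be shown with a toy feedback loop (illustrative numbers only; this is not a climate model): if each degree of warming induces less than one additional degree of further warming, the total converges, while a feedback gain of one or more runs away.

```python
def total_warming(forcing_warming, gain, rounds=10_000):
    """Toy feedback loop: an initial warming `forcing_warming` (degrees C)
    triggers `gain` extra degrees per degree, which triggers more, and so on.
    For gain < 1 this converges to forcing_warming / (1 - gain)."""
    total, increment = 0.0, forcing_warming
    for _ in range(rounds):
        total += increment
        increment *= gain
        if total > 1000:          # call anything past this a 'runaway'
            return float("inf")
    return total

for gain in (0.3, 0.6, 0.9, 1.0):
    print(f"feedback gain {gain}: total warming ~ {total_warming(1.0, gain):.2f} C")
```

The closed form total = initial / (1 - gain) blows up as the gain approaches 1; the point of the "virtually impossible on Earth" assessment is that Earth's combined feedbacks are thought to stay well below that threshold.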
Eric Drexler, who popularized the idea of nanotechnology, points out that grey goo (nanotech that accidentally eats everything on the planet and turns it into goo) is not an existential risk because it is not a realistic risk at all — although this does not rule out more deliberate uses of nanotechnology for military ends. This is aside from the various fundamental problems with nanotechnology itself.
It isn't clear how likely a catastrophic vacuum decay event is, and any answer depends on a full understanding of subatomic physics.[11] The fact that it has not happened in the last 14 billion years suggests it isn't terribly likely, with some estimates putting it in the very distant future in cosmological terms, but if it did happen, we'd be shit outta luck.
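The "hasn't happened in 14 billion years" argument can be given a rough number using the statistical rule of three: if zero events have been seen in a period T, the approximate 95% upper bound on the event rate is 3/T. (This ignores, among other things, the anthropic caveat that observers can only find themselves in regions where it hasn't happened yet, so treat it as a crude illustration.)

```python
# Crude 'rule of three' bound: zero observed events in time T gives an
# approximate 95% upper confidence bound on the rate of about 3 / T.
UNIVERSE_AGE_YEARS = 13.8e9

rate_upper_bound = 3.0 / UNIVERSE_AGE_YEARS   # events per year, ~95% UCL
per_century_bound = rate_upper_bound * 100

print(f"rate upper bound: ~{rate_upper_bound:.1e} per year")
print(f"chance this century: less than ~{per_century_bound:.1e}")
```

Even that crude bound puts the chance per century below roughly one in fifty million, although, as noted, if it did happen there would be nothing anyone could do about it.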
Religious and other mystical claims of the impending end of the world are based on unverifiable visions, "voices from God" which conveniently only one person can hear, or unique interpretations of holy books, and in particular, numerology. Sometimes, they involve believers handing over all their savings to the person making the warning, who then conveniently decides to keep the money after the predicted end of the world fails to materialise.
Some people, such as Sun Microsystems' former chief scientist Bill Joy[12] and MIRI's former Director of Research Ben Goertzel,[13] have argued that in order to avoid existential risks, we ought to halt the march of technological progress, to a greater or lesser extent, either temporarily or permanently. A tiny minority (not necessarily acting out of concern over existential risk) have even decided that it is appropriate to resort to violence to achieve their aims of stopping certain technologies.[14]
However, it is almost impossible to achieve technological relinquishment in any useful (i.e. global) way, even if it were considered desirable. Even if America and Europe both ban a technology, if it is useful, one or more countries facing different cultural, political, and economic constraints, China for example, will probably eventually develop it, and quite likely out-compete those who don't adopt it. We are better off exploring other options.
In addition, while it is true that technology itself causes new problems (nuclear proliferation), it is the only solution to old problems (famine). Of all the existential risks considered here, only asteroids are known for sure to be a real risk and capable of wiping out entire species, and the only possible solutions to this existential risk involve high technology.
While the machines rising up against their masters is a common sci-fi trope, in reality, there is no incentive to build any machine or AI that is not a tool in humanity's hand, with no will of its own other than what humans give it.[15]
Various organizations are committed to reducing existential risk, or at least to spreading awareness of it. One of them is Nick Bostrom's Future of Humanity Institute (FHI) at Oxford University. Bostrom wrote an article called 'Astronomical Waste', in which he argued that the continued existence of human civilization has immense moral value: vast populations of humans could exist through space colonization, and their lives would be happy thanks to advanced technology, so reducing existential risk is the most valuable cause.[16]
In 2014, an organization known as the Future of Life Institute was established, with goals similar to FHI's. Its founders included the scientist Max Tegmark and Skype co-founder Jaan Tallinn, among others; in 2018, its scientific advisory board included Elon Musk and Stephen Hawking.[17] Its primary goal has been to spread awareness about AI 'risk', and it has distributed 2 million dollars to 10 researchers whom it deemed to be carrying out AI risk-reducing research.[18] On the other hand, it doesn't believe that the arrival of human-level AI is imminent, saying it is decades away or might not even happen in the 21st century, but it focuses on AI risk because solving the AI control problem would take a long time.[19]
In 2012, the Centre for the Study of Existential Risk was established at Cambridge University. Its founders included the above-mentioned Jaan Tallinn and Lord Martin Rees (the Astronomer Royal), and it has collaborated with FHI.[20][21] Other such organizations include the Global Catastrophic Risk Institute,[22] the X-Risks Institute,[23] Saving Humanity from Homo Sapiens,[24] the Lifeboat Foundation,[25] the Foresight Institute,[26] and the Skoll Global Threat Fund.[27]
Many organizations combat specific forms of existential risk. Those that combat, or claim to combat, only AI-related x-risk include:
(i) Centre for Human-Compatible AI[28]
(ii) Machine Intelligence Research Institute[29]
(iii) Leverhulme Centre for the Future of Intelligence[30]