The Leverhulme Centre for the Future of Intelligence (CFI) is an interdisciplinary research centre within the University of Cambridge that studies artificial intelligence.[1][2] It is funded by the Leverhulme Trust.[3]
The Centre brings together academics from computer science, philosophy, the social sciences, and other fields. It collaborates with the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley,[1] and has a memorandum of understanding with the Coral Bell School of Asia Pacific Affairs at the Australian National University.[4]
Programmes
CFI's research is organised into a series of programmes and projects, on topics ranging from algorithmic transparency to the implications of AI for democracy:[5]
- AI: Futures and Responsibility
- AI: Trust and Society
- Kinds of Intelligence
- AI: Narrative and Justice
- Philosophy and Ethics of AI
In July 2019, the Centre launched the Animal-AI Olympics, a competition that evaluates AI systems on tasks ordinarily used to test animal cognition.[6][7][8]
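Such tasks are reinforcement-learning episodes: an agent receives observations, chooses actions, and is scored on the reward it collects across many trials. The sketch below is a self-contained toy illustration of that evaluation pattern only; the one-dimensional food-finding task and the `evaluate` scoring function are hypothetical stand-ins, not the competition's actual environment or API.

```python
import random

class ToyFoodTask:
    """Hypothetical stand-in task: reach the food cell within a step budget."""
    def __init__(self, size=5, max_steps=20):
        self.size = size
        self.max_steps = max_steps

    def reset(self):
        self.pos = 0                 # agent starts at the left end
        self.food = self.size - 1    # food sits at the right end
        self.steps = 0
        return self.pos              # observation: the agent's current cell

    def step(self, action):
        # action is -1 (move left) or +1 (move right)
        self.pos = max(0, min(self.size - 1, self.pos + action))
        self.steps += 1
        done = self.pos == self.food or self.steps >= self.max_steps
        reward = 1.0 if self.pos == self.food else 0.0
        return self.pos, reward, done

def evaluate(policy, env, episodes=100):
    """Average reward per episode, as a competition leaderboard might score it."""
    total = 0.0
    for _ in range(episodes):
        obs = env.reset()
        done = False
        while not done:
            obs, reward, done = env.step(policy(obs))
            total += reward
    return total / episodes

# A random agent: the baseline any serious entrant should beat.
random_policy = lambda obs: random.choice([-1, 1])
print(f"random agent score: {evaluate(random_policy, ToyFoodTask()):.2f}")
```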
See also
- Centre for the Study of Existential Risk
- Future of Humanity Institute
- Future of Life Institute
- Machine Intelligence Research Institute
References
1. "The future of intelligence: Cambridge University launches new centre to study AI and the future of humanity". University of Cambridge. http://www.cam.ac.uk/research/news/the-future-of-intelligence-cambridge-university-launches-new-centre-to-study-ai-and-the-future-of.
2. Care, Adam. "Cambridge University launches £10 million AI research centre, to 'move away from science fiction'". Cambridge News. http://www.cambridge-news.co.uk/Cambridge-University-launches-10-million-AI/story-28287166-detail/story.html.
3. "About". Leverhulme Centre for the Future of Intelligence. http://lcfi.ac.uk/about/.
4. "Bell School signs MoU with Cambridge University AI centre". Australian National University. October 5, 2018. http://bellschool.anu.edu.au/news-events/stories/6455/bell-school-signs-mou-cambridge-university-ai-centre.
5. "Programmes". Leverhulme Centre for the Future of Intelligence. http://lcfi.ac.uk/projects/.
6. Whipple, Tom (10 April 2019). "Forget chess, build a robot that can outwit a chicken". The Times. https://www.thetimes.co.uk/article/forget-chess-build-a-robot-that-can-outwit-a-chicken-xq379kh2c. Retrieved 17 July 2019.
7. "AIs go up against animals in an epic competition to test intelligence". New Scientist. 2019. https://www.newscientist.com/article/2197791-ais-go-up-against-animals-in-an-epic-competition-to-test-intelligence/. Retrieved 17 July 2019.
8. Crosby, Matthew. "Animal-AI Olympics". http://animalaiolympics.com/.
External links
- Official website: http://lcfi.ac.uk/