Nick Bostrom is a notable Swedish philosopher, globally celebrated for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, the reversal test, and consequentialism. He teaches Applied Ethics at the University of Oxford, where he is also the founding director of the Future of Humanity Institute. He has served as a consultant to the President’s Council on Bioethics in the USA, the Central Intelligence Agency (CIA), the European Commission, and the European Group on Ethics in Brussels. He also serves as an advisor to the Centre for the Study of Existential Risk.
Nick Bostrom was born on March 10, 1973, in Helsingborg, Sweden. He attended the University of Gothenburg, where he received his Bachelor’s degree in philosophy, mathematics, mathematical logic, and artificial intelligence. He later attended the University of Stockholm and King’s College London, obtaining Master’s degrees in philosophy and physics and in computational neuroscience. In 1998, Bostrom co-founded Humanity+ (formerly the World Transhumanist Association) and served as the organization’s Chair of the Board of Directors until 2009. He enrolled in the London School of Economics to pursue his doctoral studies, and in 2000 he received his Ph.D. in Philosophy.
In 2000, he was offered a teaching position at Yale University, which he held for the next two years. In 2002, he accepted a British Academy Postdoctoral Fellowship at the University of Oxford. In 2005, he co-founded the Institute for Ethics and Emerging Technologies and served as Chair of the organization’s Board of Directors until 2011. The same year, he also established the Future of Humanity Institute at Oxford, an organization devoted to research into the future of the human race.
Nick Bostrom is applauded for his academic contributions to the fields of cosmology, computational neuroscience, mathematical logic, the philosophy of science, and artificial intelligence, among others. He has conducted extensive research into the complex questions posed by probability theory, the philosophy of science, and the strategic implications of emerging and future technologies. He is credited with four major areas of work: existential risk, biological cognitive enhancement, infinities in ethics, and the anthropic principle.
In his work on existential risk, Bostrom explores the possibilities that might lead to the destruction of the entire human race. He was among the first to develop and address this issue, bringing it to prominence by treating the concept with rigor and ethical relevance. He defines an existential risk as one in which an “adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Bostrom has also explored the development of technological safety and its use to enhance human life, and his work on infinities in ethics explores the ethical implications of life in an infinite universe. His work on the anthropic principle proposes that one must think of oneself as a random member of one’s own reference class.
Bostrom has made several notable literary contributions, with over 200 publications to his name. His highly acclaimed and widely read books include “Superintelligence: Paths, Dangers, Strategies” and “Anthropic Bias: Observation Selection Effects in Science and Philosophy”. In 2009, he was included in Foreign Policy’s list of Top 100 Global Thinkers.
Nick Bostrom has been engaged by an extensive range of government and corporate organizations seeking his consultancy and policy advice. He gave evidence to the House of Lords Select Committee on Digital Skills. He served as a consultant to the UK Government Office for Science and contributed to “The Future of Human Identity” report. He is an Expert Member of the World Economic Forum’s Agenda Council for Catastrophic Risk. He is also an advisory board member for the Machine Intelligence Research Institute, the Future of Life Institute, and the Foundational Questions Institute in Physics and Cosmology, and an external advisor for the Cambridge Existential Risks Project. In 2015, Bostrom, along with Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, and Jaan Tallinn, signed the Future of Life Institute’s open letter warning of the potential dangers associated with artificial intelligence.