Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would then come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation, and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
Many science-fiction films depict artificial intelligences designed by humans, that is, robots, rebelling against and coming to rule humanity. But why would these superintelligent robots want to rule humans? Without exception, everyone adopts anthropomorphic thinking, assuming that robots would likewise seek to protect themselves and compete for resources. That includes the author of this book, and it includes Kubrick's much-deified 2001: A Space Odyssey. This is a…
A book that examines, at a high level, the strategies, methods, risks, and philosophical questions surrounding superintelligence and human survival, and does so in considerable depth and detail. The style is quite dry, some of the content is abstract, and the range of fields covered is very wide, so parts of it are heavy going (for me, the hardest was the chapter on moral values, which is simply too "philosophical"). Although some 70% of the content is speculative and/or beyond foresight, the insights the author offers and the questions he raises show great intellect, foresight, and provocation; worth a read. The substantive solutions the book provides are limited (and some proposals, however attractive they sound, may not be well implementable given the limits of human nature), but if more people and institutions, especially those holding decision-making power, had the kind of awareness this author has, the crises described in the book would be more likely to be avoided or resolved. Science-fiction writers should read this book too. P.S. Though a philosophy book, its overall style is quite "down to earth".
Skimmed via Blinkist. In the speed-read I didn't detect much philosophy, and the content and framing felt unoriginal: it advocates orderly, globally coordinated collective regulation. The technical obstacle cited repeatedly is that both the information to be stored and the computation required are too large. Two development paths: imitating human reasoning, or replicating the brain's structure and functions. Can humanity's core values actually be learned? The author seems to see dangers of human extinction both from machines becoming overdeveloped and too efficient at executing their goals, and from the technology falling into the wrong hands. Why not worry instead about Trump pressing the nuclear button on an impulse?
Read the Chinese edition.
"The fact that there are many paths that lead to superintelligence should increase our confidence that we will eventually get there." "It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland without children."
From the title I assumed this would be one of those clickbait books common in the media, spouting irresponsible nonsense. Reading it, I found the exposition and analysis fairly even-handed. Still, some of the content is rather hollow. For example, discussing how fast an AI takeoff might be, before it is even clear whether AI or superintelligent AI is possible at all or what form it would take, feels like arguing without a foothold. By the later parts I found it somewhat boring and skimmed through quickly.