Artificial Intelligence


Publisher: Pearson
Author: Stuart Russell
Pages: 1152
Publication date: 2009-12-11
Price: USD 195.00
Binding: Hardcover
ISBN: 9780136042594
Tags:
  • Artificial Intelligence
  • AI
  • Artificial.Intelligence
  • Computers
  • Computer Science
  • CS
  • Algorithms
  • Machine Learning
  • Deep Learning
  • Neural Networks
  • Data Science
  • Programming
  • Intelligent Systems
  • Automation
  • Technology

Description

The long-anticipated revision of this #1 selling book offers the most comprehensive, state-of-the-art introduction to the theory and practice of artificial intelligence for modern applications. Topics include intelligent agents; solving problems by searching; informed search methods; game playing; agents that reason logically; first-order logic; building a knowledge base; inference in first-order logic; logical reasoning systems; practical planning; planning and acting; uncertainty; probabilistic reasoning systems; making simple and complex decisions; learning from observations; learning with neural networks; reinforcement learning; knowledge in learning; agents that communicate; practical communication in English; perception; and robotics. For computer professionals, linguists, and cognitive scientists interested in artificial intelligence.


About the Authors

Stuart Russell was born in 1962 in Portsmouth, England. He received his B.A. with first-class honours in physics from Oxford University in 1982, and his Ph.D. in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is a professor of computer science, director of the Center for Intelligent Systems, and holder of the Smith–Zadeh Chair in Engineering. In 1990, he received the Presidential Young Investigator Award of the National Science Foundation, and in 1995 he was cowinner of the Computers and Thought Award. He was a 1996 Miller Professor of the University of California and was appointed to a Chancellor’s Professorship in 2000. In 1998, he gave the Forsythe Memorial Lectures at Stanford University. He is a Fellow and former Executive Council member of the American Association for Artificial Intelligence. He has published over 100 papers on a wide range of topics in artificial intelligence. His other books include The Use of Knowledge in Analogy and Induction and (with Eric Wefald) Do the Right Thing: Studies in Limited Rationality.

Peter Norvig is currently Director of Research at Google, Inc., and was the director responsible for the core Web search algorithms from 2002 to 2005. He is a Fellow of the American Association for Artificial Intelligence and the Association for Computing Machinery. Previously, he was head of the Computational Sciences Division at NASA Ames Research Center, where he oversaw NASA’s research and development in artificial intelligence and robotics, and chief scientist at Junglee, where he helped develop one of the first Internet information extraction services. He received a B.S. in applied mathematics from Brown University and a Ph.D. in computer science from the University of California at Berkeley. He received the Distinguished Alumni and Engineering Innovation awards from Berkeley and the Exceptional Achievement Medal from NASA. He has been a professor at the University of Southern California and a research faculty member at Berkeley. His other books include Paradigms of AI Programming: Case Studies in Common Lisp, Verbmobil: A Translation System for Face-to-Face Dialog, and Intelligent Help Systems for UNIX.

Table of Contents

Part I: Artificial Intelligence
Chapter 1: Introduction ... 1
1.1. What Is AI? ... 1
1.1.1. Acting humanly: The Turing Test approach ... 2
1.1.2. Thinking humanly: The cognitive modeling approach ... 3
1.1.3. Thinking rationally: The "laws of thought" approach ... 4
1.1.4. Acting rationally: The rational agent approach ... 4
1.2. The Foundations of Artificial Intelligence ... 5
1.2.1. Philosophy ... 5
1.2.2. Mathematics ... 7
1.2.3. Economics ... 9
1.2.4. Neuroscience ... 10
1.2.5. Psychology ... 12
1.2.6. Computer engineering ... 13
1.2.7. Control theory and cybernetics ... 15
1.2.8. Linguistics ... 15
1.3. The History of Artificial Intelligence ... 16
1.3.1. The gestation of artificial intelligence (1943–1955) ... 16
1.3.2. The birth of artificial intelligence (1956) ... 17
1.3.3. Early enthusiasm, great expectations (1952–1969) ... 18
1.3.4. A dose of reality (1966–1973) ... 20
1.3.5. Knowledge-based systems: The key to power? (1969–1979) ... 22
1.3.6. AI becomes an industry (1980–present) ... 24
1.3.7. The return of neural networks (1986–present) ... 24
1.3.8. AI adopts the scientific method (1987–present) ... 25
1.3.9. The emergence of intelligent agents (1995–present) ... 26
1.3.10. The availability of very large data sets (2001–present) ... 27
1.4. The State of the Art ... 28
1.5. Summary ... 29
Bibliographical and Historical Notes ... 30
Exercises ... 31
Chapter 2: Intelligent Agents ... 34
2.1. Agents and Environments ... 34
2.2. Good Behavior: The Concept of Rationality ... 36
2.2.1. Rationality ... 37
2.2.2. Omniscience, learning, and autonomy ... 38
2.3. The Nature of Environments ... 40
2.3.1. Specifying the task environment ... 40
2.3.2. Properties of task environments ... 41
2.4. The Structure of Agents ... 46
2.4.1. Agent programs ... 46
2.4.2. Simple reflex agents ... 48
2.4.3. Model-based reflex agents ... 50
2.4.4. Goal-based agents ... 52
2.4.5. Utility-based agents ... 53
2.4.6. Learning agents ... 54
2.4.7. How the components of agent programs work ... 57
2.5. Summary ... 59
Bibliographical and Historical Notes ... 59
Exercises ... 61
Part II: Problem-solving
Chapter 3: Solving Problems by Searching ... 64
3.1. Problem-Solving Agents ... 64
3.1.1. Well-defined problems and solutions ... 66
3.1.2. Formulating problems ... 68
3.2. Example Problems ... 69
3.2.1. Toy problems ... 70
3.2.2. Real-world problems ... 73
3.3. Searching for Solutions ... 75
3.3.1. Infrastructure for search algorithms ... 78
3.3.2. Measuring problem-solving performance ... 80
3.4. Uninformed Search Strategies ... 81
3.4.1. Breadth-first search ... 81
3.4.2. Uniform-cost search ... 83
3.4.3. Depth-first search ... 85
3.4.4. Depth-limited search ... 87
3.4.5. Iterative deepening depth-first search ... 88
3.4.6. Bidirectional search ... 90
3.4.7. Comparing uninformed search strategies ... 91
3.5. Informed (Heuristic) Search Strategies ... 92
3.5.1. Greedy best-first search ... 92
3.5.2. A* search: Minimizing the total estimated solution cost ... 93
Conditions for optimality: Admissibility and consistency ... 94
Optimality of A* ... 95
3.5.3. Memory-bounded heuristic search ... 99
3.5.4. Learning to search better ... 102
3.6. Heuristic Functions ... 102
3.6.1. The effect of heuristic accuracy on performance ... 103
3.6.2. Generating admissible heuristics from relaxed problems ... 104
3.6.3. Generating admissible heuristics from subproblems: Pattern databases ... 106
3.6.4. Learning heuristics from experience ... 107
3.7. Summary ... 108
Bibliographical and Historical Notes ... 109
Exercises ... 112
Chapter 4: Beyond Classical Search ... 120
4.1. Local Search Algorithms and Optimization Problems ... 120
4.1.1. Hill-climbing search ... 122
4.1.2. Simulated annealing ... 125
4.1.3. Local beam search ... 125
4.1.4. Genetic algorithms ... 126
4.2. Local Search in Continuous Spaces ... 129
4.3. Searching with Nondeterministic Actions ... 133
4.3.1. The erratic vacuum world ... 133
4.3.2. AND-OR search trees ... 135
4.3.3. Try, try again ... 137
4.4. Searching with Partial Observations ... 138
4.4.1. Searching with no observation ... 138
4.4.2. Searching with observations ... 142
4.4.3. Solving partially observable problems ... 143
4.4.4. An agent for partially observable environments ... 144
4.5. Online Search Agents and Unknown Environments ... 147
4.5.1. Online search problems ... 147
4.5.2. Online search agents ... 149
4.5.3. Online local search ... 150
4.5.4. Learning in online search ... 153
4.6. Summary ... 153
Bibliographical and Historical Notes ... 154
Exercises ... 157
Chapter 5: Adversarial Search ... 161
5.1. Games ... 161
5.2. Optimal Decisions in Games ... 163
5.2.1. The minimax algorithm ... 165
5.2.2. Optimal decisions in multiplayer games ... 165
5.3. Alpha–Beta Pruning ... 167
5.3.1. Move ordering ... 169
5.4. Imperfect Real-Time Decisions ... 171
5.4.1. Evaluation functions ... 171
5.4.2. Cutting off search ... 173
5.4.3. Forward pruning ... 174
5.4.4. Search versus lookup ... 176
5.5. Stochastic Games ... 177
5.5.1. Evaluation functions for games of chance ... 178
5.6. Partially Observable Games ... 180
5.6.1. Kriegspiel: Partially observable chess ... 180
5.6.2. Card games ... 183
5.7. State-of-the-Art Game Programs ... 185
5.8. Alternative Approaches ... 187
5.9. Summary ... 189
Bibliographical and Historical Notes ... 190
Exercises ... 195
Chapter 6: Constraint Satisfaction Problems ... 202
6.1. Defining Constraint Satisfaction Problems ... 202
6.1.1. Example problem: Map coloring ... 203
6.1.2. Example problem: Job-shop scheduling ... 204
6.1.3. Variations on the CSP formalism ... 205
6.2. Constraint Propagation: Inference in CSPs ... 208
6.2.1. Node consistency ... 208
6.2.2. Arc consistency ... 208
6.2.3. Path consistency ... 210
6.2.4. K-consistency ... 211
6.2.5. Global constraints ... 211
6.2.6. Sudoku example ... 212
6.3. Backtracking Search for CSPs ... 214
6.3.1. Variable and value ordering ... 216
6.3.2. Interleaving search and inference ... 217
6.3.3. Intelligent backtracking: Looking backward ... 218
6.4. Local Search for CSPs ... 220
6.5. The Structure of Problems ... 222
6.6. Summary ... 227
Bibliographical and Historical Notes ... 227
Exercises ... 230
Part III: Knowledge, reasoning, and planning
Chapter 7: Logical Agents ... 234
7.1. Knowledge-Based Agents ... 235
7.2. The Wumpus World ... 236
7.3. Logic ... 240
7.4. Propositional Logic: A Very Simple Logic ... 243
7.4.1. Syntax ... 244
7.4.2. Semantics ... 245
7.4.3. A simple knowledge base ... 246
7.4.4. A simple inference procedure ... 247
7.5. Propositional Theorem Proving ... 249
7.5.1. Inference and proofs ... 250
7.5.2. Proof by resolution ... 252
Conjunctive normal form ... 253
A resolution algorithm ... 254
Completeness of resolution ... 255
7.5.3. Horn clauses and definite clauses ... 256
7.5.4. Forward and backward chaining ... 257
7.6. Effective Propositional Model Checking ... 259
7.6.1. A complete backtracking algorithm ... 260
7.6.2. Local search algorithms ... 262
7.6.3. The landscape of random SAT problems ... 263
7.7. Agents Based on Propositional Logic ... 265
7.7.1. The current state of the world ... 265
7.7.2. A hybrid agent ... 268
7.7.3. Logical state estimation ... 269
7.7.4. Making plans by propositional inference ... 271
7.8. Summary ... 274
Bibliographical and Historical Notes ... 275
Exercises ... 279
Chapter 8: First-Order Logic ... 285
8.1. Representation Revisited ... 285
8.1.1. The language of thought ... 286
8.1.2. Combining the best of formal and natural languages ... 288
8.2. Syntax and Semantics of First-Order Logic ... 290
8.2.1. Models for first-order logic ... 290
8.2.2. Symbols and interpretations ... 292
8.2.3. Terms ... 294
8.2.4. Atomic sentences ... 294
8.2.5. Complex sentences ... 295
8.2.6. Quantifiers ... 295
Universal quantification (∀) ... 295
Existential quantification (∃) ... 297
Nested quantifiers ... 297
Connections between ∀ and ∃ ... 298
8.2.7. Equality ... 299
8.2.8. An alternative semantics? ... 299
8.3. Using First-Order Logic ... 300
8.3.1. Assertions and queries in first-order logic ... 301
8.3.2. The kinship domain ... 301
8.3.3. Numbers, sets, and lists ... 303
8.3.4. The wumpus world ... 305
8.4. Knowledge Engineering in First-Order Logic ... 307
8.4.1. The knowledge-engineering process ... 307
8.4.2. The electronic circuits domain ... 309
Identify the task ... 309
Assemble the relevant knowledge ... 309
Decide on a vocabulary ... 310
Encode general knowledge of the domain ... 310
Encode the specific problem instance ... 311
Pose queries to the inference procedure ... 312
Debug the knowledge base ... 312
8.5. Summary ... 313
Bibliographical and Historical Notes ... 313
Exercises ... 315
Chapter 9: Inference in First-Order Logic ... 322
9.1. Propositional vs. First-Order Inference ... 322
9.1.1. Inference rules for quantifiers ... 322
9.1.2. Reduction to propositional inference ... 324
9.2. Unification and Lifting ... 325
9.2.1. A first-order inference rule ... 325
9.2.2. Unification ... 326
9.2.3. Storage and retrieval ... 327
9.3. Forward Chaining ... 330
9.3.1. First-order definite clauses ... 330
9.3.2. A simple forward-chaining algorithm ... 331
9.3.3. Efficient forward chaining ... 333
Matching rules against known facts ... 333
Incremental forward chaining ... 335
Irrelevant facts ... 336
9.4. Backward Chaining ... 337
9.4.1. A backward-chaining algorithm ... 337
9.4.2. Logic programming ... 339
9.4.3. Efficient implementation of logic programs ... 340
9.4.4. Redundant inference and infinite loops ... 342
9.4.5. Database semantics of Prolog ... 343
9.4.6. Constraint logic programming ... 344
9.5. Resolution ... 345
9.5.1. Conjunctive normal form for first-order logic ... 345
9.5.2. The resolution inference rule ... 347
9.5.3. Example proofs ... 347
9.5.4. Completeness of resolution ... 350
9.5.5. Equality ... 353
9.5.6. Resolution strategies ... 355
Practical uses of resolution theorem provers ... 356
9.6. Summary ... 357
Bibliographical and Historical Notes ... 357
Exercises ... 360
Chapter 10: Classical Planning ... 366
10.1. Definition of Classical Planning ... 366
10.1.1. Example: Air cargo transport ... 369
10.1.2. Example: The spare tire problem ... 370
10.1.3. Example: The blocks world ... 370
10.1.4. The complexity of classical planning ... 372
10.2. Algorithms for Planning as State-Space Search ... 373
10.2.1. Forward (progression) state-space search ... 373
10.2.2. Backward (regression) relevant-states search ... 374
10.2.3. Heuristics for planning ... 376
10.3. Planning Graphs ... 379
10.3.1. Planning graphs for heuristic estimation ... 381
10.3.2. The Graphplan algorithm ... 383
10.3.3. Termination of Graphplan ... 385
10.4. Other Classical Planning Approaches ... 387
10.4.1. Classical planning as Boolean satisfiability ... 387
10.4.2. Planning as first-order logical deduction: Situation calculus ... 388
10.4.3. Planning as constraint satisfaction ... 390
10.4.4. Planning as refinement of partially ordered plans ... 390
10.5. Analysis of Planning Approaches ... 392
10.6. Summary ... 393
Bibliographical and Historical Notes ... 393
Exercises ... 396
Chapter 11: Planning and Acting in the Real World ... 401
11.1. Time, Schedules, and Resources ... 401
11.1.1. Representing temporal and resource constraints ... 402
11.1.2. Solving scheduling problems ... 403
11.2. Hierarchical Planning ... 406
11.2.1. High-level actions ... 406
11.2.2. Searching for primitive solutions ... 408
11.2.3. Searching for abstract solutions ... 410
11.3. Planning and Acting in Nondeterministic Domains ... 415
11.3.1. Sensorless planning ... 417
11.3.2. Contingent planning ... 421
11.3.3. Online replanning ... 422
11.4. Multiagent Planning ... 425
11.4.1. Planning with multiple simultaneous actions ... 426
11.4.2. Planning with multiple agents: Cooperation and coordination ... 428
11.5. Summary ... 430
Bibliographical and Historical Notes ... 431
Exercises ... 435
Chapter 12: Knowledge Representation ... 437
12.1. Ontological Engineering ... 437
12.2. Categories and Objects ... 440
12.2.1. Physical composition ... 441
12.2.2. Measurements ... 444
12.2.3. Objects: Things and stuff ... 445
12.3. Events ... 446
12.3.1. Processes ... 447
12.3.2. Time intervals ... 448
12.3.3. Fluents and objects ... 449
12.4. Mental Events and Mental Objects ... 450
12.5. Reasoning Systems for Categories ... 453
12.5.1. Semantic networks ... 454
12.5.2. Description logics ... 456
12.6. Reasoning with Default Information ... 458
12.6.1. Circumscription and default logic ... 458
12.6.2. Truth maintenance systems ... 460
12.7. The Internet Shopping World ... 462
12.7.1. Following links ... 464
12.7.2. Comparing offers ... 466
12.8. Summary ... 467
Bibliographical and Historical Notes ... 468
Exercises ... 473
Part IV: Uncertain knowledge and reasoning
Chapter 13: Quantifying Uncertainty ... 480
13.1. Acting under Uncertainty ... 480
13.1.1. Summarizing uncertainty ... 481
13.1.2. Uncertainty and rational decisions ... 482
13.2. Basic Probability Notation ... 483
13.2.1. What probabilities are about ... 484
13.2.2. The language of propositions in probability assertions ... 486
13.2.3. Probability axioms and their reasonableness ... 488
13.3. Inference Using Full Joint Distributions ... 490
13.4. Independence ... 494
13.5. Bayes' Rule and Its Use ... 495
13.5.1. Applying Bayes' rule: The simple case ... 496
13.5.2. Using Bayes' rule: Combining evidence ... 497
13.6. The Wumpus World Revisited ... 499
13.7. Summary ... 503
Bibliographical and Historical Notes ... 503
Exercises ... 506
Chapter 14: Probabilistic Reasoning ... 510
14.1. Representing Knowledge in an Uncertain Domain ... 510
14.2. The Semantics of Bayesian Networks ... 513
14.2.1. Representing the full joint distribution ... 513
A method for constructing Bayesian networks ... 514
Compactness and node ordering ... 515
14.2.2. Conditional independence relations in Bayesian networks ... 517
14.3. Efficient Representation of Conditional Distributions ... 518
Bayesian nets with continuous variables ... 519
14.4. Exact Inference in Bayesian Networks ... 522
14.4.1. Inference by enumeration ... 523
14.4.2. The variable elimination algorithm ... 524
Operations on factors ... 526
Variable ordering and variable relevance ... 527
14.4.3. The complexity of exact inference ... 528
14.4.4. Clustering algorithms ... 529
14.5. Approximate Inference in Bayesian Networks ... 530
14.5.1. Direct sampling methods ... 530
Rejection sampling in Bayesian networks ... 532
Likelihood weighting ... 532
14.5.2. Inference by Markov chain simulation ... 535
Gibbs sampling in Bayesian networks ... 536
Why Gibbs sampling works ... 536
14.6. Relational and First-Order Probability Models ... 539
14.6.1. Possible worlds ... 540
14.6.2. Relational probability models ... 542
14.6.3. Open-universe probability models ... 544
14.7. Other Approaches to Uncertain Reasoning ... 546
14.7.1. Rule-based methods for uncertain reasoning ... 547
14.7.2. Representing ignorance: Dempster–Shafer theory ... 549
14.7.3. Representing vagueness: Fuzzy sets and fuzzy logic ... 550
14.8. Summary ... 551
Bibliographical and Historical Notes ... 552
Exercises ... 558
Chapter 15: Probabilistic Reasoning over Time ... 566
15.1. Time and Uncertainty ... 566
15.1.1. States and observations ... 567
15.1.2. Transition and sensor models ... 568
15.2. Inference in Temporal Models ... 570
15.2.1. Filtering and prediction ... 571
15.2.2. Smoothing ... 574
15.2.3. Finding the most likely sequence ... 576
15.3. Hidden Markov Models ... 578
15.3.1. Simplified matrix algorithms ... 579
15.3.2. Hidden Markov model example: Localization ... 581
15.4. Kalman Filters ... 584
15.4.1. Updating Gaussian distributions ... 584
15.4.2. A simple one-dimensional example ... 585
15.4.3. The general case ... 587
15.4.4. Applicability of Kalman filtering ... 588
15.5. Dynamic Bayesian Networks ... 590
15.5.1. Constructing DBNs ... 591
15.5.2. Exact inference in DBNs ... 595
15.5.3. Approximate inference in DBNs ... 596
15.6. Keeping Track of Many Objects ... 599
15.7. Summary ... 603
Bibliographical and Historical Notes ... 603
Exercises ... 606
Chapter 16: Making Simple Decisions ... 610
16.1. Combining Beliefs and Desires under Uncertainty ... 610
16.2. The Basis of Utility Theory ... 611
16.2.1. Constraints on rational preferences ... 612
16.2.2. Preferences lead to utility ... 613
16.3. Utility Functions ... 615
16.3.1. Utility assessment and utility scales ... 615
16.3.2. The utility of money ... 616
16.3.3. Expected utility and post-decision disappointment ... 618
16.3.4. Human judgment and irrationality ... 619
16.4. Multiattribute Utility Functions ... 622
16.4.1. Dominance ... 622
16.4.2. Preference structure and multiattribute utility ... 624
Preferences without uncertainty ... 624
Preferences with uncertainty ... 625
16.5. Decision Networks ... 626
16.5.1. Representing a decision problem with a decision network ... 626
16.5.2. Evaluating decision networks ... 628
16.6. The Value of Information ... 628
16.6.1. A simple example ... 629
16.6.2. A general formula for perfect information ... 630
16.6.3. Properties of the value of information ... 631
16.6.4. Implementation of an information-gathering agent ... 632
16.7. Decision-Theoretic Expert Systems ... 633
16.8. Summary ... 636
Bibliographical and Historical Notes ... 636
Exercises ... 640
Chapter 17: Making Complex Decisions ... 645
17.1. Sequential Decision Problems ... 645
17.1.1. Utilities over time ... 648
17.1.2. Optimal policies and the utilities of states ... 650
17.2. Value Iteration ... 652
17.2.1. The Bellman equation for utilities ... 652
17.2.2. The value iteration algorithm ... 652
17.2.3. Convergence of value iteration ... 654
17.3. Policy Iteration ... 656
17.4. Partially Observable MDPs ... 658
17.4.1. Definition of POMDPs ... 658
17.4.2. Value iteration for POMDPs ... 660
17.4.3. Online agents for POMDPs ... 664
17.5. Decisions with Multiple Agents: Game Theory ... 666
17.5.1. Single-move games ... 667
17.5.2. Repeated games ... 673
17.5.3. Sequential games ... 674
17.6. Mechanism Design ... 679
17.6.1. Auctions ... 679
17.6.2. Common goods ... 683
17.7. Summary ... 684
Bibliographical and Historical Notes ... 685
Exercises ... 688
Part V: Learning
Chapter 18: Learning from Examples ... 693
18.1. Forms of Learning ... 693
Components to be learned ... 694
Representation and prior knowledge ... 694
Feedback to learn from ... 694
18.2. Supervised Learning ... 695
18.3. Learning Decision Trees ... 697
18.3.1. The decision tree representation ... 698
18.3.2. Expressiveness of decision trees ... 698
18.3.3. Inducing decision trees from examples ... 699
18.3.4. Choosing attribute tests ... 703
18.3.5. Generalization and overfitting ... 705
18.3.6. Broadening the applicability of decision trees ... 706
18.4. Evaluating and Choosing the Best Hypothesis ... 708
18.4.1. Model selection: Complexity versus goodness of fit ... 709
18.4.2. From error rates to loss ... 710
18.4.3. Regularization ... 712
18.5. The Theory of Learning ... 713
18.5.1. PAC learning example: Learning decision lists ... 715
18.6. Regression and Classification with Linear Models ... 717
18.6.1. Univariate linear regression ... 718
18.6.2. Multivariate linear regression ... 720
18.6.3. Linear classifiers with a hard threshold ... 723
18.6.4. Linear classification with logistic regression ... 725
18.7. Artificial Neural Networks ... 727
18.7.1. Neural network structures ... 728
18.7.2. Single-layer feed-forward neural networks (perceptrons) ... 729
18.7.3. Multilayer feed-forward neural networks ... 731
18.7.4. Learning in multilayer networks ... 733
18.7.5. Learning neural network structures ... 736
18.8. Nonparametric Models ... 737
18.8.1. Nearest neighbor models ... 738
18.8.2. Finding nearest neighbors with k-d trees ... 739
18.8.3. Locality-sensitive hashing ... 740
18.8.4. Nonparametric regression ... 741
18.9. Support Vector Machines ... 744
18.10. Ensemble Learning ... 748
18.10.1. Online Learning ... 752
18.11. Practical Machine Learning ... 753
18.11.1. Case study: Handwritten digit recognition ... 753
18.11.2. Case study: Word senses and house prices ... 755
18.12. Summary ... 757
Bibliographical and Historical Notes ... 758
Exercises ... 763
Chapter 19: Knowledge in Learning ... 768
19.1. A Logical Formulation of Learning ... 768
19.1.1. Examples and hypotheses ... 768
19.1.2. Current-best-hypothesis search ... 770
19.1.3. Least-commitment search ... 773
19.2. Knowledge in Learning ... 777
19.2.1. Some simple examples ... 778
19.2.2. Some general schemes ... 778
19.3. Explanation-Based Learning ... 780
19.3.1. Extracting general rules from examples ... 781
19.3.2. Improving efficiency ... 783
19.4. Learning Using Relevance Information ... 784
19.4.1. Determining the hypothesis space ... 785
19.4.2. Learning and using relevance information ... 785
19.5. Inductive Logic Programming ... 788
19.5.1. An example ... 788
19.5.2. Top-down inductive learning methods ... 791
19.5.3. Inductive learning with inverse deduction ... 794
19.5.4. Making discoveries with inductive logic programming ... 796
19.6. Summary ... 797
Bibliographical and Historical Notes ... 798
Exercises ... 801
Chapter 20: Learning Probabilistic Models ... 802
20.1. Statistical Learning ... 802
20.2. Learning with Complete Data ... 806
20.2.1. Maximum-likelihood parameter learning: Discrete models ... 806
20.2.2. Naive Bayes models ... 808
20.2.3. Maximum-likelihood parameter learning: Continuous models ... 809
20.2.4. Bayesian parameter learning ... 810
20.2.5. Learning Bayes net structures ... 813
20.2.6. Density estimation with nonparametric models ... 814
20.3. Learning with Hidden Variables: The EM Algorithm ... 816
20.3.1. Unsupervised clustering: Learning mixtures of Gaussians ... 817
20.3.2. Learning Bayesian networks with hidden variables ... 820
20.3.3. Learning hidden Markov models ... 822
20.3.4. The general form of the EM algorithm ... 823
20.3.5. Learning Bayes net structures with hidden variables ... 824
20.4. Summary ... 825
Bibliographical and Historical Notes ... 825
Exercises ... 827
Chapter 21: Reinforcement Learning ... 830
21.1. Introduction ... 830
21.2. Passive Reinforcement Learning ... 832
21.2.1. Direct utility estimation ... 833
21.2.2. Adaptive dynamic programming ... 834
21.2.3. Temporal-difference learning ... 836
21.3. Active Reinforcement Learning ... 839
21.3.1. Exploration ... 839
21.3.2. Learning an action-utility function ... 842
21.4. Generalization in Reinforcement Learning ... 845
21.5. Policy Search ... 848
21.6. Applications of Reinforcement Learning ... 850
21.6.1. Applications to game playing ... 850
21.6.2. Application to robot control ... 851
21.7. Summary ... 853
Bibliographical and Historical Notes ... 854
Exercises ... 858
Part VI: Communicating, perceiving, and acting
Chapter 22: Natural Language Processing ... 860
22.1. Language Models ... 860
22.1.1. N-gram character models ... 861
22.1.2. Smoothing n-gram models ... 862
22.1.3. Model evaluation ... 863
22.1.4. N-gram word models ... 864
22.2. Text Classification ... 865
22.2.1. Classification by data compression ... 866
22.3. Information Retrieval ... 867
22.3.1. IR scoring functions ... 868
22.3.2. IR system evaluation ... 869
22.3.3. IR refinements ... 869
22.3.4. The PageRank algorithm ... 870
22.3.5. The HITS algorithm ... 872
22.3.6. Question answering ... 872
22.4. Information Extraction ... 873
22.4.1. Finite-state automata for information extraction ... 874
22.4.2. Probabilistic models for information extraction ... 876
22.4.3. Conditional random fields for information extraction ... 878
22.4.4. Ontology extraction from large corpora ... 879
22.4.5. Automated template construction ... 880
22.4.6. Machine reading ... 881
22.5. Summary ... 882
Bibliographical and Historical Notes ... 883
Exercises ... 885
Chapter 23: Natural Language for Communication ... 888
23.1. Phrase Structure Grammars ... 888
23.1.1. The lexicon of E0 ... 890
23.1.2. The grammar of E0 ... 890
23.2. Syntactic Analysis (Parsing) ... 892
23.2.1. Learning probabilities for PCFGs ... 895
23.2.2. Comparing context-free and Markov models ... 896
23.3. Augmented Grammars and Semantic Interpretation ... 897
23.3.1. Lexicalized PCFGs ... 897
23.3.2. Formal definition of augmented grammar rules ... 898
23.3.3. Case agreement and subject–verb agreement ... 899
23.3.4. Semantic interpretation ... 900
23.3.5. Complications ... 902
23.4. Machine Translation ... 907
23.4.1. Machine translation systems ... 908
23.4.2. Statistical machine translation ... 909
23.5. Speech Recognition ... 912
23.5.1. Acoustic model ... 914
23.5.2. Language model ... 917
23.5.3. Building a speech recognizer ... 917
23.6. Summary ... 918
Bibliographical and Historical Notes ... 919
Exercises ... 923
Chapter 24: Perception ... 928
24.1. Image Formation ... 929
24.1.1. Images without lenses: The pinhole camera ... 929
24.1.2. Lens systems ... 931
24.1.3. Scaled orthographic projection ... 932
24.1.4. Light and shading ... 932
24.1.5. Color ... 935
24.2. Early Image-Processing Operations ... 935
24.2.1. Edge detection ... 936
24.2.2. Texture ... 939
24.2.3. Optical flow ... 939
24.2.4. Segmentation of images ... 941
24.3. Object Recognition by Appearance ... 942
24.3.1. Complex appearance and pattern elements ... 944
24.3.2. Pedestrian detection with HOG features ... 945
24.4. Reconstructing the 3D World ... 947
24.4.1. Motion parallax ... 948
24.4.2. Binocular stereopsis ... 949
24.4.3. Multiple views ... 951
24.4.4. Texture ... 951
24.4.5. Shading ... 952
24.4.6. Contour ... 953
24.4.7. Objects and the geometric structure of scenes ... 954
24.5. Object Recognition from Structural Information ... 957
24.5.1. The geometry of bodies: Finding arms and legs ... 958
24.5.2. Coherent appearance: Tracking people in video ... 959
24.6. Using Vision ... 961
24.6.1. Words and pictures ... 962
24.6.2. Reconstruction from many views ... 962
24.6.3. Using vision for controlling movement ... 963
24.7. Summary ... 965
Bibliographical and Historical Notes ... 966
Exercises ... 969
Chapter 25: Robotics ... 971
25.1. Introduction ... 971
25.2. Robot Hardware ... 973
25.2.1. Sensors ... 973
25.2.2. Effectors ... 975
25.3. Robotic Perception ... 978
25.3.1. Localization and mapping ... 979
25.3.2. Other types of perception ... 984
25.3.3. Machine learning in robot perception ... 985
25.4. Planning to Move ... 986
25.4.1. Configuration space ... 986
25.4.2. Cell decomposition methods ... 989
25.4.3. Modified cost functions ... 991
25.4.4. Skeletonization methods ... 991
25.5. Planning Uncertain Movements ... 993
25.5.1. Robust methods ... 994
25.6. Moving ... 997
25.6.1. Dynamics and control ... 997
25.6.2. Potential-field control ... 999
25.6.3. Reactive control ... 1001
25.6.4. Reinforcement learning control ... 1002
25.7. Robotic Software Architectures ... 1003
25.7.1. Subsumption architecture ... 1003
25.7.2. Three-layer architecture ... 1004
25.7.3. Pipeline architecture ... 1005
25.8. Application Domains ... 1006
25.9. Summary ... 1010
Bibliographical and Historical Notes ... 1011
Exercises ... 1014
Part VII: Conclusions
Chapter 26: Philosophical Foundations ... 1020
26.1. Weak AI: Can Machines Act Intelligently? ... 1020
26.1.1. The argument from disability ... 1021
26.1.2. The mathematical objection ... 1022
26.1.3. The argument from informality ... 1024
26.2. Strong AI: Can Machines Really Think? ... 1026
26.2.1. Mental states and the brain in a vat ... 1028
26.2.2. Functionalism and the brain replacement experiment ... 1029
26.2.3. Biological naturalism and the Chinese Room ... 1031
26.2.4. Consciousness, qualia, and the explanatory gap ... 1033
26.3. The Ethics and Risks of Developing Artificial Intelligence ... 1034
26.4. Summary ... 1040
Bibliographical and Historical Notes ... 1040
Exercises ... 1043
Chapter 27: AI: The Present and Future ... 1044
27.1. Agent Components ... 1044
27.2. Agent Architectures ... 1047
27.3. Are We Going in the Right Direction? ... 1049
27.4. What If AI Does Succeed? ... 1051
Appendix A: Mathematical Background ... 1053
A.1. Complexity Analysis and O() Notation ... 1053
A.1.1. Asymptotic analysis ... 1053
A.1.2. NP and inherently hard problems ... 1054
A.2. Vectors, Matrices, and Linear Algebra ... 1055
A.3. Probability Distributions ... 1057
Bibliographical and Historical Notes ... 1059
Appendix B: Notes on Languages and Algorithms ... 1060
B.1. Defining Languages with Backus--Naur Form (BNF) ... 1060
B.2. Describing Algorithms with Pseudocode ... 1061
B.3. Online Help ... 1062
Bibliography ... 1063
Index ... 1109

Reader Reviews

Rating

When it comes to artificial intelligence, this book is without question the No. 1 classic. I read a few of its chapters during a graduate course and benefited greatly. The campus bookstore carried it back then; I thought it was expensive and couldn't bring myself to buy it, and later it went out of stock. This classic stayed unavailable for several years until it was finally reprinted. Now that I'm writing my thesis, I'm rereading it...

Rating

How is the translation? How is the translation? How is the translation? How is the translation? How is the translation? ...

Rating

This book is not easy to follow for a self-taught beginner. I studied it on my own and understood it only halfway; the hardest part for me was the section on logic. The book is very comprehensive. I wouldn't claim it covers everything in artificial intelligence (machine learning included), but it at least sketches the whole landscape. Perhaps precisely because it is so comprehensive, or perhaps because of the translation, there are also...

Rating

Basically, once a book is called a classic, nobody dares argue against it, because if you think it's bad, everyone will say your level just isn't good enough. It's like saying Jordan is worse than Kobe: you're absolutely going to get flamed... All I can say is, if your comprehension is really poor, or you have no ability at all to learn on your own, then this will be a good book for you... But if you have even a bit of comprehension and know how to use the internet to look up what you don't understand...


User Comments

Rating

Reading Artificial Intelligence felt like boarding a time-travel train through the key milestones of the information age. The narrative style is distinctive: it weaves technological development together with social change, showing me that AI is not just cold algorithms and code but something that deeply shapes every aspect of our lives. The chapter on machine learning explains the process of "learning" with vivid analogies, such as having a machine "look at" large numbers of pictures to tell cats from dogs; the description reads like telling a story to a child, full of charm, and even a non-technical reader like me could follow it easily. I especially appreciated the author's objective yet thorough analysis of the different schools of AI, pointing out each one's strengths while not shying away from its limitations. Reading the section on expert systems, I could almost see the optimism of scientists in the 1980s and the "AI winter" that followed. This honest record of history gave me a fuller picture of AI's development and a better understanding of the reasons behind its ups and downs. The book's structure is also well arranged and logically clear: each chapter stands as its own story yet connects tightly with the others, together forming a grand panorama of artificial intelligence.

Rating

This book was like a deep journey of thought, leading me into the wonderful world of artificial intelligence. Rather than dwelling on technical details, it focuses on the trajectory of AI's development, its core ideas, and its profound impact on society. I was especially interested in the discussion of artificial general intelligence (AGI). The author analyzes the enormous challenges of achieving AGI and also looks ahead to the revolutionary changes it might bring. This narration, at once rationally analytical and imaginative, left me full of anticipation for AI's future. The many historical anecdotes and interviews with scientists woven through the book add a humanistic touch, showing the persistence and creativity of the great minds who explored AI. What makes the book distinctive is that it doesn't just tell you "what"; more importantly, it guides the reader to think about "why" and "where this is heading". It made me realize that the development of AI is not an isolated technical advance but is tightly bound up with the development of human civilization.

Rating

The book's charm lies in turning abstruse technical concepts into a compelling narrative. My previous understanding of AI mostly came from the "Terminator"-style notions of science fiction movies, but Artificial Intelligence opened a door to a new world for me in a way that is more rational and closer to reality. It doesn't avoid the ethical and social questions AI raises; instead it places these complex topics within the broader perspective of technological development. I was especially impressed by the case studies of AI applications in fields such as medicine and finance. Through concrete cases, the author shows how AI solves real-world problems, for example assisting doctors in diagnosing diseases or optimizing risk control in financial markets. These passages showed me AI's enormous potential and set me thinking about the shape of future society. The writing is rigorous yet humane: while discussing technological progress, the author never loses sight of humanity's future, which I found reassuring. This is not a cold technical manual but an insightful, thought-provoking work.

Rating

This book truly opened my eyes. Although I've always been curious about technology, Artificial Intelligence makes once-remote concepts vivid and accessible in a remarkably down-to-earth way. Rather than throwing out obscure jargon, it starts from the origins of AI, and that step-by-step narration, like an experienced guide, led me through AI's long history. What impressed me most were the accounts of early AI exploration: attempts full of wisdom and imagination that seem all the more precious today. The author uses many vivid examples, such as early chess-playing programs, which look simple but embody deep logic and principles of computation. Reading this, I could almost see the pioneers working under lamplight, poring over dry formulas and code, their eyes shining with curiosity about the unknown. The book reminded me that AI did not appear overnight; it is the accumulation of countless brilliant minds over decades, even a century, gradually taking the form we see today. It taught me that to understand a technology, you must look not only at its current glory but also trace its earliest sprouts: seemingly insignificant sparks that eventually set the prairie ablaze. The prose is powerful without ornate rhetoric; it touches the heart and lets you feel the collision of wisdom and the sparks of thought between the lines.

Rating

What struck me most about Artificial Intelligence is how well it demystifies the field. I had always regarded AI as deep and unfathomable, but through clear, accessible explanations this book puts those complex algorithms and models within reach. It starts from the definition of AI and then works gradually through the concrete techniques, such as neural networks and deep learning. The author is skilled at using analogies and diagrams, so even a reader like me with little background in mathematics or computer science can roughly grasp the principles. I particularly liked the sections on "perception" and "decision making", which explain how a machine understands the world by "seeing" and "hearing", and how it makes judgments from the information it gathers. These parts gave me a more concrete sense of what machine "intelligence" means, and set me thinking about the essential differences between human intelligence and machine intelligence. The language flows smoothly; it is never dry, and is instead full of intellectual delight.

Rating

A classic.

Rating

The textbook for a course I TA'd; I need to study it carefully. It's also a classic AI textbook. The authors are Russell and Norvig: Russell was the instructor's doctoral advisor, and Norvig is now the director of research at Google, not old but already with a full head of white hair. Norvig hasn't published many papers, and the ones he has are rather unusual, but this book is simply too famous. Also, there's an Easter egg on the cover, haha.

Rating

Absolutely, extremely long-winded.

Rating

Three main threads: reinforcement learning, graphical models, and neural networks.

