神經網絡與機器學習 (Neural Networks and Learning Machines)


Publisher: China Machine Press (機械工業出版社)
Author: Simon Haykin (Canada)
Producer:
Pages: 906
Translator:
Publication date: 2009-3
Price: CNY 69.00
Binding:
ISBN: 9787111265283
Series: 經典原版書庫 (Classic Original Books)
Tags:
  • Neural Networks
  • Machine Learning
  • Artificial Intelligence
  • Pattern Recognition
  • AI
  • Data Mining
  • Computers
  • Intelligence
  • Deep Learning
  • Algorithms
  • Data Science
  • Models
  • Training
  • Features
  • Prediction

Description

Neural Networks and Learning Machines (English edition, 3rd ed.) is highly readable: the author handles weighty material with a light touch, exploring and analyzing the basic models of neural networks and the major learning theories in depth, and supplying extensive experiment reports, worked examples, and exercises to help the reader learn. Neural networks are an important branch of computational intelligence and machine learning, with great successes in many fields. Among the many books on neural networks, the most influential is Simon Haykin's Neural Networks: A Comprehensive Foundation, retitled Neural Networks and Learning Machines as of this third edition. Incorporating recent advances in neural networks and machine learning, and proceeding from both theory and practical application, the author comprehensively and systematically introduces the basic models, methods, and techniques of neural networks, weaving neural networks and machine learning into an organic whole. The book is attentive not only to mathematical analysis and theory, but also to applications of neural networks to practical engineering problems in pattern recognition, signal processing, and control systems.

This edition has been extensively revised from the previous one and offers an up-to-date analysis of neural networks and machine learning, two increasingly important subjects.

About the Author

Simon Haykin received his Ph.D. from the University of Birmingham, UK, in 1953. He is a professor in the Department of Electrical and Computer Engineering at McMaster University, Canada, and director of its Communications Research Laboratory. A renowned figure in electrical and electronics engineering, he has been awarded the IEEE McNaughton Gold Medal. He is a Fellow of the Royal Society of Canada and an IEEE Fellow, has produced a substantial body of work on neural networks, communications, and adaptive filters, and is the author of several standard textbooks.

Table of Contents

Preface v
Acknowledgements xiv
Abbreviations and Symbols xvi
Glossary xxi

Introduction 1
1. What is a Neural Network? 1
2. The Human Brain 6
3. Models of a Neuron 10
4. Neural Networks Viewed As Directed Graphs 15
5. Feedback 18
6. Network Architectures 21
7. Knowledge Representation 24
8. Learning Processes 34
9. Learning Tasks 38
10. Concluding Remarks 45
Notes and References 46

Chapter 1 Rosenblatt's Perceptron 47
1.1 Introduction 47
1.2 Perceptron 48
1.3 The Perceptron Convergence Theorem 50
1.4 Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment 55
1.5 Computer Experiment: Pattern Classification 60
1.6 The Batch Perceptron Algorithm 62
1.7 Summary and Discussion 65
Notes and References 66
Problems 66

Chapter 2 Model Building through Regression 68
2.1 Introduction 68
2.2 Linear Regression Model: Preliminary Considerations 69
2.3 Maximum a Posteriori Estimation of the Parameter Vector 71
2.4 Relationship Between Regularized Least-Squares Estimation and MAP Estimation 76
2.5 Computer Experiment: Pattern Classification 77
2.6 The Minimum-Description-Length Principle 79
2.7 Finite Sample-Size Considerations 82
2.8 The Instrumental-Variables Method 86
2.9 Summary and Discussion 88
Notes and References 89
Problems 89

Chapter 3 The Least-Mean-Square Algorithm 91
3.1 Introduction 91
3.2 Filtering Structure of the LMS Algorithm 92
3.3 Unconstrained Optimization: A Review 94
3.4 The Wiener Filter 100
3.5 The Least-Mean-Square Algorithm 102
3.6 Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter 104
3.7 The Langevin Equation: Characterization of Brownian Motion 106
3.8 Kushner's Direct-Averaging Method 107
3.9 Statistical LMS Learning Theory for Small Learning-Rate Parameter 108
3.10 Computer Experiment I: Linear Prediction 110
3.11 Computer Experiment II: Pattern Classification 112
3.12 Virtues and Limitations of the LMS Algorithm 113
3.13 Learning-Rate Annealing Schedules 115
3.14 Summary and Discussion 117
Notes and References 118
Problems 119

Chapter 4 Multilayer Perceptrons 122
4.1 Introduction 123
4.2 Some Preliminaries 124
4.3 Batch Learning and On-Line Learning 126
4.4 The Back-Propagation Algorithm 129
4.5 XOR Problem 141
4.6 Heuristics for Making the Back-Propagation Algorithm Perform Better 144
4.7 Computer Experiment: Pattern Classification 150
4.8 Back Propagation and Differentiation 153
4.9 The Hessian and Its Role in On-Line Learning 155
4.10 Optimal Annealing and Adaptive Control of the Learning Rate 157
4.11 Generalization 164
4.12 Approximations of Functions 166
4.13 Cross-Validation 171
4.14 Complexity Regularization and Network Pruning 175
4.15 Virtues and Limitations of Back-Propagation Learning 180
4.16 Supervised Learning Viewed as an Optimization Problem 186
4.17 Convolutional Networks 201
4.18 Nonlinear Filtering 203
4.19 Small-Scale Versus Large-Scale Learning Problems 209
4.20 Summary and Discussion 217
Notes and References 219
Problems 221

Chapter 5 Kernel Methods and Radial-Basis Function Networks 230
5.1 Introduction 230
5.2 Cover's Theorem on the Separability of Patterns 231
5.3 The Interpolation Problem 236
5.4 Radial-Basis-Function Networks 239
5.5 K-Means Clustering 242
5.6 Recursive Least-Squares Estimation of the Weight Vector 245
5.7 Hybrid Learning Procedure for RBF Networks 249
5.8 Computer Experiment: Pattern Classification 250
5.9 Interpretations of the Gaussian Hidden Units 252
5.10 Kernel Regression and Its Relation to RBF Networks 255
5.11 Summary and Discussion 259
Notes and References 261
Problems 263

Chapter 6 Support Vector Machines 268
6.1 Introduction 268
6.2 Optimal Hyperplane for Linearly Separable Patterns 269
6.3 Optimal Hyperplane for Nonseparable Patterns 276
6.4 The Support Vector Machine Viewed as a Kernel Machine 281
6.5 Design of Support Vector Machines 284
6.6 XOR Problem 286
6.7 Computer Experiment: Pattern Classification 289
6.8 Regression: Robustness Considerations 289
6.9 Optimal Solution of the Linear Regression Problem 293
6.10 The Representer Theorem and Related Issues 296
6.11 Summary and Discussion 302
Notes and References 304
Problems 307

Chapter 7 Regularization Theory 313
7.1 Introduction 313
7.2 Hadamard's Conditions for Well-Posedness 314
7.3 Tikhonov's Regularization Theory 315
7.4 Regularization Networks 326
7.5 Generalized Radial-Basis-Function Networks 327
7.6 The Regularized Least-Squares Estimator: Revisited 331
7.7 Additional Notes of Interest on Regularization 335
7.8 Estimation of the Regularization Parameter 336
7.9 Semisupervised Learning 342
7.10 Manifold Regularization: Preliminary Considerations 343
7.11 Differentiable Manifolds 345
7.12 Generalized Regularization Theory 348
7.13 Spectral Graph Theory 350
7.14 Generalized Representer Theorem 352
7.15 Laplacian Regularized Least-Squares Algorithm 354
7.16 Experiments on Pattern Classification Using Semisupervised Learning 356
7.17 Summary and Discussion 359
Notes and References 361
Problems 363

Chapter 8 Principal-Components Analysis 367
8.1 Introduction 367
8.2 Principles of Self-Organization 368
8.3 Self-Organized Feature Analysis 372
8.4 Principal-Components Analysis: Perturbation Theory 373
8.5 Hebbian-Based Maximum Eigenfilter 383
8.6 Hebbian-Based Principal-Components Analysis 392
8.7 Case Study: Image Coding 398
8.8 Kernel Principal-Components Analysis 401
8.9 Basic Issues Involved in the Coding of Natural Images 406
8.10 Kernel Hebbian Algorithm 407
8.11 Summary and Discussion 412
Notes and References 415
Problems 418

Chapter 9 Self-Organizing Maps 425
9.1 Introduction 425
9.2 Two Basic Feature-Mapping Models 426
9.3 Self-Organizing Map 428
9.4 Properties of the Feature Map 437
9.5 Computer Experiments I: Disentangling Lattice Dynamics Using SOM 445
9.6 Contextual Maps 447
9.7 Hierarchical Vector Quantization 450
9.8 Kernel Self-Organizing Map 454
9.9 Computer Experiment II: Disentangling Lattice Dynamics Using Kernel SOM 462
9.10 Relationship Between Kernel SOM and Kullback-Leibler Divergence 464
9.11 Summary and Discussion 466
Notes and References 468
Problems 470

Chapter 10 Information-Theoretic Learning Models 475
10.1 Introduction 476
10.2 Entropy 477
10.3 Maximum-Entropy Principle 481
10.4 Mutual Information 484
10.5 Kullback-Leibler Divergence 486
10.6 Copulas 489
10.7 Mutual Information as an Objective Function to be Optimized 493
10.8 Maximum Mutual Information Principle 494
10.9 Infomax and Redundancy Reduction 499
10.10 Spatially Coherent Features 501
10.11 Spatially Incoherent Features 504
10.12 Independent-Components Analysis 508
10.13 Sparse Coding of Natural Images and Comparison with ICA Coding 514
10.14 Natural-Gradient Learning for Independent-Components Analysis 516
10.15 Maximum-Likelihood Estimation for Independent-Components Analysis 526
10.16 Maximum-Entropy Learning for Blind Source Separation 529
10.17 Maximization of Negentropy for Independent-Components Analysis 534
10.18 Coherent Independent-Components Analysis 541
10.19 Rate Distortion Theory and Information Bottleneck 549
10.20 Optimal Manifold Representation of Data 553
10.21 Computer Experiment: Pattern Classification 560
10.22 Summary and Discussion 561
Notes and References 564
Problems 572

Chapter 11 Stochastic Methods Rooted in Statistical Mechanics 579
11.1 Introduction 580
11.2 Statistical Mechanics 580
11.3 Markov Chains 582
11.4 Metropolis Algorithm 591
11.5 Simulated Annealing 594
11.6 Gibbs Sampling 596
11.7 Boltzmann Machine 598
11.8 Logistic Belief Nets 604
11.9 Deep Belief Nets 606
11.10 Deterministic Annealing 610
11.11 Analogy of Deterministic Annealing with Expectation-Maximization Algorithm 616
11.12 Summary and Discussion 617
Notes and References 619
Problems 621

Chapter 12 Dynamic Programming 627
12.1 Introduction 627
12.2 Markov Decision Process 629
12.3 Bellman's Optimality Criterion 631
12.4 Policy Iteration 635
12.5 Value Iteration 637
12.6 Approximate Dynamic Programming: Direct Methods 642
12.7 Temporal-Difference Learning 643
12.8 Q-Learning 648
12.9 Approximate Dynamic Programming: Indirect Methods 652
12.10 Least-Squares Policy Evaluation 655
12.11 Approximate Policy Iteration 660
12.12 Summary and Discussion 663
Notes and References 665
Problems 668

Chapter 13 Neurodynamics 672
13.1 Introduction 672
13.2 Dynamic Systems 674
13.3 Stability of Equilibrium States 678
13.4 Attractors 684
13.5 Neurodynamic Models 686
13.6 Manipulation of Attractors as a Recurrent Network Paradigm 689
13.7 Hopfield Model 690
13.8 The Cohen-Grossberg Theorem 703
13.9 Brain-State-In-A-Box Model 705
13.10 Strange Attractors and Chaos 711
13.11 Dynamic Reconstruction of a Chaotic Process 716
13.12 Summary and Discussion 722
Notes and References 724
Problems 727

Chapter 14 Bayesian Filtering for State Estimation of Dynamic Systems 731
14.1 Introduction 731
14.2 State-Space Models 732
14.3 Kalman Filters 736
14.4 The Divergence Phenomenon and Square-Root Filtering 744
14.5 The Extended Kalman Filter 750
14.6 The Bayesian Filter 755
14.7 Cubature Kalman Filter: Building on the Kalman Filter 759
14.8 Particle Filters 765
14.9 Computer Experiment: Comparative Evaluation of Extended Kalman and Particle Filters 775
14.10 Kalman Filtering in Modeling of Brain Functions 777
14.11 Summary and Discussion 780
Notes and References 782
Problems 784

Chapter 15 Dynamically Driven Recurrent Networks 790
15.1 Introduction 790
15.2 Recurrent Network Architectures 791
15.3 Universal Approximation Theorem 797
15.4 Controllability and Observability 799
15.5 Computational Power of Recurrent Networks 804
15.6 Learning Algorithms 806
15.7 Back Propagation Through Time 808
15.8 Real-Time Recurrent Learning 812
15.9 Vanishing Gradients in Recurrent Networks 818
15.10 Supervised Training Framework for Recurrent Networks Using Nonlinear Sequential State Estimators 822
15.11 Computer Experiment: Dynamic Reconstruction of Mackey-Glass Attractor 829
15.12 Adaptivity Considerations 831
15.13 Case Study: Model Reference Applied to Neurocontrol 833
15.14 Summary and Discussion 835
Notes and References 839
Problems 842

Bibliography 845
Index 889

Reviews

Garbage translation. On p. 370, on the ergodicity of Markov chains: "the long-term proportion of time spent by the chain .. The proportion of time spent in state i after k returns, denoted by.. The return times T_i form a sequence of statistically independent and identically distributed ran..."
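
For context, the standard result the reviewer is quoting, sketched here with assumed notation and following the usual treatment of recurrent Markov chains rather than the book's exact wording: for a recurrent state $i$, the return times $T_i(1), T_i(2), \dots$ are i.i.d. by the strong Markov property, and the proportion of time spent in state $i$ after $k$ returns is

$$v_i(k) = \frac{k}{\sum_{l=1}^{k} T_i(l)},$$

so by the law of large numbers

$$\lim_{k \to \infty} v_i(k) = \frac{1}{\mathbb{E}[T_i]} \quad \text{almost surely},$$

which for an ergodic chain is exactly the steady-state probability $\pi_i$ of state $i$.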

My graduate Neural Networks course used the second edition of this book, because the professor said he didn't like the newer third edition. The book covers most of the foundations and the important aspects of neural networks, such as Back Propagation, Radial-Basis Function networks, Self-Organizing Maps, and, at the single-neuron level, Hebbian Learning, Competitive L...

The original title: Neural Networks and Learning Machines. Take note, everyone: that is "Learning Machines", not "Machine Learning". "神经网络与学习机" would be a better Chinese title.

This book is reasonably well known; the reference lists of quite a few AI books cite it. Although the title says "foundation", it leans heavily on the mathematics. For almost none of the principles of ANNs does it give a reason you can grasp intuitively: for example, why the initialization of w should be random and as small as possible; what the intuitive interpretation of the momentum vector is; or, for unevenly distributed outcome classes, how w should be regu...

Overall, the original is quite strongly structured, and because the author came to the field by way of signal processing, using LMS as the lead-in to BP feels genuinely fresh. Moreover, approaching the subject not only through mathematical analysis but, more importantly, through Bayesian estimation makes it easier to see machine learning as a form of statistical inference rather than a seemingly perfect calculus derivation. But the translator, yes, that person surnamed Shen...
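
For readers unfamiliar with the lead-in this reviewer praises, here is a minimal sketch of the LMS update rule in Python. It is illustrative only, not taken from the book; the function name, synthetic data, and step size eta are all assumptions.

import numpy as np

def lms(X, d, eta=0.01, epochs=10):
    # Least-mean-square filtering: for each sample x with desired
    # response d, take a stochastic-gradient step on e^2 / 2 using
    # the instantaneous error e = d - w.x, i.e. w <- w + eta * e * x.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - w @ x
            w = w + eta * e * x
    return w

# Toy usage: recover a known linear filter from noisy samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w_true = np.array([0.5, -1.0, 2.0])
d = X @ w_true + 0.01 * rng.normal(size=500)
print(lms(X, d))  # should approach w_true

The same error-correction step, pushed through a chain of differentiable layers, is what back-propagation generalizes, which is why LMS works as an on-ramp to BP.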

User Comments

Four stars, I suppose. You do need some mathematical grounding, but read slowly and it sinks in. The original is quite logically organized; the translation, emmm, the less said the better. All in all it still saved me a fair bit of trouble.

The fundamentals are treated in great depth, but there is nothing on the latest material.

A very difficult book to read, but the quality is decent.

On the whole it is okay, I suppose.
