Building Machine Learning Systems with Python, 2nd Edition (Reprint Edition)


Publisher: Southeast University Press
Author: Luis Pedro Coelho
Pages: 301
Publication date: 2016-1-1
Price: CNY 68.00
Binding: Paperback
ISBN: 9787564160623
Series: Packt Publishing Reprint Series
Tags:
  • Machine Learning
  • Python
  • Computational Science
  • Data Analysis
  • Engineering
  • Statistics
  • Programming
  • Deep Learning
  • Data Science
  • Algorithms
  • Models
  • System Building
  • 2nd Edition
  • Reprint Edition
  • Technical Books

Description

Using machine learning to gain deep insight into data is a key skill for modern application developers and analysts. Python is a language well suited to building machine learning applications. As a dynamic language, it allows for rapid exploration and experimentation, and with its open-source machine learning libraries you can try out many ideas quickly while staying focused on the task at hand.
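As a taste of that rapid experimentation, here is a minimal NumPy sketch in the spirit of the book's first chapter (learning NumPy, indexing, handling nonexisting values); the array values are invented for illustration:

```python
import numpy as np

# A tiny, invented data array with one missing value.
data = np.array([1.0, 2.0, np.nan, 4.0])

# Vectorized boolean indexing replaces an explicit Python loop:
# keep only the entries that are not NaN.
clean = data[~np.isnan(data)]

print(clean)         # [1. 2. 4.]
print(clean.mean())  # 2.333...
```

This one-liner style of cleaning and summarizing data is exactly the kind of interactive exploration the description refers to.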

Building Machine Learning Systems with Python (2nd Edition, Reprint, English) demonstrates concrete methods for finding patterns in raw data. Starting with a refresher on Python machine learning and an introduction to the libraries, you quickly move on to projects with real datasets, applying modeling techniques and building recommendation systems. The book then covers advanced topics such as topic modeling, basket analysis, and cloud computing, extending your abilities so you can build large, complex systems.

With this book, you will gain the tools and knowledge needed to build systems of your own, tailored to solving real data-analysis problems.
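As an illustration of the kind of workflow the book walks through, here is a minimal sketch of the material in Chapter 2 (the Iris dataset, nearest-neighbor classification, cross-validation), assuming scikit-learn is installed; the details differ from the book's own code:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Load the classic Iris dataset: 150 flowers, 4 features, 3 species.
features, labels = load_iris(return_X_y=True)

# A simple nearest-neighbor classifier, a technique covered in Chapter 2.
classifier = KNeighborsClassifier(n_neighbors=5)

# Five-fold cross-validation estimates out-of-sample accuracy
# without committing to a single train/test split.
scores = cross_val_score(classifier, features, labels, cv=5)
print(scores.mean())
```

Cross-validation, rather than a single holdout split, is the evaluation habit the book emphasizes from its first classification chapter onward.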

About the Author

Table of Contents

Preface
Chapter 1: Getting Started with Python Machine Learning
Machine learning and Python - a dream team
What the book will teach you (and what it will not)
What to do when you are stuck
Getting started
Introduction to NumPy, SciPy, and matplotlib
Installing Python
Chewing data efficiently with NumPy and intelligently with SciPy
Learning NumPy
Indexing
Handling nonexisting values
Comparing the runtime
Learning SciPy
Our first (tiny) application of machine learning
Reading in the data
Preprocessing and cleaning the data
Choosing the right model and learning algorithm
Before building our first model...
Starting with a simple straight line
Towards some advanced stuff
Stepping back to go forward - another look at our data
Training and testing
Answering our initial question
Summary
Chapter 2: Classifying with Real-world Examples
The Iris dataset
Visualization is a good first step
Building our first classification model
Evaluation - holding out data and cross-validation
Building more complex classifiers
A more complex dataset and a more complex classifier
Learning about the Seeds dataset
Features and feature engineering
Nearest neighbor classification
Classifying with scikit-learn
Looking at the decision boundaries
Binary and multiclass classification
Summary
Chapter 3: Clustering - Finding Related Posts
Measuring the relatedness of posts
How not to do it
How to do it
Preprocessing - similarity measured as a similar number of common words
Converting raw text into a bag of words
Counting words
Normalizing word count vectors
Removing less important words
Stemming
Stop words on steroids
Our achievements and goals
Clustering
K-means
Getting test data to evaluate our ideas on
Clustering posts
Solving our initial challenge
Another look at noise
Tweaking the parameters
Summary
Chapter 4: Topic Modeling
Latent Dirichlet allocation
Building a topic model
Comparing documents by topics
Modeling the whole of Wikipedia
Choosing the number of topics
Summary
Chapter 5: Classification - Detecting Poor Answers
Sketching our roadmap
Learning to classify classy answers
Tuning the instance
Tuning the classifier
Fetching the data
Slimming the data down to chewable chunks
Preselection and processing of attributes
Defining what is a good answer
Creating our first classifier
Starting with kNN
Engineering the features
Training the classifier
Measuring the classifier's performance
Designing more features
Deciding how to improve
Bias-variance and their tradeoff
Fixing high bias
Fixing high variance
High bias or low bias
Using logistic regression
A bit of math with a small example
Applying logistic regression to our post classification problem
Looking behind accuracy - precision and recall
Slimming the classifier
Ship it!
Summary
Chapter 6: Classification II - Sentiment Analysis
Sketching our roadmap
Fetching the Twitter data
Introducing the Naive Bayes classifier
Getting to know the Bayes' theorem
Being naive
Using Naive Bayes to classify
Accounting for unseen words and other oddities
Accounting for arithmetic underflows
Creating our first classifier and tuning it
Solving an easy problem first
Using all classes
Tuning the classifier's parameters
Cleaning tweets
Taking the word types into account
Determining the word types
Successfully cheating using SentiWordNet
Our first estimator
Putting everything together
Summary
Chapter 7: Regression
Predicting house prices with regression
Multidimensional regression
Cross-validation for regression
Penalized or regularized regression
L1 and L2 penalties
Using Lasso or ElasticNet in scikit-learn
Visualizing the Lasso path
P-greater-than-N scenarios
An example based on text documents
Setting hyperparameters in a principled way
Summary
Chapter 8: Recommendations
Rating predictions and recommendations
Splitting into training and testing
Normalizing the training data
A neighborhood approach to recommendations
A regression approach to recommendations
Combining multiple methods
Basket analysis
Obtaining useful predictions
Analyzing supermarket shopping baskets
Association rule mining
More advanced basket analysis
Summary
Chapter 9: Classification - Music Genre Classification
Sketching our roadmap
Fetching the music data
Converting into a WAV format
Looking at music
Decomposing music into sine wave components
Using FFT to build our first classifier
Increasing experimentation agility
Training the classifier
Using a confusion matrix to measure accuracy in multiclass problems
An alternative way to measure classifier performance using receiver-operator characteristics
Improving classification performance with Mel Frequency Cepstral Coefficients
Summary
Chapter 10: Computer Vision
Introducing image processing
Loading and displaying images
Thresholding
Gaussian blurring
Putting the center in focus
Basic image classification
Computing features from images
Writing your own features
Using features to find similar images
Classifying a harder dataset
Local feature representations
Summary
Chapter 11: Dimensionality Reduction
Sketching our roadmap
Selecting features
Detecting redundant features using filters
Correlation
Mutual information
Asking the model about the features using wrappers
Other feature selection methods
Feature extraction
About principal component analysis
Sketching PCA
Applying PCA
Limitations of PCA and how LDA can help
Multidimensional scaling
Summary
Chapter 12: Bigger Data
Learning about big data
Using jug to break up your pipeline into tasks
An introduction to tasks in jug
Looking under the hood
Using jug for data analysis
Reusing partial results
Using Amazon Web Services
Creating your first virtual machines
Installing Python packages on Amazon Linux
Running jug on our cloud machine
Automating the generation of clusters with StarCluster
Summary
Appendix: Where to Learn More Machine Learning
Online courses
Books
Question and answer sites
Blogs
Data sources
Getting competitive
All that was left out
Summary
Index


User Reviews

The depth and breadth of this book seem to strike an ideal balance, leaving me wishing I had found it sooner. I browsed the chapters on model evaluation and tuning, and the author's treatment is mature and pragmatic: no unrealistic "silver bullet" solutions, just an emphasis on weighing trade-offs according to the specific business scenario. That kind of seasoned industry insight is exactly what textbooks usually lack. Even better, despite the wealth of material, the book never felt oppressive; instead it sparked a positive "I can do this too" feeling. The author clearly knows how to build the reader's confidence step by step, guiding us gradually up the technical mountain rather than dropping us at the edge of a cliff.

The cover design is eye-catching: a deep blue background with crisp white type gives it a professional, rigorous air. The first thing I noticed on picking it up was its solid heft. The paper quality inside is surprisingly good, and the printing is sharp and clean, comfortable to read even for long stretches without eye strain. The binding looks sturdy enough to withstand repeated flipping. I did notice some details, such as slightly misaligned page numbers here and there, probably a common flaw of reprint editions, but overall the physical quality is satisfying for a technical book, and it makes a nice addition to the shelf. I have high hopes that the content will explain complex machine learning concepts clearly and thoroughly.

The layout is a visual feast. Unlike many technical books that simply pile up text, it makes clever use of charts and code examples to support the concepts. I especially appreciate the author's clarity when working through complex algorithms; each derivation step leads you further toward the core, and things suddenly click. The flowcharts and structural diagrams are exquisitely designed, so even parts I previously found obscure become intuitive with the illustrations. The code blocks are well formatted too, with just the right indentation and highlighting, which greatly improves the experience of reading and reproducing the code. This attention to layout detail is a key factor in learning efficiency, and it gives me confidence for the chapters ahead; I hope the author sustained this care throughout.

The writing style is down-to-earth without sacrificing depth. The author avoids a lofty, purely theoretical lecturing tone and reads more like an experienced colleague passing on know-how hands-on, dropping in "pitfalls" and "tricks" from his own practice at just the right moments, which is a lifesaver for beginners. I particularly like how each model is traced back to its underlying mathematics without drowning the reader in a sea of formulas; at key points there is always an intuitive analogy or explanation. The balance is handled very well, preserving theoretical rigor while keeping the reading fluent and enjoyable, so learning feels like exploration rather than a dull chore.

Judging from the table of contents, the coverage is quite broad; it does not stop at introducing basic algorithms but goes deep into the practice of building systems. I noticed that it devotes considerable space to data preprocessing and feature engineering, steps that matter enormously in real projects, which is gratifying because many introductory books skim over this "tedious" but practical material. It also compares different models in some depth, telling you not only how to do something but why to choose one approach over another. This combination of macro perspective and micro implementation suggests a book that can accompany a reader from theory to real project delivery, not a reference left to gather dust.

I'm relying on this book for Step 2; it's more practical than Machine Learning in Action. Its drawback is that concepts get relatively little attention, and SVMs and neural networks are not covered.

