Evaluating Machine Learning Models

Publisher: O'Reilly
Author: Alice Zheng
Pages: 45
Publication date: 2015-9
Price: 0
Binding: Paperback
ISBN: 9781491932469
Tags:
  • Machine Learning
  • Data Mining
  • MachineLearning
  • SEA
  • Experimentation&CausalInference
  • Data_Science
  • Model Evaluation
  • Performance Metrics
  • Statistical Analysis
  • Data Science
  • Model Selection
  • Bias-Variance Tradeoff
  • Cross-Validation
  • Overfitting
  • Underfitting

Description

Data science today is a lot like the Wild West: there’s endless opportunity and excitement, but also a lot of chaos and confusion. If you’re new to data science and applied machine learning, evaluating a machine-learning model can seem pretty overwhelming.

Now you have help. With this O’Reilly report, machine-learning expert Alice Zheng takes you through the model evaluation basics.

In this overview, Zheng first introduces the machine-learning workflow, and then dives into evaluation metrics and model selection. The latter half of the report focuses on hyperparameter tuning and A/B testing, which may benefit more seasoned machine-learning practitioners.

With this report, you will:

• Learn the stages involved when developing a machine-learning model for use in a software application
• Understand the metrics used for supervised learning models, including classification, regression, and ranking
• Walk through evaluation mechanisms, such as hold-out validation, cross-validation, and bootstrapping (a minimal sketch follows this list)
• Explore hyperparameter tuning in detail, and discover why it’s so difficult
• Learn the pitfalls of A/B testing, and examine a promising alternative: multi-armed bandits
• Get suggestions for further reading, as well as useful software packages
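For readers who want a concrete starting point, the sketch below illustrates two of the evaluation mechanisms named in the list, hold-out validation and k-fold cross-validation, using scikit-learn. The dataset, model, and metric choices are illustrative assumptions and are not taken from the report.

    # A minimal sketch (not from the report): hold-out validation and
    # 5-fold cross-validation for a binary classifier, using scikit-learn.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, roc_auc_score
    from sklearn.model_selection import cross_val_score, train_test_split

    X, y = load_breast_cancer(return_X_y=True)

    # Hold-out validation: train on one split, score on the held-out split.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)
    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    print("hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
    print("hold-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

    # k-fold cross-validation: average the same metric over k rotating
    # train/test splits, which also yields a spread estimate.
    scores = cross_val_score(LogisticRegression(max_iter=5000), X, y,
                             cv=5, scoring="accuracy")
    print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

A single hold-out split gives one number; cross-validation also shows how much that number varies across splits.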

Alice Zheng is the Director of Data Science at Dato, a Seattle-based startup that offers powerful large-scale machine learning and graph analytics tools. A tool builder and an expert in machine-learning algorithms, she has done research spanning software diagnosis, computer network security, and social network analysis.

Table of Contents

Preface
1. Orientation
2. Evaluation Metrics
3. Offline Evaluation
4. Hyperparameter Tuning
5. The Pitfalls of A/B Testing

User Reviews

The treatment of model evaluation is quite good, and the A/B testing material is especially enlightening.

20171115: A practical little book on model evaluation. 1) The workflow is divided into a prototyping phase and a deployment phase: the prototyping phase calls for model validation and offline evaluation, while the deployment phase calls for online evaluation. Offline and online evaluation use different metrics and, naturally, different datasets; distribution drift may occur. 2) Evaluation with regression metrics. 3) A/B testing.
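The distribution drift this reviewer mentions (between offline, prototyping-time data and online, deployment-time data) can be checked with simple statistics. Below is a minimal sketch using a two-sample Kolmogorov-Smirnov test from SciPy on synthetic data; the feature values, the 0.01 threshold, and the choice of test are assumptions for illustration, not something the report or the reviewer prescribes.

    # Illustrative drift check: compare one feature's offline (training-time)
    # distribution with its online (serving-time) distribution using a
    # two-sample Kolmogorov-Smirnov test. Synthetic data stands in for real logs.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    offline_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # prototyping data
    online_feature = rng.normal(loc=0.3, scale=1.0, size=5000)   # deployment data

    stat, p_value = ks_2samp(offline_feature, online_feature)
    print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")
    if p_value < 0.01:  # assumed significance threshold
        print("This feature's distribution appears to have drifted.")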

Lays out the overall framework of machine-learning model evaluation; fairly basic, but the line of thinking is quite clear.
