Nearest-Neighbor Methods (ESL Reading Notes)

Nearest-neighbor methods use those observations in the training set T closest in input space to x to form $\hat{Y}$.

Specifically, the k-nearest-neighbor fit for $\hat{Y}$ is defined as

$$\hat{Y}(x) = \frac{1}{k} \sum_{x_i \in N_k(x)} y_i,$$

where $N_k(x)$ is the neighborhood of x defined by the k closest points $x_i$ in the training sample.

Closeness implies a metric, which for the moment we assume is Euclidean distance. So, in words, we find the k observations with $x_i$ closest to x in input space, and average their responses.
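A minimal sketch of this averaging rule in NumPy, using Euclidean distance; the function name and the toy data below are my own illustration, not from ESL:

```python
import numpy as np

def knn_fit(x, X_train, y_train, k):
    """k-nearest-neighbor fit: average the responses of the k
    training points closest to x in Euclidean distance."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance from x to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points, N_k(x)
    return y_train[nearest].mean()                # Y_hat(x) = (1/k) * sum of their y_i

# toy example: 1-D inputs with a noisy linear response
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(50, 1))
y_train = 2.0 * X_train[:, 0] + rng.normal(0, 1, size=50)

print(knn_fit(np.array([5.0]), X_train, y_train, k=5))
```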

1. Kernel methods use weights that decrease smoothly to zero with distance from the target point, rather than the effective 0/1 weights used by k-nearest neighbors (see the sketch after this list).

2. In high-dimensional spaces the distance kernels are modified to emphasize some variables more than others.

3. Local regression fits linear models by locally weighted least squares, rather than fitting constants locally (also illustrated in the sketch below).

4.Local models fit to a basis expansion of the original inputs allow arbitrarily complex models.

5.Projection pursuit and neural network models consist of sums of nonlinearly transformed linear models.
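A short sketch of items 1 and 3, assuming a 1-D input, an Epanechnikov kernel, and a fixed bandwidth lam (all three choices are mine, for illustration): the first function replaces the 0/1 neighborhood weights with smoothly decaying kernel weights, and the second fits a weighted least-squares line around the target point instead of a local constant.

```python
import numpy as np

def epanechnikov(t):
    """Epanechnikov kernel: smooth weights that fall to zero at |t| = 1."""
    return np.where(np.abs(t) <= 1, 0.75 * (1 - t**2), 0.0)

def kernel_smooth(x0, x, y, lam):
    """Item 1: kernel-weighted average at x0
    (smooth weights instead of k-NN's 0/1 weights)."""
    w = epanechnikov((x - x0) / lam)
    return np.sum(w * y) / np.sum(w)

def local_linear(x0, x, y, lam):
    """Item 3: locally weighted least squares -- fit a line near x0
    and return its value at x0, rather than a local constant."""
    w = epanechnikov((x - x0) / lam)
    B = np.column_stack([np.ones_like(x), x])          # basis [1, x]
    W = np.diag(w)
    beta = np.linalg.solve(B.T @ W @ B, B.T @ W @ y)   # weighted normal equations
    return beta[0] + beta[1] * x0

# toy 1-D data
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 100))
y = np.sin(x) + rng.normal(0, 0.2, 100)

print(kernel_smooth(5.0, x, y, lam=1.0))
print(local_linear(5.0, x, y, lam=1.0))
```

As the bandwidth lam shrinks, both fits use only points very near x0 and behave more like a small-k nearest-neighbor average; as it grows, they smooth over a wider region.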
