Machine learning
Machine learning is a scientific discipline that explores the construction and study of algorithms that can learn from data. Such algorithms operate by building a model based on inputs and using that to make predictions or decisions, rather than following only explicitly programmed instructions.
Machine learning can be considered a subfield of computer science and statistics. It has strong ties to artificial intelligence and optimization, which deliver methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR), search engines and computer vision. Machine learning is sometimes conflated with data mining, although that focuses more on exploratory data analysis. Machine learning and pattern recognition "can be viewed as two facets of the same field."
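A minimal sketch of what "building a model from data rather than following explicit rules" means for the spam-filtering example above, assuming scikit-learn is available; the tiny labelled corpus and the test message are invented purely for illustration.

```python
# Learn a spam filter from labelled examples instead of hand-writing rules.
# Assumes scikit-learn is installed; the toy corpus below is illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",       # spam
    "limited offer, click here",  # spam
    "meeting moved to 3pm",       # ham
    "see you at lunch tomorrow",  # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Build a model from the inputs (word counts + labels)...
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(features, labels)

# ...and use it to make a prediction on an unseen message.
print(model.predict(vectorizer.transform(["free prize, click now"])))  # expected: ['spam']
```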
Selected article
A random forest is an ensemble model for classification or regression that consists of a multitude of decision trees. The predictions of a random forest are averages of the predictions of the individual trees. Random forests correct for decision trees' habit of overfitting to their training set.
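A short sketch of the "average of the individual trees" idea, assuming scikit-learn's RandomForestRegressor; the synthetic regression data are generated only for illustration.

```python
# For regression, a random forest's prediction is the mean of its trees' predictions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=0.5, random_state=0)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Average the individual trees' predictions by hand and compare with the forest.
per_tree = np.stack([tree.predict(X[:5]) for tree in forest.estimators_])
print(np.allclose(per_tree.mean(axis=0), forest.predict(X[:5])))  # expected: True
```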
The algorithm for inducing a random forest was developed by Leo Breiman and Adele Cutler. The method combines Breiman's "bagging" idea and random selection of features: each tree gets to see a bootstrap sample of the training set and a random sample of the features, in order to obtain uncorrelated trees.
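A from-scratch sketch of the construction described above — a bootstrap sample of the rows plus a random subset of the features for each tree — using scikit-learn's DecisionTreeRegressor as the base learner. The helper names (`fit_forest`, `predict_forest`) and parameters are illustrative, not a fixed API; note that standard random-forest implementations re-sample the features at every split, whereas this sketch samples them once per tree as the paragraph describes.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_forest(X, y, n_trees=50, max_features=2, seed=0):
    """Grow n_trees, each on a bootstrap sample of rows and a random subset of columns."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    trees = []
    for _ in range(n_trees):
        rows = rng.integers(0, n_samples, size=n_samples)                # bootstrap sample
        cols = rng.choice(n_features, size=max_features, replace=False)  # random features
        tree = DecisionTreeRegressor().fit(X[np.ix_(rows, cols)], y[rows])
        trees.append((tree, cols))
    return trees

def predict_forest(trees, X):
    # The forest's prediction is the average of the individual trees' predictions.
    return np.mean([tree.predict(X[:, cols]) for tree, cols in trees], axis=0)
```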
Selected biography
Michael Irwin Jordan (born 1956) is an American scientist, a professor at the University of California, Berkeley, and a leading researcher in machine learning and artificial intelligence. He has worked on recurrent neural networks, Bayesian networks, and variational methods, and co-invented latent Dirichlet allocation.
In the news
More current events...
Current events on Wikinews
Selected picture
Credit: User:Alisneaky
The effect of the kernel trick in a classifier. On the left, a non-linear decision boundary has been learned by a "kernelized" classifier. This simulates the effect of a feature map φ that transforms the problem space into one where the decision boundary is linear (right).
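A small numpy sketch of the feature-map idea in the caption: points inside and outside a circle are not linearly separable in the plane, but an explicit degree-2 polynomial map φ makes them separable by a linear rule, which is what a polynomial-kernel classifier exploits implicitly. The circle-shaped data, the specific map, and the threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)  # label: inside the circle?

def phi(X):
    # Explicit feature map corresponding to the degree-2 polynomial kernel
    # k(a, b) = (a . b)**2, i.e. phi(x1, x2) = (x1^2, x2^2, sqrt(2)*x1*x2).
    return np.column_stack([X[:, 0] ** 2, X[:, 1] ** 2,
                            np.sqrt(2) * X[:, 0] * X[:, 1]])

# In feature space the circular boundary becomes the linear rule z1 + z2 < 0.5.
Z = phi(X)
linear_rule = (Z[:, 0] + Z[:, 1] < 0.5).astype(int)
print((linear_rule == y).all())  # expected: True
```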
Did you know?
- ... that the kernel perceptron was the first learning algorithm to employ the kernel trick, already in 1964?
- ... that AltaVista was the first web search engine to employ machine-learned ranking of its search results?
- ... that the group method of data handling, invented in the USSR, was one of the first algorithms capable of training deep neural networks (ca. 1971)?
Categories
- Machine learning
  - Applied machine learning
  - Artificial neural networks
  - Bayesian networks
  - Classification algorithms
  - Cluster analysis
  - Computational learning theory
  - Artificial intelligence conferences
  - Signal processing conferences
  - Data mining and machine learning software
  - Datasets in machine learning
  - Dimension reduction
  - Ensemble learning
  - Evolutionary algorithms
  - Genetic programming
  - Inductive logic programming
  - Kernel methods for machine learning
  - Latent variable models
  - Learning in computer vision
  - Log-linear models
  - Loss functions
  - Machine learning algorithms
  - Machine learning portal
  - Machine learning task
  - Markov models
  - Machine learning researchers
  - Semisupervised learning
  - Statistical natural language processing
  - Structured prediction
  - Supervised learning
  - Support vector machines
  - Unsupervised learning
Topics
Portal:Machine learning/Topics
Related portals
Computer science
Robotics
Statistics
WikiProjects
- Computer science
- Computing
- Robotics
- Statistics
Things to do
Portal:Machine learning/Opentask
Wikimedia
Portal:Machine learning/Wikimedia
- What are portals?
- List of portals