PGM-Introduction
Probabilistic Graphical Models
Specifying a joint distribution over many variables directly is intractable, especially when some variables have many states or are continuous. A probabilistic graphical model (PGM) addresses this by decomposing a complex distribution into a product of smaller factors attached to a graph structure. For example, the following figure shows one possible graph for a medical-diagnosis problem.
There are two ways to interpret the graph. One views it as a representation of a set of conditional independencies; the other views it as a skeleton that breaks the distribution into smaller factors, each over a smaller space of possibilities, as shown in the figure above. It turns out that the two perspectives are equivalent. By factoring the joint distribution, far fewer parameters are needed to specify it. For example, suppose each of the variables F, H, M, C takes two states (yes/no) and S takes four states (spring, summer, fall, winter). The full joint distribution then needs 2×2×2×2×4−1=63 non-redundant parameters (the entries must sum to 1, so once 63 entries are determined, the remaining one is fixed), whereas after factorization only 3+4+4+4+2=17 parameters are needed for P(S), P(F|S), P(H|S), P(C|H,F), and P(M|F) respectively.
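To make the factorization and the parameter counting concrete, here is a minimal Python sketch. Only the structure P(S)P(F|S)P(H|S)P(C|H,F)P(M|F) comes from the figure; the numbers in the conditional probability tables are made-up placeholders for illustration.

```python
from itertools import product

# Hypothetical CPDs for the medical-diagnosis example; the numbers are
# invented, only the factorization structure matters here.
P_S = {"spring": 0.25, "summer": 0.25, "fall": 0.25, "winter": 0.25}
P_F_given_S = {"spring": 0.1, "summer": 0.05, "fall": 0.2, "winter": 0.4}  # P(F=yes | S)
P_H_given_S = {"spring": 0.5, "summer": 0.2, "fall": 0.3, "winter": 0.05}  # P(H=yes | S)
P_C_given_FH = {(True, True): 0.95, (True, False): 0.8,
                (False, True): 0.85, (False, False): 0.05}                 # P(C=yes | F, H)
P_M_given_F = {True: 0.7, False: 0.1}                                      # P(M=yes | F)

def bern(p, value):
    """Probability of a binary variable taking `value` when P(yes) = p."""
    return p if value else 1.0 - p

def joint(s, f, h, c, m):
    """P(S, F, H, C, M) via the factorization implied by the graph."""
    return (P_S[s]
            * bern(P_F_given_S[s], f)
            * bern(P_H_given_S[s], h)
            * bern(P_C_given_FH[(f, h)], c)
            * bern(P_M_given_F[f], m))

# Parameter counting: full joint table vs. factored form.
full_table = 4 * 2 * 2 * 2 * 2 - 1                  # 63 free entries
factored = (4 - 1) + 4 + 4 + 4 + 2                  # 3 + 4 + 4 + 4 + 2 = 17
print(full_table, factored)                         # -> 63 17

# Sanity check: the factored joint sums to 1 over all assignments.
total = sum(joint(s, f, h, c, m)
            for s in P_S
            for f, h, c, m in product([True, False], repeat=4))
print(round(total, 10))                             # -> 1.0
```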
Three components are critical in constructing an intelligent system: representation, inference, and learning, and PGMs address all three. They declare a graph-based representation that encodes our knowledge of the world; they use that representation to answer queries such as P(F|S,M) (inference); and they can be learned by combining expert knowledge (such as the main dependencies) with accumulated data.
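As a rough illustration of the inference component, the sketch below answers a query such as P(F | S, M) by brute-force enumeration over the factored joint from the previous sketch, summing out the unobserved variables H and C. This is only a didactic baseline, not an efficient algorithm; the real inference methods are the subject of the chapters listed below.

```python
def query_F_given_S_M(s, m):
    """P(F | S=s, M=m), computed by summing the factored joint over H and C."""
    weights = {}
    for f in (True, False):
        weights[f] = sum(joint(s, f, h, c, m)
                         for h in (True, False)
                         for c in (True, False))
    z = weights[True] + weights[False]      # normalizing constant P(S=s, M=m)
    return {f: w / z for f, w in weights.items()}

print(query_F_given_S_M("spring", True))    # e.g. {True: ..., False: ...}
```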
Overview and Roadmap
Chapter 3 | Bayesian Network Representation |
Chapter 4 | Markov networks, their unification with Bayesian networks, and conditional random fields |
Chapter 5 | A deeper look at how the parameters of a PGM are represented |
Chapter 6 | PGMs that evolve over time |
Chapter 7 | Models with continuous variables |
Chapter 8 | Exponential Family |
Chapter 9 | Exact inference (computationally intractable in the worst case) |
Chapter 10 | An alternative view of exact inference |
Chapter 11 | Approximate inference (lower cost than exact inference) |
Chapter 12 | A very different approximate inference approach: particle-based methods |
Chapter 13 | MAP inference |
Chapter 14 | Inference in continuous and hybrid (continuous/discrete) networks |
Chapter 15 | Special-purpose inference methods for networks that model dynamical systems |
Chapter 16 | Fundamental concepts underlying the general task of learning models from data |
Chapter 17 | Learning the parameters of a Bayesian network with a given structure, from fully observed data |
Chapter 18 | The harder problem of learning both Bayesian network structure and the parameters, still from fully observed data |
Chapter 19 | Bayesian network learning task in a setting where we have access only to partial observations of the relevant variables |
Chapter 20 | Learning Markov networks from data, which is significantly harder than the corresponding problem for Bayesian networks |
Chapter 21 | Causal models |
Chapter 22 | Utility functions |
Chapter 23 | Influence diagrams, which extend Bayesian networks by introducing actions and utilities |
A reader's guide