MLN Notes: Inference
We consider two types of inference:
- finding the most likely state of the world consistent with some evidence
- computing arbitrary conditional probabilities.
We then discuss two approaches to making inference more tractable on large, relational problems:
- lazy inference, in which only the groundings that deviate from a "default" value need to be instantiated;
- lifted inference, in which we group indistinguishable atoms together and treat them as a single unit during inference.
3.1 Inferring the Most Probable Explanation
A basic inference task is finding the most probable state of the world y given some evidence x, where x is a set of literals;
Formula

For Markov logic, this is formally defined as:

argmax_y P(y | x) = argmax_y Σ_i w_i n_i(x, y)

where w_i is the weight of clause i and n_i(x, y) is the number of true groundings of clause i in the world (x, y);
The problem reduces to finding the truth assignment that maximizes the sum of weights of satisfied clauses;
Method
MaxWalkSAT:
Repeatedly pick an unsatisfied clause at random and flip the truth value of one of the atoms in it.
With a certain probability , the atom is chosen randomly;
Otherwise, the atom is chosen to maximize the sum of satisfied clause weights when flipped;
DeltaCost(v) computes the change in the sum of weights of unsatisfied clauses that results from flipping variable v in the current solution;
Uniform(0,1) returns a uniform deviate from the interval [0,1].
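The loop above can be sketched as follows. This is a simplified sketch, not the paper's pseudocode: the clause representation (lists of `(variable, sign)` pairs, where `sign` is the truth value that satisfies the literal) and all function names are assumptions made for illustration.

```python
import random

def cost(clauses, weights, assign):
    """Sum of weights of clauses NOT satisfied by the assignment."""
    return sum(w for clause, w in zip(clauses, weights)
               if not any(assign[v] == s for v, s in clause))

def delta_cost(clauses, weights, assign, var):
    """Change in unsatisfied-clause weight from flipping var (DeltaCost)."""
    before = cost(clauses, weights, assign)
    assign[var] = not assign[var]
    after = cost(clauses, weights, assign)
    assign[var] = not assign[var]          # undo the trial flip
    return after - before

def maxwalksat(clauses, weights, variables, max_flips=10000, p=0.5, seed=0):
    """Simplified MaxWalkSAT: minimize the weight of unsatisfied clauses."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in variables}   # random start
    best, best_cost = dict(assign), cost(clauses, weights, assign)
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any(assign[v] == s for v, s in c)]
        if not unsat:
            break                                  # solution found
        clause = rng.choice(unsat)                 # random unsatisfied clause
        if rng.random() < p:                       # random-walk step
            var = rng.choice(clause)[0]
        else:                                      # greedy step
            var = min((v for v, _ in clause),
                      key=lambda v: delta_cost(clauses, weights, assign, v))
        assign[var] = not assign[var]
        c = cost(clauses, weights, assign)
        if c < best_cost:
            best, best_cost = dict(assign), c
    return best, best_cost
```

On a toy instance such as the weighted clauses (a ∨ b, weight 1) and (¬a, weight 2), the sampler quickly reaches the zero-cost assignment a = False, b = True.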
Example
Step 1: convert the formulas to CNF

0.646696  ¬Smokes(x) ∨ Cancer(x)
1.519900  (¬Friends(x,y) ∨ ¬Smokes(x) ∨ Smokes(y)) ∧ (¬Friends(x,y) ∨ ¬Smokes(y) ∨ Smokes(x))

Clauses:
- ¬Smokes(x) ∨ Cancer(x)
- ¬Friends(x,y) ∨ ¬Smokes(x) ∨ Smokes(y)
- ¬Friends(x,y) ∨ ¬Smokes(y) ∨ Smokes(x)

Atoms: Smokes(x), Cancer(x), Friends(x,y)
Constants: {A, B, C}
Step 2: propositionalize the domain

Evidence atoms keep the truth values given by the evidence;
the truth values of all other atoms are assigned randomly;
|C|: the number of constants; a_i: the arity of predicate i; the number of ground atoms is Σ_i |C|^{a_i};
world size = 2^(Σ_i |C|^{a_i}), which grows exponentially!
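For the example above, the exponential blow-up is easy to compute directly (the dictionary of arities below just restates the example's predicates):

```python
# Each predicate of arity a contributes |C|**a ground atoms
# over the constants {A, B, C}.
arities = {"Smokes": 1, "Cancer": 1, "Friends": 2}
n_constants = 3  # |C| for {A, B, C}

n_ground_atoms = sum(n_constants ** a for a in arities.values())
n_worlds = 2 ** n_ground_atoms

print(n_ground_atoms)  # 15 ground atoms (3 + 3 + 9)
print(n_worlds)        # 32768 possible worlds
```

Even this tiny three-constant domain already has 2^15 possible worlds; adding a fourth constant pushes the count to 2^24.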
Step 3: run MaxWalkSAT
3.2 Computing Conditional Probabilities
MLNs can answer arbitrary queries of the form "What is the probability that formula F1 holds given that formula F2 does?". If F1 and F2 are two formulas in first-order logic, C is a finite set of constants including any constants that appear in F1 or F2, and L is an MLN, then

P(F1 | F2, L, C) = P(F1 | F2, M_{L,C})
                 = P(F1 ∧ F2 | M_{L,C}) / P(F2 | M_{L,C})
                 = Σ_{x ∈ X_{F1} ∩ X_{F2}} P(X = x | M_{L,C}) / Σ_{x ∈ X_{F2}} P(X = x | M_{L,C})

where X_{F_i} is the set of worlds where F_i holds, and M_{L,C} is the Markov network defined by L and C;
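This definition can be checked by brute force on a tiny domain. The sketch below uses a single formula Smokes(x) ⇒ Cancer(x) with an illustrative weight of 1.5 over one constant A (a deliberate simplification of the two-formula example above), and computes the conditional probability by summing exp(w · n) over worlds:

```python
import itertools
import math

# Tiny MLN over one constant A: formula Smokes(A) => Cancer(A), weight w.
# P(world) is proportional to exp(w * n(world)), where n counts true groundings.
w = 1.5
worlds = list(itertools.product([False, True], repeat=2))  # (Smokes, Cancer)

def n_true(smokes, cancer):
    # The implication is satisfied unless Smokes is true and Cancer is false.
    return 1 if (not smokes) or cancer else 0

weight = {wd: math.exp(w * n_true(*wd)) for wd in worlds}

# P(Cancer(A) | Smokes(A)): restrict both sums to worlds where Smokes(A) holds.
num = sum(v for (s, c), v in weight.items() if s and c)
den = sum(v for (s, c), v in weight.items() if s)
p = num / den
print(round(p, 3))  # e^1.5 / (1 + e^1.5) ≈ 0.818
```

The closed form e^w / (1 + e^w) falls out because only the world (Smokes, ¬Cancer) loses the formula's weight; enumerating worlds like this is of course only feasible for toy domains.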
Problem
MLN inference subsumes probabilistic inference, which is #P-complete, and logical inference, which is NP-complete;
Method: preprocessing the data
We focus on the case where F2 (the evidence) is a conjunction of ground literals, as this is the most frequent type of query in practice.
In this scenario, further efficiency can be gained by applying a generalization of knowledge-based model construction.
The basic idea is to only construct the minimal subset of the ground network required to answer the query.
This network is constructed by checking whether the atoms that the query formula directly depends on are in the evidence. If they all are, the construction is complete. Those that are not are added to the network, and we in turn check the atoms they depend on. This process is repeated until all relevant atoms have been retrieved;
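The retrieval described above is essentially a breadth-first search that stops at evidence atoms. A minimal sketch, assuming a hypothetical `neighbors` callback that returns the atoms co-occurring with a given atom in some ground clause:

```python
from collections import deque

def construct_network(query_atoms, evidence, neighbors):
    """Knowledge-based model construction sketch (simplified).

    query_atoms: atoms the query formula directly depends on.
    evidence:    mapping of evidence atoms to their truth values.
    neighbors:   hypothetical callback, atom -> atoms sharing a ground
                 clause with it (its potential Markov blanket members).
    """
    network = set()
    frontier = deque(query_atoms)
    seen = set(query_atoms)
    while frontier:
        atom = frontier.popleft()
        if atom in evidence:
            continue                    # evidence atoms need no expansion
        network.add(atom)               # unknown atom: part of the network
        for nb in neighbors(atom):
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return network
```

On a chain a – b – c where c is evidence, querying a retrieves only {a, b}: the search never expands past the evidence atom c, which is exactly why the constructed network can be much smaller than the full ground network.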
Markov blanket = parents + children + children's parents; the retrieval proceeds as a BFS

Example
Once the network has been constructed, we can apply any standard inference technique for Markov networks, such as Gibbs sampling;
Problem
One problem with using Gibbs sampling for inference in MLNs is that it breaks down in the presence of deterministic or near-deterministic dependencies. Deterministic dependencies break up the space of possible worlds into regions that are not reachable from each other, violating a basic requirement of MCMC. Near-deterministic dependencies greatly slow down inference by creating regions of low probability that are very difficult to traverse.
Method
MC-SAT is a slice-sampling MCMC algorithm that uses a combination of satisfiability testing and simulated annealing to sample from the slice. The advantage of using a satisfiability solver (WalkSAT) is that it efficiently finds isolated modes in the distribution, and as a result the Markov chain mixes very rapidly.
Slice sampling is an instance of a widely used approach in MCMC inference that introduces auxiliary variables u to capture the dependencies between the observed variables x. For example, to sample from P(X = x) = (1/Z) Π_k φ_k(x_{(k)}), we can define P(X = x, U = u) = (1/Z) Π_k I_{[0, φ_k(x_{(k)})]}(u_k), where φ_k is the k-th potential function, x_{(k)} is the state of the variables φ_k depends on, u_k is the k-th auxiliary variable, and I_{[a,b]}(u_k) is 1 if u_k ∈ [a, b] and 0 otherwise.
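The auxiliary-variable idea is easiest to see in one dimension, with a single potential. The sketch below is a generic 1-D slice sampler with stepping out (not MC-SAT itself, which replaces the inner sampling step with WalkSAT over the clauses whose auxiliary variables force them to hold); the target density and all parameter values are illustrative:

```python
import math
import random

def slice_sample(f, x0, n_samples, width=1.0, seed=0):
    """Basic 1-D slice sampler for an unnormalized density f.

    Each step draws the auxiliary variable u ~ Uniform(0, f(x)), then
    samples the next x uniformly from the slice {x : f(x) >= u}.
    """
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        u = rng.uniform(0.0, f(x))        # auxiliary variable: slice height
        lo, hi = x - width, x + width     # step out to bracket the slice
        while f(lo) >= u:
            lo -= width
        while f(hi) >= u:
            hi += width
        while True:                       # sample within, shrink on rejection
            xn = rng.uniform(lo, hi)
            if f(xn) >= u:
                x = xn
                break
            if xn < x:
                lo = xn
            else:
                hi = xn
        samples.append(x)
    return samples

# Sample from an unnormalized standard normal density.
draws = slice_sample(lambda x: math.exp(-0.5 * x * x), x0=0.0, n_samples=5000)
```

The draws have mean near 0 and variance near 1, as expected for the standard normal; the key property, shared with MC-SAT, is that each step conditions on u and then moves freely anywhere inside the slice.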