1. VANSAT and NegVanSAT

Abd El-Maksoud, M.A., Abdalla, A. A novel SAT solver for the Van der Waerden numbers. J Egypt Math Soc 27, 22 (2019). https://doi.org/10.1186/s42787-019-0021-1

 

The Van der Waerden numbers SAT (VANSAT) solver [12] is a modification of MINISAT in which a variable's activity is measured by the number of its occurrences in the not-yet-satisfied clauses.

Hence, variable activities change dynamically (they increase and decrease) as clauses are added and removed. In other words, the strategy of VANSAT is (a small sketch follows the list):

I. Increase the activity of all variables that appear in each newly added clause (learnt or problem clause).


II. Decrease the activity of all variables that appear in each deleted clause (clause deletion occurs in many situations).
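A minimal sketch of this occurrence-counting activity scheme (my illustration in Python, not the published VANSAT code; class and function names are made up):

```python
from collections import defaultdict

class OccurrenceActivity:
    """Activity of a variable = number of clauses in the current database
    that mention it: adding a clause bumps it, deleting a clause decays it."""
    def __init__(self):
        self.activity = defaultdict(int)   # var -> occurrence count
        self.clauses = []                  # clauses as lists of signed ints

    def add_clause(self, clause):
        # new problem clause or learnt clause: increase activity of its variables
        self.clauses.append(clause)
        for lit in clause:
            self.activity[abs(lit)] += 1

    def delete_clause(self, clause):
        # clause-database reduction: decrease activity of its variables
        self.clauses.remove(clause)
        for lit in clause:
            self.activity[abs(lit)] -= 1

    def pick_branch_var(self, assigned):
        # branch on the unassigned variable with the highest activity
        free = [v for v in self.activity if v not in assigned]
        return max(free, key=lambda v: self.activity[v]) if free else None

db = OccurrenceActivity()
db.add_clause([1, -2, 3])
db.add_clause([-1, -3])
print(db.pick_branch_var(assigned=set()))   # -> 1 (vars 1 and 3 both occur twice)
```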

Experimental results showed that VANSAT performs better at computing Van der Waerden numbers. For example, it outperformed MINISAT in computing w(3;2,3,3), w(3;2,3,5), and w(4;2,2,3,3) in terms of the number of conflicts, decisions, restarts, propagations, and conflict literals [12].

   
 

Proposed NegVanSAT

From the above SAT encoding of Van der Waerden numbers (r > 2), it has been noted that the variables occur as negative literals in all clauses except for one kind, the first type of clauses.

This observation motivated the development of NegVanSAT.

 

The proposed solver is a modification of MiniSat 1.14 in which the constructor of literals has been adjusted so that the default literal of a variable is the negative one.

Hence, contrary to MINISAT, it tries to solve the given formula using the negative literal of a variable before trying its positive one, and this applies to the variable with the highest activity.

Reading note: when the variable with the highest activity is chosen as the decision variable, it is first assigned the negative (false) value.
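A minimal sketch of this branching rule (my illustration, not MiniSat 1.14's actual literal constructor; the function name is made up):

```python
def neg_first_decision(activity, assignment):
    """Pick the unassigned variable with the highest activity and, as NegVanSAT
    does, return its negative literal so the solver tries var = False first."""
    free = [v for v in activity if v not in assignment]
    if not free:
        return None                       # every variable is already assigned
    v = max(free, key=lambda u: activity[u])
    return -v                             # negative literal of the chosen variable

print(neg_first_decision({1: 2.0, 2: 5.0, 3: 1.0}, assignment={}))   # -> -2
```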

   
 
Reading note: this is a good way to compress the CNF files of SAT problem instances.

 

2. Ramsey graphs

  SCSat is a SAT solver aimed at quickly finding a model for hard satisfiable instances using soft constraints. Soft constraints themselves are not necessarily maximally satisfied and may be relaxed if they are too strong to obtain a model. Appropriately given soft constraints can reduce the search space drastically without losing many models, thus helping to find a model faster. In this way, we have succeeded in obtaining several rare Ramsey graphs which contribute to raising the known best lower bound for the Ramsey number R(4,8) from 56 to 58.
 
  • Ramsey extremal graphs: these are the objects the paper searches for.
 

The paper's new technique (relative to existing ones): appropriately given soft constraints can drastically reduce the search space without losing many models.

Can this be understood as solving the problem given a known partial assignment?

Software URL: not yet found for download.
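My reading of the soft-constraint idea as a sketch (this is not SCSat's code; the brute-force satisfiability check and the relax-last-added policy are assumptions made only for illustration):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    # toy model finder for tiny instances: returns {var: bool} or None
    for bits in product([False, True], repeat=n_vars):
        model = {v + 1: bits[v] for v in range(n_vars)}
        if all(any(model[abs(l)] == (l > 0) for l in c) for c in clauses):
            return model
    return None

def solve_with_soft(hard, soft, n_vars):
    # soft clauses prune the search; if they block every model, relax them
    while True:
        model = brute_force_sat(hard + soft, n_vars)
        if model is not None or not soft:
            return model
        soft.pop()                        # drop the most recently added soft clause

hard = [[1, 2], [-1, 2]]
soft = [[-2]]                             # too strong: contradicts the hard part
print(solve_with_soft(hard, soft, n_vars=2))   # soft clause relaxed, model found
```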

3. Off the Trail: Re-examining the CDCL Algorithm

Goultiaeva A., Bacchus F. (2012) Off the Trail: Re-examining the CDCL Algorithm. In: Cimatti A., Sebastiani R. (eds) Theory and Applications of Satisfiability Testing – SAT 2012. SAT 2012. Lecture Notes in Computer Science, vol 7317. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31612-8_4

 

Key contributions:

(1) Unlike DPLL, CDCL is viewed as a non-systematic algorithm.

CDCL solvers do not systematically examine all possible truth assignments as does DPLL.

 

(2) The CDCL framework is viewed from the perspective of stochastic (local search) solvers: CDCL can be reformulated as a local search algorithm that, through clause learning, is able to prove UNSAT.

Local search solvers are also non-systematic and in this paper we show that CDCL can be reformulated as a local search algorithm: a local search algorithm that through clause learning is able to prove UNSAT. We show that the standard formulation of CDCL as a backtracking search algorithm and our new formulation of CDCL as a local search algorithm are equivalent, up to tie breaking.

 

(3) The trail is no longer central to the algorithm; the ordering of the literals on it is just a mechanism for efficiently controlling clause learning.

In the new formulation of CDCL as local search, the trail no longer plays a central role in the algorithm. Instead, the ordering of the literals on the trail is only a mechanism for efficiently controlling clause learning. 


 

This changes the paradigm and opens up avenues for further research and algorithm design.

 

For example, in QBF the quantifier places restrictions on the ordering of variables on the trail. By making the trail less important, an extension of our local search algorithm to QBF may provide a way of reducing the impact of these variable ordering restrictions.

 

4. The trail position of a variable whose assignment gets flipped changes dynamically, and this reflects some information about the search.

A flip from ¬l to l is performed in one of three cases: (1) l is implied by a new conflict clause; (2) l is implied by a variable that was moved up the trail because its heuristic value was increased; or (3) l is implied by another flipped variable.

5. Thought: in a moderate restart, what if the same variable needs to be flipped at 4fnc?

   

Keywords

Local Search, Local Search Algorithm, Decision Level, Complete Assignment, Implication Graph

 

 

Abstract

 

Most state of the art SAT solvers for industrial problems are based on the Conflict Driven Clause Learning (CDCL) paradigm. Although this paradigm evolved from the systematic DPLL search algorithm, modern techniques of far backtracking and restarts make CDCL solvers non-systematic. CDCL solvers do not systematically examine all possible truth assignments as does DPLL.

 

Local search solvers are also non-systematic and in this paper we show that CDCL can be reformulated as a local search algorithm: a local search algorithm that through clause learning is able to prove UNSAT. We show that the standard formulation of CDCL as a backtracking search algorithm and our new formulation of CDCL as a local search algorithm are equivalent, up to tie breaking.

 

In the new formulation of CDCL as local search, the trail no longer plays a central role in the algorithm. Instead, the ordering of the literals on the trail is only a mechanism for efficiently controlling clause learning. This changes the paradigm and opens up avenues for further research and algorithm design. For example, in QBF the quantifier places restrictions on the ordering of variables on the trail. By making the trail less important, an extension of our local search algorithm to QBF may provide a way of reducing the impact of these variable ordering restrictions.

   
 

1 Introduction

 

The modern CDCL algorithm has evolved from DPLL, which is a systematic
search through variable assignments [4]. CDCL algorithms have evolved through
the years, various features and techniques have been added [10] that have demonstrated
empirical success. These features have moved CDCL away from exhaustive
search, and, for example, [9] has argued that modern CDCL algorithms are
better thought of as guided resolution rather than as exhaustive backtracking
search.

   
 

New features have been added as we have gained a better understanding of
CDCL both through theoretical developments and via empirical testing. For
example, the important technique of restarts was originally motivated by theoretical
and empirical studies of the effect of heavy-tailed run-time distributions
[7] on solver run-times.

   
 

Combinations of features, however, can sometimes interact in complex ways that can undermine the original motivation of individual features.

For example, phase saving, also called light-weight component caching, was conceived as a progress saving technique, so that backtracking would not retract already discovered solutions of disjoint subproblems [12] and then have to spend time rediscovering these solutions.

However, when we add phase saving to restarts, we reduce some of the randomization introduced by restarts, potentially limiting the ability of restarts to short-circuit heavy-tailed run-times.

Nevertheless, even when combined, restarts and phase saving both continue to provide a useful performance boost in practice and are both commonly used in CDCL solvers.

   
 

When combined with a strong activity-based heuristic, phase saving further changes the behavior of restarts.


 

In this context it is no longer obvious that restarts serve to move the solver to a different part of the search space.

Instead, it can be shown empirically that after a restart a large percentage of the trail is re-created exactly as it was prior to the restart, indicating that the solver typically returns to the same part of the search space.

In fact, there is evidence to support the conclusion that the main effect of restarts in current solvers is simply to update the trail with respect to the changed heuristic scores.

For example, [14] show that often a large part of the trail can be reused after backtracking.

With the appropriate implementation techniques reusing rather than reconstructing the trail can speed up the search by reducing the computational costs of restarts.


In this paper we examine another feature of modern SAT solvers that ties them with the historical paradigm of DPLL: the trail used to keep track of the current set of variable assignments.

We show that modern SAT solvers, in which phase saving causes an extensive recreation of the trail after backtracking, can actually be reformulated as local search algorithms.

   
 

Local search solvers work with complete truth assignments [15], and a single step usually consists of picking a variable and flipping its value.

Local search algorithms have borrowed techniques from CDCL. For example, unit propagation has been employed [6,8,2], and clause learning has also been used [1].

However, such solvers are usually limited to demonstrating satisfiability, and often cannot be used to reliably prove UNSAT.

Our reformulation of the CDCL algorithm yields a local search algorithm that is able to derive UNSAT since it can perform exactly the same steps as CDCL would. It also gives a different perspective on the role of the trail in CDCL solvers.

In particular, we show that the trail can be viewed as providing an ordering of the literals in the current truth assignment, an ordering that can be used to guide clause learning.

This view allows more flexible clause learning techniques to be developed, and different types of heuristics to be supported.

It also opens the door for potentially reformulating QBF algorithms, which suffer from strong restrictions on the ordering of the variables on the trail.

   
 

Section 2 examines the existing CDCL algorithm and describes our intuition in more detail.

Section 3 presents a local search formulation of the modern CDCL algorithm and proves that the two formulations are equivalent.

Section 4 presents some simple experiments which suggest further directions for research.

Section 5 concludes the paper.

   
 

 3.1 Connection to CDCL

   

In this section, we will focus on Algorithm 2 guided by the variable selection and
clause learning technique presented in Algorithm 3 and with no restarts. We will
refer to this as A2. We will refer to Algorithm 1 as A1.


Define a trace of an algorithm A to be a sequence of flips performed and
clauses learned by A. Note that this definition applies to both A2 and A1: recall
that for A1 a flip is an assignment where the variable’s new value is different
from its phase setting.


Theorem 1. For any heuristic h there is a heuristic h′ such that for any input
formula φ, A1 with h′ would produce the same trace as A2 with h (provided they
make the same decisions in the presence of ties).


Proof. We will say that a heuristic h is stable for (a version of) CDCL algorithm
A if during any execution of A with h we have h(v1) ≥ h(v2) for some decision
variable v2 only if h(v1) ≥ h(v2) also held just before v2 was last assigned.
Intuitively, a heuristic is stable if the ordering of decision variables is always
correct with respect to the heuristic, and is not simply historical. One way to
ensure that a heuristic is stable is to restart after every change to the heuristic.
For example, the VSIDS heuristic is stable for a version of A1 which restarts
after every conflict.

We will first prove the claim for a heuristic h that is stable for A1. Then, we
will show that for any h, we can find an equivalent h′ that is stable for A1 and
such that A1 with h′ produces the same trace as with h.


Let the initial assignment of A1 be the same as the initial phase setting of
A2, and let both algorithms use the same heuristic h. It is easy to verify that if
a partial execution of A1 has the same trace as a partial execution of A2, that
means that the phase setting of any variable in A1 matches its value in A2.
To show that A1 and A2 would produce the same trace when run on the same
formula φ, we will consider partial executions. By induction on n, we can show
that if we stop each algorithm just after it has produced a trace of length n, the
traces will be identical.


If n = 0, the claim trivially holds. Also note that if one algorithm halts after
producing trace T , so will the other, and their returned values will match. Both
algorithms will return false iff T ends with an empty clause. If A2 has no failed
implications, then A1 will restore all variables to their phase values and obtain
a solution, and vice versa. Suppose the algorithms have produced a trace T .
Let S be the Next Flip or Learned Clause of A1. Let π be the trail of A1
just before it produced S.


Because h is stable for A1, then the heuristic values of the decision variables in
π are non-increasing. That is because if h(v1) > h(v2) for two decision variables,
then the same must have held when v2 was assigned. If v1 had been unassigned
at that point, it would have been chosen as the decision variable instead. So, v1
must have been assigned before v2.


So, π is a UP-compatible ordering respecting h over the partial assignment:
any implication is placed as early as possible in π, and non-implied (decision)
literals have non-increasing heuristic value. Because unit propagation was performed
to completion (except for possibly the last decision level), and because
the heuristic value of all unassigned literals is less than that of the last decision
literal, π can always be extended to a UP-compatible ordering ψ on I.


Let C = {α, v} be the clause that caused v to be flipped to true if S is a
flip; otherwise, let it be the conflicting clause that started clause learning, with
v being the trail-deepest of its literals. In both cases, C is false at P1, so v is a
failed implication in ψ. This is the first conflict encountered by A1, so there are
no false clauses that consist entirely of literals with earlier decision levels. So, v
is the first failed implication in ψ.


If S is a flip, then v is non-conflicting, and A2 would match the flip. Otherwise,
v is a conflicting failed implication, and will cause clause learning. For
the ψ which matches π, clause learning would produce a clause identical to that
produced by A1. So, the next entry in the trace of A2 will also be S.
Let S be the Next Flip or Learned Clause of A2. Let v be the first failed
implication just before S was performed, and let ψ be the corresponding UP-compatible
ordering. Let π be the trail of A1 just before it produces its next
flip or a learned clause. We will show that whenever π differs from ψ, A1 could
have broken ties differently to make them match. Let v1 ∈ π and v2 ∈ ψ be the
first pair of literals that are different between π and ψ. Suppose v1 is implied.

 

 

 

Then, because ψ is UP-compatible, v2 must also be implied by preceding literals.
So, A1 could have propagated v2 before v1. If v1 is a decision literal, then so
is v2. Otherwise, v2 should have been unit propagated before v1 was assigned.
If h(v2) < h(v1), this would break the fact that ψ respects the heuristic. So,
h(v2) ≥ h(v1). Then the same must have been true at the time v1 was assigned.
So, v2 was at worst an equal candidate for the decision variable, and could have
been picked instead.


So, provided A1 breaks ties accordingly, it would have the trail π that is a
prefix of ψ. It can continue assigning variables until the trail includes all the
variables at the decision level at which v is f-implied. Because v is the first failed
implication in ψ, no conflicts or flip would be performed up to that point. At
this point, there will be some clause C = (α, v). If S is a flip, then v is not
conflicting, C will be unit, and a flip will be performed. If S is a learned clause,
then v is conflicting, which means that v was among the implied literals at this
level. So, clause learning will be performed. Either way, the next entry in the
trace of A1 will also be S.


So, we have shown that A2 and A1 would produce the same trace given the
same heuristic h which is stable for A1. Now we will sketch a proof that given
any variable heuristic h, we can construct a heuristic h′ which is stable for A1
and such that A1 with h′ would produce the same trace as with h.


We will define h′(v) = h(v) whenever v is unassigned. Otherwise, we will set
h′(v) = M + V − D + 0.5d, where M is some value greater than the maximum
h(v) of all non-frozen variables, V is the number of variables in the problem,
D is the decision level at which v was assigned when it became frozen, and
d is 1 if v is a decision and 0 otherwise.
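A tiny numeric sketch of this h′ construction (illustrative only; M, V, D and d are as defined in the paragraph above):

```python
def h_prime(v, h, frozen_level, is_decision, M, V):
    # unassigned variables keep their original heuristic value h(v);
    # frozen (assigned) variables get M + V - D + 0.5d, so they always rank
    # above unassigned ones and earlier decisions rank highest
    if frozen_level is None:
        return h(v)
    d = 1 if is_decision else 0
    return M + V - frozen_level + 0.5 * d

h = lambda v: {1: 3.0, 2: 1.0, 3: 2.5}[v]
print(h_prime(1, h, None, False, M=10, V=3))   # 3.0  (unassigned, unchanged)
print(h_prime(2, h, 1, True,  M=10, V=3))      # 12.5 (decision frozen at level 1)
print(h_prime(3, h, 2, False, M=10, V=3))      # 11.0 (implied, frozen at level 2)
```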


Because a heuristic is only considered for unassigned variables, then the behavior
of the algorithm is unaffected, and it will produce the same traces. Also,
unassigned values always have a smaller heuristic value than those that are assigned;
those assigned later always have a smaller heuristic value than earlier
decision literals. So, the heuristic is stable for A1.
As a corollary: because Algorithm 1 is complete, so is Algorithm 2.

   
 

 2 Examining the CDCL Algorithm

 
 

A modern CDCL algorithm is outlined in Algorithm 1. Each iteration starts
by adding literals implied by unit propagation to the trail π. If a conflict is
discovered clause learning is performed to obtain a new clause c = (α → y).
The new clause is guaranteed to be empowering, which means that it is able
to produce unit implications in situations when none of the old clauses can [13].
In this case, c generates a new implication y earlier in the trail, and the solver
backtracks to the point where the new implication would have been made if the
clause had previously been known. Backtracking removes part of the trail in
order to add the new implication in the right place. On the next iteration unit
propagation will continue adding implications, starting with the newly implied
literal y. If all variables are assigned without a conflict, the formula is satisfied.
Otherwise, the algorithm picks a decision variable to add to the trail. It picks
an unassigned variable with the largest heuristic value, and restores its value
to the value it had when it was last assigned. The technique of restoring the
variable’s value is called phase saving. We will say that the phase of a variable
v, phase[v], is the most recent value it had; if v has never been assigned, phase[v]
will be an arbitrary value set at the beginning of the algorithm; if v is assigned,
phase[v] will be its current value.

 

Lastly, sometimes the solver restarts: it removes everything from the trail
except for literals unit propagated at the top level. This might be done according
to a set schedule, or some heuristic [3].

 


As already mentioned, after backtracking or restarting, the solver often recreates much of the trail.

For example, we found that the overwhelming majority of assignments Minisat makes simply restore a variable's previous value.

We ran Minisat on the 150 problems from the SAT11 dataset of the SAT competition, with a timeout of 1000 seconds.

 

Fig. 1. Assignments and flips on both solved and unsolved (after a 1000s timeout)
instances of SAT11 dataset. Sorted by the number of assignments.

 

Figure 1 shows the distribution of assignments Minisat made, and the number of "flips" it made, where flips are when a variable is assigned a different value than it had before.

On average, the solver performed 165.08 flips per conflict, and 3530.4 assignments per conflict. It has already been noted that flips can be correlated with the progress that the solver is making [3].
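A tiny sketch of how such a flip count can be instrumented (illustrative, not Minisat's code): an assignment counts as a flip only when it disagrees with the variable's saved phase.

```python
class PhaseStats:
    def __init__(self):
        self.phase = {}          # var -> last value (phase saving)
        self.assignments = 0
        self.flips = 0

    def assign(self, var, value):
        self.assignments += 1
        if var in self.phase and self.phase[var] != value:
            self.flips += 1      # the new value differs from the saved phase
        self.phase[var] = value

stats = PhaseStats()
stats.assign(1, True); stats.assign(1, True); stats.assign(1, False)
print(stats.assignments, stats.flips)    # 3 1
```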

 

Whenever the solver with phase saving backtracks, it removes variable assignments, but unless something forces the variable to get a different value, it would restore the old value when it gets to it.

 

So, we can imagine that the solver is working with a complete assignment, which is the phase settings for all the variables phase[v], and performing a flip from ¬l to l only in one of the following cases. (1) l is implied by a new conflict clause. (2) l is implied by a variable that was moved up in the trail because its heuristic value was upgraded. Or (3) l is implied by another "flipped" variable. Phase saving ensures that unforced literals, i.e., decisions, cannot be flipped.


In all of these cases l is part of some clause c that is falsified by the current
“complete” assignment (consisting of the phase set variables); c would then become
its reason clause; at the point when l is flipped, c is the earliest encountered
false clause; and l is the single unassigned variable in c (i.e., without c, l would
have been assigned later in the search).

As we will see below, we can use these conditions to determine which variable to flip in a local search algorithm.
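A rough sketch of the flip step this suggests (illustrative only, under the view above; a real solver picks the clause and the literal via the trail ordering rather than by list position):

```python
def find_false_clause(clauses, phase):
    # a clause is falsified when every literal disagrees with the phase assignment
    for c in clauses:
        if all(phase[abs(l)] != (l > 0) for l in c):
            return c
    return None

def flip_step(clauses, phase, reason):
    c = find_false_clause(clauses, phase)
    if c is None:
        return True                     # the complete assignment phase[] is a model
    l = c[0]                            # placeholder choice of the literal to flip
    phase[abs(l)] = (l > 0)             # flip so that c becomes satisfied
    reason[abs(l)] = c                  # c becomes the reason clause of the flip
    return False
```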


Note that we will not consider the randomization of decision variables in this paper, although this could be accommodated by making random flips in the local search algorithm.

The benefits of randomizing the decision variables are still poorly understood.

In our experiments we found that turning off randomization does not noticeably harm performance of Minisat. Among ten runs with different seeds, Minisat solved between 51 and 59 instances, on average 55. With randomization turned off, it solved 56.

   
 

 3 Local Search

   

Algorithm 2 presents a generic local search algorithm. A local search solver works
with a complete assignment I. At each stage in the search, it picks a variable
and flips its value. There are different techniques for choosing which variable to
flip, from simple heuristics such as minimizing the number of falsified clauses
[15], to complicated multi-stage filtering procedures [16].


   

Typically, the algorithm tries to flip a variable that will reduce the distance between the current complete assignment and a satisfying assignment.

However,
estimating the distance to a solution is difficult and unreliable, and local search
solvers often get stuck in local minima. It was noted that it is possible to escape
the local minimum by generating new clauses that would steer the search.

Also,
if new non-duplicated clauses are being generated at every local minimum, the
resulting algorithm can be shown to be complete. An approach exploiting this
fact was proposed, using a single resolution step to generate one new clause at
each such point [5]. The approach was then extended to utilize an implication
graph, and incorporate more powerful clause learning into a local search solver,
resulting in the CDLS algorithm [1]. However, as we will see below, CDLS cannot
ensure completeness because the clause learning scheme it employs can generate
redundant clauses.
The main difficulty for such an approach is the generation of an implication
graph from the complete assignment I. The first step consists of identifying
once-satisfied clauses. A clause c is considered to be once-satisfied by a literal
x and a complete assignment I if there is exactly one literal x ∈ c that is true
in I (c ∩ I = {x}).
Theoretically, any clause cf with ¬x ∈ cf that is false under I can be resolved
with any clause co that is once-satisfied by literal x. This resolution would produce
a non-tautological clause cR which is false under I and which can potentially
be further resolved with other once-satisfied clauses. However, in order to be useful,
the algorithm performing such resolutions needs to ensure that it does not
follow a cycle or produce a subsumed clause.
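A small sketch of the once-satisfied test and of one such resolution step (illustrative; the clause representation and helper names are my own):

```python
def once_satisfied(clause, I):
    # returns the single satisfying literal x if |clause ∩ I| = 1, else None
    sat = [l for l in clause if I[abs(l)] == (l > 0)]
    return sat[0] if len(sat) == 1 else None

def resolve_on(c_false, c_once, x):
    # c_false is false under I and contains -x; c_once is once-satisfied by x;
    # the resolvent drops x / -x and is again false under I
    return sorted(set(l for l in c_false if l != -x) |
                  set(l for l in c_once if l != x))

I = {1: True, 2: False, 3: False}
c_once = [1, 2, 3]                      # once-satisfied by literal 1
c_false = [-1, 2]                       # false under I
print(once_satisfied(c_once, I))        # 1
print(resolve_on(c_false, c_once, 1))   # [2, 3], still false under I
```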

   
   
   
 

Note that the resultant clause c is guaranteed to be false under I, and it would
produce a failed implication at an earlier level. If an empty clause is derived, the
formula is proven unsatisfiable. Otherwise, the new failed implication is now the
earliest, and is non-conflicting, so the new implicate needs to be flipped.


One detail that is left out of the above algorithm is how to pick the ordering
ψ. Note that if we are given some base ordering ψb, we can construct a UP-compatible
ordering ψ in which decision literals respect ψb. In this case ψb plays
the role of the variable selection heuristic. Of course, the heuristic must be chosen
carefully so as not to lead the algorithm in cycles. An easy sufficient condition
is when ψb is only updated after clause learning, as VSIDS is.

   
 

 3.2 Other Failed Implications

 

In Algorithm 3 we always choose the first failed implication. However, it is not
a necessary condition to generate empowering clauses.


Theorem 2. Suppose that ψ is UP-compatible ordering on I. Let c be a clause
generated by 1UIP on some failed implication x. Suppose c = (α, y) where y is
the new implicant. If no failed implication that is earlier than x can be derived
by unit propagation from α, then c is empowering.


Proof. Suppose that c is not empowering. Then y can be derived by unit propagation
from α. Because y was not implied by α at that level, then the unit
propagation chain contains at least one literal that contradicts the current assignment.
Let p be such a literal which occurs first during unit propagation.
Then p is a failed implication that can be derived from α.

Note that this is sufficient, but not a necessary condition. It is possible that
an earlier failed implication x can be derived from α, but α∩x still do not allow
the derivation of y.

   
 

3.3 Potential Extension to QBF Solving

   

The trail has always played a central role in the formalization of the SAT algorithm.
It added semantic meaning to the chronological sequence of assignments
by linking it to the way clause learning is performed.


In SAT, this restriction has no major consequences, since the variables can
be assigned in any order. However, in an extension of SAT, Quantified Boolean
Formula (QBF) solving, this restriction becomes important.


In QBF variables are either existentially or universally quantified, and the
inner variables can depend on the preceding ones. Clause learning utilizes a
special universal reduction step, which allows a universal to be dropped from
a clause if there are no existential variables that depend on it. In order to work,
clause learning requires the implication graph to be of a particular form, with
deeper variables having larger decision levels. Because of the tight link between
the trail and clause learning, the same restriction is applied to the order in which
the algorithm was permitted to consider variables. Only outermost variables were
allowed to be picked as decision literals.


This restriction is a big impediment to performance in QBF. One illustration
of this fact is that there is still a big discrepancy between search-based and
expansion based solvers in QBF. The former are constrained to consider variables
according to the quantifier prefix, while the latter are constrained to consider the
variables in reverse of the quantifier prefix. The fact that the two approaches are
incomparable, and that there are sets of benchmarks challenging for one but not
the other, suggests that the ordering restriction plays a big role in QBF. Another
indication of this is the success of dependency schemes, which are attempts to
partially relax this restriction [11].


The reformulation presented here is a step towards relaxing this restriction.
We show that the chronological sequence of assignments does not have any semantic
meaning, and thus should not impose constraints on the solver. Extending
the present approach to QBF should allow one to get an algorithm with the
freedom to choose the order in which the search is performed.


To extend to QBF, the definition of UP-compatible ordering would need to
be augmented to allow for universal reduction. One way to do this would be to
constrain the ordering by quantifier level, to ensure that universal reduction is
possible and any false clause would have an implicate. However, this ordering
is no longer linked to the chronological sequence of variables considered by the
solver, and will be well-defined after any variable flip. At each step, the solver
will be able to choose which of the failed implications to consider, according to
some heuristic not necessarily linked to its UP-compatible ordering.


So, decoupling the chronological variable assignments from clause learning
would allow one to construct a solver that would be free to consider variables in
any order, and would still have well-defined clause learning procedure when it
encounters a conflict.

   
 

4 Experiments

 

We have investigated whether subsequent failed implications, mentioned in Section
3.2, can be useful in practice. To evaluate this, we have equipped Minisat
with the ability to continue the search ignoring conflict clauses. Note: here we
use a version of Minisat with phase saving turned on.


This is equivalent to building a UP-compatible assignment with no non-conflicting
failed implications.


For each decision level, it would only store the first conflict clause encountered,
because learning multiple clauses from the same decision level is likely to
produce redundant clauses. After all the variables are assigned, it would backtrack,
performing clause learning on each stored conflict, and adding the new
clauses to the database. We will say that one iteration of this cycle is a full run.
Obviously, not yet having any method of guiding the selection, the algorithm
could end up producing many unhelpful clauses. To offset this problem, and to
evaluate whether the other clauses are sometimes helpful, we constructed an
algorithm that performs a full run only some of the time.


We have added a parameter n so that at every restart, the next n runs of the
solver would be full runs. We experimented with n ∈ {1, 5, 10, 100, 1000} and
with a version which only performs full runs.


We ran the modified version on the 150 benchmarks from SAT11 set of the
Sat Competition, with timeout of 1000 seconds. The tests were run on a 2.8GHz
machine with 12GB of RAM.

 


Table 1 summarizes the results. As expected with an untuned method, some
families show improvement, while for others the performance is reduced. However,
we see that the addition of the new clauses can improve the results for
both satisfiable instances (as in benchmarks sets leberre and kullmann), and
unsatisfiable ones (as in fuhs and rintanen).

 

Fig. 2. The number of conflicts needed to solve the problem. Below the line are instances
on which C 100 encountered fewer conflicts than Minisat.


Figure 2 compares the number of conflicts learned while solving the problems
in Minisat and C 100. For instances which only one solver solved, the other
solver’s value is set to the number of conflicts it learned within the 1000s timeout.

 

We note that in the conflict count for C 100 we include all the conflicts it
ever learned, so a single full run might add many conflicts at once. These are
unfiltered, so we expect that good heuristics and pruning methods can greatly
reduce this number. However, even with all the extra conflicts C 100 encounters,
there is a fair number of cases where it needs fewer conflicts to solve the problem
than Minisat.

   
 

5 Conclusion

 

We have presented a reformulation of the CDCL algorithm as local search. The
trail is shown to be simply an efficient way to control clause learning. By decoupling
clause learning from the chronological sequence in which variables are
considered, we introduce new flexibility to be studied.


One potential application of this flexibility would be to produce QBF solvers
whose search space is not so heavily constrained by the variable ordering. Another
is to find good heuristics to choose which conflict clauses are considered
during search.


Current CDCL solvers effectively maintain a UP-compatible ordering on the trail by removing the order up to the place affected by a flip, and recomputing it again. An interesting question worth investigating is whether it is possible to develop algorithms to update the order more efficiently.


   
 

References

1. Audemard, G., Lagniez, J.-M., Mazure, B., Sais, L.: Learning in local search. In:

ICTAI, pp. 417–424 (2009)
2. Belov, A., Stachniak, Z.: Improved Local Search for Circuit Satisfiability. In: Strichman,
O., Szeider, S. (eds.) SAT 2010. LNCS, vol. 6175, pp. 293–299. Springer,
Heidelberg (2010)
3. Biere, A.: Adaptive Restart Strategies for Conflict Driven SAT Solvers. In: Kleine
Büning, H., Zhao, X. (eds.) SAT 2008. LNCS, vol. 4996, pp. 28–33. Springer,
Heidelberg (2008)
4. Davis, M., Logemann, G., Loveland, D.: A machine program for theorem-proving.
Commun. ACM 5, 394–397 (1962)
5. Fang, H.: Complete local search for propositional satisfiability. In: Proceedings of
AAAI, pp. 161–166 (2004)
6. Gableske, O., Heule, M.J.H.: EagleUP: Solving Random 3-SAT Using SLS with
Unit Propagation. In: Sakallah, K.A., Simon, L. (eds.) SAT 2011. LNCS, vol. 6695,
pp. 367–368. Springer, Heidelberg (2011)
7. Gomes, C.P., Selman, B., Crato, N.: Heavy-Tailed Distributions in Combinatorial
Search. In: Smolka, G. (ed.) CP 1997. LNCS, vol. 1330, pp. 121–135. Springer,
Heidelberg (1997)
8. Hirsch, E.A., Kojevnikov, A.: Unitwalk: A new sat solver that uses local search
guided by unit clause elimination. Ann. Math. Artif. Intell. 43(1), 91–111 (2005)
9. Huang, J.: A Case for Simple SAT Solvers. In: Bessière, C. (ed.) CP 2007. LNCS,
vol. 4741, pp. 839–846. Springer, Heidelberg (2007)
10. Katebi, H., Sakallah, K.A., Marques-Silva, J.P.: Empirical Study of the Anatomy
of Modern Sat Solvers. In: Sakallah, K.A., Simon, L. (eds.) SAT 2011. LNCS,
vol. 6695, pp. 343–356. Springer, Heidelberg (2011)
11. Lonsing, F., Biere, A.: Depqbf: A dependency-aware qbf solver. JSAT 7(2-3), 71–76
(2010)
12. Pipatsrisawat, K., Darwiche, A.: A lightweight component caching scheme for satisfiability
solvers. In: 10th International Conference on Theory and Applications of
Satisfiability Testing, pp. 294–299 (2007)
13. Pipatsrisawat, K., Darwiche, A.: On the Power of Clause-Learning SAT Solvers
with Restarts. In: Gent, I.P. (ed.) CP 2009. LNCS, vol. 5732, pp. 654–668. Springer,
Heidelberg (2009)
14. Ramos, A., van der Tak, P., Heule, M.J.H.: Between Restarts and Backjumps. In:
Sakallah, K.A., Simon, L. (eds.) SAT 2011. LNCS, vol. 6695, pp. 216–229. Springer,
Heidelberg (2011)
15. Selman, B., Kautz, H., Cohen, B.: Local Search Strategies for Satisfiability Testing.
In: Tamassia, R., Tollis, I.G. (eds.) GD 1994. LNCS, vol. 894, pp. 521–532. Springer,
Heidelberg (1995)
16. Tompkins, D.A.D., Balint, A., Hoos, H.H.: Captain Jack: New Variable Selection
Heuristics in Local Search for SAT. In: Sakallah, K.A., Simon, L. (eds.) SAT 2011.
LNCS, vol. 6695, pp. 302–316. Springer, Heidelberg (2011)
