

The results of the PB evaluation 2015 are presented on a separate page. The slides presented at SAT2015 are online as well.


Evaluation Environment

The evaluation is running on the following hardware: Intel Xeon CPU E5-2680 v3, 2.50 GHz, multi-threading disabled, 128 GB SSD local disk. The operating system is Bullx Linux 6.3 (kernel version 2.6.32).

We thank the Center for Information Services and High Performance Computing (ZIH) at TU Dresden for generous allocations of computer time.

Registered Solvers

Important Dates

2015-06-01 Registration opens
2015-06-30 Benchmark submission deadline
2015-06-30 Registration closes
2015-07-01 Complete benchmark online
2015-07-31 Final solver submission
2015-08-01 Benchmark random selection seed calculated

About the Pseudo-Boolean Evaluation 2015

The pseudo-Boolean evaluation 2015 analyzes the current state of the art in pseudo-Boolean solving technology in order to determine the progress that has been made since the last evaluation. To this end, we invite solver developers to submit the latest versions of their tools. Furthermore, we will include selected solvers from the previous pseudo-Boolean competition 2012.

Additionally, we would like to see whether there is a need for novel solving technology. This should be reflected in novel pseudo-Boolean formulas from both academia and industry.

This evaluation aims to reflect the current state of the art as accurately as possible. Hence, we will evaluate open-source reasoners as well as closed-source solutions. We will make all collected data public afterwards.

If you have suggestions for the organization of the evaluation, please do not hesitate to contact us.

Benchmarks for the Evaluation

We are planning to run the complete benchmark, since we have the necessary resources.

So far we have the following new benchmarks:

There are in total 15022 instances in the benchmark for the PB Evaluation 2015, of which 2955 instances are new this year. We thank all benchmark submitters for their contribution!

Some of the newly submitted files were not in the correct format; hence, we fixed these errors. In files whose names end with:

For the partition of the instances into the different categories you can consult the following file: categories.txt. For instance, the following line:
./PB10/normalized-PB10/OPT-SMALLINT-NLC/oren/keeloq_tasca_instances/normalized-90_rounds_15_errors.opb OPT-NLC-32-PB
means that the file ./PB10/normalized-PB10/OPT-SMALLINT-NLC/oren/keeloq_tasca_instances/normalized-90_rounds_15_errors.opb is used in the category of non-linear optimization problems with general pseudo-Boolean constraints where each constraint can be handled with 32-bit integers.
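
As an illustration, the following Python sketch (assuming that every line of categories.txt consists of exactly one instance path followed by one category label, as in the example above) reads the file into a dictionary mapping instance paths to categories:

    def read_categories(path="categories.txt"):
        # Map each instance path to its category label, e.g.
        # ".../normalized-90_rounds_15_errors.opb" -> "OPT-NLC-32-PB".
        categories = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 2:          # skip empty or malformed lines
                    instance, category = parts
                    categories[instance] = category
        return categories

    # Example: count how many instances fall into each category.
    from collections import Counter
    print(Counter(read_categories().values()))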

Benchmarks of the evaluation will be specified in the format of previous events. The format is specified here. The web page of the pseudo-Boolean competition 2012 furthermore provides sample code for parsers and programming tips.

As a basis of the evaluation we will use the benchmarks that have been collected in previous events (PB12, PB11, PB10, PB09, PB07, PB06, PB05); however, we strongly encourage the submission of novel benchmarks.

We will consider only those formulas for the evaluation that can be handled within 63 bits (signed 64-bit integers). Hence, all weights and the degree of every constraint have to be representable with at most 63-bit integers. Furthermore, the sum of all weights of a constraint has to fit into such an integer. The same rules apply to the optimization constraint.
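
As a simple illustration of this rule, the following Python sketch checks a single constraint, represented here (purely as an assumption for the example) by its list of integer coefficients and its degree:

    MAX_MAGNITUDE = 2**63 - 1   # largest magnitude of a signed 64-bit integer

    def fits_in_63_bits(coefficients, degree):
        # Every single weight and the degree must be representable ...
        if any(abs(c) > MAX_MAGNITUDE for c in coefficients) or abs(degree) > MAX_MAGNITUDE:
            return False
        # ... and so must the sum of all weights of the constraint.
        return sum(abs(c) for c in coefficients) <= MAX_MAGNITUDE

    # Example: the constraint 3 x1 + 5 x2 - 2 x3 >= 4 is unproblematic.
    print(fits_in_63_bits([3, 5, -2], 4))   # True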

If no submitted solver treats non-linear constraints by means other than linearization, we will not include non-linear constraints. Otherwise, we will set up extra tracks for problems with such constraints and provide linearized versions of the problems for the other systems.
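
For readers unfamiliar with linearization: a product of Boolean variables can be replaced by a fresh variable that is forced to equal the conjunction of its factors. The following Python sketch (the variable names and the OPB-like output are illustrative, not taken from any particular solver) emits the resulting linear constraints:

    def linearize_product(factors, fresh):
        # Replace the product term x1*x2*...*xk by a fresh variable y with
        #   y <= xi                  for every factor (y implies each xi), and
        #   y >= x1+...+xk-(k-1)     (all factors true implies y).
        k = len(factors)
        constraints = [f"+1 {x} -1 {fresh} >= 0 ;" for x in factors]
        negated = " ".join(f"-1 {x}" for x in factors)
        constraints.append(f"+1 {fresh} {negated} >= {1 - k} ;")
        return constraints

    # Example: replace x1*x2*x3 by the fresh variable y.
    for c in linearize_product(["x1", "x2", "x3"], "y"):
        print(c)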

Benchmark Selection Process

Call for Benchmarks

We encourage users and developers of pseudo-Boolean solvers to make their benchmarks available, such that they can be used for the evaluation. As soon as we receive new benchmarks, we will publish them on this web page, such that they are available for the research community. We encourage benchmark submitters to classify the benchmark, such that we can evaluate it in the corresponding track.

If you are willing to submit benchmarks for the competition, please contact us via email. For now we do not plan to set up a web submission system.

Call for Participation

We encourage solver developers from both industry and academia to submit their solvers to the competition. We will furthermore provide tools on this web page, such that a solver can be tested for compatibility offline. Currently, we plan to run only sequential solvers. We will ask for some information about the solver, for example whether non-linear constraints are supported and how they are handled.

Each author is allowed to submit at most two solvers.

If you are willing to participate in the competition, please contact us via email. For now we do not plan to set up a web submission system.


For decision problems, a solver has to state correctly whether a formula is satisfiable or unsatisfiable. If satisfiability is claimed, a model has to be produced; in this case, we check whether the model satisfies the given formula. If unsatisfiability is claimed, we check that no other solver correctly claims satisfiability. For optimization problems, the solver has to state whether it found an optimum or not. In case of a claimed optimum, we check whether the assignment reported by the solver satisfies the formula and produces the claimed value of the optimization function, and we check that no other solver is able to report a lower value (hence there is no need to prove optimality). In any case, we check the value of the optimization function for the last model that has been reported by the solver.
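
The following Python sketch illustrates the kind of check performed; it is a simplification, not the organizers' actual checker, and assumes that constraints have already been parsed into (coefficient, literal) pairs with a relation and a degree:

    def lit_value(lit, assignment):
        # A literal is a variable name such as "x3" or its negation "~x3";
        # the assignment maps variable names to True/False.
        return not assignment[lit[1:]] if lit.startswith("~") else assignment[lit]

    def constraint_satisfied(terms, relation, degree, assignment):
        # terms: list of (coefficient, literal); relation: ">=" or "=".
        lhs = sum(c for c, lit in terms if lit_value(lit, assignment))
        return lhs >= degree if relation == ">=" else lhs == degree

    def objective_value(terms, assignment):
        # Value of the minimization function under the reported model.
        return sum(c for c, lit in terms if lit_value(lit, assignment))

    # Example: check 2 x1 + 3 x2 >= 3 and evaluate min: 4 x1 + 1 x2
    # under the model x1 = 0, x2 = 1.
    model = {"x1": False, "x2": True}
    print(constraint_satisfied([(2, "x1"), (3, "x2")], ">=", 3, model))  # True
    print(objective_value([(4, "x1"), (1, "x2")], model))                # 1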

The format for the output of the solver is the same format that has been used in previous events. For more details please have a look at the format description.
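
As a rough illustration (the linked format description is authoritative), the output of an optimizing solver consists of comment lines ("c"), objective value lines ("o"), a status line ("s"), and a value line ("v") listing the literals of the reported model, for example:

    c trying greedy phase first
    o 42
    o 37
    s OPTIMUM FOUND
    v x1 -x2 x3 -x4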

A solver that produces at least one false result is marked as unsound and is not evaluated further.

Execution Environment


We will classify the formulas of the benchmark into sets, depending on their syntax: CNF, cardinality, and pseudo-Boolean. We then divide each class into decision and optimization problems. The goal is to also allow solvers that support only cardinality constraints to participate. We will not allow systems that support only CNF, as these systems are evaluated in SAT competitions. Likewise, systems that support only CNF and its optimization variant are not evaluated, as there is the MaxSAT evaluation.
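
A possible syntactic classification of a single instance, sketched in Python under the assumption that its constraints are available as (coefficients, degree) pairs of normalized ">=" constraints over literals, could look as follows:

    def classify(constraints):
        # constraints: list of (coefficients, degree) pairs (assumed representation).
        unit_coefficients = all(abs(c) == 1 for coeffs, _ in constraints for c in coeffs)
        unit_degrees = all(degree == 1 for _, degree in constraints)
        if unit_coefficients and unit_degrees:
            return "CNF"            # every constraint is a plain clause
        if unit_coefficients:
            return "cardinality"    # at-least-k constraints over literals
        return "pseudo-Boolean"     # general integer coefficients

    # Example: x1 + x2 >= 1 together with x1 + x2 + x3 >= 2 is a cardinality instance.
    print(classify([([1, 1], 1), ([1, 1, 1], 2)]))   # cardinality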

Furthermore, we will cluster the formulas according to labels that are specified during submission: industrial, theoretically interesting, or unknown. The goal is to evaluate the performance of the different tools in different categories.

For optimization problems we will run another track: incomplete optimization. For this track we evaluate the best solution found within a given timeout. We set this timeout to 10 minutes.
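
For this track, the relevant piece of information is the last objective value a solver printed before the timeout. A minimal sketch in Python, assuming the standard "o <value>" lines appear in a captured log file, could be:

    def best_reported_value(log_path):
        # Return the last objective value reported on an "o <value>" line,
        # or None if the solver reported no solution within the timeout.
        best = None
        with open(log_path) as log:
            for line in log:
                if line.startswith("o "):
                    best = int(line.split()[1])
        return best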

Minimum Solver Requirements and Optional Features

Minimum Requirements

Optional Features

We run each solver on those selected benchmarks that it is able to handle.



Please use the contact information from the above web pages. To register or write an email, use: first name dot last name at tu-dresden.de