- The solvers pbsolver and pbo2maxsat submitted by a PB evaluation organizer are available on the PBLib project page.
- There was an error in the categories.txt file: the objective function was not considered for the CARD or PB category check. The file has been updated.
- The final benchmark is online now.
- Note that we will send a submission confirmation via mail to every participant on 2015-08-01. Please check your mailbox on this day.
- To submit your system, please contact us via email.
- Please submit your system statically linked.
- Note that we will test the submitted solvers on our evaluation system, so you have to submit your solver before the final solver submission deadline (2015-07-31) to be able to fix potential issues.
The evaluation is running on the following hardware: Intel Xeon CPU E5-2680 v3 2.50GHz, multithreading disabled, 128 GB SSD local disk. The operating system is Bullx Linux 6.3 (kernel version 2.6.32).
We thank the Center for Information Services and High Performance Computing (ZIH) at TU Dresden for generous allocations of computer time.
- PB cplex
|2015-06-30|Benchmark submission deadline|
|2015-07-01|Complete benchmark online|
|2015-07-31|Final solver submission|
|2015-08-01|Benchmark random selection seed calculated|
The pseudo-Boolean evaluation 2015 analyzes the current state of the art of pseudo-Boolean solving technology in order to determine the progress that has been made since the last evaluation. Therefore, we invite solver developers to submit the latest version of their tools. Furthermore, we will include selected solvers of the previous pseudo-Boolean competition 2012.
Additionally, we would like to see whether there is a need for novel solving technology. This should be reflected in novel pseudo-Boolean formulas from both academia and industry.
This evaluation tries to represent the current state of the art as well as possible. Hence, we will evaluate both open-source and closed-source solvers. We will make all collected data public afterwards.
If you have suggestions for the organization of the evaluation, please do not hesitate to contact us.
We are planning to run the complete benchmark, since we have the necessary resources.
So far we have the following new benchmarks:
- Pseudo-Boolean Formulation to Virtual-Machine Consolidation: pbfvmc - Description
- Employee scheduling: EmployeeScheduling - Description
- Pseudo-Boolean optimization problems converted from MINLPLIB2: minlplib2-pb-0.1.0 - Description
- 3-Colouring Formulas: 3COL - Description
- 3-Colouring Formulas: 3COL-FromCNF - Description
- ASP Instances from the Application Track: ASP -
- Digital Tomography: dt-problems -
- Even Colouring Formulas: ECGRID-FromCNF - Description
- Even Colouring Formulas: EC_ODD_GRIDS - Description
- Even Colouring Formulas: EC-RANDOM-GRAPHS-FromCNF - Description
- Even Colouring Formulas: EC_RANDOM_GRAPHS - Description
- Proof Complexity: ProofComplexity - Description
- Sum Inequality formulas: SUMINEQ - Description
- Benchmarks from the previous evaluations and competitions: PB06-12
- All new benchmarks for this year's evaluation, including linearizations from previous years: PB015-new
- The complete benchmark for this year's evaluation: PB015-all
- Assurance Level Allocation: sampleA - Description
- Assurance Level Allocation: sampleB
- Tolerant Algebraic Side-Channel Attack on the Keeloq cipher: keeloq_tasca_instances - Description
- AES Encryption: template_tasca_instances - Description
There are in total 15022 instances in the benchmark for the PB Evaluation 2015, of which 2955 are new this year. We thank all benchmark submitters for their contributions!
Some of the newly submitted files were not in the correct format, hence we fixed these errors. Files whose names end with:
- ".negationfix.opb" had single negated variables fixed,
- ".metafix.opb" had a wrong or missing meta line fixed,
- ".lin.opb" are the linearizations of the corresponding non-linear instances, and
- files that were submitted with the ending ".pb" were renamed to ".opb".
For the partition of the instances into the different categories, you can consult the file categories.txt. For instance, the entry for the file ./PB10/normalized-PB10/OPT-SMALLINT-NLC/oren/keeloq_tasca_instances/normalized-90_rounds_15_errors.opb places it in the category of non-linear optimization problems with general pseudo-Boolean constraints in which each constraint can be handled with 32-bit integers.
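As a rough illustration, the category token in the path (e.g. OPT-SMALLINT-NLC) can be decoded programmatically. The following sketch is not the official tooling; the token-to-description mapping below is an assumption based on the category names used in previous PB competitions.

```python
# Hypothetical helper (not the official tool): decode the category
# directory token of an instance path, e.g. "OPT-SMALLINT-NLC".
# The mapping is an assumption based on PB competition naming.
CATEGORY_PARTS = {
    "OPT": "optimization problem",
    "DEC": "decision problem",
    "SMALLINT": "coefficients fit into 32-bit integers",
    "BIGINT": "coefficients need more than 32-bit integers",
    "LIN": "linear constraints",
    "NLC": "non-linear constraints",
}

def describe(path):
    """Return the decoded category of an instance path, or None."""
    for part in path.split("/"):
        tokens = part.split("-")
        if tokens[0] in ("OPT", "DEC"):
            return [CATEGORY_PARTS.get(t, t) for t in tokens]
    return None

print(describe("./PB10/normalized-PB10/OPT-SMALLINT-NLC/oren/"
               "keeloq_tasca_instances/normalized-90_rounds_15_errors.opb"))
# → ['optimization problem', 'coefficients fit into 32-bit integers',
#    'non-linear constraints']
```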
Benchmarks of the evaluation will be specified in the format of previous events. The format is specified here. The web page of the pseudo-Boolean competition 2012 furthermore provides sample code for parsers and programming tips.
As a basis of the evaluation we will use the benchmarks that have been collected in previous events (PB12, PB11, PB10, PB09, PB07, PB06, PB05); however, we strongly encourage the submission of novel benchmarks.
We will consider only formulas for the evaluation that can be handled within 63 bits (a signed 64-bit integer). Hence, all weights and the degrees of all constraints have to be representable with at most 63-bit integers. Furthermore, the sum of all weights of a constraint has to fit into such an integer. The same rules apply to the optimization function.
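The 63-bit rule above can be checked mechanically; the following sketch assumes constraints are represented as lists of (coefficient, literal) pairs plus a degree, which is an illustrative representation, not a prescribed one.

```python
# A sketch of the 63-bit admissibility rule: every coefficient, every
# degree, and the sum of absolute coefficient values of each constraint
# must fit into a signed 64-bit integer. The constraint representation
# ((terms, degree) with terms = [(coeff, lit), ...]) is an assumption.
MAX_63BIT = 2**63 - 1

def fits_63bit(constraints):
    for terms, degree in constraints:
        if abs(degree) > MAX_63BIT:
            return False
        total = 0
        for coeff, _lit in terms:
            if abs(coeff) > MAX_63BIT:
                return False
            total += abs(coeff)
        if total > MAX_63BIT:  # the sum of all weights must also fit
            return False
    return True

# Example: the constraint 3 x1 + 2 x2 >= 4 is admissible.
print(fits_63bit([([(3, "x1"), (2, "x2")], 4)]))  # → True
```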
In case no submitted solver treats non-linear constraints other than by linearization, we will not include a separate non-linear track. Otherwise, we will set up extra tracks for these problems and provide the linearized problems for the other systems.
Benchmark Selection Process
- All submitted benchmarks are published on this webpage as soon as possible.
- We randomly select the formulas such that the size of the benchmark is at most 300 000 divided by the number of submitted solvers.
- If the number of new benchmarks submitted to this year's evaluation is lower than the required size, we will select all new benchmarks and select the rest randomly from the benchmarks of previous PB competitions.
- The random seed for the final benchmark selection will be the sum of the random seeds received from the solver submitters.
- Organizers of the evaluation are not allowed to submit a random seed.
- We will publish the selection tool soon.
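The selection rules above can be sketched as follows. This is an illustrative sketch under assumed data structures, not the official selection tool that will be published later.

```python
import random

# Sketch of the benchmark selection rules (illustrative only):
# the selection seed is the sum of the seeds received from solver
# submitters; new benchmarks are taken first, and the remainder is
# drawn at random from previous competitions.
def select_benchmarks(new, old, submitter_seeds, num_solvers,
                      budget=300_000):
    limit = budget // num_solvers          # at most budget / #solvers
    rng = random.Random(sum(submitter_seeds))
    if len(new) >= limit:
        return rng.sample(new, limit)
    rest = rng.sample(old, min(limit - len(new), len(old)))
    return list(new) + rest
```

Because the seed is fixed once all submitter seeds are known, the selection is reproducible: running the function twice with the same inputs yields the same benchmark set.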
We encourage users and developers of pseudo-Boolean solvers to make their benchmarks available so that they can be used for the evaluation. As soon as we receive new benchmarks, we will publish them on this web page so that they are available to the research community. We also encourage benchmark submitters to classify their benchmarks so that we can evaluate them in the corresponding track.
If you are willing to submit benchmarks for the competition, please contact us via email. For now we do not plan to setup a web submission system.
We encourage solver developers from both industry and academia to submit their solvers to the competition. We will furthermore provide tools on this web page so that solvers can be tested for compatibility offline. Currently, we plan to run only sequential solvers. We will ask for some information on each solver, for example whether non-linear constraints are supported and how they are handled.
Each author is allowed to submit at most two solvers.
If you are willing to participate in the competition, please contact us via email. For now we do not plan to setup a web submission system.
For decision problems, a solver has to state correctly whether a formula is satisfiable or unsatisfiable. If satisfiability is claimed, a model has to be produced; in this case, we check whether the model satisfies the given formula. If unsatisfiability is claimed, we check that no other solver correctly claims satisfiability. For optimization problems, the solver has to state whether it found an optimum or not. In case of an optimum, we check whether the assignment of the solver produces the claimed value of the optimization function and satisfies the formula, and whether no other solver is able to show a lower value (there is hence no need to prove optimality). In any case we check the value of the optimization function for the last model that has been reported by the solver.
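The model check described above amounts to evaluating each constraint and the objective under the reported assignment. The sketch below assumes normalized linear constraints of the form sum(coeff · lit) >= degree, with "~x" denoting the negation of variable "x"; it is illustrative, not the official checker.

```python
# Illustrative model check (not the official checker). Constraints are
# assumed normalized to: sum of coeff * literal >= degree, where a
# literal "~x" is the negation of variable "x" and the model maps
# variable names to 0/1.
def lit_value(lit, model):
    if lit.startswith("~"):
        return 1 - model[lit[1:]]
    return model[lit]

def satisfies(constraints, model):
    """True iff the model satisfies every constraint."""
    return all(
        sum(c * lit_value(l, model) for c, l in terms) >= degree
        for terms, degree in constraints
    )

def objective_value(objective, model):
    """Value of the optimization function under the model."""
    return sum(c * lit_value(l, model) for c, l in objective)
```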
The format for the output of the solver is the same format that has been used in previous events. For more details please have a look at the format description.
A solver that produces at least one false result is marked as unsound and is not evaluated further.
- A solver will be able to access 7.5 GB of main memory, and is allowed to use a single core.
- The timeout for each problem instance will be 60 minutes.
We will classify the formulas of the benchmark into sets, depending on their syntax: CNF, cardinality and pseudo-Boolean. We then divide each class into decision and optimization problems. The goal is to also allow solvers that support only cardinality constraints to participate. We will not allow systems that support only CNF, as these systems are evaluated in SAT competitions. Likewise, systems that support only CNF and their optimization variant are not evaluated, as there is the MaxSAT evaluation.
Furthermore, we will cluster the formulas according to labels that are specified during submission: industrial, theoretically interesting, or unknown. The goal is to evaluate the performance of the different tools in different categories.
For optimization problems we will run another track: incomplete optimization. For this track we evaluate the best found solution within a given timeout. We set this timeout to 10 minutes.
Minimum Solver Requirements and Optional Features
- solve decision problems
- handle 32 bit Integers
- handle cardinality constraints
- solve optimization problems
- incomplete optimization
- complete optimization
- handle 64 bit Integers
- handle general pseudo-Boolean constraints
We run each solver on the selected benchmarks the solver is able to handle.
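This matching of solvers to benchmarks can be sketched as a simple capability check: a solver runs an instance only if it supports every feature the instance requires. The feature names below follow the lists above; the set-based representation is an assumption for illustration.

```python
# Illustrative capability matching (the data representation is assumed):
# an instance is runnable by a solver iff the instance's required
# features are a subset of the solver's supported features.
def runnable(solver_features, instance_features):
    return instance_features <= solver_features

solver = {"decision", "32bit", "cardinality", "optimization"}
print(runnable(solver, {"decision", "32bit", "cardinality"}))  # → True
print(runnable(solver, {"decision", "64bit", "pb"}))           # → False
```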