Publications

Last update

July 21, 2024

My research spans several aspects of Theoretical Computer Science, with a particular focus on proof complexity.

Journals and Conferences

A full list of my works is also available on dblp.

\(\blacksquare\) = the publication has a journal full version
\(\blacksquare\) = the publication has (for the time being) only a preliminary conference version

Polynomial Calculus for optimization
joint work with Maria Luisa Bonet, Jordi Levy.
Artificial Intelligence, 2024
Preliminary (conference) version: Polynomial Calculus for MaxSAT. In Proceedings of SAT, 2023
Abstract

MaxSAT is the problem of finding an assignment satisfying the maximum number of clauses in a CNF formula. We consider a natural generalization of this problem to generic sets of polynomials and propose a weighted version of Polynomial Calculus to address this problem. Weighted Polynomial Calculus is a natural generalization of MaxSAT-Resolution and weighted Resolution that manipulates polynomials with coefficients in a finite field and either weights in \(\mathbb N\) or \(\mathbb Z\). We show the soundness and completeness of these systems via an algorithmic procedure. Weighted Polynomial Calculus, with weights in \(\mathbb N\) and coefficients in \(\mathbb F_2\), is able to prove efficiently that Tseitin formulas on a connected graph are minimally unsatisfiable. Using weights in \(\mathbb Z\), it also proves efficiently that the Pigeonhole Principle is minimally unsatisfiable.
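The abstract's opening sentence defines the MaxSAT objective. As a minimal illustration (mine, not taken from the paper; the clause encoding and the name max_sat are choices made for the example), the Python sketch below brute-forces that objective on a tiny CNF.

```python
# Illustrative only: brute-force the MaxSAT objective, i.e. the maximum number
# of clauses of a CNF satisfied by any assignment. A clause is a list of
# non-zero integers: +v stands for variable v, -v for its negation.
from itertools import product

def max_sat(num_vars, clauses):
    """Maximum number of satisfied clauses over all assignments (exhaustive)."""
    return max(
        sum(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause) for clause in clauses)
        for bits in product([False, True], repeat=num_vars)
    )

# Four clauses covering all polarity patterns on two variables: every
# assignment falsifies exactly one clause, so the optimum is 3.
print(max_sat(2, [[1, 2], [-1, 2], [1, -2], [-1, -2]]))  # -> 3
```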

MaxSAT Resolution with Inclusion Redundancy
joint work with Maria Luisa Bonet, Massimo Lauria.
In Proceedings of SAT, 2024
Abstract

Popular redundancy rules for SAT are not necessarily sound for MaxSAT. The works of [Bonacina-Bonet-Buss-Lauria’24] and [Ihalainen-Berg-Järvisalo’22] proposed ways to adapt them, but required specific encodings and more sophisticated checks during proof verification. Here, we propose a different way to adapt redundancy rules from SAT to MaxSAT. Our rules do not require specific encodings and their correctness is simpler to check, but they are slightly less expressive. However, the proposed redundancy rules, when added to MaxSAT-Resolution, are already strong enough to capture Branch-and-bound algorithms, enable short proofs of the optimal cost of notable principles (e.g., the Pigeonhole Principle and the Parity Principle), and allow breaking simple symmetries (e.g., XOR-ification does not make formulas harder).

Weighted, Circular and Semi-Algebraic Proofs
joint work with Maria Luisa Bonet, Jordi Levy.
Journal of Artificial Intelligence Research, 2024
Abstract

In recent years there has been an increasing interest in studying proof systems stronger than Resolution, with the aim of building more efficient SAT solvers based on them. In defining these proof systems, we try to find a balance between the power of the proof system (the size of the proofs required to refute a formula) and the difficulty of finding the proofs. In this paper we consider the proof systems circular Resolution, Sherali-Adams, Nullstellensatz and Weighted Resolution and we study their relative power from a theoretical perspective. We prove that circular Resolution, Sherali-Adams and Weighted Resolution are polynomially equivalent proof systems. We also prove that Nullstellensatz is polynomially equivalent to a restricted version of Weighted Resolution. The equivalences also carry over to versions of the systems where the coefficients/weights are expressed in unary. The practical interest in these systems comes from the fact that they admit efficient algorithms to find proofs whenever the proofs have small width/degree.

On vanishing sums of roots of unity in polynomial calculus and sum-of-squares
joint work with Nicola Galesi, Massimo Lauria.
Computational Complexity, 2023
Preliminary (conference) version: In Proceedings of MFCS, 2022
Abstract

We introduce a novel take on sum-of-squares that is able to reason with complex numbers and still make use of polynomial inequalities. This proof system might be of independent interest since it allows one to represent multivalued domains both with Boolean and Fourier encoding. We show degree and size lower bounds in this system for a natural generalization of knapsack: the vanishing sums of roots of unity. These lower bounds naturally apply to polynomial calculus as well.
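For readers who have not met the terminology, a vanishing sum of roots of unity is simply a sum of complex roots of unity that equals zero. The basic example (a standard fact, recalled here for context and not specific to the paper) is that the \(n\)-th roots of unity sum to zero:

\[
\sum_{j=0}^{n-1} \omega^j = \frac{\omega^n - 1}{\omega - 1} = 0, \qquad \omega = e^{2\pi i/n},\; n \ge 2.
\]

The hard instances in the paper are built from sums of this kind, generalizing knapsack.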

On the strength of Sherali-Adams and Nullstellensatz as propositional proof systems
joint work with Maria Luisa Bonet.
In Proceedings of LICS, 2022
Abstract

We characterize the strength of the algebraic proof systems Sherali-Adams (SA) and Nullstellensatz (NS) in terms of Frege-style proof systems. Unlike bounded-depth Frege, SA has polynomial-size proofs of the pigeonhole principle (PHP). A natural question is whether adding PHP to bounded-depth Frege is enough to simulate SA. We show that SA, with unary integer coefficients, lies strictly between tree-like depth-1 Frege + PHP and tree-like Resolution. We introduce a weighted version of PHP (wPHP) and we show that SA with integer coefficients lies strictly between tree-like depth-1 Frege + wPHP and Resolution. Analogous results are shown for NS using the bijective (i.e. onto and functional) pigeonhole principle and a weighted version of it.

Clique Is Hard on Average for Regular Resolution
joint work with Albert Atserias, Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, Alexander A. Razborov.
Journal of the ACM, 2021
Preliminary (conference) version: In Proceedings of STOC, 2018
Abstract

We prove that for \(k \ll \sqrt[4]{n}\) regular resolution requires length \(n^{\Omega(k)}\) to establish that an Erdős-Rényi graph with appropriately chosen edge density does not contain a \(k\)-clique. This lower bound is optimal up to the multiplicative constant in the exponent and also implies unconditional \(n^{\Omega(k)}\) lower bounds on running time for several state-of-the-art algorithms for finding maximum cliques in graphs.
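For context only (this sketch is mine, not part of the paper): the \(n^{\Omega(k)}\) bound should be read against the naive exhaustive-search baseline, which tests all \(\binom{n}{k}\) vertex subsets and hence runs in time roughly \(n^{k}\).

```python
# Illustrative baseline only: exhaustive search for a k-clique, running in
# time roughly n^k (the regime matched by the n^Omega(k) lower bound).
from itertools import combinations

def has_k_clique(n, edges, k):
    """Does the graph on vertices 0..n-1 with the given edges contain a k-clique?"""
    edge_set = {frozenset(e) for e in edges}
    return any(
        all(frozenset(pair) in edge_set for pair in combinations(subset, 2))
        for subset in combinations(range(n), k)
    )

# A 4-cycle has no triangle; adding one chord creates a 3-clique.
print(has_k_clique(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 3))          # False
print(has_k_clique(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], 3))  # True
```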

Frege Systems for Quantified Boolean Logic
joint work with Olaf Beyersdorff, Leroy Chew, Ján Pich.
Journal of the ACM, 2020
Preliminary (conference) version: Lower Bounds: From Circuits to QBF Proof Systems. In Proceedings of ITCS, 2016
Abstract

We define and investigate Frege systems for quantified Boolean formulas (QBF). For these new proof systems, we develop a lower bound technique that directly lifts circuit lower bounds for a circuit class C to the QBF Frege system operating with lines from C. Such a direct transfer from circuit to proof complexity lower bounds has often been postulated for propositional systems but had not been formally established in such generality for any proof systems prior to this work. This leads to strong lower bounds for restricted versions of QBF Frege, in particular an exponential lower bound for QBF Frege systems operating with \(AC^0[p]\)-circuits. In contrast, any non-trivial lower bound for propositional \(AC^0[p]\)-Frege constitutes a major open problem. Improving these lower bounds to unrestricted QBF Frege tightly corresponds to the major problems in circuit complexity and propositional proof complexity. In particular, proving a lower bound for QBF Frege systems operating with arbitrary P/poly circuits is equivalent to either showing a lower bound for P/poly or for propositional extended Frege (which operates with P/poly circuits). We also compare our new QBF Frege systems to standard sequent calculi for QBF and establish a correspondence to intuitionistic bounded arithmetic.

Strong ETH and Resolution via Games and the Multiplicity of Strategies
joint work with Navid Talebanfard.
Algorithmica, 2017
Preliminary (conference) version: In Proceedings of IPEC, 2015
Abstract

We consider a proof system intermediate between regular Resolution, in which no variable can be resolved more than once along any refutation path, and general Resolution. We call such a system \(\delta\)-regular Resolution: one in which at most a fraction \(\delta\) of the variables can be resolved more than once along each refutation path (the re-resolved variables along different paths need not be the same). We show that, for \(\delta\) not too large, \(\delta\)-regular Resolution is consistent with the Strong Exponential Time Hypothesis (SETH). More precisely, for large \(n\) and \(k\), we show that there are unsatisfiable \(k\)-CNF formulas which require \(\delta\)-regular Resolution refutations of size \(2^{(1-\epsilon_k)n}\), where \(n\) is the number of variables, \(\epsilon_k = \tilde O(k^{-1/4})\), and \(\delta = \tilde O(k^{-1/4})\) is the fraction of variables that can be resolved multiple times.

Space proof complexity for random 3-CNFs
joint work with Patrick Bennett, Nicola Galesi, Tony Huynh, Mike Molloy, Paul Wollan.
Information and Computation, 2017
Abstract

We investigate the space complexity of refuting 3-CNFs in Resolution and algebraic systems. We prove that every Polynomial Calculus with Resolution refutation of a random 3-CNF \(\phi\) in \(n\) variables requires, with high probability, \(\Omega(n)\) distinct monomials to be kept simultaneously in memory. The same construction also proves that every Resolution refutation of \(\phi\) requires, with high probability, \(\Omega(n)\) clauses each of width \(\Omega(n)\) to be kept at the same time in memory. This gives an \(\Omega(n^2)\) lower bound for the total space needed in Resolution to refute \(\phi\). These results are best possible (up to a constant factor) and answer open questions about the space complexity of 3-CNFs. The main technical innovation is a variant of Hall’s Lemma. We show that in bipartite graphs with bipartition \((L, R)\) and left-degree at most 3, \(L\) can be covered by certain families of disjoint paths, called VW-matchings, provided that \(L\) expands in \(R\) by a factor of \(2 - \epsilon\), for \(\epsilon < 1\).

Total space in Resolution is at least width squared
In Proceedings of ICALP, 2016
Abstract

Given an unsatisfiable \(k\)-CNF formula \(\varphi\) we consider two complexity measures in Resolution: width and total space. The width is the minimal \(W\) such that there exists a Resolution refutation of \(\varphi\) with clauses of at most \(W\) literals. The total space is the minimal size \(T\) of a memory used to write down a Resolution refutation of \(\varphi\), where the size of the memory is measured as the total number of literals it can contain. We prove that \(T = \Omega((W - k)^2)\).

Improving resolution width lower bounds for k-CNFs with applications to the Strong Exponential Time Hypothesis
joint work with Navid Talebanfard.
Information Processing Letters, 2016
Abstract

A Strong Exponential Time Hypothesis lower bound for resolution has the form \(2^{(1-\epsilon_k)n}\) for some \(k\)-CNF on \(n\) variables such that \(\epsilon_k \to 0\) as \(k \to \infty\). For every large \(k\) we prove that there exists an unsatisfiable \(k\)-CNF formula on \(n\) variables which requires resolution width \((1-\tilde O(k^{-1/3}))n\) and hence tree-like resolution refutations of size at least \(2^{(1-\tilde O(k^{-1/3}))n}\). We also show that for every unsatisfiable \(k\)-CNF \(\varphi\) on \(n\) variables, there exists a tree-like resolution refutation of \(\varphi\) of size at most \(2^{(1-\Omega(1/k))n}\).

Total Space in Resolution
joint work with Nicola Galesi, Neil Thapen.
SIAM Journal on Computing, 2016
Preliminary (conference) version: In Proceedings of FOCS, 2014
Abstract

We show quadratic lower bounds on the total space used in resolution refutations of random \(k\)-CNFs over \(n\) variables and of the graph pigeonhole principle and the bit pigeonhole principle for \(n\) holes. This answers the open problem of whether there are families of \(k\)-CNF formulas of polynomial size that require quadratic total space in resolution. The results follow from a more general theorem showing that, for formulas satisfying certain conditions, in every resolution refutation there is a memory configuration containing many clauses of large width.

A Framework for Space Complexity in Algebraic Proof Systems
joint work with Nicola Galesi.
Journal of the ACM, 2015
Preliminary (conference) version: Pseudo-partitions, Transversality and Locality: A Combinatorial Characterization for the Space Measure in Algebraic Proof Systems. In Proceedings of ITCS, 2013
Abstract

Algebraic proof systems, such as Polynomial Calculus (PC) and Polynomial Calculus with Resolution (PCR), refute contradictions using polynomials. Space complexity for such systems measures the number of distinct monomials to be kept in memory while verifying a proof. We introduce a new combinatorial framework for proving space lower bounds in algebraic proof systems. As an immediate application, we obtain the space lower bounds previously provided for PC/PCR [Alekhnovich et al. 2002; Filmus et al. 2012]. More importantly, using our approach in its full potential, we prove \(\Omega(n)\) space lower bounds in PC/PCR for random \(k\)-CNFs (\(k\ge 4\)) in \(n\) variables, thus solving an open problem posed in Alekhnovich et al. [2002] and Filmus et al. [2012]. Our method also applies to the Graph Pigeonhole Principle, which is a variant of the Pigeonhole Principle defined over a constant (left) degree expander graph.

Proofs of Space: When Space Is of the Essence
joint work with Giuseppe Ateniese, Antonio Faonio, Nicola Galesi.
In Proceedings of SCN, 2014
Abstract

Proofs of computational effort were devised to control denial of service attacks. Dwork and Naor (CRYPTO ’92), for example, proposed to use such proofs to discourage spam. The idea is to couple each email message with a proof of work that demonstrates the sender performed some computational task. A proof of work can be either CPU-bound or memory-bound. In a CPU-bound proof, the prover must compute a CPU-intensive function that is easy to check by the verifier. A memory-bound proof, instead, forces the prover to access the main memory several times, effectively replacing CPU cycles with memory accesses. In this paper we put forward a new concept dubbed proof of space. To compute such a proof, the prover must use a specified amount of space, i.e., we are not interested in the number of accesses to the main memory (as in memory-bound proofs of work) but rather in the amount of actual memory the prover must employ to compute the proof. We give a complete and detailed algorithmic description of our model. We develop a full theoretical analysis using combinatorial tools from Complexity Theory (such as pebbling games), which are essential in studying space lower bounds.


Manuscripts and Preprints

This page does not index preprints of works which have been published either in conference proceedings or journals. For those you can have a look at ECCC or arXiv.

\(\blacksquare\) = this indicates a preprint that has not been peer reviewed

Redundancy for MaxSAT
joint work with Maria Luisa Bonet, Sam Buss, Massimo Lauria.
ECCC, TR24-045, 2024
Abstract

The concept of redundancy in SAT led to more expressive and powerful proof search techniques, e.g. able to express various inprocessing techniques, and to interesting hierarchies of proof systems [Heule et al.’20, Buss-Thapen’19]. We propose a general way to integrate redundancy rules in MaxSAT, that is, we define MaxSAT variants of proof systems such as SPR, PR, SR, and others. The main difference compared to the recent alternative approach in [Ihalainen et al.’22] is that our redundancy rules are polynomially checkable. We discuss the strength of the systems introduced and we give a short cost-SR proof that any assignment for the weak pigeonhole principle \(PHP^m_n\) falsifies at least \(m-n\) clauses.
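As a sanity check of the last claim (my own illustration, not part of the manuscript; the clause set below is the standard pigeon/hole encoding of \(PHP^m_n\)), the Python sketch verifies by brute force, for tiny \(m\) and \(n\), that every assignment falsifies at least \(m-n\) clauses and that \(m-n\) is attained.

```python
# Illustrative brute-force check (small parameters only) that the MaxSAT
# optimum of the pigeonhole principle PHP^m_n falsifies exactly m - n clauses.
from itertools import combinations, product

def php_clauses(m, n):
    """Standard PHP^m_n encoding: variable i*n + j + 1 means pigeon i sits in hole j."""
    def var(i, j):
        return i * n + j + 1
    pigeon = [[var(i, j) for j in range(n)] for i in range(m)]   # every pigeon gets a hole
    hole = [[-var(i1, j), -var(i2, j)]                           # no two pigeons share a hole
            for j in range(n) for i1, i2 in combinations(range(m), 2)]
    return m * n, pigeon + hole

def min_falsified(num_vars, clauses):
    """Minimum number of falsified clauses over all assignments (exhaustive)."""
    return min(
        sum(not any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=num_vars)
    )

m, n = 4, 2
print(min_falsified(*php_clauses(m, n)), m - n)  # -> 2 2
```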

A separation of PLS from PPP
joint work with Neil Thapen.
ECCC, TR22-089, 2022
Abstract

Recently it was shown that PLS is not contained in PPADS (ECCC report TR22-058). We show that this separation already implies that PLS is not contained in PPP. These separations are shown for the decision tree model of TFNP and imply similar separations in the type-2, relativized model. Important note. This manuscript is based on an early preprint of report TR22-058, which did not contain the result that PLS is not contained in PPP. Our work is superseded by the revised version of the report of 10 June 2022 which, independently, contains this result. The manuscript has not been edited to reflect this, and in particular references to the report are to the early version.


Books and PhD Thesis

Space in Weak Propositional Proof Systems

Springer, 2017
This book is based on my PhD thesis, defended on December 14, 2015 at Sapienza University of Rome (under the supervision of Prof. Nicola Galesi). It was awarded “Best Italian PhD Thesis in Theoretical Computer Science, 2016” by the Italian Chapter of the European Association for Theoretical Computer Science (EATCS).

A ~14-page abstract appeared in the Bulletin of the EATCS, 2016
A preliminary version appeared as my PhD thesis. Sapienza University of Rome, December 2015

Abstract

This book considers logical proof systems from the point of view of their space complexity. After an introduction to propositional proof complexity, the author structures the book into three main parts. Part I contains two chapters on resolution: one containing results already known in the literature before this work and one focused on space in resolution; the author then moves on to polynomial calculus and its space complexity, with a focus on the combinatorial technique for proving monomial space lower bounds. The first chapter in Part II addresses the proof complexity and space complexity of the pigeonhole principles. Then there is an interlude on a new type of game, defined on bipartite graphs, essentially independent of the rest of the book, collecting some results on graph theory. Finally, Part III analyzes the size of resolution proofs in connection with the Strong Exponential Time Hypothesis (SETH) in complexity theory.
