Master's Thesis


On the Infeasibility of Polynomial Kernelization

Abstract

In approaching a problem that is deemed to be computationally hard, preprocessing steps are often employed as a first attempt at problem simplification. The notion of problem kernelization is a formal framework in which algorithms that simplify problems (rather than solving them) can be analyzed for efficiency. A kernelization algorithm performs data reduction with provable performance bounds. Many NP-complete problems are known to admit kernelization algorithms. However, there are also NP-complete problems for which we do not expect such algorithms (indeed, under reasonable complexity-theoretic assumptions, it is known that such algorithms are not possible). This immediately gives us a finer classification of problems that are understood only as intractable in the classical setting; in particular, it offers one way to discriminate between NP-complete problems.
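As a concrete illustration of data reduction with a provable bound (not taken from the thesis), the sketch below implements Buss's classical kernelization for Vertex Cover: a single reduction rule that either resolves the instance outright or shrinks it to at most k² edges.

```python
# A minimal illustrative sketch (not from the thesis): Buss's kernelization
# for Vertex Cover. Given the edge set of a graph and a parameter k, it
# either answers the instance outright or returns an equivalent instance
# with at most k^2 edges.

def buss_kernel(edges, k):
    """Return ('no',), ('yes',), or ('kernel', reduced_edges, reduced_k)."""
    edges = {frozenset(e) for e in edges}
    while True:
        # Rule: a vertex of degree > k must belong to every vertex cover
        # of size <= k, so take it into the cover and delete its edges.
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        forced = next((v for v, d in degree.items() if d > k), None)
        if forced is None:
            break
        edges = {e for e in edges if forced not in e}
        k -= 1
        if k < 0:
            return ('no',)
    if not edges:
        return ('yes',)
    # Every remaining vertex has degree <= k, so a yes-instance can have
    # at most k * k edges; anything larger is a no-instance.
    if len(edges) > k * k:
        return ('no',)
    return ('kernel', edges, k)


# Example: a triangle with a pendant edge has a vertex cover of size 2.
print(buss_kernel([(1, 2), (2, 3), (1, 3), (3, 4)], 2))
```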

In the context of parameterized complexity, it is well known that kernelization corresponds exactly to efficient parameterized computation, or fixed-parameter tractability. However, not all problems that belong here are “equally efficient”, and a more thorough classification may be obtained by examining the efficiency of the kernelization procedures that are known for these problems. Here, a natural measure of efficiency is the kernel size. We survey methods for establishing that some problems cannot have polynomial-size kernels, under the assumption that the polynomial hierarchy does not collapse to the third level. These methods involve devising “composition algorithms”, which are demonstrated for many problems. We suggest a standardization under which the similarity of the compositional techniques used for different problems becomes evident. We also illustrate a problem which, while unlikely to have a polynomial kernel, has multiple (n, to be precise) polynomial-sized kernels.
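For readers unfamiliar with the framework, the following is a minimal sketch of the standard (OR-)composition definition and the accompanying lower-bound theorem, paraphrased from the literature (Bodlaender, Downey, Fellows, and Hermelin, together with Fortnow and Santhanam) rather than quoted from the thesis.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\begin{document}

\begin{definition}[Composition algorithm]
A \emph{composition algorithm} for a parameterized problem
$L \subseteq \Sigma^{*} \times \mathbb{N}$ receives as input a sequence
$(x_1,k),\dots,(x_t,k)$, runs in time polynomial in
$\sum_{i=1}^{t} |x_i| + k$, and outputs a single instance $(y,k')$ such that
$k'$ is polynomially bounded in $k$ and
$(y,k') \in L$ if and only if $(x_i,k) \in L$ for some $1 \le i \le t$.
\end{definition}

\begin{theorem}
If a parameterized problem whose unparameterized version is NP-complete
admits a composition algorithm, then it has no polynomial kernel unless
$\mathrm{NP} \subseteq \mathrm{coNP/poly}$, in which case the polynomial
hierarchy collapses to the third level.
\end{theorem}

\end{document}
```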

We conclude with pointers to the many directions for future work that these studies have opened up.

This thesis is a leisurely survey of lower-bound techniques in the context of kernelization (polynomial kernels in particular; see this survey for a compressed version), an area that saw a number of exciting breakthroughs while the thesis was being written and continues to enjoy very active development.

Remark: Methods for showing kernel lower bounds have advanced considerably. New developments include, among others, the technique of cross-composition and a framework for establishing finer lower bounds. Also, the AND Conjecture, stated as one of the major open problems in the area, has since been resolved. Here’s a blog post about the result. Techniques for “coping” with the hardness of polynomial kernelization are evolving quickly as well, and one of the relatively recent paradigms in this direction is Lossy Kernelization, a robust notion of “approximate” kernels.
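For orientation, here is a rough sketch of the central definition behind lossy kernelization, paraphrased informally from the literature rather than stated with full formality.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{definition}{Definition}
\begin{document}

\begin{definition}[$\alpha$-approximate kernelization, informal]
An \emph{$\alpha$-approximate kernelization} for a parameterized
minimization problem is a pair of polynomial-time algorithms: a
\emph{reduction algorithm} that maps an instance $(I,k)$ to an instance
$(I',k')$ of size bounded by a computable function $g(k)$, and a
\emph{solution-lifting algorithm} that, given any $c$-approximate
solution to $(I',k')$, produces a $(c \cdot \alpha)$-approximate
solution to $(I,k)$.
\end{definition}

\end{document}
```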