On The Infeasibility of Polynomial Kernelization
This survey is a compressed version of my Master's thesis, written under the guidance of Prof. Venkatesh Raman. The thesis is a more leisurely survey of lower-bound techniques in the context of kernelization (polynomial kernels in particular), an area that saw a number of exciting breakthroughs while the thesis was being written and continues to enjoy very active development.
Remark: Methods for showing kernel lower bounds have advanced considerably since this was written. Significant omissions include - but are not limited to - the technique of cross-composition and the framework for demonstrating finer lower bounds. Also, the AND Conjecture, stated as one of the major open problems in the area, has since been resolved. Here's a blog post about the result, and here's a talk by Andrew Drucker on his work in the context of AND compression.
Also, here is a survey that explores both upper and lower bounds in kernelization.
When approaching a problem that is deemed computationally hard, preprocessing is often employed as a first attempt at simplification. Kernelization is a formal framework in which algorithms that simplify problems (instead of solving them) can be analyzed for efficiency: a kernelization algorithm performs data reduction with provable performance bounds. Many NP-complete problems are known to admit kernelization algorithms. However, there are also NP-complete problems for which we do not expect such algorithms (indeed, under reasonable complexity-theoretic assumptions, it is known that such algorithms are impossible). This immediately gives us a finer classification of problems that are otherwise understood only as intractable; in particular, it is one way to discriminate between NP-complete problems.
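As a concrete illustration of data reduction with provable bounds (an example chosen here for exposition, not taken from the thesis), the classical Buss kernel for Vertex Cover reduces any instance (G, k) to an equivalent one with at most k² edges. A minimal sketch:

```python
def vertex_cover_kernel(edges, k):
    """Buss kernelization for Vertex Cover (illustrative sketch).

    Rule 1: a vertex of degree > k must be in every cover of size <= k,
            so include it and delete its edges.
    Rule 2: if more than k^2 edges remain, no cover of size <= k exists.
    Returns (reduced_edges, reduced_k), or None for a provable no-instance.
    """
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:
                # Rule 1: v is forced into the cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
        if k < 0:
            return None
    if len(edges) > k * k:
        return None  # Rule 2
    return edges, k
```

For instance, a star with five leaves and k = 1 reduces to the empty instance (the centre is forced into the cover), while the complete graph K4 with k = 2 is rejected outright.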
In the context of parameterized complexity, it is well known that kernelization corresponds exactly to efficient parameterized computation, i.e., fixed-parameter tractability. However, not all fixed-parameter tractable problems are "equally efficient", and a more thorough classification may be obtained by examining the efficiency of the kernelization procedures known for them, where a natural measure of efficiency is the kernel size. We survey methods for establishing that some problems cannot have polynomial-size kernels, under the assumption that the polynomial hierarchy does not collapse to the third level. These methods involve devising "composition algorithms", which have been demonstrated for many problems. We suggest a standardization under which the similarity of the compositional techniques used for different problems becomes evident. We also illustrate a problem which, while unlikely to have a polynomial kernel, has multiple (n, to be precise) polynomial-sized kernels.
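In the simplest cases, a composition algorithm amounts to a disjoint union: for k-Path (does the graph contain a simple path on k vertices?), the disjoint union of t instances sharing a parameter value k is a yes-instance exactly when at least one input is. A minimal sketch, with hypothetical helper names chosen for this illustration:

```python
def or_compose_k_path(instances):
    """Disjoint-union OR-composition for k-Path (illustrative sketch).

    Input: a list of instances (edge_list, k), all with the same k.
    Output: a single instance whose answer is the OR of the inputs':
    the disjoint union contains a k-vertex path iff some input does,
    since a simple path cannot cross between components.
    """
    ks = {k for _, k in instances}
    assert len(ks) == 1, "composition assumes a common parameter value"
    k = ks.pop()
    union = []
    for i, (edges, _) in enumerate(instances):
        # Tag each vertex with its instance index so the copies are disjoint.
        union.extend(((i, u), (i, v)) for u, v in edges)
    return union, k
```

The point of such an algorithm is that it combines arbitrarily many instances into one whose parameter does not grow; together with the NP-hardness of the problem, this is what rules out a polynomial kernel unless the polynomial hierarchy collapses to the third level.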
We conclude with pointers to the many directions for future work that have opened up because of these studies.
A random sample of figures - made with PGF/TikZ.