191014K02: Randomized Methods for Approximation and Parameterized Algorithms
A GIAN Course by Prof Daniel Lokshtanov
December 5–9, 2022
Most computational problems that model real-world issues are not known to admit efficient algorithms that are provably correct on all inputs. Many of these problems can be reduced to one of the classical NP-complete problems, which are widely believed not to admit efficient algorithms; whether they do is a fundamental open problem in computer science. Although these problems are very unlikely to be solved efficiently in the immediate future, computer scientists have, over the last few decades, come up with several “workarounds” to “cope” with NP-hardness.
Two fundamental approaches in this program are approximation and fixed-parameter tractability. An approximation algorithm is a way of dealing with NP-completeness for optimization problems: it does not guarantee an optimal solution, but aims to come as close as possible to the optimum value in polynomial time. Parameterized algorithms, on the other hand, aim to confine the exponential blow-up to an identified parameter of the problem, leading to efficient exact algorithms whenever the said parameter is reasonably small. In recent times, there has been substantial research involving an interplay of techniques from both approaches as well.
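To make the two paradigms concrete, here is a minimal Python sketch (illustrative only, not taken from the course materials) using Vertex Cover, a problem from the course's list: a classic factor-2 approximation via a maximal matching, and a bounded-search-tree algorithm that decides whether a cover of size at most k exists in O(2^k · m) time. The function names and the edge-list graph representation are our own choices for illustration.

```python
# Illustrative sketch only; not from the course materials.
# A graph is a list of edges (u, v) with no self-loops.

def approx_vertex_cover(edges):
    """2-approximation: greedily pick an uncovered edge and take both
    endpoints. The picked edges form a matching, and every cover must
    contain at least one endpoint of each matched edge, so the result
    is at most twice the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

def has_vertex_cover(edges, k):
    """Parameterized (FPT) branching: any cover must hit the edge (u, v),
    so branch on including u or v. The search tree has depth at most k,
    giving O(2^k * m) time; the exponential blow-up is confined to k."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    without_u = [e for e in edges if u not in e]
    without_v = [e for e in edges if v not in e]
    return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)

if __name__ == "__main__":
    g = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a 4-cycle with a chord
    print(approx_vertex_cover(g))  # a cover of size at most 2 * OPT
    print(has_vertex_cover(g, 2))  # True: {0, 2} covers every edge
```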
All paradigms of algorithm design, including efficient polynomial time algorithms as well as the methods of approximation and parameterization discussed above, are substantially more powerful when combined with techniques based on randomness. Carefully employed, randomization leads to approaches that are faster and easier to implement than their deterministic counterparts, making them particularly well-suited to practice.
Over the last two decades, sophisticated probabilistic techniques have been developed for a broad range of challenging computing applications. To begin with, this course will introduce the basic probabilistic techniques used in the design of randomized algorithms and in probabilistic analysis of algorithms. The course covers the basic probability theory required for working with these techniques and demonstrates their use in various computing applications, especially in the context of parameterized and approximation algorithms.
This course will demonstrate these algorithmic techniques in the context of a variety of combinatorial optimization problems that have significant real-world applications. These include: Longest Path, Minimum Cut, Maximum Cut, Clustering, Vertex Cover, Feedback Vertex Set, and Closest String.
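As a taste of how randomness enters the picture, here is a minimal Python sketch (again illustrative, not from the course materials) of Karger's contraction algorithm for the Minimum Cut problem listed above. Contracting edges in a uniformly random order via a union-find structure, as below, is a standard equivalent of repeatedly contracting a uniformly random edge of the multigraph; a single run finds a fixed minimum cut with probability at least 2/(n(n−1)), so repeating on the order of n² ln n times succeeds with high probability.

```python
import math
import random

def karger_min_cut(n, edges, trials=None):
    """Karger's randomized contraction algorithm for global Minimum Cut.

    n: number of vertices (labelled 0..n-1); edges: list of (u, v) pairs.
    Each trial contracts edges in a uniformly random order (via
    union-find, as in Kruskal's algorithm) until two super-vertices
    remain. One trial succeeds with probability >= 2 / (n * (n - 1)),
    so we repeat about n^2 * ln(n) times for high-probability success.
    """
    if trials is None:
        trials = int(n * n * math.log(n)) + 1
    best = len(edges)  # trivially, every cut has at most m edges
    for _ in range(trials):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        components = n
        order = edges[:]
        random.shuffle(order)
        for u, v in order:
            if components == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:         # skip self-loops of the contracted graph
                parent[ru] = rv  # contract the edge
                components -= 1
        # Count the edges crossing the two remaining super-vertices.
        best = min(best, sum(1 for u, v in edges if find(u) != find(v)))
    return best

if __name__ == "__main__":
    # Two triangles joined by a single bridge: the minimum cut is 1.
    edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
    print(karger_min_cut(6, edges))  # prints 1 with high probability
```

The union-find formulation keeps each trial near-linear in the number of edges, which is why the repetition remains affordable despite the per-trial success probability being only inverse-polynomial.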
Anyone who is biting their nails from the NP-completeness cliffhanger at the end of their introduction to algorithms will probably enjoy this course. The course is open to students, postdocs, faculty, industry professionals, and anyone who is interested and is confident about the prerequisites listed below.
This is a theoretical course that will require mathematical maturity (in particular, the ability to understand and write formal mathematical proofs), and some background in the design and analysis of algorithms.
Probability Prerequisites
Discrete probability spaces • Events • Random variables • Independence (of events, of random variables) • Conditional probability • Expectation of a random variable • Linearity of expectation • Conditional expectation (of a random variable on an event, and on another random variable) • Binomial coefficients (Pascal’s Triangle) • Bernoulli, Binomial, and Geometric random variables.
General Prerequisites
Correctness proofs for algorithms • Paradigms: Greedy, DP, Divide and Conquer • Big-Oh and asymptotic runtime analysis • Formulating and solving recurrences • P, NP, NP-completeness
Math Prerequisites
Linear algebra (Matrices, vectors, rank, basis, linear independence)
5th December 2022: Registration + Coffee: 9AM to 10AM | Outside 1/101
5th December 2022: Inaugural Event: 10AM | 1/101
Lectures: Mon–Thu • 11AM–12:30PM • 2:30PM–4PM | Fri • 11AM–12:30PM
Tutorial: Mon–Thu • 4:30PM–6PM | Fri • 2:30PM–4PM
Venue: Mon - Wed: 1/101 • Thu: 7/101 • Fri: 7/102
Zoom links and invitations to a WhatsApp group and Discord server were sent out to all registered participants.
Registrations are now closed, but you can follow along on YouTube if you missed registering!
Daniel Lokshtanov is a Professor at the Department of Computer Science at the University of California, Santa Barbara, before which he was a Professor at the Department of Informatics at the University of Bergen. He received his PhD in Computer Science (2009) from the University of Bergen, and spent two years (2010–2012) as a Simons Postdoctoral Fellow at the University of California, San Diego.
His research interests span a wide area of algorithms, and he has made several fundamental contributions in the areas of exact exponential algorithms, parameterized and fine-grained algorithms, and approximation algorithms. He has been awarded the Meltzer Prize for Young Researchers for his work at the University of Bergen, and is a recipient of the Bergen Research Foundation young researcher grant and of an ERC starting grant on parameterized algorithms. He is a co-author of two recently published texts: Kernelization (Cambridge University Press, 2019) and Parameterized Algorithms (Springer, 2015).
© 2022 • Neeldhara Misra • Credits
Corrections? Please leave a comment here or open a PR in this repository, thanks!
I’d rather be a failure at something I love than a success at something I hate.
George Burns
You live and you learn — at any rate, you live.
Douglas Adams
A problem worthy of attack proves its worth by fighting back.
Paul Erdős