Mathematical Optimization Society

MOS Prizes


2021 Tucker Prize Citation


The Tucker Prize for an outstanding doctoral thesis has been awarded to Jakub Tarnawski for his thesis New Graph Algorithms via Polyhedral Techniques.

Tarnawski's thesis achieves a breakthrough on a fundamental optimization problem: the Traveling Salesperson Problem (TSP). Given n cities and the distances between each pair of them, the goal is to find the shortest tour that visits all the cities. The most successful algorithms for the problem, both theoretically and computationally, are based on linear programming methods. One of the most well-studied linear programming formulations is known as the Held-Karp relaxation, and a central question has been to give tight bounds on the quality of this relaxation, together with algorithms that produce tours provably close to optimal. In the asymmetric version of the problem, where the distance between two cities may depend on the direction of travel, it was unclear for decades whether the quality of the Held-Karp relaxation degrades with the size of the problem instance or remains the same no matter how many cities are involved. Tarnawski's thesis shows that the latter is true by giving an algorithm that produces a tour whose cost is at most a constant (independent of the instance size) times the Held-Karp lower bound, resolving a question that had remained open for many decades despite sustained efforts from the optimization community.
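
For context, the Held-Karp relaxation referred to above can be stated, in its standard form for the asymmetric TSP, as the following linear program (a sketch included here for orientation, not reproduced from the thesis); the cited result bounds the cost of the tour found by the algorithm by a constant multiple of the optimal value of this program:

\[
\begin{aligned}
\min\ & \sum_{u \neq v} c_{uv}\, x_{uv} \\
\text{s.t.}\ & \sum_{v \neq u} x_{uv} \;=\; \sum_{v \neq u} x_{vu} \;=\; 1 && \text{for every city } u,\\
& \sum_{u \in S,\; v \notin S} x_{uv} \;\ge\; 1 && \text{for every } \emptyset \neq S \subsetneq V,\\
& x \ge 0,
\end{aligned}
\]

where V is the set of cities, c_{uv} is the distance from u to v, and x_{uv} indicates how much the arc from u to v is used.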

Tarnawski's thesis also makes progress on another open question in combinatorial optimization: parallel algorithms for the matching problem. Randomized parallel algorithms for matching had been known for three decades, but deterministic versions had eluded researchers until a recent result of Fenner, Gurjar, and Thierauf for bipartite graphs. Tarnawski's work extends this result to general graphs and introduces significant new ideas to deal with the general perfect matching polytope.
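
The general perfect matching polytope mentioned above is, by a classical theorem of Edmonds, the set of all x satisfying (stated here only for context)

\[
x \ge 0, \qquad \sum_{e \ni v} x_e = 1 \ \text{ for every vertex } v, \qquad \sum_{e \in \delta(S)} x_e \ge 1 \ \text{ for every odd-size } S \subseteq V,
\]

where \delta(S) denotes the set of edges with exactly one endpoint in S. In bipartite graphs the first two families of constraints already describe the polytope; the exponentially many odd-set constraints are what make the general case substantially harder.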

Tarnawski's thesis is groundbreaking because it resolves one of the hardest and most well-studied questions in optimization and discrete mathematics. Along with its other significant contributions to fundamental problems in combinatorial optimization, this makes Tarnawski's thesis highly deserving of the 2021 A. W. Tucker Prize.



The other two finalists chosen by this year's Tucker Prize Committee are Georgina Hall and Yair Carmon.

Georgina Hall's thesis Optimization over Nonnegative and Convex Polynomials with and without Semidefinite Programming is outstanding in both its scope and its depth. The chapters of this thesis make several exceptional contributions to the field of mathematical optimization and its interface with computational convex algebraic geometry. Together, Hall's results span deep mathematical theory involving real algebraic geometry, computational techniques that make her ideas tractable in software, and applications of these optimization models to robotics, statistics and optimal control. Among the many theoretical results dealing with algebraic aspects of convexity and polynomial optimization, one remarkable result shows how to avoid the use of semidefinite programming (SDP) in certain contexts of polynomial optimization, replacing SDPs with much simpler algebraic operations. On the computational side, Hall shows how to take advantage of techniques from discrete optimization to design more efficient algorithms for polynomial optimization. The final two chapters of Hall's thesis apply these algebraic and polynomial optimization techniques to the problem of representing 3D objects in computer vision and robotics, and to shape-constrained regression in statistics.
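
To give a flavor of the kind of simplification involved (an illustrative sketch, not the thesis's specific construction): deciding whether a polynomial p is globally nonnegative is intractable in general, so one instead asks for a sum-of-squares certificate,

\[
p(x) = z(x)^{\top} Q\, z(x) \quad \text{for some symmetric } Q \succeq 0,
\]

where z(x) is a vector of monomials of degree at most half the degree of p; searching for such a Q is a semidefinite program. One standard way to avoid the SDP, studied in this line of work, is to require Q to be diagonally dominant with a nonnegative diagonal rather than positive semidefinite, which turns the search into a linear program at the price of a more restrictive certificate.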

In summary, Hall's thesis makes significant contributions to the active area of polynomial optimization and its interface with real algebraic geometry. It is impressive in its scope, straddling theory, computation and applications, making it exceptionally well-rounded while simultaneously opening new directions of research in the area.



Yair Carmon's dissertation The Complexity of Optimization beyond Convexity effectively settles the analytical (oracle) and algorithmic complexity of smooth nonconvex optimization. The corresponding theory for convex optimization had been established in the 1970s and 1980s by seminal work of Yudin, Nemirovskii and Nesterov. Carmon establishes information-theoretic lower bounds on the complexity of computing stationary points of smooth, nonconvex functions and gives algorithms that either match these lower bounds exactly or come very close to them. Some of these algorithms are classical ones from the literature, while others are newly designed in the thesis. In some of the key algorithmic innovations, the thesis combines insights from smooth convex optimization with new ways to exploit negative curvature, in a framework that Carmon terms "Convex until proven guilty".
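
As a point of reference for the kind of guarantee being settled (a standard calculation, not a result specific to the thesis): if f is bounded below by f^* and has an L-Lipschitz gradient, then gradient descent with step size 1/L satisfies

\[
f(x_{k+1}) \;\le\; f(x_k) - \tfrac{1}{2L}\,\|\nabla f(x_k)\|^2,
\]

so after K iterations some iterate has gradient norm at most \sqrt{2L\,(f(x_0)-f^*)/K}; equivalently, O(\varepsilon^{-2}) gradient evaluations suffice to reach an \varepsilon-stationary point. The thesis determines, for this and for richer oracle models, how far such rates can be improved and where they provably cannot be.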

Carmon's thesis reads like a textbook on the complexity theory of smooth nonconvex optimization. It is a self-contained treatise on a topic of broad interest, settling several fundamental open questions in the area.