Our Theory Seminar kicked off in Spring 2018.
Watch this space for upcoming talks, and/or subscribe to our cs-theory-announce mailing list!
Currently on break for the summer, but will return (likely still remote) in Fall 2020.
Abstract: All known efficient algorithms for constraint satisfaction problems are stymied by random instances. For example, no efficient algorithm is known that can q-color a random graph with average degree (1 + ε)q ln q, even though random graphs remain q-colorable for average degree up to (2 − o(1))q ln q. Similar failure to find solutions at relatively low constraint densities is known for random CSPs such as random k-SAT and other hypergraph-based problems. The constraint density where algorithms break down for each CSP is known as the “algorithmic barrier” and provably corresponds to a phase transition in the geometry of the space of solutions [Achlioptas and Coja-Oghlan 2008]. In this talk, I will discuss my recent paper which aims to shed light on the following question: Can algorithmic success up to the barrier for each CSP be ascribed to some simple deterministic property of the inputs? In particular, I will focus on the problem of coloring graphs and hypergraphs.
Abstract: Quantum information science is an interdisciplinary field closely related to computer science and physics. There are algorithmic tools from this field with computational applications in classical computer science and quantum physics. In this talk, I will introduce my work on developing these tools for solving problems in optimization, machine learning, and studying quantum systems. In particular, on the computer science side, I will discuss quantum speedups for some computational geometry problems with applications in machine learning and optimization. I will also describe quantum-inspired classical algorithms for solving matrix-related machine learning problems. On the physics side, I will introduce quantum algorithms for simulating open quantum systems, as well as efficient constructions of pseudo-random quantum operators.
Abstract: A Boolean function f on the n-dimensional hypercube is said to be a k-junta if it depends only on some k coordinates of the input. These functions have been widely studied in the context of PCPs, learning theory and property testing. In particular, a flagship result in property testing is that k-juntas can be tested with Õ(k) queries, i.e., there is an algorithm which, when given black-box access to f, makes only Õ(k) queries and decides between the following two cases:
- f is a k-junta.
- f is at least 0.01-far from every k-junta g in Hamming distance.
Surprisingly, the query complexity is completely independent of the ambient dimension n. In this work, we achieve the same qualitative guarantee for the noise-tolerant version of this problem. In particular, we give a 2^k-query algorithm to distinguish between the following two cases:
- f is 0.48-close to some k-junta.
- f is at least 0.49-far from every k-junta.
The algorithm and its proof of correctness are simple and modular, combining classical tools like "random restrictions" with elementary properties of the noise operator on the cube.
Joint work with Elchanan Mossel and Joe Neeman.
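The definitions above (k-junta, Hamming distance to the nearest k-junta) can be made concrete with a small brute-force sketch. This is only an exponential-in-n baseline for intuition, not the talk's query-efficient tester: for a fixed coordinate set S, the nearest junta on S takes a majority vote over each setting of S.

```python
from itertools import combinations, product

def junta_distance(f, n, k):
    """Hamming distance (fraction of inputs) from f to the nearest k-junta,
    by brute force over all k-subsets of coordinates. f maps n-bit tuples to 0/1."""
    points = list(product((0, 1), repeat=n))
    best = 1.0
    for S in combinations(range(n), k):
        # Group inputs by their restriction to S; majority vote within each group.
        groups = {}
        for x in points:
            groups.setdefault(tuple(x[i] for i in S), []).append(f(x))
        disagreements = 0
        for vals in groups.values():
            ones = sum(vals)
            disagreements += min(ones, len(vals) - ones)
        best = min(best, disagreements / len(points))
    return best

# x0 XOR x1 depends on 2 coordinates: it IS a 2-junta (distance 0).
xor2 = lambda x: x[0] ^ x[1]
# 3-bit parity is as far as possible from every 2-junta.
par3 = lambda x: x[0] ^ x[1] ^ x[2]
print(junta_distance(xor2, 4, 2))  # 0.0
print(junta_distance(par3, 4, 2))  # 0.5
```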
Abstract: In this talk, I will discuss computational complexity theory and some interesting quantum generalizations of some popular complexity classes. In particular, I will discuss two quantum variants of the polynomial hierarchy and present a few results that bound their computational power. No knowledge about complexity theory or quantum theory is needed for this talk. This is joint work with Miklos Santha, Sevag Gharibian, Aarthi Sundaram, and Justin Yirka and the preprint can be found at https://arxiv.org/abs/1805.11139.
Abstract: In this talk, I will discuss several natural quantum problems and, in particular, how the problems change as the quantum resources change. I will show how to take an economics perspective to assign a "shadow price" to each quantum resource. To do this, I will use optimization theory and show that shadow prices are often given "for free" if you know where to look for them. No knowledge about economics, optimization theory, or quantum theory is needed for this talk. This is joint work with Gary Au (University of Saskatchewan).
Abstract: The first demonstration of quantum supremacy in October 2019 was a major achievement of experimental physics. At the same time, it relied on important developments in theoretical computer science. In this talk I will describe my recent work laying the complexity-theoretic foundations for Google/UCSB’s quantum supremacy experiment, providing evidence that their device is exponentially difficult to simulate with a classical computer. This crossroad between complexity theory and quantum physics also offers new insights into both disciplines. For example, I will explain how techniques from quantum complexity theory can be used to settle purely classical problems. Specifically, I will describe a quantum argument which nearly resolves the approximate degree composition conjecture, generalizing nearly 20 years of prior work. In a different direction, I will show that the notion of computational pseudorandomness from complexity-based cryptography has fundamental implications for black hole physics and the theory of quantum gravity.
Abstract: We show how two techniques from statistical physics can be adapted to solve a variant of the notorious Unique Games problem, potentially opening new avenues towards the Unique Games Conjecture. We exhibit efficient algorithms for a natural generalisation of Unique Games based on approximating a suitable partition function via (i) a zero-free region and polynomial interpolation, and (ii) the cluster expansion. We also show that a modest improvement to the parameters for which we give results would refute the Unique Games Conjecture. Based on joint work with M. Coulson, A. Kolla, V. Patel, and G. Regts.
Abstract: We study the hard-core model (independent sets) on bipartite graphs using the cluster expansion from statistical physics. When there is a sufficient imbalance in the degrees or fugacities between the sides (L,R) of the bipartition, we can rewrite the hard-core partition function in terms of deviations from independent sets that are empty on one side of the bipartition and show this expression has a convergent cluster expansion. This has interesting algorithmic and probabilistic consequences. On the algorithmic side, we address an open problem in approximate counting and give a polynomial-time algorithm for approximating the partition function for a large class of bounded-degree bipartite graphs; this includes, among others, the unweighted biregular case where the degrees satisfy d_R ≥ 7 d_L log d_L. Our approximation algorithm is based on truncating the cluster expansion. As a consequence of the method, we also prove that the hard-core model on such graphs exhibits exponential decay of correlations by utilizing connections between the cluster expansion and joint cumulants. Joint work with Will Perkins.
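As a point of reference for the object being approximated, here is a hedged brute-force sketch of the hard-core partition function Z(λ) = Σ_I λ^|I| over independent sets I, on a tiny complete bipartite graph. The talk's contribution (truncating the cluster expansion) is what replaces such exhaustive enumeration:

```python
from itertools import combinations

def hard_core_Z(vertices, edges, lam):
    """Exact hard-core partition function: sum of lam^|I| over independent sets I."""
    Z = 0.0
    vs = list(vertices)
    for r in range(len(vs) + 1):
        for I in combinations(vs, r):
            s = set(I)
            if all(not (u in s and v in s) for u, v in edges):
                Z += lam ** r
    return Z

# Complete bipartite K_{2,2}: every independent set lies entirely in one side.
L, R = [0, 1], [2, 3]
edges = [(u, v) for u in L for v in R]
print(hard_core_Z(L + R, edges, 1.0))  # 7.0 (4 subsets of L + 4 of R, empty set counted once)
```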
Abstract: Maximally recoverable codes are codes designed for distributed storage which combine quick recovery from single node failure and optimal recovery from catastrophic failure. Gopalan et al. [SODA 2017] studied the alphabet size needed for such codes in grid topologies and gave a combinatorial characterization for it. Consider a labeling of the edges of the complete bipartite graph K_{n,n} with labels coming from F_{2^d} that satisfies the following condition: for any simple cycle, the sum of the labels over its edges is nonzero. The minimal d where this is possible controls the alphabet size needed for maximally recoverable codes in n × n grid topologies. Prior to this work, it was known that d is between (log n)^2 and n log n. We improve both bounds and show that d is linear in n. The upper bound is a recursive construction which beats the random construction. The lower bound follows by first relating the problem to the independence number of the Birkhoff polytope graph, and then providing tight bounds for it using the representation theory of the symmetric group.
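Since the additive group of F_{2^d} is bitwise XOR on d-bit strings, the cycle condition can be checked exhaustively for very small n. The following sketch is only an illustrative verifier, not related to the constructions in the paper:

```python
from functools import reduce
from itertools import combinations, permutations

def simple_cycles_Knn(n):
    """Edge lists of simple cycles in K_{n,n}. Cycles alternate sides, so a
    2m-cycle is l1-r1-l2-r2-...-lm-rm-l1 for distinct lefts/rights (m >= 2).
    Cycles may be yielded more than once; that is harmless for checking."""
    for m in range(2, n + 1):
        for lefts in combinations(range(n), m):
            for rights in permutations(range(n), m):
                edges = [(lefts[i], rights[i]) for i in range(m)]
                edges += [(lefts[(i + 1) % m], rights[i]) for i in range(m)]
                yield edges

def valid_labeling(n, label):
    """label[(l, r)] is a d-bit integer standing for an element of F_{2^d};
    addition in F_{2^d} is XOR, so each simple cycle must XOR to nonzero."""
    return all(reduce(lambda acc, e: acc ^ label[e], cyc, 0) != 0
               for cyc in simple_cycles_Knn(n))

# n = 2: the single 4-cycle uses all four edges; one nonzero label (d = 1) suffices.
lab = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 0}
print(valid_labeling(2, lab))                   # True
print(valid_labeling(2, {e: 0 for e in lab}))   # False
```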
(Joint w/ the RCDS seminar)
Abstract: The web of interconnections between today's technology and society is upending many traditional ways of doing things: the internet of things, bitcoin, the sharing economy, and stories of "fake news" spreading on social media are increasingly in the public mind. As such, computer scientists and engineers must be increasingly conscious of the interplay between the technical performance of their systems and the personal objectives of users, customers, and adversaries. I will present recent work on two problem domains: robust incentive design for socially-networked systems, and resilient coordination and optimization in distributed multi-agent engineered systems. In each, by rigorously examining the fundamental tradeoffs associated with optimal designs, I seek to develop a deeper theoretical understanding of tools which will help address today's emerging challenges.
Abstract: In a classical online decision problem, a decision-maker who is trying to maximize her value inspects a sequence of arriving items to learn their values (drawn from known distributions), and decides when to stop the process by taking the current item. The goal is to prove a “prophet inequality”: that she can do approximately as well as a prophet with foreknowledge of all the values. In this work, we investigate this problem when the values are allowed to be correlated. We consider a natural “linear” correlation structure that models many kinds of real-world search problems. A key challenge is that threshold-based algorithms, which are commonly used for prophet inequalities, no longer guarantee good performance. We relate this roadblock to another challenge: “augmenting” values of the arriving items slightly can destroy the performance of many algorithms. We leverage this intuition to prove bounds (matching up to constant factors) that decay gracefully with the amount of correlation of the arriving items. We extend these results to the case of selecting multiple items.
Joint work with Nicole Immorlica and Sahil Singla.
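For contrast with the correlated setting studied in the talk, here is a hedged simulation of the classical single-threshold rule for independent values, with the threshold set to the empirical median of the maximum (in the spirit of Samuel-Cahn); the instance and parameters are illustrative:

```python
import random

def prophet_simulation(dists, trials=200_000, seed=0):
    """Compare a single-threshold stopping rule against the prophet (the max),
    for independent draws. Threshold t = median of the max, estimated empirically."""
    rng = random.Random(seed)
    samples = [[d(rng) for d in dists] for _ in range(trials)]
    maxima = sorted(max(s) for s in samples)
    t = maxima[trials // 2]              # median of the maximum
    def stop(vals):                      # take the first value >= t, else the last one
        for v in vals:
            if v >= t:
                return v
        return vals[-1]
    alg = sum(stop(s) for s in samples) / trials
    opt = sum(maxima) / trials
    return alg, opt

uniforms = [lambda rng: rng.random() for _ in range(5)]
alg, opt = prophet_simulation(uniforms)
print(alg / opt)  # comfortably above the 1/2 guarantee on this instance
```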
Abstract: When you program your own equivalence check C.isSame(x,y), your program’s logic makes more sense: you decrease duplicates, speed up search, and filter your data on what matters. But you can't predict that f(x)==f(y) whenever x and y are judged equal. This is a talk on improving equivalence with identity types, to make sure that when you judge data as equal, your program believes you. The math is real, but it poses practical and complexity challenges. We discuss recent theory, practical tools for the JVM, and what it means for the complexity of black-box computation.
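A Python analogue of the pitfall the abstract alludes to (the talk itself targets the JVM, where the same issue arises with equals/hashCode); the class and names here are hypothetical:

```python
class CaseInsensitive:
    """User-defined equality: strings compare equal ignoring case."""
    def __init__(self, s):
        self.s = s
    def __eq__(self, other):
        return isinstance(other, CaseInsensitive) and self.s.lower() == other.s.lower()
    def __hash__(self):
        return hash(self.s)   # BUG: not a congruence -- equal objects can hash apart

a, b = CaseInsensitive("Hello"), CaseInsensitive("HELLO")
before_fix = len({a, b})      # 2: you judged them equal, but the set does not believe you
CaseInsensitive.__hash__ = lambda self: hash(self.s.lower())  # fix: hash respects ==
after_fix = len({CaseInsensitive("Hello"), CaseInsensitive("HELLO")})  # 1
print(a == b, before_fix, after_fix)  # True 2 1
```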
Abstract: In understanding natural systems over hundreds of years, physicists have developed a wealth of dynamics and viewpoints. Some of these methods, when viewed (and abstracted) through the lens of computation, could lead to new optimization and sampling techniques for problems in machine learning, statistics, and theoretical computer science. I will present some recent examples from my research on such interactions between physics and algorithms, e.g., a Hamiltonian dynamics inspired algorithm for sampling from continuous distributions.
Based on joint works with Oren Mangoubi.
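As a toy illustration of Hamiltonian-dynamics-inspired sampling, here is a minimal, hedged sketch of Hamiltonian Monte Carlo for a one-dimensional standard Gaussian (leapfrog integration plus a Metropolis correction); it is not the algorithm from the speaker's papers:

```python
import math, random

def hmc_gaussian(n_samples=20000, eps=0.2, L=10, seed=1):
    """Sample from N(0,1) (potential U(q) = q^2/2) via leapfrog + Metropolis."""
    rng = random.Random(seed)
    U = lambda q: 0.5 * q * q
    grad_U = lambda q: q
    q, out = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)              # resample momentum
        q0, p0 = q, p
        p -= 0.5 * eps * grad_U(q)           # leapfrog: half momentum step
        for _ in range(L - 1):
            q += eps * p
            p -= eps * grad_U(q)
        q += eps * p
        p -= 0.5 * eps * grad_U(q)           # final half momentum step
        # accept/reject on the Hamiltonian (potential + kinetic) error
        dH = (U(q) + 0.5 * p * p) - (U(q0) + 0.5 * p0 * p0)
        if rng.random() >= math.exp(min(0.0, -dH)):
            q = q0                           # reject: stay at the old point
        out.append(q)
    return out

samples = hmc_gaussian()
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # close to 0 and 1
```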
Abstract: Conjugacy is the natural notion of isomorphism in symbolic dynamics. A major open problem in the field is that of finding an algorithm to determine conjugacy of shifts of finite type (SFTs). In this talk, we consider several related computational problems restricted to k-block codes. We show verifying a proposed k-block conjugacy is in P, finding a k-block conjugacy is GI-hard, reducing the representation size of a SFT via a 1-block conjugacy is NP-complete, and recognizing if a sofic shift is a SFT is in P. This talk will not assume any prior knowledge of symbolic dynamics. Based on joint work with Raf Frongillo.
Abstract: We will discuss the notion of a certified algorithm. Certified algorithms provide worst-case and beyond-worst-case performance guarantees. First, a γ-certified algorithm is also a γ-approximation algorithm – it finds a γ-approximate solution no matter what the input is. Second, it exactly solves γ-stable instances (γ-stable instances model real-life instances). Additionally, certified algorithms have a number of other desirable properties: they solve both maximization and minimization versions of a problem (e.g. Max Cut and Min Uncut), solve weakly stable instances, and solve problems with hard constraints.
We will define certified algorithms, describe their properties, present a framework for designing certified algorithms, provide examples of certified algorithms for Max Cut/Min Uncut, Minimum Multiway Cut, k-medians and k-means.
The talk is based on a joint work with Konstantin Makarychev.
Abstract: We discuss a probabilistic method for a variety of formulations of graph colouring, and show how 'local occupancy' gives a common language for many recent results, and leads to some new applications.
Abstract: Reinforcement learning is an approach to controller synthesis where agents rely on reward signals to choose actions that satisfy the requirements implicit in those signals. Oftentimes non-experts have to come up with the requirements and their translation to rewards under significant time pressure, even though manual translation is time-consuming and error-prone. For safety-critical applications of reinforcement learning, a rigorous design methodology is needed and, in particular, a principled approach to requirement specification and to the translation of objectives into the form required by reinforcement learning algorithms.
Formal logic provides a foundation for the rigorous and unambiguous requirement specification of learning objectives. However, reinforcement learning algorithms require requirements to be expressed as scalar reward signals. We discuss a recent technique, called limit-reachability, that bridges this gap by faithfully translating logic-based requirements into the scalar reward form needed in model-free reinforcement learning. This technique enables the synthesis of controllers that maximize the probability to satisfy given logical requirements using off-the-shelf, model-free reinforcement learning algorithms.
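The simplest instance of turning a logical requirement into scalar reward is a bare reachability objective: reward 1 on reaching a goal state, 0 otherwise. The hedged sketch below trains tabular Q-learning on a toy chain MDP with such a reward; it illustrates the flavor only, not the limit-reachability construction itself:

```python
import random

def q_learning_chain(n=6, episodes=5000, alpha=0.5, gamma=0.95, seed=0):
    """Chain MDP with states 0..n-1; actions 0=left, 1=right; reaching state
    n-1 gives reward 1 and ends the episode (a bare reachability objective)."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n)]
    for _ in range(episodes):
        s = 0
        for _ in range(4 * n):
            if rng.random() < 0.2 or Q[s][0] == Q[s][1]:
                a = rng.randrange(2)                       # explore / break ties
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1          # greedy
            s2 = max(0, s - 1) if a == 0 else min(n - 1, s + 1)
            r, done = (1.0, True) if s2 == n - 1 else (0.0, False)
            target = r if done else gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
            if done:
                break
    return Q

Q = q_learning_chain()
policy = [0 if Q[s][0] > Q[s][1] else 1 for s in range(len(Q))]
print(policy[:-1])  # greedy policy moves right, toward the goal, from every state
```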
Abstract: All correlation measures, classical and quantum, must be monotonic under local operations. In this talk, I’ll characterize monotonic formulas that are linear combinations of the von Neumann entropies associated with the quantum state of a physical system which has n parts. Then I’ll show that these formulas form a polyhedral convex cone, which we call the monotonicity cone, and enumerate its facets. We illustrate its structure and prove that it is equivalent to the cone of monotonic formulas implied by strong subadditivity. We explicitly compute its extremal rays for n up to 5. I’ll also consider the symmetric monotonicity cone, in which the formulas are required to be invariant under subsystem permutations. This cone can be described fully for all n. In addition, we show that these results hold even when states and operations are constrained to be classical.
Abstract: In network routing, users often trade off different objectives in selecting their best route. An example is transportation networks, where due to uncertainty of travel times, drivers may trade off the average travel time versus the variance of a route. Or they might trade off time and cost, such as the cost paid in tolls.
We wish to understand the effect of two conflicting criteria in route selection, by studying the resulting traffic assignment (equilibrium) in the network. We investigate two perspectives of this topic: (1) How does the equilibrium cost of a risk-averse population compare to that of a risk-neutral population? (i.e., how much longer do we spend in traffic due to being risk-averse) (2) How does the equilibrium cost of a heterogeneous population compare to that of a comparable homogeneous user population?
We provide characterizations to both questions above.
Based on joint work with Richard Cole, Thanasis Lianeas and Nicolas Stier-Moses.
At the end I will mention current work of my research group on algorithms and mechanism design for power systems.
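The kind of equilibrium-cost comparison raised in the questions above can be illustrated with the textbook Pigou example (a standard illustration of equilibrium versus optimal routing cost, not the talk's risk-averse model):

```python
def pigou_costs(steps=1000):
    """Two parallel links for one unit of traffic: cost 1 (fixed) vs cost x,
    where x is the fraction using the variable link. Compare equilibrium and optimum."""
    # Equilibrium: the variable link is never worse (x <= 1), so all traffic
    # takes it and everyone pays 1.
    eq_cost = 1.0
    # Social optimum: route x on the variable link, 1-x on the fixed link;
    # total cost C(x) = x*x + (1-x), minimized over a grid (true minimum at x = 1/2).
    opt_cost = min((x / steps) ** 2 + (1 - x / steps) for x in range(steps + 1))
    return eq_cost, opt_cost

eq, opt = pigou_costs()
print(eq, opt)  # 1.0 0.75 -> the equilibrium pays a 4/3 factor over the optimum
```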
This talk aims to describe my research, which seeks to develop the tools needed to characterize the power of quantum computation, both in the very near-term and the indefinite future. These tools will provide the foundation for building the next generation of useful quantum algorithms, but will also help guide the course of quantum experiment.
The talk will be accessible to a general computer science audience.
Some relevant papers are as follows.
(1) Generalized Wong sequences and their applications to Edmonds’ problems, by G. Ivanyos, M. Karpinski, Y. Qiao, M. Santha. arXiv:1307.6429.
(2) Constructive non-commutative rank is in deterministic polynomial time, by G. Ivanyos, Y. Qiao, K. V. Subrahmanyam. arXiv:1512.03531.
(3) Operator scaling: theory and applications, by A. Garg, L. Gurvits, R. Oliveira, A. Wigderson. arXiv:1511.03730.
(4) Linear algebraic analogues of the graph isomorphism problem and the Erdős-Rényi model, by Y. Li, Y. Qiao. arXiv:1708.04501.
(5) From independent sets and vertex colorings to isotropic spaces and isotropic decompositions, by X. Bei, S. Chen, J. Guan, Y. Qiao, X. Sun. In preparation; paper available soon.
Our starting point is the Aldous-Broder algorithm, which samples a random spanning tree using a random walk. As in prior work, we use fast Laplacian linear system solvers to shortcut the random walk from a vertex v to the boundary of a set of vertices assigned to v called a "shortcutter." We depart from prior work by introducing a new way of employing Laplacian solvers to shortcut the walk. To bound the amount of shortcutting work, we show that most random walk steps occur far away from an unvisited vertex. We apply this observation by charging uses of a shortcutter S to random walk steps in the Schur complement obtained by eliminating all vertices in S that are not assigned to it.
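For reference, the unmodified Aldous-Broder sampler (without the shortcutting machinery described above) is short enough to state in full; the sketch below is a standard textbook version:

```python
import random

def aldous_broder(adj, seed=0):
    """Sample a uniformly random spanning tree: run a random walk until every
    vertex is visited; the first-entry edge of each vertex forms the tree."""
    rng = random.Random(seed)
    start = next(iter(adj))
    visited = {start}
    tree = []
    u = start
    while len(visited) < len(adj):
        v = rng.choice(adj[u])
        if v not in visited:
            visited.add(v)
            tree.append((u, v))   # edge used on first entry to v
        u = v
    return tree

# 4-cycle: vertices 0-1-2-3-0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
tree = aldous_broder(adj)
print(len(tree))  # 3 edges: a spanning tree on 4 vertices
```

The shortcutting described in the abstract attacks exactly the bottleneck visible here: the plain walk wastes most of its steps revisiting already-covered regions.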
In this talk, I introduce parametrized complexity: a branch of computational complexity that classifies problems based on their hardness with respect to several parameters of the input in addition to the input size; hence it gives a much more fine-grained classification than the traditional worst-case analysis based only on the input size. Then, I present several parametrized algorithms for computing low-cost maps between geometric objects. The running times of these algorithms are parametrized with respect to topological and geometric parameters of the input objects. For example, when the input is a graph, a topological parameter can be its treewidth, which measures to what extent the graph looks like a tree, and a geometric parameter can be the intrinsic dimensionality of the metric space induced by shortest paths in the graph.
Such algorithms work so well that, in certain applications unrelated to network analysis, such as image segmentation, it is useful to associate a network to the data, and then apply spectral clustering to the network. In addition to its application to clustering, spectral embeddings are a valuable tool for dimension-reduction and data visualization.
The performance of spectral clustering algorithms has been justified rigorously when applied to networks coming from certain probabilistic generative models.
A more recent development, which is the focus of this lecture, is a worst-case analysis of spectral clustering, showing that, for every graph that exhibits a certain cluster structure, such structure can be found by geometric algorithms applied to a spectral embedding.
Such results generalize Cheeger’s inequality for graphs (a classical result in spectral graph theory), and they have additional applications in computational complexity theory and in pure mathematics.
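A bare-bones, hedged illustration of the spectral approach: approximate the second Laplacian eigenvector (the Fiedler vector) by power iteration and split vertices by sign. The graph and all parameters are illustrative, stdlib only:

```python
def fiedler_split(adj, iters=1000):
    """Approximate the second Laplacian eigenvector by power iteration on
    cI - L (c = 2 * max degree), projecting out the all-ones vector, then
    split vertices by sign -- a bare-bones spectral partition."""
    nodes = sorted(adj)
    n = len(nodes)
    deg = {u: len(adj[u]) for u in nodes}
    c = 2 * max(deg.values())
    v = [i - (n - 1) / 2 for i in range(n)]      # deterministic start, mean 0
    for _ in range(iters):
        # w = (cI - L) v, where (Lv)_u = deg(u) v_u - sum of v over neighbors of u
        w = [c * v[i] - (deg[u] * v[i] - sum(v[nodes.index(x)] for x in adj[u]))
             for i, u in enumerate(nodes)]
        m = sum(w) / n                           # project out the constant vector
        w = [x - m for x in w]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return ({u for i, u in enumerate(nodes) if v[i] < 0},
            {u for i, u in enumerate(nodes) if v[i] >= 0})

# Two triangles joined by a single edge (2-3): a natural two-cluster graph.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
A, B = fiedler_split(adj)
print(sorted(A), sorted(B))  # the two triangles
```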
I will first describe negative results for private data analysis via a connection to cryptographic objects called fingerprinting codes. These results show that an (asymptotically) optimal way to solve natural high-dimensional tasks is to decompose them into many simpler tasks. In the second part of the talk, I will discuss concentrated differential privacy, a framework which enables more accurate analyses by precisely capturing how simpler tasks compose.
In this talk, we overview the algebraic structure of perfect matchings from the viewpoint of representation theory. We give perfect matching analogues of some familiar notions and objects in the field of Boolean functions and discuss a few fundamental differences that arise in the non-Abelian setting. The talk will conclude with a summary of some results in extremal combinatorics, optimization, and computational complexity that have been obtained from this algebraic point of view.
In this talk, we will take a road trip together through a small part of the landscape. We will start with representations of planar curves and their deformations, together with their appearance and many uses in various disciplines of math and CS. Then we focus on a discrete representation of curve deformations called homotopy moves. We will sketch the ideas behind a few recent results on untangling planar curves using homotopy moves, its generalization to surfaces, and the implications for solving planar graph optimization problems using electrical transformations.
There are no assumptions on background knowledge in topology. Open questions (not restricted to planar curves) will be provided during the talk.