Our Theory Seminar kicked off in Spring 2018. 

For Fall 2019, the Theory Seminar typically meets Wednesdays 3-4pm in ECCE 1B16. Watch this space for upcoming talks, and/or subscribe to our cs-theory-announce mailing list!

Upcoming talks 

Wed Dec 11: Sankeerth Rao (UC San Diego), "The independence number of the Birkhoff polytope graph, and applications to maximally recoverable codes", ECCE 1B16, 3-4pm

Abstract: Maximally recoverable codes are codes designed for distributed storage which combine quick recovery from single node failure and optimal recovery from catastrophic failure. Gopalan et al. [SODA 2017] studied the alphabet size needed for such codes in grid topologies and gave a combinatorial characterization for it. Consider a labeling of the edges of the complete bipartite graph K_{n,n} with labels coming from F_2^d that satisfies the following condition: for any simple cycle, the sum of the labels over its edges is nonzero. The minimal d where this is possible controls the alphabet size needed for maximally recoverable codes in n x n grid topologies. Prior to this work, it was known that d lies between (log n)^2 and n log n. We improve both bounds and show that d is linear in n. The upper bound is a recursive construction which beats the random construction. The lower bound follows by first relating the problem to the independence number of the Birkhoff polytope graph, and then providing tight bounds for it using the representation theory of the symmetric group.
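
To make the labeling condition concrete, here is a brute-force sketch for tiny n (my illustration, not the paper's construction), treating labels as d-bit integers with XOR playing the role of addition in F_2^d:

```python
from itertools import combinations, permutations
import random

def has_zero_cycle(n, label):
    """label[(i, j)]: the F_2^d label of edge (i, j) of K_{n,n}, encoded as a
    d-bit integer. Returns True iff some simple cycle's labels XOR to zero."""
    for k in range(2, n + 1):
        for L in combinations(range(n), k):
            for R in combinations(range(n), k):
                for Lp in permutations(L[1:]):        # fix L[0] to kill rotations
                    lefts = (L[0],) + Lp
                    for rights in permutations(R):
                        s = 0
                        for t in range(k):            # edges l_t-r_t and l_{t+1}-r_t
                            s ^= label[(lefts[t], rights[t])]
                            s ^= label[(lefts[(t + 1) % k], rights[t])]
                        if s == 0:
                            return True
    return False

# random search for a valid labeling of K_{3,3} with d = 3 (labels in 0..7)
label = {(i, j): random.randrange(8) for i in range(3) for j in range(3)}
print("valid" if not has_zero_cycle(3, label) else "has a zero-sum cycle")
```

For tiny n one can search random labelings with this check to probe the minimal feasible d; the paper's recursive construction and representation-theoretic lower bound handle the general case.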

Past talks

(Joint w/ the RCDS seminar)

Abstract: The web of interconnections between today's technology and society is upending many traditional ways of doing things: the internet of things, bitcoin, the sharing economy, and stories of "fake news" spreading on social media are increasingly in the public mind. As such, computer scientists and engineers must be increasingly conscious of the interplay between the technical performance of their systems and the personal objectives of users, customers, and adversaries. I will present recent work on two problem domains: robust incentive design for socially-networked systems, and resilient coordination and optimization in distributed multi-agent engineered systems. In each, by rigorously examining the fundamental tradeoffs associated with optimal designs, I seek to develop a deeper theoretical understanding of tools which will help address today's emerging challenges.

Abstract: In a classical online decision problem, a decision-maker who is trying to maximize her value inspects a sequence of arriving items to learn their values (drawn from known distributions), and decides when to stop the process by taking the current item. The goal is to prove a “prophet inequality”: that she can do approximately as well as a prophet with foreknowledge of all the values. In this work, we investigate this problem when the values are allowed to be correlated. We consider a natural “linear” correlation structure that models many kinds of real-world search problems. A key challenge is that threshold-based algorithms, which are commonly used for prophet inequalities, no longer guarantee good performance. We relate this roadblock to another challenge: “augmenting” values of the arriving items slightly can destroy the performance of many algorithms. We leverage this intuition to prove bounds (matching up to constant factors) that decay gracefully with the amount of correlation of the arriving items. We extend these results to the case of selecting multiple items.

Joint work with Nicole Immorlica and Sahil Singla.
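
For intuition, here is a small simulation sketch of the classical threshold rule in the independent-values setting where it does work (sampler names are illustrative): set a threshold the maximum exceeds with probability 1/2 and take the first value above it.

```python
import random

def threshold_rule(dists, trials=100_000):
    """dists: zero-argument samplers for the (independent) item values.
    Returns (algorithm's average value, prophet's average value)."""
    # threshold T with P(max >= T) ~ 1/2, estimated from samples
    maxes = sorted(max(d() for d in dists) for _ in range(trials))
    T = maxes[trials // 2]
    alg = proph = 0.0
    for _ in range(trials):
        xs = [d() for d in dists]
        proph += max(xs)
        alg += next((x for x in xs if x >= T), 0.0)  # stop at first value >= T
    return alg / trials, proph / trials

# e.g. five i.i.d. uniform values: the ratio stays above 1/2
print(threshold_rule([random.random] * 5))
```

With independent values this rule guarantees half the prophet's value (a classical result of Samuel-Cahn); the abstract's point is that such fixed-threshold rules break down under correlation.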

Abstract: When you program your own x==y, x.equals(y), or C.isSame(x,y), your program's logic makes more sense: you decrease duplicates, speed up search, and filter your data on what matters. But you can't guarantee that f(x)==f(y). This is a talk on improving equivalence with identity types, to make sure that when you judge data equal, your program believes you. The math is real, but it comes with practical and complexity challenges. We discuss recent theory, practical tools for the JVM, and what it means for the complexity of black-box computation.
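
The pitfall is easy to reproduce in any language; here is a hypothetical Python analogue (the talk's practical tools target the JVM): a type whose equality judges two representations equal while an innocent observer function still tells them apart, breaking f(x)==f(y).

```python
from math import gcd

class Frac:
    """Fractions with equality by cross-multiplication; the stored
    representation is deliberately not normalized."""
    def __init__(self, num, den):
        self.num, self.den = num, den
    def __eq__(self, other):
        return self.num * other.den == other.num * self.den
    def __hash__(self):
        # hash must agree with ==, so hash the normalized form
        g = gcd(self.num, self.den)
        return hash((self.num // g, self.den // g))

x, y = Frac(1, 2), Frac(2, 4)
assert x == y              # the program judges them equal...
assert x.num != y.num      # ...but the observer f(x) = x.num disagrees
```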

Abstract: In understanding natural systems over hundreds of years, physicists have developed a wealth of dynamics and viewpoints. Some of these methods, when viewed (and abstracted) through the lens of computation, could lead to new optimization and sampling techniques for problems in machine learning, statistics, and theoretical computer science. I will present some recent examples from my research on such interactions between physics and algorithms, e.g., a Hamiltonian dynamics inspired algorithm for sampling from continuous distributions.

Based on joint works with Oren Mangoubi.
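
One standard example of such an interaction is Hamiltonian Monte Carlo; here is a minimal leapfrog sketch (a textbook version, not the specific algorithm from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_step(x, logp, grad_logp, eps=0.1, steps=20):
    """One HMC step targeting the density proportional to exp(logp(x))."""
    p = rng.standard_normal(x.shape)                   # resample Gaussian momentum
    xn, pn = x.copy(), p + 0.5 * eps * grad_logp(x)    # leapfrog: half kick
    for i in range(steps):
        xn = xn + eps * pn                             # full drift
        g = grad_logp(xn)
        pn = pn + (eps if i < steps - 1 else 0.5 * eps) * g  # kicks; last is half
    # Metropolis accept/reject corrects discretization error (H = -logp + |p|^2/2)
    dh = (-logp(x) + 0.5 * p @ p) - (-logp(xn) + 0.5 * pn @ pn)
    return xn if np.log(rng.random()) < dh else x

# usage: sample a 2-D standard Gaussian
x = np.zeros(2)
for _ in range(1000):
    x = hmc_step(x, logp=lambda z: -0.5 * z @ z, grad_logp=lambda z: -z)
```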

Abstract: Conjugacy is the natural notion of isomorphism in symbolic dynamics. A major open problem in the field is that of finding an algorithm to determine conjugacy of shifts of finite type (SFTs). In this talk, we consider several related computational problems restricted to k-block codes. We show that verifying a proposed k-block conjugacy is in P, finding a k-block conjugacy is GI-hard, reducing the representation size of an SFT via a 1-block conjugacy is NP-complete, and recognizing whether a sofic shift is an SFT is in P. This talk will not assume any prior knowledge of symbolic dynamics. Based on joint work with Raf Frongillo.

Abstract: We will discuss the notion of a certified algorithm. Certified algorithms provide worst-case and beyond-worst-case performance guarantees. First, a γ-certified algorithm is also a γ-approximation algorithm – it finds a γ-approximate solution no matter what the input is. Second, it exactly solves γ-stable instances (γ-stability is a model of real-life instances). Additionally, certified algorithms have a number of other desirable properties: they solve both maximization and minimization versions of a problem (e.g. Max Cut and Min Uncut), solve weakly stable instances, and solve problems with hard constraints.

We will define certified algorithms, describe their properties, present a framework for designing certified algorithms, and provide examples of certified algorithms for Max Cut/Min Uncut, Minimum Multiway Cut, k-medians, and k-means.

The talk is based on joint work with Konstantin Makarychev.

Abstract: We discuss a probabilistic method for a variety of formulations of graph colouring, and show how 'local occupancy' gives a common language for many recent results, and leads to some new applications.

Abstract: Reinforcement learning is an approach to controller synthesis where agents rely on reward signals to choose actions that satisfy the requirements implicit in those signals. Oftentimes non-experts must come up with the requirements and their translation into rewards under significant time pressure, even though manual translation is time-consuming and error-prone. For safety-critical applications of reinforcement learning, a rigorous design methodology is needed and, in particular, a principled approach to requirement specification and to the translation of objectives into the form required by reinforcement learning algorithms.

Formal logic provides a foundation for the rigorous and unambiguous specification of learning objectives. However, reinforcement learning algorithms require that requirements be expressed as scalar reward signals. We discuss a recent technique, called limit-reachability, that bridges this gap by faithfully translating logic-based requirements into the scalar reward form needed in model-free reinforcement learning. This technique enables the synthesis of controllers that maximize the probability of satisfying given logical requirements using off-the-shelf, model-free reinforcement learning algorithms.
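
As a toy illustration of the reward-translation problem (a simplification I constructed, not the limit-reachability construction itself), here is tabular Q-learning on a small chain MDP where the logical requirement "eventually reach the goal" is translated into a reward of 1 granted exactly when the goal is reached:

```python
import random

# Toy MDP: states 0..4 on a line; action 0 = left, 1 = right; moves succeed
# with prob 0.9 and slip the other way otherwise. State 4 is the goal.
def step(s, a):
    move = (1 if a == 1 else -1) * (1 if random.random() < 0.9 else -1)
    return max(0, min(4, s + move))

Q = [[0.0, 0.0] for _ in range(5)]
for _ in range(5000):
    s = 0
    while s != 4:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < 0.1 else max((0, 1), key=lambda b: Q[s][b])
        s2 = step(s, a)
        r = 1.0 if s2 == 4 else 0.0                   # reward iff the spec is satisfied
        target = r + (0.0 if s2 == 4 else 0.99 * max(Q[s2]))
        Q[s][a] += 0.1 * (target - Q[s][a])
        s = s2
```

With a discount factor close to 1, the learned value of a state approximates the probability of eventually satisfying the requirement from that state, which is in the spirit of the limit-reachability reduction described in the talk.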

Abstract: All correlation measures, classical and quantum, must be monotonic under local operations. In this talk, I’ll characterize monotonic formulas that are linear combinations of the von Neumann entropies associated with the quantum state of a physical system which has n parts. Then I’ll show that these formulas form a polyhedral convex cone, which we call the monotonicity cone, and enumerate its facets. We illustrate its structure and prove that it is equivalent to the cone of monotonic formulas implied by strong subadditivity. We explicitly compute its extremal rays for n up to 5. I’ll also consider the symmetric monotonicity cone, in which the formulas are required to be invariant under subsystem permutations. This cone can be described fully for all n. In addition, we show that these results hold even when states and operations are constrained to be classical.
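
For orientation, the two notions in play can be stated as follows (standard facts, not the talk's new results): a formula f of the entropies is monotonic if it cannot increase under local operations, and strong subadditivity is the best-known constraint of this kind; mutual information is a familiar example of a monotone.

```latex
% Monotonicity under local operations (quantum channels \Lambda_i):
%   f(\rho) \;\ge\; f\bigl((\Lambda_1 \otimes \cdots \otimes \Lambda_n)(\rho)\bigr)
% Strong subadditivity (SSA) of the von Neumann entropy S:
S(\rho_{AB}) + S(\rho_{BC}) \;\ge\; S(\rho_{B}) + S(\rho_{ABC})
% A monotone linear combination: the mutual information
I(A{:}B) \;=\; S(\rho_A) + S(\rho_B) - S(\rho_{AB})
```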

Abstract: In network routing, users often trade off different objectives in selecting their best route. An example is transportation networks, where, due to uncertainty of travel times, drivers may trade off the average travel time of a route against its variance. Or they might trade off time and cost, such as the cost paid in tolls.

We wish to understand the effect of two conflicting criteria in route selection, by studying the resulting traffic assignment (equilibrium) in the network.  We investigate two perspectives of this topic: (1) How does the equilibrium cost of a risk-averse population compare to that of a risk-neutral population?  (i.e., how much longer do we spend in traffic due to being risk-averse) (2) How does the equilibrium cost of a heterogeneous population compare to that of a comparable homogeneous user population?

We provide characterizations for both of the questions above.

Based on joint work with Richard Cole, Thanasis Lianeas and Nicolas Stier-Moses.

At the end I will mention current work of my research group on algorithms and mechanism design for power systems.
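
To fix ideas on the first question above, here is a hypothetical two-route toy model (my illustration, not the talk's network model): route 1 has mean travel time equal to its load x and a fixed variance v, route 2 has constant travel time 1, and risk-averse users perceive route 1's cost as mean plus gamma times variance. At equilibrium the perceived costs equalize.

```python
def equilibrium_split(gamma, v):
    """Fraction of traffic on the variable route at Wardrop equilibrium:
    indifference requires x + gamma * v = 1, clipped to [0, 1]."""
    return max(0.0, min(1.0, 1.0 - gamma * v))

for gamma in (0.0, 0.5, 1.0):                 # risk-neutral ... more risk-averse
    x = equilibrium_split(gamma, v=0.5)
    mean_time = x * x + (1 - x) * 1.0         # average mean time actually experienced
    print(f"gamma={gamma}: split={x:.2f}, mean travel time={mean_time:.2f}")
```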

Abstract: Quantum computation is rapidly transitioning from science fiction to reality.  Recent experimental progress from efforts in academia and industry has taken us into a new era in which scalable quantum computers are on the horizon and imperfect quantum systems on the scale of 50-100 qubits will be implemented in the very near term.  As this era progresses, the gap between theory and experiment is shrinking and insights from theoretical computer science are increasingly becoming crucial to understanding these quantum devices.

This talk aims to describe my research, which seeks to develop the tools needed to characterize the power of quantum computation, both in the very near-term and the indefinite future. These tools will provide the foundation for building the next generation of useful quantum algorithms, but will also help guide the course of quantum experiment.

The talk will be accessible to a general computer science audience.

Abstract: Automated decisions and policies informed by machine learning and the pursuit of efficiency permeate today's social life.  I will discuss three quite diverse manifestations of this reality, and certain challenging computational problems resulting from them: Classifying selfish agents, for example for college admissions, may lead to game-playing, inefficiency, and unfairness; when the input of machine learning algorithms is provided by profit-making producers of data, novel forms of computational mechanism design are needed; finally, optimizing efficiency in congestion games, for example through tolls in congested routes, can be proved under assumptions to necessarily increase wealth inequality.  We ponder whether these are not symptoms and parts of an emerging new social order.

Abstract: In the past few years, several research projects of mine (joint with collaborators) suggest that it is fruitful to view linear spaces of matrices as a linear algebraic analogue of graphs. In this talk I will propose such an analogue, and explain why such an analogue may make sense. I will then list several correspondences between structures on graphs and structures on matrix spaces. These include perfect matchings vs full-rank matrices, shrunk subsets vs shrunk spaces, independent sets vs isotropic spaces, random graphs vs random matrix spaces, and isomorphism concepts. These correspondences play an important role in the recent progress on non-commutative rational identity testing, form one bottleneck of putting graph isomorphism in P, and suggest a possible route for post-quantum cryptography. 
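
The first correspondence, perfect matchings vs full-rank matrices, is classical (Edmonds' matrix, with Lovász's randomized test) and easy to demonstrate; a minimal sketch over a random prime field:

```python
import random

P = 2_147_483_647  # a large prime; we work over F_P

def det_mod_p(M):
    """Determinant via Gaussian elimination over F_P."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det                       # row swap flips the sign
        det = det * M[c][c] % P
        inv = pow(M[c][c], P - 2, P)
        for r in range(c + 1, n):
            f = M[r][c] * inv % P
            for k in range(c, n):
                M[r][k] = (M[r][k] - f * M[c][k]) % P
    return det % P

def probably_has_perfect_matching(n, edges):
    """Edmonds' matrix with random entries: a nonzero determinant certifies a
    perfect matching; a zero answer errs with probability <= n/P (Schwartz-Zippel)."""
    M = [[0] * n for _ in range(n)]
    for i, j in edges:                       # edge (left i, right j)
        M[i][j] = random.randrange(1, P)
    return det_mod_p(M) != 0

print(probably_has_perfect_matching(2, [(0, 0), (1, 1)]))   # True
print(probably_has_perfect_matching(2, [(0, 0), (1, 0)]))   # False
```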

Some relevant papers are as follows. 

(1) Generalized Wong sequences and their applications to Edmonds’ problems, by G. Ivanyos, M. Karpinski, Y. Qiao, M. Santha. arXiv:1307.6429.
(2) Constructive non-commutative rank is in deterministic polynomial time, by G. Ivanyos, Y. Qiao, K. V. Subrahmanyam. arXiv:1512.03531.
(3) Operator scaling: theory and applications, by A. Garg, L. Gurvits, R. Oliveira, A. Wigderson. arXiv:1511.03730.
(4) Linear algebraic analogues of the graph isomorphism problem and the Erdős-Rényi model, by Y. Li, Y. Qiao. arXiv:1708.04501. 
(5) From independent sets and vertex colorings to isotropic spaces and isotropic decompositions, by X. Bei, S. Chen, J. Guan, Y. Qiao, X. Sun. In preparation; paper available soon.

Abstract: In the hypergraph k-cut problem, the input is a hypergraph along with a constant k and the goal is to find a smallest subset of hyperedges whose removal ensures that the remaining hypergraph has at least k connected components. The graph k-cut problem is solvable in polynomial-time (Goldschmidt and Hochbaum, 1994) while the complexity of the hypergraph k-cut problem is open. In this talk, I will present a randomized polynomial-time algorithm to solve the hypergraph k-cut problem. Based on joint work with Chao Xu and Xilin Yu.
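
As background for why randomization is a natural tool here, the classical contraction algorithm for graph min-cut (Karger; this is not the talk's hypergraph k-cut algorithm) fits in a few lines: contracting edges in uniformly random order until two components remain returns a minimum cut with probability at least 1/C(n,2) per run.

```python
import random

def contract_once(n, edges):
    """One run: contract edges in random order until 2 components remain."""
    parent = list(range(n))
    def find(x):                              # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    comps = n
    for u, v in random.sample(edges, len(edges)):  # random order = random contractions
        if comps == 2:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

def karger_min_cut(n, edges, trials=200):
    return min(contract_once(n, edges) for _ in range(trials))
```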

Abstract: In the near future, there will likely be special-purpose quantum computers with 50-70 high-quality qubits and controllable nearest-neighbor couplings.  In this talk, I'll discuss general theoretical foundations for how to use such devices to demonstrate "quantum supremacy": that is, a clear quantum speedup for *some* task, motivated by the goal of overturning the Extended Church-Turing Thesis (which says that all physical systems can be efficiently simulated by classical computers) as confidently as possible.  This part of the talk is based on joint work with Lijie Chen, https://arxiv.org/abs/1612.05903.  Then, in a second part of the talk, I'll discuss new, not-yet-published work on how these experiments could be used to generate cryptographically certified random bits.

Abstract: We give an m^(1+o(1)) β^(o(1))-time algorithm for generating uniformly random spanning trees in weighted graphs with max-to-min weight ratio β. In the process, we illustrate how fundamental tradeoffs in graph partitioning can be overcome by eliminating vertices from a graph using Schur complements of the associated Laplacian matrix.

Our starting point is the Aldous-Broder algorithm, which samples a random spanning tree using a random walk. As in prior work, we use fast Laplacian linear system solvers to shortcut the random walk from a vertex v to the boundary of a set of vertices assigned to v called a "shortcutter." We depart from prior work by introducing a new way of employing Laplacian solvers to shortcut the walk. To bound the amount of shortcutting work, we show that most random walk steps occur far away from an unvisited vertex. We apply this observation by charging uses of a shortcutter S to random walk steps in the Schur complement obtained by eliminating all vertices in S that are not assigned to it.
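
For reference, the unweighted Aldous-Broder algorithm is only a few lines (a plain sketch without the paper's shortcutting machinery; in weighted graphs the walk picks edges with probability proportional to weight):

```python
import random

def aldous_broder(adj):
    """Sample a uniformly random spanning tree of a connected graph.
    adj: dict mapping each vertex to a list of its neighbours."""
    v = random.choice(list(adj))
    visited, tree = {v}, []
    while len(visited) < len(adj):
        u = random.choice(adj[v])        # one step of the random walk
        if u not in visited:             # first-entry edges form the tree
            visited.add(u)
            tree.append((v, u))
        v = u
    return tree

# usage: a 4-cycle
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(aldous_broder(adj))
```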

Abstract: The Ring-Learning-With-Errors problem is a candidate quantum-safe hard ideal-lattice problem upon which to base future post-quantum cryptography.  It has already been deployed by the likes of Google, but is it really hard?  I'll explain the problem and how it can be used for cryptography, its relationship to other lattice problems, and a number theorist's perspective on its difficulty.
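
As a toy illustration of the problem (parameters are illustrative only; deployed schemes use far larger dimension and modulus, and carefully chosen error distributions), a Ring-LWE sample pairs a uniform ring element a with b = a*s + e:

```python
import random

N, Q = 8, 257        # toy ring R_q = Z_q[x]/(x^N + 1)

def mul(a, b):
    """Multiply in Z_q[x]/(x^N + 1) (negacyclic convolution)."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                c[k] = (c[k] + ai * bj) % Q
            else:
                c[k - N] = (c[k - N] - ai * bj) % Q   # x^N = -1
    return c

def small():          # "small" polynomial for secrets and errors
    return [random.randint(-1, 1) for _ in range(N)]

s = small()                                        # secret
a = [random.randrange(Q) for _ in range(N)]        # uniform public element
e = small()                                        # error
b = [(x + y) % Q for x, y in zip(mul(a, s), e)]    # b = a*s + e
# Ring-LWE assumption: (a, b) is hard to distinguish from a uniform pair.
```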

Abstract: The general mathematical problem of computing low-cost maps between two metric spaces is prevalent in many applications, including registration in medical image processing, function detection in protein modeling, reconstructing evolutionary trees in phylogenomics, finding recurrent patterns in data analysis, and finding simple representations of data in visualization. Many instances of the general problem are provably hard, so we have a dichotomy. On one hand, we have general algorithms with theoretical worst-case guarantees that are prohibitively slow in practice, and on the other we have faster heuristics engineered for specific applications, but lacking guarantees. In dichotomy is opportunity – I use parameterized complexity to create a finer understanding of the complexity of computing low-cost maps between metric spaces based on their geometric and topological properties. I specifically target new algorithms with better performance guarantees, in particular, for cases of practical interest.

In this talk, I introduce parameterized complexity: a branch of computational complexity that classifies problems based on their hardness with respect to several parameters of the input in addition to the input size; hence it gives a much more fine-grained classification compared to traditional worst-case analysis, which is based only on the input size.  Then, I present several parameterized algorithms for computing low-cost maps between geometric objects.  The running times of these algorithms are parameterized by topological and geometric parameters of the input objects.  For example, when the input is a graph, a topological parameter can be its treewidth, which measures to what extent the graph looks like a tree, and a geometric parameter can be the intrinsic dimensionality of the metric space induced by shortest paths in the graph.

Abstract: One useful application of character theory is in studying mixing times of random walks on groups. The notion of a supercharacter theory encourages a further refinement of techniques in this application; in particular, it gives a spectrum of options that allow tailoring the theory to meet the needs of the underlying walk. This talk focuses on induced random walks on combinatorial objects that arise from set partitions on groups.  We explore what properties of set partitions give desirable behaviors and how to use resulting representation theories to compute eigenvalues and eigenvectors for the walk. This project is joint with a multitude of REU students.

Abstract: Spectral clustering algorithms find clusters in a given network by exploiting properties of the eigenvectors of matrices associated with the network. As a first step, one computes a spectral embedding, that is, a mapping of nodes to points in a low-dimensional real space; then one uses geometric clustering algorithms such as k-means to cluster the points corresponding to the nodes.
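
The two-step pipeline just described is short in code. Here is a minimal sketch (the library choices and the normalized-Laplacian variant are mine, assuming numpy, scipy, and scikit-learn):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(A, k):
    """A: symmetric adjacency matrix (numpy array); k: number of clusters."""
    d = A.sum(axis=1)
    # normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - Dinv @ A @ Dinv
    # spectral embedding: eigenvectors of the k smallest eigenvalues
    _, U = eigh(L, subset_by_index=[0, k - 1])
    # normalize the embedded points, then cluster them with k-means
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```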

Such algorithms work so well that, in certain applications unrelated to network analysis, such as image segmentation, it is useful to associate a network to the data and then apply spectral clustering to the network. In addition to their application in clustering, spectral embeddings are a valuable tool for dimension reduction and data visualization.

The performance of spectral clustering algorithms has been justified rigorously when applied to networks coming from certain probabilistic generative models.

A more recent development, which is the focus of this lecture, is a worst-case analysis of spectral clustering, showing that, for every graph that exhibits a certain cluster structure, such structure can be found by geometric algorithms applied to a spectral embedding.

Such results generalize Cheeger’s inequality for graphs (a classical result in spectral graph theory), and they have additional applications in computational complexity theory and in pure mathematics.

Abstract: Differential privacy offers a mathematical framework for balancing two goals: obtaining useful information about sensitive data, and protecting individual-level privacy. Discovering the limitations of differential privacy yields insights as to what analyses are incompatible with privacy and why. These insights further aid the quest to discover optimal privacy-preserving algorithms. In this talk, I will give examples of how both follow from new understandings of the structure of differential privacy.

I will first describe negative results for private data analysis via a connection to cryptographic objects called fingerprinting codes. These results show that an (asymptotically) optimal way to solve natural high-dimensional tasks is to decompose them into many simpler tasks. In the second part of the talk, I will discuss concentrated differential privacy, a framework which enables more accurate analyses by precisely capturing how simpler tasks compose.
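
For readers new to the area, the most basic differentially private building block, the Laplace mechanism, shows the privacy/utility balance concretely (a textbook example, not part of the talk's results):

```python
import numpy as np

rng = np.random.default_rng()

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Release true_answer with epsilon-differential privacy, where sensitivity
    bounds how much one person's data can change the answer."""
    return true_answer + rng.laplace(scale=sensitivity / epsilon)

# e.g. a counting query: one person changes a count by at most 1
private_count = laplace_mechanism(true_answer=1234, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers; the talk's composition results concern how such noise accumulates across many queries.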

Abstract: Fourier-analytic and algebraic methods have driven several recent advances in theoretical computer science. These techniques are typically tailored to the hypercube or other so-called product domains; however, there has been growing interest in broadening the horizon to include non-product domains. An important such example is the space of perfect matchings, whose non-trivial algebraic structure has proven problematic for the current hypercube-based algebraic techniques.

In this talk, we overview the algebraic structure of perfect matchings from the viewpoint of representation theory. We give perfect matching analogues of some familiar notions and objects in the field of Boolean functions and discuss a few fundamental differences that arise in the non-Abelian setting. The talk will conclude with a summary of some results in extremal combinatorics, optimization, and computational complexity that have been obtained from this algebraic point of view.

Abstract: In this talk I will describe a result of Guentner, Tessera, and myself about approximation and rigidity properties of relatively hyperbolic groups.  Along the way we'll see what it means for a graph to be hyperbolic and what the Banach-Tarski paradox has to do with expanders.  No prior knowledge is necessary.

Abstract: We study the space complexity of sketching cuts and Laplacian quadratic forms of graphs. We show that any data structure which approximately stores the sizes of all cuts in an undirected graph on n vertices up to a 1+ε error must use Ω(n log n/ε^2) bits of space in the worst case, improving the Ω(n/ε^2) bound of Andoni et al. and matching the best known upper bound achieved by spectral sparsifiers. Our proof is based on a rigidity phenomenon for cut (and spectral) approximation which may be of independent interest: any two d-regular graphs which approximate each other's cuts significantly better than a random graph approximates the complete graph must overlap in a constant fraction of their edges.

Abstract: Planar curves are among the most fundamental objects in both mathematics and computer science.  Apart from their obvious uses in computational geometry and topology, interesting applications have been found in optimization, knot theory, and more.

In this talk, we will take a road trip together through a small part of the landscape.  We will start with representations of planar curves and their deformations, together with their appearance and many uses in various disciplines of math and CS.  Then we focus on a discrete representation of curve deformations called homotopy moves.  We will sketch the ideas behind a few recent results on untangling planar curves using homotopy moves, their generalization to surfaces, and the implications for solving planar graph optimization problems using electrical transformations.

No background knowledge in topology will be assumed.  Open questions (not restricted to planar curves) will be provided during the talk.