Seminar of the CS Theory Group at the University of Colorado Boulder.

Our Theory Seminar kicked off in Spring 2018. Watch this space for upcoming talks, and/or subscribe to our cs-theory-announce mailing list!

Talks will be hybrid (in-person & online) for the foreseeable future; to receive the Zoom link, please subscribe to our mailing list.

For Spring 2024 we are aiming for talks every other week, and will use the "off weeks" for other Theory group activities (such as the Submodular Optimization reading group).

Upcoming talks

Feb 9, 2024, 10am Mountain Time: no seminar because of prospective PhD student visit day. Submodular Optimization reading group instead.

Past talks

Abstract: We study the Matrix Multiplication Verification Problem (MMV), where the goal is, given three n × n matrices A, B, and C as input, to decide whether AB = C. A classic randomized algorithm by Freivalds (MFCS, 1979) solves MMV in O(n^2) time, and a longstanding challenge is to (partially) derandomize it while still running faster than matrix multiplication time (i.e., in o(n^ω) time).

To that end, we give two algorithms for MMV in the case where AB − C is *sparse*. Specifically, when AB − C has at most O(n^δ) non-zero entries for a constant 0 ≤ δ < 2, we give (1) a deterministic O(n^(ω−ε))-time algorithm for a constant ε = ε(δ) > 0, and (2) a randomized O(n^2)-time algorithm using (δ/2)·log_2(n) + O(1) random bits. The former algorithm is faster than the deterministic algorithm of Künnemann (ESA, 2018) when δ ≥ 1.056, and the latter uses fewer random bits than the algorithm of Kimbrel and Sinha (IPL, 1993), which runs in the same time and uses log_2(n) + O(1) random bits (in turn fewer than Freivalds's algorithm). Our algorithms are simple and use techniques from coding theory.
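As a quick illustration of the classical baseline (Freivalds' randomized check, not the new algorithms of the talk), here is a minimal Python sketch; the function name and structure are illustrative only:

```python
import random

def freivalds(A, B, C, rounds=20):
    """Randomized check of whether A @ B == C in O(n^2) time per round.

    Each round picks a random 0/1 vector x and compares A(Bx) with Cx.
    If AB != C, a single round errs with probability at most 1/2, so
    `rounds` independent trials err with probability at most 2**-rounds.
    """
    n = len(A)
    for _ in range(rounds):
        x = [random.randint(0, 1) for _ in range(n)]
        Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]
        Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        if ABx != Cx:
            return False  # definitely AB != C
    return True  # AB == C with high probability
```

Note the whole check never multiplies two matrices: each round is three matrix-vector products.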

Joint work with Karthik Gajulapalli, Alexander Golovnev, and Philip G. Warton.

Abstract: Machine learning on personal and sensitive data raises privacy concerns and creates potential for inadvertent information leakage (e.g., extraction of one’s text messages or images from generative models). However, incorporating analysis of such data in decision making can benefit individuals and society at large (e.g., in healthcare and transportation). In order to strike a balance between these two conflicting objectives, one has to ensure that data analysis with strong confidentiality guarantees is deployed and securely implemented.

My talk will discuss challenges and opportunities in achieving this goal. I will first describe attacks against not only machine learning algorithms but also naïve implementations of algorithms with rigorous theoretical guarantees such as differential privacy. I will then discuss approaches to mitigate some of these attack vectors, including property-preserving data analysis. To this end, I will give an overview of our work on ensuring confidentiality of dataset properties that goes beyond traditional record-level privacy (e.g., focusing on protection of subpopulation information as compared to that of a single person).

Bio: Olya Ohrimenko is an Associate Professor at The University of Melbourne, which she joined in 2020. Prior to that, she was a Principal Researcher at Microsoft Research in Cambridge, UK, where she started as a Postdoctoral Researcher in 2014. Her research interests include privacy and integrity of machine learning algorithms, data analysis tools and cloud computing, including topics such as differential privacy, dataset confidentiality, verifiable and data-oblivious computation, trusted execution environments, side-channel attacks and mitigations. Recently Olya has worked with the Australian Bureau of Statistics, National Australia Bank and Microsoft. She has received solo and joint research grants from Facebook and Oracle and is currently a PI on an AUSMURI grant. See https://oohrimenko.github.io for more information.

 

Abstract: Computational complexity theory investigates which problems can(not) be solved with appropriately constrained computational resources. However, there is a major resource constraint in real-world computation that is not accounted for by conventional complexity theory: energy. Developing a physical theory of computation, to understand which problems can be "practically" solved, is an open challenge at the interface of thermodynamics and computer science. An important issue in addressing this challenge is formulating a general framework to analyze the energetic costs associated with a computational task, applicable to all physical scenarios. In this talk, I will elaborate on this issue and show how to analyze the energetic costs for a finite automaton processing strings.

Abstract: A shortcut set is a (small) set of edges that when added to a directed graph G produces a graph with the same transitive closure as G while reducing the diameter as much as possible. A hopset is defined similarly but for approximate distances instead of reachability. One motivation for such structures is that in many settings (e.g. distributed, parallel, dynamic, streaming), computation is faster when the diameter of the graph is small. Thus, to compute, for instance, st-reachability in such a setting, it is useful to find a shortcut set as a preprocessing step, and then run an st-reachability algorithm on the resulting graph.

This talk is about existential bounds for shortcut sets and hopsets: given a budget of Õ(n) added edges, what is the smallest diameter achievable? A folklore algorithm says that diameter O(n^(1/2)) is possible for any graph. A recent breakthrough of Kogan and Parter improves this to O(n^(1/3)) for shortcut sets and O(n^(2/5)) for hopsets. In recent work, we ask whether hopsets (approximate distances) are truly harder than shortcut sets (reachability). We close the gap between hopsets and shortcut sets by providing an O(n^(1/3))-diameter construction for hopsets.

Based on joint work (SODA '23) with Aaron Bernstein.

 

Abstract: Which Boolean functions f: {0,1}^n → {0,1} satisfy f(x⊕y) = f(x)⊕f(y)

... for all x,y? linear functions (100% regime).
... for almost all x,y? functions close to linear functions (99% regime).
... for noticeably more x,y than a random function? functions correlating with a linear function (51% regime).
All of these results are classical.
We study what happens when the XOR function is replaced by an arbitrary Boolean function.
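As a side illustration, the classical 99%-regime statement is probed by the BLR-style linearity test: sample x, y and check f(x⊕y) = f(x)⊕f(y). A minimal Python sketch (names illustrative; inputs encoded as n-bit integers, so ⊕ is bitwise ^):

```python
import random

def blr_test(f, n, trials=200):
    """BLR-style linearity test for f: {0,1}^n -> {0,1}.

    Inputs are encoded as n-bit integers, so x XOR y is x ^ y.
    Always accepts a linear (parity) function; rejects functions far
    from linear with probability approaching 1 as `trials` grows.
    """
    for _ in range(trials):
        x = random.randrange(2 ** n)
        y = random.randrange(2 ** n)
        if f(x ^ y) != f(x) ^ f(y):
            return False  # caught a violating pair
    return True
```

For example, a parity function passes every trial, while the OR of two coordinates violates the identity on a constant fraction of pairs and is rejected quickly.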
 
Based on joint work with Gilad Chase (Technion), Dor Minzer (MIT), Elchanan Mossel (MIT), Nitin Saurabh (IIIT Hyderabad).

Abstract: We model social choices as acts mapping states of nature to (public) outcomes. A social choice function (or SCF) assigns an act to every profile of subjective expected utility preferences over acts. An SCF is strategyproof if no agent ever has an incentive to misrepresent her beliefs about the states of nature or her valuation of the outcomes; it is ex-post efficient if the act selected at any given preference profile picks a Pareto-efficient outcome in every state of nature. We offer a complete characterization of all strategyproof and ex-post efficient SCFs. The chosen act must pick the most preferred outcome of some (possibly different) agent in every state of nature. The set of states in which an agent's top outcome is selected may vary with the reported belief profile; it is the union of all the states assigned to her by a collection of bilaterally dictatorial and bilaterally consensual assignment rules. Paper: http://www.cireqmontreal.com/wp-content/uploads/cahiers/03-2017-cah.pdf

Abstract: Given a system of linear equations ℓ_i(x) = β_i in an n-vector x of 0-1 variables, we compute the expectation of exp{−∑_i γ_i(ℓ_i(x) − β_i)^2}, where x is a vector of independent Bernoulli random variables and the γ_i > 0 are constants. The algorithm runs in quasi-polynomial n^{O(ln n)} time under some sparseness condition on the matrix of the system. The result is based on the absence of zeros of the analytic continuation of the expectation for complex probabilities, which can also be interpreted as the absence of a phase transition in the Ising model with a sufficiently strong external field. As an example, we consider the problem of "smoothed counting" of perfect matchings in hypergraphs. https://arxiv.org/abs/2103.05488
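For intuition about the quantity being computed (not the talk's quasi-polynomial algorithm), the expectation can be evaluated exactly by brute force on tiny systems; a hedged Python sketch with illustrative names:

```python
import itertools
import math

def smoothed_expectation(L, beta, gamma, p):
    """Exact E[exp(-sum_i gamma_i * (l_i(x) - beta_i)^2)] over independent
    Bernoulli coordinates x_j ~ Bernoulli(p_j), by enumerating all 2^n
    assignments.  Brute force for intuition only; the talk's algorithm
    is quasi-polynomial under a sparseness condition on L.

    L: m x n coefficient matrix; beta: length-m right-hand sides;
    gamma: length-m positive weights; p: length-n probabilities.
    """
    n = len(p)
    total = 0.0
    for x in itertools.product((0, 1), repeat=n):
        prob = math.prod(p[j] if x[j] else 1 - p[j] for j in range(n))
        penalty = sum(g * (sum(row[j] * x[j] for j in range(n)) - b) ** 2
                      for row, b, g in zip(L, beta, gamma))
        total += prob * math.exp(-penalty)
    return total
```

Setting all γ_i = 0 recovers total probability 1, and large γ_i concentrate the sum on exact solutions of the system, which is the "smoothed counting" viewpoint.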

Abstract: The Weisfeiler-Leman (WL) procedure is a widely-used approach for graph-isomorphism testing. It works by iteratively computing an isomorphism-invariant coloring of vertex tuples. Meanwhile, decompositions are a fundamental tool in structural graph theory and often exploited in approaches to tackle the graph-isomorphism problem.
 
In this talk, I discuss the ability of the WL algorithm to identify graphs by decomposing them. In particular, I give an overview of our proof that the 2-dimensional WL algorithm (2-WL) implicitly computes the decomposition of a graph into its 3-connected components. The proof relies on the insight that 2-WL detects pairs of vertices that separate a graph. Our result yields that determining the dimension of the algorithm needed to distinguish two given graphs reduces to determining the dimension required to distinguish 3-connected components of the graphs. This quite technical statement allows us to draw strong conclusions for the class of graphs of bounded treewidth and the class of planar graphs.

Abstract: Immanants are matrix functions that generalize determinants and permanents. Given an irreducible character χ_λ of S_n for some partition λ of n, the immanant associated with λ is a sum-product over permutations π in S_n, much like the determinant, but with χ_λ(π) playing the role of sgn(π).
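As a toy illustration of the definition only (brute force over all n! permutations, nothing like the efficient algorithms discussed in the talk), the two extreme cases, determinant (χ = sgn) and permanent (χ ≡ 1), can be computed as:

```python
from itertools import permutations

def immanant(M, chi):
    """Brute-force immanant: sum over permutations pi of
    chi(pi) * prod_i M[i][pi[i]].  With chi = sgn this is the
    determinant; with chi identically 1 it is the permanent.
    Only sensible for tiny matrices (n! terms).
    """
    n = len(M)
    total = 0
    for pi in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][pi[i]]
        total += chi(pi) * prod
    return total

def sgn(pi):
    """Sign of a permutation via its inversion count."""
    inv = sum(1 for i in range(len(pi)) for j in range(i + 1, len(pi))
              if pi[i] > pi[j])
    return -1 if inv % 2 else 1
```

A general immanant plugs in the character values χ_λ(π), which depend only on the cycle type of π.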

Hartmann showed in 1985 that immanants can be evaluated in polynomial time for sign-ish characters. More precisely, for a partition λ of n with s parts, let b(λ) := n − s count the boxes to the right of the first column in the Young diagram of λ. The immanant associated with λ can be evaluated in n^{O(b(λ))} time.

Since this initial result, complementing hardness results have been obtained for several families of immanants derived from partitions with unbounded b(λ). This includes permanents, immanants associated with hook characters, and other classes. In this talk, we complete the picture of hard immanant families: under a standard assumption from parameterized complexity, we rule out polynomial-time algorithms for (well-behaved) immanant families with unbounded b(λ). For immanant families in which b(λ) grows polynomially, we even establish hardness for #P and VNP. (See arXiv:2102.04340.)

 

Abstract: We give simple iterative methods for computing approximately optimal primal and dual solutions for the problem of maximizing a linear functional over a convex set K given by a separation oracle. Our methods are based on variants of the classical Von Neumann and Frank-Wolfe algorithms.

Abstract: In this talk I will present some recent results on optimal deception in Stackelberg games. Recent work in the ML community has revealed that learning algorithms used to compute the optimal strategy for the leader to commit to in a Stackelberg game are susceptible to manipulation by the follower. Such a learning algorithm operates by querying the best responses or the payoffs of the follower, who can consequently deceive the algorithm by responding as if his payoffs were very different from what they actually are. For this strategic behavior to be successful, the main challenge faced by the follower is to pinpoint the payoffs that would make the learning algorithm compute a commitment such that best responding to it maximizes the follower's utility according to his true payoffs. While this problem has been considered before, the related literature has only focused on the simplified scenario in which the follower payoff space is finite, thus leaving the general version of the problem unanswered. Our results fill this gap by showing that it is always possible for the follower to compute (near-)optimal fake payoffs in various learning scenarios between the leader and follower.

Abstract: We describe how combinatorial enumeration/listing problems, taken in conjunction with structural properties of hypermatrices/tensors, determine critical aspects of Boolean functions such as their De Morgan formula complexity. The talk will not assume any prior familiarity with hypermatrices/tensors.

Abstract: The theory-of-approximation orthodoxy holds that linear programs (LPs) perform poorly for max-cut and other constraint satisfaction problems, and that semidefinite programming and spectral methods are needed for nontrivial approximations. In this talk, I will describe two recent results that defy this creed: we show that LPs of subexponential size can (1) approximate max-cut to a factor strictly greater than 0.5, and (2) certify max-cut in graphs of bounded spectral radius. This resolves the linear extension complexity of max-cut. Similar results hold for Unique Games and other constraint satisfaction problems.

Based on joint works with Ryan O'Donnell, Sam Hopkins, and Luca Trevisan.

arXiv: https://arxiv.org/abs/1911.10304

Abstract: We provide an information-theoretic framework for studying the generalization properties of machine learning algorithms. Our framework ties together existing approaches, including uniform convergence bounds and recent methods for adaptive data analysis. Specifically, we use Conditional Mutual Information (CMI) to quantify how well the input (i.e., the training data) can be recognized given the output (i.e., the trained model) of the learning algorithm. We show that bounds on CMI can be obtained from VC dimension, compression schemes, differential privacy, and other methods. We then show that bounded CMI implies various forms of generalization.

Joint work with Lydia Zakynthinou. See https://arxiv.org/abs/2001.09122

Abstract: We study the problem of learning causal-effect maximizing personalized decision policies from observational data while accounting for possible unobserved confounding. Since policy value and regret may not be point-identifiable, we study a method that minimizes the worst-case estimated regret over an uncertainty set for propensity weights that controls the extent of unobserved confounding. We prove generalization guarantees that ensure our policy will be safe when applied in practice and will in fact obtain the best-possible uniform control on the range of all possible population regrets that agree with the possible extent of confounding. We develop efficient algorithmic solutions to compute this confounding-robust policy. Finally, we assess and compare our methods on synthetic and semi-synthetic data. In particular, we consider a case study on personalizing hormone replacement therapy based on the parallel WHI observational study and clinical trial. We demonstrate that hidden confounding can hinder existing policy learning approaches and lead to unwarranted harm, while our robust approach guarantees safety and focuses on well-evidenced improvement.

This work is joint with Nathan Kallus. An earlier version of this work was circulated as "Confounding-Robust Policy Improvement".

Abstract: Aaronson and Ambainis (2009) and Chailloux (2018) showed that fully symmetric (partial) functions do not admit exponential quantum query speedups. This raises a natural question: how symmetric must a function be before it cannot exhibit a large quantum speedup? In this work, we prove that hypergraph symmetries in the adjacency matrix model allow at most a polynomial separation between randomized and quantum query complexities. We also show that, remarkably, permutation groups constructed out of these symmetries are essentially the only permutation groups that prevent super-polynomial quantum speedups. We prove this by fully characterizing the primitive permutation groups that allow super-polynomial quantum speedups. In contrast, in the adjacency list model for bounded-degree graphs (where graph symmetry is manifested differently), we exhibit a property testing problem that shows an exponential quantum speedup. These results resolve open questions posed by Ambainis, Childs, and Liu (2010) and Montanaro and de Wolf (2013).

Abstract: All known efficient algorithms for constraint satisfaction problems are stymied by random instances. For example, no efficient algorithm is known that can q-color a random graph with average degree (1 + ε)q ln q, even though random graphs remain q-colorable for average degree up to (2 − o(1))q ln q. Similar failure to find solutions at relatively low constraint densities is known for random CSPs such as random k-SAT and other hypergraph-based problems. The constraint density where algorithms break down for each CSP is known as the “algorithmic barrier” and provably corresponds to a phase transition in the geometry of the space of solutions [Achlioptas and Coja-Oghlan 2008]. In this talk, I will discuss my recent paper which aims to shed light on the following question: Can algorithmic success up to the barrier for each CSP be ascribed to some simple deterministic property of the inputs?   In particular, I will focus on the problem of coloring graphs and hypergraphs.

Abstract: Quantum information science is an interdisciplinary field closely related to computer science and physics. There are algorithmic tools from this field with computational applications in classical computer science and quantum physics. In this talk, I will introduce my work on developing these tools for solving problems in optimization, machine learning, and studying quantum systems. In particular, on the computer science side, I will discuss quantum speedups for some computational geometry problems with applications in machine learning and optimization. I will also describe quantum-inspired classical algorithms for solving matrix-related machine learning problems. On the physics side, I will introduce quantum algorithms for simulating open quantum systems, as well as efficient constructions of pseudo-random quantum operators.

Abstract: A Boolean function f on the n-dimensional hypercube is said to be a k-junta if it depends only on some k coordinates of the input. These functions have been widely studied in the context of PCPs, learning theory and property testing. In particular, a flagship result in property testing is that k-juntas can be tested with Õ(k) queries; i.e., there is an algorithm which, when given black-box access to f, makes only Õ(k) queries and decides between the following two cases:

  1. f is a k-junta.
  2. f is at least 0.01-far from every k-junta g in Hamming distance.

Surprisingly, the query complexity is completely independent of the ambient dimension n. In this work, we achieve the same qualitative guarantee for the noise-tolerant version of this problem. In particular, we give a 2^k-query algorithm to distinguish between the following two cases.

  1. f is 0.48-close to some k-junta.
  2. f is at least 0.49-far from every k-junta.

The algorithm and its proof of correctness are simple and modular, employing classical tools like "random restrictions" together with elementary properties of the noise operator on the cube.
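To make "far from every k-junta" concrete: for small n, the exact distance from f to the nearest k-junta can be computed by brute force, since for a fixed coordinate set S the best junta takes the majority value of f on each of the 2^k cells. A Python sketch (illustrative only, not the talk's query-efficient algorithm):

```python
from itertools import combinations

def dist_to_nearest_junta(f_table, n, k):
    """Exact normalized Hamming distance from f to the nearest k-junta,
    by brute force over all k-subsets S of coordinates.  For fixed S the
    optimal junta takes the majority value of f on each of the 2^k cells.
    f_table: list of 2^n bits, f_table[x] = f(x) with x read as an n-bit int.
    """
    N = 1 << n
    best = N  # fewest disagreements found so far
    for S in combinations(range(n), k):
        # bucket inputs by their projection onto S, count ones per bucket
        ones, size = {}, {}
        for x in range(N):
            key = tuple((x >> i) & 1 for i in S)
            ones[key] = ones.get(key, 0) + f_table[x]
            size[key] = size.get(key, 0) + 1
        # majority vote in each bucket gives the minimal disagreement
        disagree = sum(min(ones[key], size[key] - ones[key]) for key in ones)
        best = min(best, disagree)
    return best / N
```

For instance, a dictator function has distance 0 to the nearest 1-junta, while the 3-bit parity has distance 0.5 from every 1-junta.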

Joint work with Elchanan Mossel and Joe Neeman.

Abstract: In this talk, I will discuss computational complexity theory and some interesting quantum generalizations of some popular complexity classes. In particular, I will discuss two quantum variants of the polynomial hierarchy and present a few results that bound their computational power. No knowledge about complexity theory or quantum theory is needed for this talk. This is joint work with Miklos Santha, Sevag Gharibian, Aarthi Sundaram, and Justin Yirka and the preprint can be found at https://arxiv.org/abs/1805.11139.

Abstract: In this talk, I will discuss several natural quantum problems and, in particular, how the problems change as the quantum resources change. I will show how to take an economics perspective to assign a "shadow price" to each quantum resource. To do this, I will use optimization theory and show that shadow prices are often given "for free" if you know where to look for them. No knowledge about economics, optimization theory, or quantum theory is needed for this talk. This is joint work with Gary Au (University of Saskatchewan).

Abstract: The first demonstration of quantum supremacy in October 2019 was a major achievement of experimental physics. At the same time, it relied on important developments in theoretical computer science. In this talk I will describe my recent work laying the complexity-theoretic foundations for Google/UCSB’s quantum supremacy experiment, providing evidence that their device is exponentially difficult to simulate with a classical computer. This crossroad between complexity theory and quantum physics also offers new insights into both disciplines. For example, I will explain how techniques from quantum complexity theory can be used to settle purely classical problems. Specifically, I will describe a quantum argument which nearly resolves the approximate degree composition conjecture, generalizing nearly 20 years of prior work. In a different direction, I will show that the notion of computational pseudorandomness from complexity-based cryptography has fundamental implications for black hole physics and the theory of quantum gravity.

Abstract: We show how two techniques from statistical physics can be adapted to solve a variant of the notorious Unique Games problem, potentially opening new avenues towards the Unique Games Conjecture.  We exhibit efficient algorithms for a natural generalisation of Unique Games based on approximating a suitable partition function via (i) a zero-free region and polynomial interpolation, and (ii) the cluster expansion.  We also show that a modest improvement to the parameters for which we give results would refute the Unique Games Conjecture. Based on joint work with M. Coulson, A. Kolla, V. Patel, and G. Regts.

Abstract: We study the hard-core model (independent sets) on bipartite graphs using the cluster expansion from statistical physics. When there is a sufficient imbalance in the degrees or fugacities between the sides (L,R) of the bipartition, we can rewrite the hard-core partition function in terms of deviations from independent sets that are empty on one side of the bipartition and show this expression has a convergent cluster expansion. This has interesting algorithmic and probabilistic consequences. On the algorithmic side, we address an open problem in approximate counting and give a polynomial-time algorithm for approximating the partition function for a large class of bounded-degree bipartite graphs; this includes, among others, the unweighted biregular case where the degrees satisfy d_R ≥ 7 d_L log d_L. Our approximation algorithm is based on truncating the cluster expansion. As a consequence of the method, we also prove that the hard-core model on such graphs exhibits exponential decay of correlations by utilizing connections between the cluster expansion and joint cumulants. Joint work with Will Perkins.
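For orientation, the hard-core partition function itself can be computed by brute force on tiny graphs; the talk's contribution is approximating it efficiently via the cluster expansion, so the sketch below (with illustrative names) is for intuition only:

```python
from itertools import combinations

def hardcore_Z(vertices, edges, fugacity):
    """Hard-core partition function Z = sum over independent sets I of
    fugacity**|I|, by brute-force enumeration of all vertex subsets.
    Exponential time; for intuition on tiny graphs only.
    """
    edge_set = {frozenset(e) for e in edges}
    Z = 0.0
    for r in range(len(vertices) + 1):
        for I in combinations(vertices, r):
            # keep I only if no edge has both endpoints inside it
            if all(frozenset((u, v)) not in edge_set
                   for i, u in enumerate(I) for v in I[i + 1:]):
                Z += fugacity ** r
    return Z
```

At fugacity 1 this simply counts independent sets, e.g. 3 for a single edge and 5 for a path on three vertices.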

Abstract: Maximally recoverable codes are codes designed for distributed storage which combine quick recovery from single node failure and optimal recovery from catastrophic failure. Gopalan et al. [SODA 2017] studied the alphabet size needed for such codes in grid topologies and gave a combinatorial characterization for it. Consider a labeling of the edges of the complete bipartite graph K_{n,n} with labels coming from F_2^d that satisfies the following condition: for any simple cycle, the sum of the labels over its edges is nonzero. The minimal d where this is possible controls the alphabet size needed for maximally recoverable codes in n × n grid topologies. Prior to this work, it was known that d is between (log n)^2 and n log n. We improve both bounds and show that d is linear in n. The upper bound is a recursive construction which beats the random construction. The lower bound follows by first relating the problem to the independence number of the Birkhoff polytope graph, and then providing tight bounds for it using the representation theory of the symmetric group.

  

(Joint w/ the RCDS seminar)

Abstract: The web of interconnections between today's technology and society is upending many traditional ways of doing things: the internet of things, bitcoin, the sharing economy, and stories of "fake news" spreading on social media are increasingly in the public mind. As such, computer scientists and engineers must be increasingly conscious of the interplay between the technical performance of their systems and the personal objectives of users, customers, and adversaries. I will present recent work on two problem domains: robust incentive design for socially-networked systems, and resilient coordination and optimization in distributed multi-agent engineered systems. In each, by rigorously examining the fundamental tradeoffs associated with optimal designs, I seek to develop a deeper theoretical understanding of tools which will help address today's emerging challenges.

Abstract: In a classical online decision problem, a decision-maker who is trying to maximize her value inspects a sequence of arriving items to learn their values (drawn from known distributions), and decides when to stop the process by taking the current item. The goal is to prove a “prophet inequality”: that she can do approximately as well as a prophet with foreknowledge of all the values. In this work, we investigate this problem when the values are allowed to be correlated. We consider a natural “linear” correlation structure that models many kinds of real-world search problems. A key challenge is that threshold-based algorithms, which are commonly used for prophet inequalities, no longer guarantee good performance. We relate this roadblock to another challenge: “augmenting” values of the arriving items slightly can destroy the performance of many algorithms. We leverage this intuition to prove bounds (matching up to constant factors) that decay gracefully with the amount of correlation of the arriving items. We extend these results to the case of selecting multiple items.
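As background (for the independent-values setting, not the correlated setting of the talk), the classic single-threshold rule with threshold E[max]/2 achieves at least half the prophet's expected value. A Monte-Carlo sketch in Python, with all names illustrative:

```python
import random

def threshold_policy(samples_fn, trials=20000):
    """Monte-Carlo comparison of a single-threshold stopping rule with
    the prophet, for independent nonnegative values.

    samples_fn() returns one fresh list of arriving values.  The classic
    rule stops at the first value >= tau with tau = E[max]/2, which
    guarantees half the prophet's expected value for independent values.
    (With correlated values, as in the talk, threshold rules can fail.)
    """
    # estimate the prophet's value E[max], then set the threshold
    e_max = sum(max(samples_fn()) for _ in range(trials)) / trials
    tau = e_max / 2
    gained = 0.0
    for _ in range(trials):
        vals = samples_fn()
        gained += next((v for v in vals if v >= tau), 0.0)
    return gained / trials, e_max

# hypothetical example: two independent Uniform[0,1] arrivals
# alg, prophet = threshold_policy(lambda: [random.random(), random.random()])
# then alg >= prophet / 2, up to sampling error
```

The talk's point is precisely that with linear correlations this kind of threshold rule loses its guarantee.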

Joint work with Nicole Immorlica and Sahil Singla

Abstract: When you program your own x==y, x.equals(y), or C.isSame(x,y), your program's logic makes more sense, you decrease duplicates, speed up search, and filter your data on what matters. But you can't predict that f(x)==f(y). This is a talk on improving equivalence with identity types, to make sure that when you judge data to be equal, your program believes you. The math is real, but it has practical and complexity challenges. We discuss recent theory, practical tools for the JVM, and what it means for the complexity of black-box computation.

Abstract: In understanding natural systems over hundreds of years, physicists have developed a wealth of dynamics and viewpoints. Some of these methods, when viewed (and abstracted) through the lens of computation, could lead to new optimization and sampling techniques for problems in machine learning, statistics, and theoretical computer science. I will present some recent examples from my research on such interactions between physics and algorithms, e.g., a Hamiltonian dynamics inspired algorithm for sampling from continuous distributions.

Based on joint works with Oren Mangoubi.

Abstract: Conjugacy is the natural notion of isomorphism in symbolic dynamics. A major open problem in the field is that of finding an algorithm to determine conjugacy of shifts of finite type (SFTs). In this talk, we consider several related computational problems restricted to k-block codes. We show verifying a proposed k-block conjugacy is in P, finding a k-block conjugacy is GI-hard, reducing the representation size of a SFT via a 1-block conjugacy is NP-complete, and recognizing if a sofic shift is a SFT is in P. This talk will not assume any prior knowledge of symbolic dynamics. Based on joint work with Raf Frongillo.

Abstract: We will discuss the notion of a certified algorithm. Certified algorithms provide worst-case and beyond-worst-case performance guarantees. First, a γ-certified algorithm is also a γ-approximation algorithm: it finds a γ-approximation no matter what the input is. Second, it exactly solves γ-stable instances (γ-stable instances model real-life instances). Additionally, certified algorithms have a number of other desirable properties: they solve both maximization and minimization versions of a problem (e.g. Max Cut and Min Uncut), solve weakly stable instances, and solve problems with hard constraints.

We will define certified algorithms, describe their properties, present a framework for designing certified algorithms, and provide examples of certified algorithms for Max Cut/Min Uncut, Minimum Multiway Cut, k-medians and k-means.

The talk is based on a joint work with Konstantin Makarychev.

Abstract: We discuss a probabilistic method for a variety of formulations of graph colouring, and show how 'local occupancy' gives a common language for many recent results, and leads to some new applications.

Abstract: Reinforcement learning is an approach to controller synthesis where agents rely on reward signals to choose actions in order to satisfy the requirements implicit in reward signals. Oftentimes non-experts have to come up with the requirements and their translation to rewards under significant time pressure, even though manual translation is time-consuming and error-prone. For safety-critical applications of reinforcement learning, a rigorous design methodology is needed and, in particular, a principled approach to requirement specification and to the translation of objectives into the form required by reinforcement learning algorithms.

Formal logic provides a foundation for the rigorous and unambiguous requirement specification of learning objectives. However, reinforcement learning algorithms require requirements to be expressed as scalar reward signals. We discuss a recent technique, called limit-reachability, that bridges this gap by faithfully translating logic-based requirements into the scalar reward form needed in model-free reinforcement learning. This technique enables the synthesis of controllers that maximize the probability to satisfy given logical requirements using off-the-shelf, model-free reinforcement learning algorithms.

Abstract: All correlation measures, classical and quantum, must be monotonic under local operations. In this talk, I’ll characterize monotonic formulas that are linear combinations of the von Neumann entropies associated with the quantum state of a physical system which has n parts. Then I’ll show that these formulas form a polyhedral convex cone, which we call the monotonicity cone, and enumerate its facets. We illustrate its structure and prove that it is equivalent to the cone of monotonic formulas implied by strong subadditivity. We explicitly compute its extremal rays for n up to 5. I’ll  also consider the symmetric monotonicity cone, in which the formulas are required to be invariant under subsystem permutations. This cone can be described fully for all n. In addition, we show that these results hold even when states and operations are constrained to be classical.

Abstract: In network routing, users often trade off different objectives in selecting their best route. An example is transportation networks, where, due to uncertainty of travel times, drivers may trade off the average travel time versus the variance of a route. Or they might trade off time and cost, such as the cost paid in tolls.

We wish to understand the effect of two conflicting criteria in route selection, by studying the resulting traffic assignment (equilibrium) in the network.  We investigate two perspectives of this topic: (1) How does the equilibrium cost of a risk-averse population compare to that of a risk-neutral population?  (i.e., how much longer do we spend in traffic due to being risk-averse) (2) How does the equilibrium cost of a heterogeneous population compare to that of a comparable homogeneous user population?

We provide characterizations for both questions above.

Based on joint work with Richard Cole, Thanasis Lianeas and Nicolas Stier-Moses.

At the end I will mention current work of my research group on algorithms and mechanism design for power systems.

Abstract: Quantum computation is rapidly transitioning from science fiction to reality.  Recent experimental progress from efforts in academia and industry has taken us into a new era in which scalable quantum computers are on the horizon and imperfect quantum systems on the scale of 50-100 qubits will be implemented in the very near term.  As this era progresses, the gap between theory and experiment is shrinking and insights from theoretical computer science are increasingly becoming crucial to understanding these quantum devices.

This talk aims to describe my research, which seeks to develop the tools needed to characterize the power of quantum computation, both in the very near-term and the indefinite future. These tools will provide the foundation for building the next generation of useful quantum algorithms, but will also help guide the course of quantum experiment.

The talk will be accessible to a general computer science audience.

Abstract: Automated decisions and policies informed by machine learning and the pursuit of efficiency permeate today's social life. I will discuss three quite diverse manifestations of this reality, and certain challenging computational problems resulting from them: classifying selfish agents, for example for college admissions, may lead to game-playing, inefficiency, and unfairness; when the input of machine learning algorithms is provided by profit-making producers of data, novel forms of computational mechanism design are needed; finally, optimizing efficiency in congestion games, for example through tolls on congested routes, can be proved, under assumptions, to necessarily increase wealth inequality. We ponder whether these are not symptoms and parts of an emerging new social order.

Abstract: In the past few years, several research projects of mine (joint with collaborators) have suggested that it is fruitful to view linear spaces of matrices as a linear-algebraic analogue of graphs. In this talk I will propose such an analogue, and explain why it may make sense. I will then list several correspondences between structures on graphs and structures on matrix spaces. These include perfect matchings vs full-rank matrices, shrunk subsets vs shrunk spaces, independent sets vs isotropic spaces, random graphs vs random matrix spaces, and isomorphism concepts. These correspondences play an important role in the recent progress on non-commutative rational identity testing, form one bottleneck of putting graph isomorphism in P, and suggest a possible route toward post-quantum cryptography.

Some relevant papers are as follows. 

(1) Generalized Wong sequences and their applications to Edmonds' problems, by G. Ivanyos, M. Karpinski, Y. Qiao, M. Santha. arXiv:1307.6429.
(2) Constructive non-commutative rank is in deterministic polynomial time, by G. Ivanyos, Y. Qiao, K. V. Subrahmanyam. arXiv:1512.03531.
(3) Operator scaling: theory and applications, by A. Garg, L. Gurvits, R. Oliveira, A. Wigderson. arXiv:1511.03730.
(4) Linear algebraic analogues of the graph isomorphism problem and the Erdős-Rényi model, by Y. Li, Y. Qiao. arXiv:1708.04501. 
(5) From independent sets and vertex colorings to isotropic spaces and isotropic decompositions, by X. Bei, S. Chen, J. Guan, Y. Qiao, X. Sun. In preparation; paper available soon.
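
As a concrete instance of the "perfect matchings vs full-rank matrices" correspondence, here is a minimal sketch (not taken from the papers above) of the classical randomized test: a bipartite graph has a perfect matching iff its Edmonds matrix of indeterminates has full rank, which a random substitution detects with high probability by the Schwartz-Zippel lemma. Function names are illustrative.

```python
import random

def has_perfect_matching(n, edges, q=(1 << 61) - 1, trials=3):
    """Randomized test for a perfect matching in a bipartite graph with
    parts {0..n-1} x {0..n-1}.  The Edmonds matrix (indeterminate x_ij
    at each edge (i, j), zero elsewhere) has full rank iff a perfect
    matching exists; substituting random elements of GF(q) preserves
    full rank with high probability (Schwartz-Zippel).  A "no" answer
    is only correct with high probability; a "yes" is always correct."""
    for _ in range(trials):
        m = [[0] * n for _ in range(n)]
        for i, j in edges:
            m[i][j] = random.randrange(1, q)   # random substitution
        if rank_mod_p(m, q) == n:
            return True
    return False

def rank_mod_p(m, p):
    """Rank of a matrix over GF(p) by Gaussian elimination (p prime)."""
    m = [row[:] for row in m]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if m[i][c]), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        inv = pow(m[r][c], p - 2, p)           # Fermat inverse
        m[r] = [(x * inv) % p for x in m[r]]
        for i in range(rows):
            if i != r and m[i][c]:
                m[i] = [(a - m[i][c] * b) % p for a, b in zip(m[i], m[r])]
        r += 1
    return r
```

The modulus 2^61 - 1 is a Mersenne prime, so random substitutions rarely kill a nonzero determinant polynomial.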

Abstract: In the hypergraph k-cut problem, the input is a hypergraph along with a constant k, and the goal is to find a smallest subset of hyperedges whose removal ensures that the remaining hypergraph has at least k connected components. The graph k-cut problem is solvable in polynomial time (Goldschmidt and Hochbaum, 1994), while the complexity of the hypergraph k-cut problem is open. In this talk, I will present a randomized polynomial-time algorithm to solve the hypergraph k-cut problem. Based on joint work with Chao Xu and Xilin Yu.
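
The talk's algorithm is not reproduced here, but the flavor of the random-contraction approach that such algorithms build on can be seen in Karger's classic procedure for graph min cut (the k = 2, ordinary-graph case); a minimal sketch with illustrative names:

```python
import math
import random

def contract_min_cut(edges, n, repeats=None):
    """Karger's random-contraction algorithm for the global minimum cut
    of a connected multigraph on vertices 0..n-1.  Each pass contracts
    random edges until two super-vertices remain; a fixed min cut
    survives a pass with probability >= 2/(n(n-1)), so sufficiently
    many independent passes find it with high probability."""
    if repeats is None:
        repeats = n * n * (1 + int(math.log(n + 1)))
    best = len(edges)
    for _ in range(repeats):
        parent = list(range(n))
        def find(x):                      # union-find with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        components = n
        pool = edges[:]
        random.shuffle(pool)
        for u, v in pool:
            if components == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:                  # contract the edge (u, v)
                parent[ru] = rv
                components -= 1
        # cut value = edges crossing the two remaining super-vertices
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best
```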

Abstract: In the near future, there will likely be special-purpose quantum computers with 50-70 high-quality qubits and controllable nearest-neighbor couplings.  In this talk, I'll discuss general theoretical foundations for how to use such devices to demonstrate "quantum supremacy": that is, a clear quantum speedup for *some* task, motivated by the goal of overturning the Extended Church-Turing Thesis (which says that all physical systems can be efficiently simulated by classical computers) as confidently as possible.  This part of the talk is based on joint work with Lijie Chen, https://arxiv.org/abs/1612.05903.  Then, in a second part of the talk, I'll discuss new, not-yet-published work on how these experiments could be used to generate cryptographically certified random bits.

Abstract: We give an m^{1+o(1)} β^{o(1)}-time algorithm for generating uniformly random spanning trees in weighted graphs with max-to-min weight ratio β. In the process, we illustrate how fundamental tradeoffs in graph partitioning can be overcome by eliminating vertices from a graph using Schur complements of the associated Laplacian matrix.

Our starting point is the Aldous-Broder algorithm, which samples a random spanning tree using a random walk. As in prior work, we use fast Laplacian linear system solvers to shortcut the random walk from a vertex v to the boundary of a set of vertices assigned to v called a "shortcutter." We depart from prior work by introducing a new way of employing Laplacian solvers to shortcut the walk. To bound the amount of shortcutting work, we show that most random walk steps occur far away from an unvisited vertex. We apply this observation by charging uses of a shortcutter S to random walk steps in the Schur complement obtained by eliminating all vertices in S that are not assigned to it.
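
The Aldous-Broder starting point is simple to state in code; a minimal sketch for unweighted graphs (without the paper's Laplacian shortcutting machinery), assuming `adj` maps each vertex to its neighbor list:

```python
import random

def aldous_broder(adj):
    """Sample a uniformly random spanning tree of a connected graph:
    run a random walk from an arbitrary vertex, and record the first
    edge used to enter each vertex.  Those first-entry edges form a
    spanning tree distributed uniformly over all spanning trees."""
    vertices = list(adj)
    current = random.choice(vertices)
    visited = {current}
    tree = []
    while len(visited) < len(vertices):
        nxt = random.choice(adj[current])   # one random walk step
        if nxt not in visited:              # first entry: keep the edge
            visited.add(nxt)
            tree.append((current, nxt))
        current = nxt
    return tree
```

The walk's cover time is the bottleneck, which is exactly what the shortcutters described above are designed to accelerate.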

Abstract: The Ring-Learning-With-Errors problem is a candidate quantum-safe hard ideal-lattice problem upon which to base future post-quantum cryptography.  It has already been deployed by the likes of Google, but is it really hard?  I'll explain the problem and how it can be used for cryptography, its relationship to other lattice problems, and a number theorist's perspective on its difficulty.
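
To give a feel for the underlying hardness assumption, here is a toy of the plain (non-ring) Learning-With-Errors encryption scheme in the style of Regev's cryptosystem; Ring-LWE replaces the random matrix with structured ring elements for efficiency. The parameters below are illustrative and far too small to be secure.

```python
import random

Q, N, M = 3329, 16, 64   # toy parameters: modulus, secret dim, #samples

def keygen():
    """Public key: LWE samples (A, b = A s + e mod Q) with small noise e.
    Recovering s from (A, b) is the (toy-sized) LWE problem."""
    s = [random.randrange(Q) for _ in range(N)]
    A = [[random.randrange(Q) for _ in range(N)] for _ in range(M)]
    e = [random.choice((-1, 0, 1)) for _ in range(M)]
    b = [(sum(ai * si for ai, si in zip(row, s)) + ei) % Q
         for row, ei in zip(A, e)]
    return (A, b), s

def encrypt(pk, bit):
    """Encrypt one bit: take a random 0/1 combination of the samples and
    shift by Q//2 if bit == 1."""
    A, b = pk
    r = [random.randrange(2) for _ in range(M)]
    u = [sum(A[i][j] * r[i] for i in range(M)) % Q for j in range(N)]
    v = (sum(bi * ri for bi, ri in zip(b, r)) + bit * (Q // 2)) % Q
    return u, v

def decrypt(sk, ct):
    """v - <u, s> = bit*(Q//2) + (small noise) mod Q; round to 0 or Q//2."""
    u, v = ct
    d = (v - sum(ui * si for ui, si in zip(u, sk))) % Q
    return int(min(d, Q - d) > Q // 4)
```

Decryption is always correct here because the accumulated noise is at most M = 64 < Q/4 in absolute value.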

Abstract: The general mathematical problem of computing low-cost maps between two metric spaces is prevalent in many applications, including registration in medical image processing, function detection in protein modeling, reconstructing evolutionary trees in phylogenomics, finding recurrent patterns in data analysis, and finding simple representations of data in visualization. Many instances of the general problem are provably hard, so we face a dichotomy: on one hand, general algorithms with theoretical worst-case guarantees that are prohibitively slow in practice; on the other, faster heuristics engineered for specific applications but lacking guarantees. In dichotomy is opportunity: I use parameterized complexity to create a finer understanding of the complexity of computing low-cost maps between metric spaces based on their geometric and topological properties. I specifically target new algorithms with better performance guarantees, in particular for cases of practical interest.

In this talk, I introduce parameterized complexity: a branch of computational complexity that classifies problems based on their hardness with respect to several parameters of the input in addition to the input size, and hence gives a much more fine-grained classification than traditional worst-case analysis, which is based only on the input size. Then, I present several parameterized algorithms for computing low-cost maps between geometric objects. The running times of these algorithms are parameterized by topological and geometric properties of the input objects. For example, when the input is a graph, a topological parameter can be its treewidth, which measures to what extent the graph looks like a tree, and a geometric parameter can be the intrinsic dimensionality of the metric space induced by shortest paths in the graph.
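
A textbook example of a parameterized algorithm (standard, not one of the talk's results) is the bounded search tree for Vertex Cover, which runs in O(2^k · m) time: exponential only in the parameter k, not in the input size.

```python
def has_vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover of
    size at most k.  Any edge (u, v) must have u or v in the cover, so
    branch on the two choices; the recursion depth is at most k, giving
    a search tree of size O(2^k), times O(m) work per node."""
    if not edges:
        return True                       # every edge is covered
    if k == 0:
        return False                      # an edge remains, budget spent
    u, v = edges[0]
    for pick in (u, v):                   # branch: put u, or v, in the cover
        rest = [(a, b) for a, b in edges if pick not in (a, b)]
        if has_vertex_cover(rest, k - 1):
            return True
    return False
```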

Abstract: One useful application of character theory is in studying mixing times of random walks on groups. The notion of a supercharacter theory encourages a further refinement of techniques in this application; in particular, it gives a spectrum of options that allow tailoring the theory to meet the needs of the underlying walk. This talk focuses on induced random walks on combinatorial objects that arise from set partitions on groups. We explore what properties of set partitions give desirable behaviors and how to use the resulting representation theories to compute eigenvalues and eigenvectors for the walk. This project is joint with a multitude of REU students.

Abstract: Spectral clustering algorithms find clusters in a given network by exploiting properties of the eigenvectors of matrices associated with the network. As a first step, one computes a spectral embedding, that is, a mapping of nodes to points in a low-dimensional real space; then one uses geometric clustering algorithms such as k-means to cluster the points corresponding to the nodes.

Such algorithms work so well that, in certain applications unrelated to network analysis, such as image segmentation, it is useful to associate a network to the data and then apply spectral clustering to the network. In addition to their application to clustering, spectral embeddings are a valuable tool for dimensionality reduction and data visualization.

The performance of spectral clustering algorithms has been justified rigorously when applied to networks coming from certain probabilistic generative models.

A more recent development, which is the focus of this lecture, is a worst-case analysis of spectral clustering, showing that, for every graph that exhibits a certain cluster structure, such structure can be found by geometric algorithms applied to a spectral embedding.

Such results generalize Cheeger's inequality for graphs (a classical result in spectral graph theory), and they have additional applications in computational complexity theory and in pure mathematics.
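
A minimal end-to-end sketch of the pipeline described above, for the two-cluster case: embed each vertex at its entry of the second eigenvector of the normalized Laplacian, then split by a one-dimensional geometric rule (a sign threshold standing in for k-means). Names and thresholding choice are illustrative.

```python
import numpy as np

def spectral_bipartition(adj):
    """Two-way spectral clustering: the spectral embedding here is the
    second eigenvector (Fiedler vector) of the normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}; the geometric step splits the 1-D
    embedding at zero.  Returns boolean cluster labels per vertex."""
    A = np.asarray(adj, dtype=float)
    d = A.sum(axis=1)                          # vertex degrees
    Dinv = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - Dinv @ A @ Dinv       # normalized Laplacian
    w, V = np.linalg.eigh(L)                   # eigenvalues ascending
    fiedler = V[:, 1]                          # the 1-D spectral embedding
    return fiedler >= 0
```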

Abstract: Differential privacy offers a mathematical framework for balancing two goals: obtaining useful information about sensitive data, and protecting individual-level privacy. Discovering the limitations of differential privacy yields insights as to what analyses are incompatible with privacy and why. These insights further aid the quest to discover optimal privacy-preserving algorithms. In this talk, I will give examples of how both follow from new understandings of the structure of differential privacy.

I will first describe negative results for private data analysis via a connection to cryptographic objects called fingerprinting codes. These results show that an (asymptotically) optimal way to solve natural high-dimensional tasks is to decompose them into many simpler tasks. In the second part of the talk, I will discuss concentrated differential privacy, a framework which enables more accurate analyses by precisely capturing how simpler tasks compose.
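
As a small concrete example of a basic differentially private primitive (standard material, not specific to the talk's results), the Laplace mechanism releases a numeric query with noise calibrated to its sensitivity:

```python
import random

def laplace_mechanism(value, sensitivity, epsilon):
    """Release value + Laplace(sensitivity/epsilon) noise, which gives
    epsilon-differential privacy for a query whose value changes by at
    most `sensitivity` when one individual's record changes.  (A Laplace
    variate is the difference of two independent exponentials.)"""
    scale = sensitivity / epsilon
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return value + noise
```

For example, a counting query has sensitivity 1, so releasing `laplace_mechanism(count, 1, epsilon)` adds noise of typical magnitude 1/epsilon.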

Abstract: Fourier-analytic and algebraic methods have driven several recent advances in theoretical computer science. These techniques are typically tailored to the hypercube or other so-called product domains; however, there has been growing interest in broadening the horizon to include non-product domains. An important such example is the space of perfect matchings, whose non-trivial algebraic structure has proven problematic for the current hypercube-based algebraic techniques.

In this talk, we overview the algebraic structure of perfect matchings from the viewpoint of representation theory. We give perfect matching analogues of some familiar notions and objects in the field of Boolean functions and discuss a few fundamental differences that arise in the non-Abelian setting. The talk will conclude with a summary of some results in extremal combinatorics, optimization, and computational complexity that have been obtained from this algebraic point of view.

Abstract: In this talk I will describe a result of Guentner, Tessera, and myself about approximation and rigidity properties of relatively hyperbolic groups.  Along the way we'll see what it means for a graph to be hyperbolic and what the Banach-Tarski paradox has to do with expanders.  No prior knowledge is necessary.

Abstract: We study the space complexity of sketching cuts and Laplacian quadratic forms of graphs. We show that any data structure which approximately stores the sizes of all cuts in an undirected graph on n vertices up to a 1+ϵ error must use Ω(n log n/ϵ^2) bits of space in the worst case, improving the Ω(n/ϵ^2) bound of Andoni et al. and matching the best known upper bound achieved by spectral sparsifiers. Our proof is based on a rigidity phenomenon for cut (and spectral) approximation which may be of independent interest: any two d-regular graphs which approximate each other's cuts significantly better than a random graph approximates the complete graph must overlap in a constant fraction of their edges.

Abstract: Planar curves are among the most fundamental objects in both mathematics and computer science. Apart from their obvious use in computational geometry and topology, interesting applications have been found in optimization, knot theory, and more.

In this talk, we will take a road trip together through a small part of the landscape. We will start with representations of planar curves and their deformations, together with their appearance and many uses in various disciplines of math and CS. Then we focus on a discrete representation of curve deformations called homotopy moves. We will sketch the ideas behind a few recent results on untangling planar curves using homotopy moves, its generalization to surfaces, and the implications for solving planar graph optimization problems using electrical transformations.

No background knowledge in topology is assumed. Open questions (not restricted to planar curves) will be provided during the talk.