Stats, Optimization, and Machine Learning Seminar - Anshumali Shrivastava

March 12, 2019

Hashing Algorithms for Extreme Scale Machine Learning

In this talk, I will discuss some of my recent and surprising findings on the use of hashing algorithms for large-scale estimations. Locality Sensitive Hashing (LSH) is a hugely popular algorithm for sub-linear near neighbor search. However, it turns out that fundamentally LSH...
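As background for the talk's setting (not the speaker's own method), here is a minimal sketch of random-hyperplane LSH, often called SimHash, for cosine similarity: nearby vectors collide in most signature bits, which is what enables sub-linear near neighbor search.

```python
import numpy as np

# Minimal random-hyperplane (SimHash) LSH sketch for cosine similarity.
# Points whose bit signatures agree in many positions are likely near neighbors.

rng = np.random.default_rng(0)

def lsh_signature(x, planes):
    """Hash a vector to a bit signature: sign of its projection onto each hyperplane."""
    return tuple((planes @ x > 0).astype(int))

d, n_bits = 8, 16
planes = rng.standard_normal((n_bits, d))  # one random hyperplane per bit

x = rng.standard_normal(d)
y = x + 0.01 * rng.standard_normal(d)   # a near neighbor of x
z = rng.standard_normal(d)              # an unrelated point

sig_x = lsh_signature(x, planes)
sig_y = lsh_signature(y, planes)
sig_z = lsh_signature(z, planes)

# Near neighbors agree in far more bits than unrelated points do.
def agree(a, b):
    return sum(u == v for u, v in zip(a, b))

print(agree(sig_x, sig_y), agree(sig_x, sig_z))
```

In practice many such signatures are used as hash-table keys, so candidate neighbors are retrieved by bucket lookup rather than a linear scan.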

Stats, Optimization, and Machine Learning Seminar - Shuang Li

March 5, 2019

Optimization for High-dimensional Analysis and Estimation

High-dimensional signal analysis and estimation appear in many signal processing applications, including modal analysis and parameter estimation in spectrally sparse signals. The underlying low-dimensional structure in these high-dimensional signals inspires us to develop optimization-based techniques and theoretical guarantees for these fundamental problems...

Stats, Optimization, and Machine Learning Seminar - Philip Kragel

Feb. 26, 2019

Detecting emotional situations using convolutional neural networks and distributed models of human brain activity

Emotions are thought to be canonical responses to situations ancestrally linked to survival or the well-being of an organism. Although sensory elements do not fully determine the nature of emotional responses, they should be sufficient to...

Stats, Optimization, and Machine Learning Seminar - Osman Malik and Ann-Casey Hughes

Feb. 5, 2019

Osman Malik - Fast Randomized Matrix and Tensor Interpolative Decomposition Using CountSketch

In this talk, I will present our recently developed fast randomized algorithm for matrix interpolative decomposition. If time permits, I will also say a few words about how our method can be applied to the tensor interpolative decomposition...
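For context on the sketching primitive named in the title: CountSketch compresses a tall matrix by hashing each row to one of m buckets with a random sign, and can be applied in time proportional to the number of nonzeros. A minimal illustrative version (not the authors' implementation) looks like this:

```python
import numpy as np

# Illustrative CountSketch of a matrix: each row of A is added, with a random
# sign, into one of m buckets. The short sketch SA preserves column norms in
# expectation, which is what randomized decompositions exploit.

rng = np.random.default_rng(1)

def countsketch(A, m, rng):
    n = A.shape[0]
    h = rng.integers(0, m, size=n)          # bucket index for each row
    s = rng.choice([-1.0, 1.0], size=n)     # random sign for each row
    SA = np.zeros((m, A.shape[1]))
    for i in range(n):
        SA[h[i]] += s[i] * A[i]             # one pass over the rows of A
    return SA

A = rng.standard_normal((1000, 5))
SA = countsketch(A, 50, rng)

# E[||SA e_j||^2] = ||A e_j||^2 for every column j.
print(np.linalg.norm(A, axis=0))
print(np.linalg.norm(SA, axis=0))
```

A production version would vectorize the loop (e.g. with `np.add.at`), but the row-by-row form makes the hashing structure explicit.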

Stats, Optimization, and Machine Learning Seminar - Lior Horesh

Jan. 15, 2019

"Don't go with the flow – A new tensor algebra for Neural Networks"

Multi-dimensional information often involves multi-dimensional correlations that may remain latent under traditional matrix-based learning algorithms. In this study, we propose a tensor neural network framework that offers an exciting new paradigm for supervised machine...
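One widely used tensor algebra in this line of work is the t-product of Kilmer and Martin, which multiplies third-order tensors by performing frontal-slice matrix products in the Fourier domain along the tube dimension. Whether this is exactly the algebra of the talk is an assumption; a minimal sketch of the operation itself:

```python
import numpy as np

# Sketch of the t-product (Kilmer-Martin tensor algebra): multiply two
# third-order tensors by taking an FFT along the third ("tube") axis,
# doing an ordinary matrix product per frontal slice, and inverting the FFT.

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x m x n3), giving (n1 x m x n3)."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)   # per-slice matmuls in the Fourier domain
    return np.fft.ifft(Cf, axis=2).real

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((4, 2, 5))
C = t_product(A, B)
print(C.shape)  # (3, 2, 5)
```

Because the t-product reduces to independent matrix products per Fourier slice, matrix notions like transpose, identity, and inverse carry over to tensors, which is what makes a "tensor neural network" layer definable in this algebra.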

Stats, Optimization, and Machine Learning Seminar - Zhenhua Wang

Dec. 11, 2018

Induction of time inconsistency in optimal stopping problem

Time inconsistency is a common phenomenon in optimal control and optimal stopping problems, especially in finance and economics. It means that a player will change his optimal strategy over time. To deal with such a problem, we usually search for some consistent plan (equilibrium)...

Stats, Optimization, and Machine Learning Seminar - Colton Grainger and Claire Savard

Nov. 27, 2018

Colton Grainger, Department of Mathematics, University of Colorado Boulder

On Characterizing the Capacity of Neural Networks using Algebraic Topology

The learnability of different neural architectures can be characterized directly by computable measures of data complexity. In this paper, we reframe the problem of architecture selection as understanding how data determines...

Stats, Optimization, and Machine Learning Seminar - Nicholas Landry

Nov. 6, 2018

Music Data Mining: Finding structure in song

An introduction to basic music data mining techniques.

Stats, Optimization, and Machine Learning Seminar - Jeffrey Hokanson

Oct. 16, 2018

Exploiting Low-Dimensional Structure in Optimization Under Uncertainty

In computational science, optimization under uncertainty (OUU) provides a new methodology for building designs that are reliable under a variety of conditions, with improved efficiency over a traditional, safety-factor-based approach. However, the resulting optimization problems can be challenging. For example, chance constraints bounding...
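To make the notion of a chance constraint concrete: a design x is required to fail with probability at most eps under a random parameter. A hedged Monte Carlo sketch, with a hypothetical constraint function g and design x that are illustrative only, not from the talk:

```python
import numpy as np

# Monte Carlo check of a chance constraint P[g(x, xi) <= 0] >= 1 - eps:
# sample the uncertain parameter xi and estimate the empirical failure rate.

rng = np.random.default_rng(3)

def chance_constraint_satisfied(x, g, samples, eps):
    """True if the empirical failure probability over the samples is at most eps."""
    failures = np.mean([g(x, xi) > 0 for xi in samples])
    return failures <= eps

# Hypothetical example: "failure" occurs when a random load xi exceeds capacity x.
g = lambda x, xi: xi - x
samples = rng.standard_normal(10_000)   # draws of the uncertain parameter

# For standard normal load, capacity x = 2 fails with probability ~2.3%.
print(chance_constraint_satisfied(2.0, g, samples, eps=0.05))
```

Such constraints are what make OUU problems hard: the feasible set is defined through a probability, so each constraint evaluation itself requires an estimation procedure.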

Stats, Optimization, and Machine Learning Seminar - Nishant Mehta

Oct. 9, 2018

Fast Rates for Unbounded Losses: from ERM to Generalized Bayes

I will present new excess risk bounds for randomized and deterministic estimators, discarding boundedness assumptions to handle general unbounded loss functions like log loss and squared loss under heavy tails. These bounds have a PAC-Bayesian flavor in both derivation and...
