All colloquia take place on Fridays from 12:00 p.m. to 2:00 p.m. (unless otherwise noted) in Room D428/430 (ICS) on the fourth floor of the Muenzinger Psychology building.
**Note: Talks marked with a double asterisk are not ICS talks but are approved for the ICS Topics class.**
January 12, 2015
- First day of classes
January 16, 2015
Todd Gureckis, Ph.D.
Department of Psychology
Affiliate, Center for Data Science
New York University
TITLE: Self-directed learning: Understanding the interactions between decision making, learning, and memory
ABSTRACT: My research explores how people learn from their interactions with the world around them. For example, how are we so good at figuring out how something works by tinkering with it? How do we formulate questions with the goal of gaining knowledge and reducing our uncertainty? How do our choices to gather information affect our memory or conceptual knowledge? Such questions strike at the heart of what makes us such an adaptable and intelligent species. In this talk, I will give an overview of recent progress in my lab toward understanding how people gather information in "self-directed" learning environments (i.e., those where the learner is in control of what to learn about and when to learn it). A primary objective of my work is to develop detailed computational models of human learning, and my talk will highlight the important role that such models can play in helping to understand self-directed learning as a core aspect of human behavior. I will conclude by discussing implications of this work for education and instructional design, as well as for the basic science of human learning.
January 23, 2015
January 27, 2015 *Tuesday - E214*
Jennifer S. Trueblood, Ph.D.
Institute of Mathematical Behavioral Sciences
University of California, Irvine
Title: The Influence of Context and Changing Information on Choice
Abstract: Every day we make hundreds of choices. Some are seemingly trivial -- what cereal should I eat for breakfast? Others have long-lasting implications -- what stock should I invest in? Despite their obvious differences, these two decisions have one important thing in common: both are sensitive to context. In this talk, I will first describe my research investigating how context influences preferences in multi-alternative, multi-attribute choice behavior. I will provide experimental evidence that context effects from consumer choice research arise in other domains such as perception. This suggests context effects are a general feature of human choice behavior and calls for a common theoretical explanation that applies across paradigms. I will present a new model called the multi-attribute Linear Ballistic Accumulator (MLBA) model as a generalized explanation of these effects.
In the second half of the talk, I will describe my work investigating the impact of changing information on choice behavior. Most past decision research has focused on “stationary” decisions where a choice is made on the basis of fixed, unchanging information. However, in the real world, many decisions are made in the face of dynamically changing information. To assess how the decision process adapts to changes of information, I developed the piecewise Linear Ballistic Accumulator (pLBA) model. I will demonstrate the model using results from a simple perceptual decision-making task, and show how early information influences the integration of later information. The talk will conclude with a discussion of future directions including research investigating the impact of changing information on context effects in multi-alternative choice and the integration of the two models, MLBA and pLBA. I will also discuss future applications of the models to health-related decision-making.
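Both MLBA and pLBA build on the standard Linear Ballistic Accumulator framework, which is simple enough to sketch. Below is a minimal two-choice LBA simulation of the basic model (Brown & Heathcote's formulation, not the speaker's MLBA or pLBA extensions); the parameter values and function name are illustrative assumptions.

```python
import numpy as np

def simulate_lba(drifts, b=1.0, A=0.5, s=0.25, n_trials=10_000, seed=0):
    """Simulate a standard Linear Ballistic Accumulator.

    Each accumulator starts at a point drawn uniformly from [0, A] and rises
    linearly at a rate drawn from Normal(drift, s); the first accumulator to
    reach the threshold b determines the choice and the decision time.
    """
    rng = np.random.default_rng(seed)
    drifts = np.asarray(drifts, dtype=float)
    starts = rng.uniform(0, A, size=(n_trials, len(drifts)))
    rates = rng.normal(drifts, s, size=(n_trials, len(drifts)))
    # Accumulators with non-positive rates never reach threshold.
    times = np.where(rates > 0, (b - starts) / rates, np.inf)
    choices = np.argmin(times, axis=1)   # winner on each trial
    rts = np.min(times, axis=1)          # winner's decision time
    return choices, rts
```

With a higher drift rate for option 0 (e.g., `simulate_lba([1.0, 0.6])`), the model produces the expected speed-accuracy pattern: option 0 wins most trials, and faster responses tend to come from trials with high sampled rates.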
January 30, 2015
Tamar H. Gollan, Ph.D.
Professor of Psychiatry
University of California, San Diego, School of Medicine
Hispanic Program Core Leader
UCSD Alzheimer's Disease Research Center
Title: Searching for the Language Switch: The Invasion of Executive Control into the Psycholinguistics of Bilingualism
Abstract: Switching back and forth between tasks is known to be more difficult than steadily performing a single task. From this perspective, people who speak more than one language regularly engage in a puzzling behavior often referred to as code-switching. Bilinguals spontaneously switch languages when conversing with other bilinguals, sometimes with great frequency, and even though nothing obvious compels them to do so. Experimental investigation confirms robust processing costs associated with both language switching and non-linguistic task switching, but the methods used differ from naturally occurring switches in a number of critical ways. In this talk I will show that fully voluntary switching is less costly than previously assumed, sometimes even cost-free. In addition, direct comparisons between language- and task-switching, and analysis of control over unintended language switches, reveal dissociations in the brain mechanisms underlying mixing in linguistic and non-linguistic domains, and preservation of control over language selection in aging. Such results imply that language control is a "special case" with modular mechanisms that have little to do with general executive control, and this seems inconsistent with reports that bilinguals might be more efficient switchers than monolinguals. I'll discuss what properties could make language special, and how a better understanding of this might help us achieve greater efficiency in switching in other domains (e.g., switching between writing a paper and checking e-mail).
February 3, 2015 *Tuesday - E214*
Darrell A. Worthy, Ph.D.
Texas A & M University
Title: Using Computational Models to Examine Age-Related Changes in Neural Activity during Decision-Making
Abstract: Decision-making is an important task that individuals of all ages must engage in on a daily basis. Extensive work suggests that the normal aging process leads to significant structural and functional declines in the frontostriatal brain networks implicated in decision-making. This can often lead to poorer performance for older adults in decision-making tasks compared to younger adults. In this talk I present two studies that use a combination of computational modeling and fMRI to identify age differences in neural activity associated with specific cognitive processes during decision-making. Study 1 shows that neural responses in striatal and medial prefrontal regions to reward prediction errors decline with age, while responses to reward outcomes are maintained. Study 2 uses a more demanding state-based decision-making task where individuals must consider how actions affect changes in their future state. In this study we find evidence of compensatory activation of lateral prefrontal regions in older adults that is related to a model-based measure of how current actions affect changes in one's future state. This suggests that older adults may recruit additional brain structures during cognitively demanding decision-making tasks to achieve the same level of performance as younger adults. I end the talk by discussing future directions in decision neuroscience across the lifespan, and current work aimed at exploring how genetic differences can affect learning and decision-making.
February 6, 2015
Michael N. Jones, Ph.D.
Associate Professor of Cognitive Science
Associate Professor of Psychological and Brain Sciences
Adjunct Professor of Informatics and Computing
Adjunct Professor, Program in Neuroscience
Affiliated Faculty, Network Science Institute
Title: Scaling Models of Human Semantic Abstraction
Abstract: Human semantic memory develops over a lifetime of linguistic and perceptual experience. Laboratory experiments can probe the end product of the process of semantic abstraction, or can study the process at a small scale using well-controlled stimuli. But a full understanding of the mechanisms that drive semantic abstraction requires the study of models that learn over realistic data at a scale that humans do. Indeed, the models that perform best at small scales do not readily scale up to human-scale amounts of data, and relatively "dumb but scalable" models end up generating impressively complex representations when trained on sufficient data. I will present some recent work from my lab exploring random vector accumulation models based on theories of associative memory. These models learn incrementally from linguistic corpora, making surprising predictions about semantic development, and can also integrate large-scale perceptual data from our crowdsourcing project, the NSF Semantic Pictionary Project. Because humans are both the producers and consumers of such a large amount of the data we wish to mine for knowledge, models of human semantic abstraction can offer unique insights not captured by purely data-driven machine learning techniques. I will end with a few applied examples from my lab using human semantic models in clinical informatics, automated synthesis of the neuroimaging literature, optimizing passwords, and intelligent tutoring systems.
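The core mechanism behind random vector accumulation can be illustrated with a toy random-indexing model (my sketch, not the lab's actual models): each word gets a fixed random "environment" vector, and a word's memory vector incrementally accumulates the environment vectors of its neighbors, so words appearing in similar contexts acquire similar vectors.

```python
import numpy as np

def random_vector_semantics(corpus, dim=256, window=2, seed=0):
    """Toy random-indexing model. `corpus` is a list of tokenized sentences.
    Returns a memory vector per word, built incrementally, sentence by
    sentence, by summing the fixed random vectors of nearby words."""
    rng = np.random.default_rng(seed)
    vocab = sorted({w for sent in corpus for w in sent})
    env = {w: rng.standard_normal(dim) for w in vocab}   # fixed signatures
    mem = {w: np.zeros(dim) for w in vocab}              # learned meanings
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    mem[w] += env[sent[j]]               # incremental update
    return mem

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```

On a tiny corpus where "cat" and "dog" occur in the same contexts but "car" does not, the model judges `cat` more similar to `dog` than to `car`, despite never comparing words directly.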
February 6, 2015 *3:00 p.m.*
Anoop Sarkar, Ph.D.
School of Computing Science
Simon Fraser University
British Columbia, Canada
Title: Interactive visualization of facts extracted from natural language text
Abstract: In natural language processing, the summarization of information in a large amount of text has typically been viewed as a type of natural language generation problem, e.g. "produce a 250 word summary of some documents based on some input query". An alternative view, which will be the focus of this talk, is to use natural language parsing to extract facts from a collection of documents and then use information visualization to provide an interactive summarization of these facts.
The first step is to extract detailed facts about events from natural language text using a predicate-centered view of events (who did what to whom, when and how). We exploit semantic roles in order to create a predicate-centric ontology for entities which is used to create a knowledge base of facts about entities and their relationship with other entities.
The next step is to use information visualization to provide a summarization of the facts in this knowledge base. The user can interact with the visualization to find summaries that have different granularities. This enables the discovery of extremely uncommon facts easily, unlike large scale "macro-reading" approaches to information extraction.
We have used this methodology to build an interactive visualization of events in human history by machine reading Wikipedia articles (available on the web at http://lensingwikipedia.cs.sfu.ca).
February 13, 2015
February 20, 2015
Elizabeth Race, Ph.D.
Post-doctoral Research Associate
Memory Disorders Research Center
TITLE: Memory, mental simulation & language: The ties that bind
ABSTRACT: From recognizing a childhood friend to reliving the moment we fell in love, our memories provide a rich tapestry of prior experiences and knowledge that lies at the core of our human experience. But memory does not simply provide a lens into the past. On the contrary, memory also offers an important window into the future. Accumulating evidence suggests that the brain continually generates predictions based on past experience and stored knowledge. These memory-based predictions provide expectations about the future that shape our thoughts, decisions, and actions. In this talk, I will discuss recent neuropsychological evidence that provides novel insight into the cognitive and neural processes that support this powerful, yet surprising, proactive use of memory. I will first discuss studies of patients with medial temporal lobe amnesia that have revealed the importance of associative processes supported by the hippocampus in the generation and flexible use of memory-based predictions when simulating the future (prospection). I will then discuss how damage to the hippocampal system not only impairs the ability to construct mental simulations, but also impairs the ability to share these mental simulations with others as well as construct integrated verbal discourse more generally. These results reveal that memory, mental simulation, and language share fundamental underlying cognitive and neural processes, and that binding functions supported by the hippocampus play a critical role in the flexible and proactive use of both memory and language. Finally, I will conclude by discussing how this novel view of memory and memory systems aligns with additional neuropsychological evidence suggesting that the medial temporal lobes support cognition beyond the long-term memory domain.
February 20, 2015 **4:00 p.m.**
Reza Shadmehr, Ph.D.
Professor of Neuroscience
Johns Hopkins University
Title: Encoding of action by the Purkinje cells of the cerebellum
Abstract: Execution of accurate movements depends critically on the cerebellum, suggesting that Purkinje cells (P-cells) may predict the state of the moving body, a process called a 'forward model'. Yet this encoding has remained a long-standing puzzle. For example, during saccadic eye movements, the firing of P-cells shows little consistent modulation with respect to the speed or direction of the moving eye, and, critically, lasts longer than the duration of the movement. How could the cerebellum be involved in the control of saccades if the firing of P-cells far outlasts the movement? Here, we analyzed P-cell discharge in the oculomotor vermis of behaving monkeys during saccadic eye movements. We found neurons that increased their activity during saccades, as well as neurons that decreased their activity. When we estimated the synaptic inhibition that these two populations produced via their projections to the caudal fastigial nucleus (cFN), we uncovered a signal that precisely predicted the real-time motion of the eye, an encoding that was not present in either population alone. When we aligned the simple spike activity of each P-cell to a coordinate system that depended on that cell's complex spike (CS) tuning, the result unmasked a pattern of inhibition at cFN that encoded saccade speed and direction via a multiplicative gain-field. Therefore, our results suggest three new ideas: reliable encoding of saccade metrics does not occur in the firing of individual P-cells, but via synchronized inputs of bursting and pausing cells onto cFN; in this encoding, speed and direction are multiplicatively represented via a gain-field; and the anatomical projections of P-cells to cFN neurons are not random, but organized by the CS tuning of the P-cells.
February 27, 2015
Max Berniker, Ph.D.
Department of Mechanical and Industrial Engineering
College of Engineering
University of Illinois at Chicago
Physical Medicine and Rehabilitation
TITLE: A new perspective on the internal representations of motor control and learning
ABSTRACT: The motor system generates time-varying commands to move our limbs and body. Yet how this is achieved is largely a mystery. In conventional formulations the brain relies on dynamical models of our body (forward and inverse models) and control policies that must be integrated forward in time to generate feedforward time-varying commands; thus these are representations across space, but not time. Optimal control theory tells us that a relatively small number of parameters can uniquely define an entire command and state trajectory. Building off this, we examine a new approach that directly represents both time-varying commands and the resulting state trajectories with a function; a representation across space and time. Since the output of this function includes time, it is high-dimensional and requires more parameters than a typical dynamical model. To avoid the problems of over-fitting and local minima these extra parameters introduce, we exploit recent advances in machine learning to build our function using a stacked autoencoder, or deep network. Using initial states and final target states as input, this deep network can be trained to output an accurate temporal profile of the optimal command and state trajectory for a point-to-point reach of a nonlinear limb model, even when influenced by varying force fields. In a manner that mirrors motor babble, the network can also teach itself to learn through trial and error. Lastly, we demonstrate how this network can learn to optimize a cost objective. This functional approach to motor control is a sharp departure from the standard dynamical approach, and we end by noting how it may offer new insights into many commonly observed electrophysiological phenomena, including distributed parallel encoding, preparatory activity, and temporally unaligned activity.
March 6, 2015
Stephen Becker, Ph.D.
University of Colorado Boulder
Title: Matrix Completion and Robust PCA: new data analysis tools
Abstract: Matrix completion is a generalization of compressed sensing that seeks to determine missing matrix entries under some (non-Bayesian) assumptions about the matrix. The technique has generated a lot of excitement due to rigorous guarantees in some cases, and also due to applications to machine learning (e.g., the Netflix prize problem). This talk discusses basic matrix completion, including efficient algorithms suitable for big data, as well as an extension of matrix completion known as robust PCA, which can handle large outliers in the data. We continue with several applications: inferring the structure of chromosomes, functional imaging of the brain, removing clouds from multi-spectral satellite image data, and verifying the properties of a quantum state or a quantum gate.
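The basic idea is easy to demonstrate: if the underlying matrix is low-rank, alternating between a truncated-SVD projection and re-imposing the observed entries recovers the missing ones. The sketch below is a simple iterative-SVD ("hard impute") variant, one of several standard algorithm families; it is my illustration, not code from the talk.

```python
import numpy as np

def complete_matrix(M, mask, rank=1, n_iters=300):
    """Fill the missing entries of M (where mask is False) by alternating
    between a hard low-rank projection (truncated SVD) and re-imposing
    the observed entries -- a simple iterative SVD completion."""
    X = np.where(mask, M, 0.0)                 # start with zeros in the gaps
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                         # keep only the top-`rank` singular values
        X_low = (U * s) @ Vt                   # best rank-`rank` approximation
        X = np.where(mask, M, X_low)           # restore the observed entries
    return X
```

For example, a random rank-1 matrix with 30% of its entries hidden is typically recovered almost exactly, while the observed entries are preserved by construction.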
March 13, 2015
Adele Goldberg, Ph.D.
Department of Psychology
Title: Explain me this: how we Learn what not to Say
Abstract: In certain cases, linguistic formulations are semantically sensible and syntactically well-formed, and yet noticeably dispreferred (e.g., ??She disappeared the ticket; ??the afraid boy). Experimental evidence suggests that competition in context—statistical preemption—plays a key role in learning what not to say in these cases. I will also offer a proposal as to why adult second language learners seem to have more trouble avoiding these dispreferred utterances.
March 20, 2015
March 27, 2015
- Spring Break (No Classes)
April 3, 2015
Ehtibar Dzhafarov, Ph.D.
Department of Psychological Sciences
Title: Contextuality and Random Variables from Quantum Mechanics to Psychology
Abstract: Probabilistic contextuality is an abstract system-theoretic notion with applications ranging from behavioral and social systems to quantum theory (where it includes, as a special case, nonlocality). We view it as a foundational concept of probability theory because contextuality is about the identity of random variables. Contextuality-by-Default means that the identity of a random variable a priori differs from one context to another, a context being defined by the conditions under which it is recorded, in particular, by the other random variables recorded together with it. Two random variables, R in context c and R' in context c', if they are identically distributed, can sometimes be coupled (i.e., assigned a joint distribution) so that their values differ with probability zero. If such a coupling does not exist, we say that the system involving R, R', c, c' (and, possibly, many other random variables in various contexts) is contextual. More generally, if R in context c and R' in context c' may be differently distributed, it is possible that they can be coupled so that the probability with which their values differ has the minimal value allowed by the difference in their distributions; if such a coupling does not exist, we say that the system involving R, R', c, c' is contextual. There are numerous mathematical tests (necessary conditions, and sometimes criteria) for determining whether a system is contextual. One of several possible measures of contextuality is based on the idea of computing the minimal probability with which R in context c may be unequal to R' in context c' when they are coupled.
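As a toy illustration of the coupling idea (my example, not the speaker's formalism): for two binary (Bernoulli) random variables, the minimal probability of disagreement over all couplings equals the total-variation distance between their distributions, and a "maximal coupling" that shares a single uniform draw achieves it.

```python
import random

def min_disagreement(p, q):
    """Minimum of P(R != R') over all couplings of Bernoulli(p) and
    Bernoulli(q): the total-variation distance, here simply |p - q|."""
    return abs(p - q)

def maximal_coupling(p, q, n=100_000, seed=0):
    """Monte Carlo check: the coupling R = (U < p), R' = (U < q) with a
    shared U ~ Uniform(0, 1) makes R and R' disagree only when U lands
    between p and q, achieving the minimum above."""
    rng = random.Random(seed)
    disagree = sum((u < p) != (u < q) for u in (rng.random() for _ in range(n)))
    return disagree / n
```

For identically distributed variables (p = q) the bound is zero, matching the abstract's first case; when the distributions differ, the simulated disagreement rate converges to |p - q|.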
Acknowledgments: The work has been supported by NSF grant SES-1155956 and AFOSR grant FA9550-14-1-0318. The presentation is based on joint work with Janne Kujala, with thanks to Jan-Åke Larsson, Acacio de Barros, and Gary Oas.
April 10, 2015
ICS Summer Research Award Recipient
Michael Mozer, Ph.D., Professor, ICS and Department of Computer Science
Adrian Ward, Senior Research Associate, Department of Marketing; Scholar, Center for Research on Consumer Financial Decision Making
Ian Smith, Ph.D. Student, Department of Computer Science
John Lynch, Ted Anderson Professor of Free Enterprise, Department of Marketing; Director, Center for Research on Consumer Financial Decision Making
April 17, 2015
ICS Summer Research Award Recipient
Lauren Durkee, Ph.D. Student, ICS and Speech, Language, and Hearing Sciences
Hannah Glick, Ph.D. Student, ICS and Speech, Language, and Hearing Sciences
Anu Sharma, Ph.D., ICS Fellow
Sarel van Vuuren, Ph.D., ICS Fellow
April 24, 2015
ICS Summer Research Award Recipient
James Foster, Ph.D. Student, ICS and Cognitive Psychology
Matt Jones, Professor, Department of Psychology and Neuroscience
Al Kim, Professor, ICS and Department of Psychology and Neuroscience
Vicky Lai, Research Staff Scientist in the Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Netherlands
May 1, 2015
- ICS Fiesta & Poster Session