Conference Facilities
 
OSU Conference Services
100 LaSells Stewart Center
Corvallis, OR 97333
(541) 737-6439
(800) 678-6311
 

Invited Speakers

David Heckerman
Microsoft Research
Graphical Models for HIV Vaccine Design

I will discuss two applications of graphical models to HIV vaccine design. The first helps determine how strongly our immune system fights HIV. The second helps identify which parts of HIV can be successfully attacked by our immune system. I will also discuss how these applications have exposed a weakness in the process of learning graphical models from data: namely, the inability to quantify how many arcs in a learned graphical model are spurious. I will offer a solution based on the False Discovery Rate.
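The False Discovery Rate idea mentioned above can be sketched with the standard Benjamini-Hochberg procedure: rank the p-values for candidate arcs and keep only those clearing an adaptive threshold, so the expected fraction of spurious arcs among those kept stays below a target level q. The p-values below are hypothetical illustrations, not data from the talk.

```python
# Benjamini-Hochberg FDR control: given one p-value per candidate arc,
# retain the arcs whose p-values fall at or below the largest rank r
# satisfying p_(r) <= (r / m) * q.
def benjamini_hochberg(p_values, q=0.05):
    m = len(p_values)
    ranked = sorted(enumerate(p_values), key=lambda t: t[1])
    cutoff = 0
    for rank, (_, p) in enumerate(ranked, start=1):
        if p <= rank / m * q:
            cutoff = rank  # remember the largest qualifying rank
    # Arcs ranked at or below the cutoff survive; return their indices.
    return sorted(idx for idx, _ in ranked[:cutoff])

# Hypothetical p-values for five candidate arcs in a learned model.
kept = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.30], q=0.05)
# Only the two strongest arcs clear the adaptive threshold.
```

Note that the threshold adapts to the number of tests m, unlike a fixed per-arc significance level.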

 

Bernhard Schölkopf
Max Planck Institute for Biological Cybernetics
Thoughts on Kernels

I will present my thoughts on what made kernel machines popular and what may or may not keep them going. I will also discuss applications in different domains, including computer graphics.

 

Josh Tenenbaum
Massachusetts Institute of Technology
Bayesian Models of Human Inductive Learning

In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words, hidden properties of objects, or the existence of causal relations from just one or a few relevant observations -- far outstripping the capabilities of conventional learning machines. How do they do it? And how can we bring machines closer to these human-like learning abilities? I will argue that people's everyday inductive leaps can be understood as approximations to Bayesian computations operating over structured representations of the world, which cognitive scientists have called "intuitive theories" or "schemas". For each of several everyday learning tasks, I will consider how appropriate knowledge representations are structured and used, and how these representations could themselves be learned via Bayesian methods. The key challenge is to balance the need for strongly constrained inductive biases -- critical for generalization from very few examples -- with the flexibility to learn about the structure of new domains and to acquire new inductive biases suited to environments for which we could not have been pre-programmed. The models I discuss will connect to several directions in contemporary machine learning, such as semi-supervised learning, structure learning in graphical models, hierarchical Bayesian modeling, and nonparametric Bayes.
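One way to see how a Bayesian learner can generalize strongly from just a few examples is the "size principle": smaller hypotheses consistent with the data gain likelihood rapidly as examples accumulate. The sketch below is a toy illustration in that spirit; the hypothesis space, prior, and examples are assumptions for exposition, not models from the talk.

```python
# Toy Bayesian concept learning over numbers 1..100. Under strong
# sampling, each example is drawn uniformly from the true concept,
# so the likelihood of n consistent examples is (1/|h|)^n -- smaller
# consistent hypotheses win quickly (the "size principle").
def posterior(hypotheses, examples, prior):
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in examples):
            scores[name] = prior[name] * (1.0 / len(h)) ** len(examples)
        else:
            scores[name] = 0.0  # hypothesis ruled out by the data
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

# Hypothetical hypothesis space: three candidate concepts.
hyps = {
    "even": {n for n in range(1, 101) if n % 2 == 0},
    "powers_of_2": {1, 2, 4, 8, 16, 32, 64},
    "all": set(range(1, 101)),
}
prior = {name: 1.0 / len(hyps) for name in hyps}

# Three examples suffice to concentrate belief on the narrow concept.
post = posterior(hyps, examples=[16, 8, 2], prior=prior)
```

After seeing 16, 8, and 2, the narrow "powers of 2" concept dominates the broader "even" and "all" hypotheses, even though all three remain logically consistent with the data.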