AutoML 2015 Workshop @ ICML 2015

Lille Grand Palais

1 Boulevard des Cités Unies, 59777 Lille-Euralille
Organizers: Balázs Kégl (LAL), Frank Hutter (University of Freiburg)

The website of the event:


Please use this site to submit questions you would like raised in the panel discussion.

    • 8:30 AM – 10:00 AM
      Session 1
      • 8:30 AM
        Invited Talk: Open Research Problems in AutoML 40m
        Speaker: Rich Caruana (Microsoft Research)
      • 9:10 AM
        Invited Talk: Bandits and Bayesian optimization for AutoML 40m
        Complex optimization and decision-making tasks are beginning to play an increasingly crucial role across a wide variety of scientific fields. This is becoming more and more evident as entire research programs are being automated. In this talk I'll describe a set of methods, known as Bayesian optimization, which provide a very sample-efficient approach to this problem. Much of the gain from these methods is obtained by building a posterior model of a function during optimization in order to efficiently explore its surface. I will further describe a number of advanced search mechanisms and models and show how these can be used for automating machine learning problems. Finally, I will briefly connect these methods to the related bandit literature.
        Speaker: Matthew Hoffmann (University of Cambridge)
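The approach the abstract sketches, building a posterior model of the objective and choosing the next evaluation by trading off the model's mean against its uncertainty, can be illustrated in a few lines. The sketch below is a deliberately minimal caricature, assuming an RBF-kernel Gaussian process on a fixed 1-D grid with an upper-confidence-bound acquisition; the kernel, the acquisition rule, and every name in it are illustrative choices, not anything from the talk.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.3):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-4):
    """Posterior mean and variance of a zero-mean GP at grid points."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))  # jitter for stability
    Ks = rbf_kernel(x_obs, x_grid)
    solve = np.linalg.solve(K, Ks)
    mean = solve.T @ y_obs
    var = 1.0 - np.sum(Ks * solve, axis=0)   # prior variance is 1 on the diagonal
    return mean, np.maximum(var, 0.0)

def bayes_opt(f, n_iter=10, beta=2.0):
    """Maximize f on [0, 1] with a GP surrogate and a UCB acquisition."""
    x_grid = np.linspace(0.0, 1.0, 201)
    x_obs = np.array([0.0, 1.0])             # two initial evaluations
    y_obs = np.array([f(x) for x in x_obs])
    for _ in range(n_iter):
        mean, var = gp_posterior(x_obs, y_obs, x_grid)
        ucb = mean + beta * np.sqrt(var)     # optimism in the face of uncertainty
        x_next = x_grid[np.argmax(ucb)]
        x_obs = np.append(x_obs, x_next)
        y_obs = np.append(y_obs, f(x_next))
    return x_obs[np.argmax(y_obs)], np.max(y_obs)

# Toy objective: a smooth 1-D function whose maximum is at x = 0.3.
x_best, y_best = bayes_opt(lambda x: -(x - 0.3) ** 2)
```

The loop needs only a handful of evaluations to home in on the optimum, which is the sample-efficiency the abstract refers to; the advanced search mechanisms the talk covers replace the simple UCB rule above.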
      • 9:50 AM
        Poster Spotlights 1 10m
        5 spotlights of 2 minutes each
        • Using Internal Validity Measures to Compare Clustering Algorithms 2m
          Speakers: Hendrik Blockeel, Toon Van Craenendonck
        • Redundant Feature Selection using Permutation Methods 2m
          Speakers: Abhir Bhalerao, Nathan Griffiths, Phillip Taylor
        • A Linear-Time Particle Gibbs Sampler for Infinite Hidden Markov Models 2m
          Speakers: Hong Ge, Nilesh Tripuraneni, Shane Gu, Zoubin Ghahramani
        • Autograd: Effortless Gradients in Pure Numpy 2m
          Speakers: David Duvenaud, Dougal Maclaurin, Ryan P. Adams
        • Autonomous learning of parameters in differential equations 2m
          Speakers: Adel Mezine, Artémis Llamosi, Florence d'Alché-Buc, Michèle Sebag, Veronique Letort
    • 10:00 AM – 10:30 AM
      Coffee break
    • 10:30 AM – 12:00 PM
      Session 2
      • 10:30 AM
        Invited Talk: Algorithm Recommendation as Collaborative Filtering 40m
        Speaker: Michèle Sebag (CNRS)
      • 11:10 AM
        Poster Spotlights 2 18m
        9 spotlights of 2 minutes each
        • Improving reproducibility of data science experiments 2m
          Speakers: Alexander Baranov, Alexey Rogozhnikov, Andrey Ustyuzhanin, Egor Khairullin, Tatiana Likhomanenko
        • Introducing Sacred: A Tool to Facilitate Reproducible Research 2m
          Speakers: Jürgen Schmidhuber, Klaus Greff
        • DIGITS: the Deep learning GPU Training System 2m
          Speakers: Allison Gray, Julie Bernauer, Luke Yeager, Michael Houston
        • Design of the 2015 ChaLearn AutoML Challenge 2m
          Speakers: Gavin Cawley, Hugo Jair Escalante, Isabelle Guyon, Kristin Bennett, Sergio Escalera, Tin Kam Ho
        • Autokit: automatic machine learning via representation and model search 2m
          Speaker: Tadej Štajner
        • AutoCompete: A Framework for Machine Learning Competitions 2m
          Speakers: Abhishek Thakur, Artus Krohn-Grimberghe
        • Methods for Improving Bayesian Optimization for AutoML 2m
          Speakers: Aaron Klein, Frank Hutter, Jost Tobias Springenberg, Katharina Eggensperger, Manuel Blum, Matthias Feurer
        • Fast Cross-Validation for Incremental Learning 2m
          Speakers: András György, Csaba Szepesvári, Pooria Joulani
        • Active Structure Discovery for Gaussian Processes 2m
          Speakers: Gustavo Malkomes, Roman Garnett
      • 11:30 AM
        1st Poster Session 30m
    • 12:00 PM – 2:00 PM
      Lunch Break 2h
    • 2:00 PM – 4:00 PM
      Session 3
      • 2:00 PM
        Invited Talk: Recursive Self-Improvement 40m
        Most machine learning researchers focus on domain-specific learning algorithms. Can we also construct meta-learning algorithms that learn better learning algorithms, and better ways of learning better learning algorithms, and so on, restricted only by the fundamental limits of computability? In 1965, I. J. Good already made informal remarks on an intelligence explosion through such recursive self-improvement (RSI). I will discuss several concrete algorithms (not just vague ideas) for RSI:
        1. My diploma thesis (1987) proposed an evolutionary system that learns to inspect and improve its own learning algorithm: Genetic Programming (GP) is recursively applied to itself to invent better learning methods, meta-learning methods, meta-meta-learning methods, and so on.
        2. RSI based on the self-referential Success-Story Algorithm for self-modifying probabilistic programs (1997) was already able to solve complex tasks.
        3. My self-referential deep recurrent neural networks (since 1993) run, inspect, and change their own weight-change algorithms. Back in 2001, my former student Hochreiter (now a professor) already had a practical implementation of such an RNN that meta-learns an excellent learning algorithm, at least for a limited domain.
        4. The Goedel machine (2006) is the first RSI scheme that is mathematically optimal in a particular sense.
        Will RSI finally take off in the near future?
        Speaker: Juergen Schmidhuber (IDSIA)
      • 2:40 PM
        Invited Talk: Automatically constructing models, and automatically explaining them, too. 40m
        How could an artificial intelligence do statistics? It would need an open-ended language of models, and a way to search through and compare those models. Even better would be a system that could explain the different types of structure found, even if that type of structure had never been seen before. This talk presents a prototype of such a system, which builds structured Gaussian process regression models by combining covariance kernels into a custom model for each dataset. The resulting models can be broken down into relatively simple components, and surprisingly, it's not hard to write code that automatically describes each component, even for novel combinations of kernels. The result is a procedure that takes in a dataset and outputs a report with plots and English descriptions of the different types of structure found in that dataset.
        Speaker: David Duvenaud (Harvard University)
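The pipeline this abstract describes, searching over compositions of covariance kernels and attaching an English phrase to each chosen component, can be caricatured briefly. The sketch below greedily grows a sum of base kernels scored by GP log marginal likelihood; the three-kernel set, the sum-only grammar, and the canned phrases are illustrative simplifications, not the system presented in the talk.

```python
import numpy as np

def se(x, y, length=1.0):
    """Squared-exponential kernel: smooth functions."""
    return np.exp(-0.5 * ((x[:, None] - y[None, :]) / length) ** 2)

def per(x, y, period=1.0):
    """Periodic kernel: repeating structure."""
    return np.exp(-2.0 * np.sin(np.pi * (x[:, None] - y[None, :]) / period) ** 2)

def lin(x, y):
    """Linear kernel: straight-line trends."""
    return x[:, None] * y[None, :]

# Each base kernel pairs a covariance function with an English description.
KERNELS = {"SE": (se, "smoothly varying structure"),
           "PER": (per, "periodic structure"),
           "LIN": (lin, "a linear trend")}

def log_marginal(k, x, y_obs, noise=0.1):
    """GP log marginal likelihood: the score used to compare kernels."""
    K = k(x, x) + noise * np.eye(len(x))
    sign, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, y_obs)
    return -0.5 * (y_obs @ alpha + logdet + len(x) * np.log(2 * np.pi))

def greedy_search(x, y_obs, depth=2):
    """Greedily grow a sum of base kernels, building an English report."""
    def combined(parts):
        return lambda a, b: sum(KERNELS[p][0](a, b) for p in parts)
    chosen, phrases = [], []
    for _ in range(depth):
        scored = [(log_marginal(combined(chosen + [name]), x, y_obs), name)
                  for name in KERNELS]
        best_score, best_name = max(scored)
        chosen.append(best_name)
        phrases.append(KERNELS[best_name][1])
    return chosen, "The data show " + " plus ".join(phrases) + "."

# Toy data: a linear trend plus a periodic signal.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 40)
y = 0.5 * x + np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(40)
parts, report = greedy_search(x, y)
```

The actual system searches a much richer grammar (sums and products, with hyperparameter optimization at each step) and generates far more detailed descriptions, but the loop above captures the core idea: compare compositions by marginal likelihood, then translate the winning components into prose.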
      • 3:20 PM
        2nd Poster Session 40m
    • 4:00 PM – 4:30 PM
      Coffee break
    • 4:30 PM – 6:00 PM
      Session 4
      • 4:30 PM
        Invited Talk: OpenML: A Foundation for Networked & Automatic Machine Learning 40m
        OpenML is an online machine learning platform where scientists can automatically log and share data sets, code, and experiments, organize them online, and collaborate with researchers all over the world. It helps to automate many tedious aspects of research, is readily integrated into several machine learning tools, and offers easy-to-use APIs. It also enables large-scale and real-time collaboration, allowing researchers to build directly on each other's latest results, and track the wider impact of their work. Ultimately, this provides a wealth of information for building systems that learn from previous experiments, to either assist people while analyzing data, or automate the process altogether.
        Speaker: Joaquin Vanschoren (Eindhoven University of Technology)
      • 5:10 PM
        AutoML Challenge 20m
        Speaker: Marc Boulle (Orange)
      • 5:30 PM
        Panel Discussion: Next steps for AutoML 30m
        Panelists: Marc Boulle, Rich Caruana, David Duvenaud, Matthew Hoffmann, Juergen Schmidhuber, Michèle Sebag, Joaquin Vanschoren.