
HEPML workshop at NIPS14

Level 5, room 511 c (Palais des Congrès de Montréal)

Description

The website of the event: https://sites.google.com/site/hepml14

    • Session 1
      • 1
        Welcome
        Speaker: Balázs Kégl (LAL)
      • 2
        HEP&ML and the HiggsML challenge
        We first describe the HiggsML challenge (the problem of optimizing classifiers for discovery significance, the setup of the challenge, the results, and some analysis of the outcome). In the second part we outline some of the application themes of machine learning in high-energy physics. The challenge's AMS metric is sketched in code below.
        Speaker: Balázs Kégl (LAL)
        Slides
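        A minimal sketch of the AMS metric mentioned above (our illustration, following the challenge's definition; b_reg = 10 is the regularization constant used in the challenge):

        ```python
        import math

        def ams(s, b, b_reg=10.0):
            """Approximate median significance (AMS) from the HiggsML challenge.

            s: sum of importance weights of selected signal events
            b: sum of importance weights of selected background events
            b_reg: regularization constant (10 in the challenge)
            """
            return math.sqrt(2.0 * ((s + b + b_reg)
                                    * math.log(1.0 + s / (b + b_reg)) - s))
        ```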
      • 3
        Embedding ML in Classical Statistical tests used in HEP (invited talk)
        I will review the ways that machine learning is typically used in particle physics, some recent advancements, and future directions. In particular, I will focus on the integration of machine learning and classical statistical procedures. These considerations motivate a novel construction that is a hybrid of machine learning algorithms and more traditional likelihood methods.
        Speaker: Kyle Cranmer (New York University)
        Slides
    • Coffee break
    • Session 2
      • 4
        Presentation of the winner of the HiggsML challenge
        We describe the winning solution of the HiggsML challenge, the issues related to the evaluation metric and reliable assessment of model performance. Finally, we take a stab at predicting how to achieve larger improvements.
        Speaker: Gábor Melis
        Slides
      • 5
        Presentation of the runner up of the HiggsML challenge
        High Energy Physics provides a challenging data domain with data that is highly structured, but also very noisy. I will present what I have learned analyzing this data for the HiggsML challenge, focusing on methods that are able to effectively search through a high dimensional model space while also achieving good statistical efficiency. In addition, I will discuss the role of the physicist in modelling this type of data, and I will talk about robustly applying our findings to real (not simulated) HEP data.
        Speaker: Tim Salimans
        Slides
      • 6
        Presentation of the winner of the HEP meets ML prize
        In this talk, I will describe how we use the principle of gradient boosting to construct simple and effective regression-tree functions for Higgs boson detection. We take a functional-space optimization framework that jointly optimizes the training objective and the simplicity of the functions learnt. I will talk about how this objective relates cleanly to tree searching, pruning, and leaf weight estimation. Finally, I will discuss how the framework can be modularized to provide an interface for adding physics domain knowledge into the learning algorithm. The closed-form leaf weights and split gain at the core of this approach are sketched below.
        Speaker: Tianqi Chen
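        A minimal sketch (our illustration, not the speaker's code) of the closed forms referred to above: in a second-order, regularized boosting objective, the optimal leaf weight and the split gain for a fixed tree structure have simple expressions in terms of the sums of gradients G and Hessians H of the loss over the instances in a leaf.

        ```python
        def optimal_leaf_weight(G, H, lam):
            # w* = -G / (H + lam): minimizer of the per-leaf quadratic
            # objective, with lam penalizing large leaf weights
            return -G / (H + lam)

        def leaf_score(G, H, lam):
            # objective value of a leaf at its optimal weight
            return -0.5 * G * G / (H + lam)

        def split_gain(G_left, H_left, G_right, H_right, lam, gamma):
            # improvement from splitting a node into two children, minus
            # the complexity cost gamma per added leaf; a negative gain
            # signals that the split should be pruned
            G, H = G_left + G_right, H_left + H_right
            return (leaf_score(G, H, lam)
                    - leaf_score(G_left, H_left, lam)
                    - leaf_score(G_right, H_right, lam)
                    - gamma)
        ```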
      • 7
        Real-time data analysis at the LHC: present and future
        The Large Hadron Collider (LHC), which collides protons at an energy of 14 TeV (for non-physicists, each beam of protons carries roughly the energy of a TGV train going at full speed), produces hundreds of exabytes of data per year, making it one of the largest sources of data in the world today. At present it is not possible even to transfer most of this data from the four main particle detectors at the LHC to "offline" data facilities, much less to permanently store it for future processing. For this reason the LHC detectors are equipped with real-time analysis systems, called triggers, which process this volume of data and select the most interesting proton-proton collisions. The LHC experiment triggers reduce the data produced by the LHC by a factor of between 1,000 and 10,000, to tens of petabytes per year, allowing its economical storage and further analysis. The bulk of this data reduction is performed by custom electronics which ignores most of the data in its decision making, and is therefore unable to exploit the most powerful known data analysis strategies developed by e.g. the machine learning community. In this talk I will cover the present status of real-time data analysis at the LHC, before explaining why the future upgrades of the LHC experiments will increase the volume of data which can be sent off the detector and into off-the-shelf data processing facilities (such as CPU or GPU farms) to tens of exabytes per year. This development will simultaneously enable a vast expansion of the physics programme of the LHC's detectors, and make it mandatory to develop and implement a new generation of real-time multivariate analysis tools in order to fully exploit this new potential of the LHC. I will explain what work is ongoing in this direction and hopefully motivate why more effort is needed in the coming years. (A back-of-the-envelope check of the data rates quoted here is sketched below.)
        Speaker: Vava Gligorov
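        A back-of-the-envelope check of the data rates quoted above (our illustration; the input figure is an assumed round number, not an official one):

        ```python
        EB = 1e18  # bytes in an exabyte
        PB = 1e15  # bytes in a petabyte

        produced = 300 * EB  # "hundreds of exabytes" per year (assumed 300)
        for factor in (1e3, 1e4):
            print(f"reduction x{factor:.0e}: {produced / factor / PB:.0f} PB/year")
        # the stronger reduction lands in the tens of petabytes per year
        ```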
    • Session 3
      • 8
        Machine Learning for Ultra-High-Energy Physics (invited talk)
        I will describe the computational and machine learning challenges of the CRAYFIS project: a distributed cosmic ray telescope consisting of consumer smartphones and geared for the detection of ultra-high-energy cosmic rays. For more info: http://crayfis.ps.uci.edu/
        Speaker: Daniel Whiteson
      • 9
        Weighted Classification Cascades for Optimizing Discovery Significance in the HiggsML Challenge
        We introduce a minorization-maximization approach to optimizing common measures of discovery significance in high energy physics. The approach alternates between solving a weighted binary classification problem and updating class weights in a simple, closed-form manner. Moreover, an argument based on convex duality shows that an improvement in weighted classification error on any round yields a commensurate improvement in discovery significance. We complement our derivation with experimental results from the 2014 Higgs boson machine learning challenge. A schematic sketch of this alternating structure is given below.
        Speaker: Lester Mackey
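        A schematic sketch of the alternation described above (our illustration; the paper's closed-form weight update is derived from the significance measure, so the reweighting below is only a placeholder):

        ```python
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def mm_cascade(X, y, rounds=5):
            """Alternate weighted binary classification with a closed-form
            class-weight update; y is a numpy array, 0 (background) / 1 (signal)."""
            n = len(y)
            w = np.ones(n)  # per-example weights
            clf = None
            for _ in range(rounds):
                clf = LogisticRegression().fit(X, y, sample_weight=w)
                p = clf.predict_proba(X)[:, 1]
                # placeholder update standing in for the paper's closed
                # form: upweight examples the current model gets wrong
                w = np.where(y == 1, 1.0 - p, p) + 1e-3
                w *= n / w.sum()  # normalize
            return clf
        ```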
      • 10
        Consistent optimization of AMS by logistic loss minimization
        In this paper, we theoretically justify an approach popular among participants of the Higgs Boson Machine Learning Challenge to optimize approximate median significance (AMS). The approach is based on the following two-stage procedure. First, a real-valued function is learned by minimizing a surrogate loss for binary classification, such as logistic loss, on the training sample. Then, a threshold is tuned on a separate validation sample, by direct optimization of AMS. We show that the regret of the resulting (thresholded) classifier, measured with respect to the squared AMS, is upper-bounded by the regret of the underlying real-valued function measured with respect to the logistic loss. Hence, we prove that minimizing the logistic surrogate is a consistent method of optimizing AMS. (A code sketch of this two-stage procedure is given below.)
        Speaker: Wojciech Kotlowski
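        The two-stage procedure analyzed in the paper is short to state in code; a minimal sketch (our illustration, using scikit-learn; inputs are numpy arrays, and w_val denotes the validation events' importance weights):

        ```python
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def ams(s, b, b_reg=10.0):
            # approximate median significance, as defined in the challenge
            return np.sqrt(2.0 * ((s + b + b_reg) * np.log1p(s / (b + b_reg)) - s))

        def two_stage(X_tr, y_tr, X_val, y_val, w_val):
            """Stage 1: minimize logistic loss on the training sample.
            Stage 2: tune a threshold on a separate validation sample by
            direct optimization of AMS. y is 0 (background) / 1 (signal)."""
            clf = LogisticRegression().fit(X_tr, y_tr)  # stage 1
            scores = clf.predict_proba(X_val)[:, 1]
            best_t, best_ams = 0.5, -np.inf
            for t in np.unique(scores):                 # stage 2
                sel = scores >= t
                s = w_val[sel & (y_val == 1)].sum()     # selected signal weight
                b = w_val[sel & (y_val == 0)].sum()     # selected background weight
                if ams(s, b) > best_ams:
                    best_t, best_ams = t, ams(s, b)
            return clf, best_t
        ```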
    • Coffee break
    • Session 4
      • 11
        Ensemble of maximized Weighted AUC models for the maximization of the median discovery significance
        The Higgs Boson Machine Learning Challenge took place from May 12th to September 15th, 2014. Its goal was to explore machine learning methods to improve the discovery significance of the ATLAS experiment. This talk describes the preprocessing, training, and results of our model, which finished in 9th position among the solutions of 1,785 teams.
        Speaker: Roberto Diaz Morales (Universidad Carlos III de Madrid)
        Paper
        Slides
        Video
      • 12
        Deep Learning In High-Energy Physics (invited talk)
        We will provide a brief overview of the challenges and opportunities facing machine learning in the natural sciences, from physics to biology, and then focus on the application of deep learning methods to problems in high-energy physics. In particular we will describe the results obtained on three different problems (Higgs boson detection, Supersymmetry, and Higgs boson decay).
        Speaker: Pierre Baldi
        Slides
      • 13
        Panel discussion