DR VALERIE HALYO

New discoveries at the LHC

Dr Valerie Halyo explains how advances in scientific computing, and the integration of massively parallel architectures in the trigger, might further advance the possibilities of new scientific discovery at the LHC

Could you begin by outlining the primary objectives of your research?

I believe the key to an era of new discoveries at the Large Hadron Collider (LHC) might lie in recent innovations in parallel processing. Computational accelerators such as Nvidia's Graphics Processing Units (GPUs) or Intel's Xeon Phi, alongside multicore CPUs, are capable of running massively parallel algorithms to enhance the performance of the trigger system, which is responsible for selecting, for further rigorous study, the most interesting particle events emanating from the proton-proton (pp) collisions.

What led you to develop an interest in this area?

While working as a postdoc on the BaBar experiment at the SLAC National Accelerator Laboratory in the US, I was heavily involved in the trigger upgrade, which helped me realise the importance of the trigger system in collider experiments. Moving to the LHC, I initially set myself two goals: to look for final-state topologies that differentiate themselves from the Standard Model (SM) of particle physics we know today, and to keep a note of all corresponding events that the trigger system suppressed or designated as unimportant, for archiving and further analysis. My goal was to investigate how we can improve the performance of the trigger at the LHC and pave the way for a great expansion in the search for new physics.

How does this work tie in with your investigations into physics beyond the SM?

In our recent publications, we demonstrated promising preliminary results in our attempt to extend the search at the LHC. We chose to develop a tracking algorithm based on the Hough transform, running on GPUs, to reconstruct the trajectories of the particles created after the collisions. The parallel algorithm not only permits finding, simultaneously and in real time, any combination of prompt and non-prompt tracks under LHC conditions, where the number of pp interactions per collision is very large, but also allows the development of new trigger algorithms that could select exotic final-state topologies, including new, highly displaced black holes or jets. If found, these topologies would represent a smoking gun for new physics, and hence increase the reach of new physics at the LHC. The new algorithm still needs to be compared with the conventional implementation of the tracking algorithm on the GPU or Xeon Phi, and extensive work remains to bring it to final production. However, these preliminary results on the potential discovery reach are encouraging.
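To illustrate the voting step at the heart of such an approach, here is a minimal CUDA sketch of a Hough transform for prompt tracks in the transverse plane. It is a toy under simplifying assumptions, not the algorithm described in the publications: the (phi0, kappa) parameterisation, the bin counts and the curvature range are all illustrative choices, and the published work also covers non-prompt tracks, which this sketch does not.

```cuda
// hough_track.cu -- a minimal, illustrative sketch of Hough-transform
// track finding on a GPU; NOT the production trigger code. Each hit
// (r, phi) in the transverse plane votes along the line
//     phi0 = phi - kappa * r
// (small-curvature approximation, kappa ~ q/pT). Peaks in the
// (phi0, kappa) accumulator correspond to track candidates.
#include <cstdio>
#include <cuda_runtime.h>

constexpr int   N_PHI0  = 256;     // phi0 bins (illustrative granularity)
constexpr int   N_KAPPA = 64;      // kappa bins (illustrative granularity)
constexpr float K_MAX   = 0.01f;   // hypothetical curvature range
constexpr float TWO_PI  = 6.2831853f;

__global__ void houghVote(const float* r, const float* phi, int nHits,
                          unsigned int* acc)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per hit
    if (i >= nHits) return;
    for (int k = 0; k < N_KAPPA; ++k) {              // scan curvature bins
        float kappa = -K_MAX + 2.0f * K_MAX * k / (N_KAPPA - 1);
        float phi0  = fmodf(phi[i] - kappa * r[i] + TWO_PI, TWO_PI);
        int   b     = min((int)(phi0 / TWO_PI * N_PHI0), N_PHI0 - 1);
        atomicAdd(&acc[k * N_PHI0 + b], 1u);         // concurrent voting
    }
}

int main()
{
    // Three toy hits lying on one straight prompt track (kappa ~ 0)
    const int nHits = 3;
    float hR[nHits]   = {10.f, 20.f, 30.f};
    float hPhi[nHits] = {1.0f, 1.0f, 1.0f};

    float *dR, *dPhi; unsigned int *dAcc;
    cudaMalloc(&dR,   nHits * sizeof(float));
    cudaMalloc(&dPhi, nHits * sizeof(float));
    cudaMalloc(&dAcc, N_PHI0 * N_KAPPA * sizeof(unsigned int));
    cudaMemcpy(dR,   hR,   nHits * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dPhi, hPhi, nHits * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(dAcc, 0, N_PHI0 * N_KAPPA * sizeof(unsigned int));

    houghVote<<<(nHits + 127) / 128, 128>>>(dR, dPhi, nHits, dAcc);

    static unsigned int acc[N_PHI0 * N_KAPPA];
    cudaMemcpy(acc, dAcc, sizeof(acc), cudaMemcpyDeviceToHost);
    // A real trigger would run a parallel peak search; here we just scan.
    for (int j = 0; j < N_PHI0 * N_KAPPA; ++j)
        if (acc[j] == nHits)
            printf("track candidate: kappa bin %d, phi0 bin %d\n",
                   j / N_PHI0, j % N_PHI0);
    cudaFree(dR); cudaFree(dPhi); cudaFree(dAcc);
    return 0;
}
```

Each GPU thread processes one hit and scatters its votes into a shared accumulator with atomic additions; track candidates appear as bins whose count matches the number of hits on a trajectory. It is this hit-level independence that makes the method naturally data-parallel and attractive for a trigger.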
Could you summarise some of your most exciting or significant professional highlights to date?

As co-manager of the CMS luminosity effort (February 2006 – December 2010), I led my group to provide the sole online luminosity system for the CMS collaboration, based on the hadronic forward calorimeter. I was responsible for the design, development, commissioning and deployment of the online/offline CMS luminosity system readout. With my magnificent group, we were able to complete all the milestones on time, before the LHC started running.

The system not only provides the LHC with real-time bunch-by-bunch luminosity at the CMS interaction point, but also, through collaborative work with the LHC, provides the absolute luminosity measurement via the Van der Meer calibration. In 2010, and for the first time at the LHC, my graduate student Jeremy Werner compared the latter measurement to the absolute luminosity obtained by measuring the yield of Z bosons (both methods are sketched at the end of this piece).

My latest effort has been to lead searches for long-lived particles decaying either leptonically or to heavy-flavour hadrons, which also led us to investigate new ways of improving the trigger performance for this type of search. My goal is to integrate the new GPU/Xeon Phi technology into the trigger to enhance its performance and extend the scope of the physics reach at the LHC, and at the same time empower students and postdocs with new cutting-edge skills.

Has a collaborative approach proved important to the success of the project?

I am collaborating with experts from Nvidia and Colfax, who specialise in GPU and MIC architectures respectively. They are not only very knowledgeable and competent, but also share the enthusiasm to contribute to fundamental science. Scientific computing is playing a significant role in the operation of the LHC, and it takes an interdisciplinary attitude to obtain the optimal hardware and software design and impact the physics of tomorrow.
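For readers curious about the two luminosity determinations compared above, the relations below sketch them in generic form; the symbols are standard, but the specific CMS efficiencies and cross-section values are deliberately not quoted.

```latex
% Van der Meer scan: absolute luminosity from machine parameters.
% f_rev: revolution frequency, n_b: number of colliding bunch pairs,
% N_1, N_2: bunch populations, Sigma_x, Sigma_y: effective beam-overlap
% widths measured by scanning the beams across each other.
\[
  \mathcal{L} = \frac{f_{\mathrm{rev}}\, n_b\, N_1 N_2}{2\pi\, \Sigma_x \Sigma_y}
\]

% Cross-check from Z-boson counting: with N_Z selected Z -> ll events,
% selection efficiency epsilon, and the known production cross-section
% times branching fraction, the integrated luminosity is
\[
  \int \mathcal{L}\,\mathrm{d}t = \frac{N_Z}{\epsilon\, \sigma_Z\, \mathcal{B}(Z\to\ell\ell)}
\]
```

Agreement between the two, one driven by accelerator parameters and the other by a well-measured physics process, is what makes the comparison a powerful consistency check.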