Pavel Laskov from the University of Liechtenstein has been named a winner of the Test of Time Award by the International Conference on Machine Learning. He and his two co-authors were recognized for their paper showing that machine learning algorithms can be defeated by manipulated data.

Pavel Laskov, Hilti Chair for Data and Application Security at the University of Liechtenstein, has been recognized for his work on the vulnerabilities of artificial intelligence (AI). Together with his co-authors Battista Biggio from the University of Cagliari in Sardinia and Blaine Nelson, who works for the company Robust Intelligence in San Francisco, USA, Laskov was presented with the Test of Time Award 2022 by the International Conference on Machine Learning (ICML).

The prize is awarded to the paper that has had the greatest impact on the scientific community in the ten years since its presentation at the conference. This year, the three researchers were honored for their article "Poisoning Attacks against Support Vector Machines", ranked as the most significant of the 244 papers presented at ICML 2012. At the time, the three award-winning authors were working at the University of Tübingen in Baden-Württemberg, Germany.

According to a press release issued by the University of Liechtenstein, the award shows that its teaching and research in areas such as cybersecurity and data science compare with the best in the world.

In their award-winning article, the three researchers asked whether machine learning algorithms, which are supposed to detect security risks, could themselves be subverted by manipulated training data. Within twelve months they had designed such an attack, proving that this is indeed possible. Since then, according to the press release, more than 5,000 papers examining such security gaps have been published. Laskov and his colleagues continue to pursue this topic in their current work.
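To give a flavor of what data poisoning means in practice, the toy sketch below trains a support vector machine on synthetic data, then retrains it after randomly flipping a fraction of the training labels. Note the hedges: the award-winning paper constructs a much more sophisticated gradient-based attack that optimally crafts poisoning points; simple label flipping and the made-up dataset here are only stand-ins to illustrate the general idea that corrupted training data can degrade a learned classifier.

```python
# Illustrative sketch only (not the authors' method): naive label-flipping
# poisoning of an SVM on synthetic data, using scikit-learn.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic, well-separated two-class dataset (an assumption of this sketch)
X, y = make_blobs(n_samples=400, centers=2, cluster_std=1.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: SVM trained on clean data
clean_acc = SVC(kernel="rbf", C=100).fit(X_train, y_train).score(X_test, y_test)

# "Poison" the training set by flipping 30% of its labels at random
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

# Retrain on the corrupted labels and compare test accuracy
poisoned_acc = SVC(kernel="rbf", C=100).fit(X_train, y_poisoned).score(X_test, y_test)

print(f"accuracy with clean training data:    {clean_acc:.2f}")
print(f"accuracy with poisoned training data: {poisoned_acc:.2f}")
```

On data this easy the clean model is near perfect, while the poisoned model typically fits spurious structure around the flipped points and loses accuracy; a real attacker, as the paper shows, can do far more damage with far fewer, carefully optimized poisoning points.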
