I am a researcher focusing on topics at the intersection of machine learning and optimization. I am doing my PhD at TU Munich under the supervision of Prof. Günnemann in the DAML research group and am part of the relAI graduate school. Currently, I am especially interested in the use of machine learning for optimization problems, e.g., those commonly arising in operations research. Before this, I worked extensively on robustness verification, graph neural networks, and general robustness topics.
If you want to contact me, the best way is by e-mail: lukas . gosch [at] tum.de. Scroll down to find my other social media profiles.
I visited the group of Prof. Martin Vechev at INSAIT and gave an invited talk. It was an amazing visit; thanks to Prof. Martin Vechev and his institute for inviting and hosting me!
Machine learning models are highly vulnerable to label flipping, i.e., the adversarial modification (poisoning) of training labels to compromise performance. Thus, deriving robustness certificates is important to guarantee that test predictions remain unaffected and to understand worst-case robustness behavior. However, for Graph Neural Networks (GNNs), the problem of certifying label flipping has so far been unsolved. We change this by introducing an exact certification method, deriving both sample-wise and collective certificates. Our method leverages the Neural Tangent Kernel (NTK) to capture the training dynamics of wide networks, enabling us to reformulate the bilevel optimization problem representing label flipping into a Mixed-Integer Linear Program (MILP). We apply our method to certify a broad range of GNN architectures in node classification tasks. Thereby, concerning the worst-case robustness to label flipping: (i) we establish hierarchies of GNNs on different benchmark graphs; (ii) quantify the effect of architectural choices such as activations, depth, and skip-connections; and surprisingly, (iii) uncover a novel phenomenon of the robustness plateauing for intermediate perturbation budgets across all investigated datasets and architectures. While we focus on GNNs, our certificates are applicable to sufficiently wide NNs in general through their NTK. Thus, our work presents the first exact certificate against a poisoning attack ever derived for neural networks, which could be of independent interest.
@inproceedings{sabanayagam2025exact,
  title     = {Exact Certification of (Graph) Neural Networks Against Label Poisoning},
  author    = {Sabanayagam$*$, Mahalakshmi and Gosch$*$, Lukas and G{\"u}nnemann, Stephan and Ghoshdastidar, Debarghya},
  booktitle = {The Thirteenth International Conference on Learning Representations (ICLR), Spotlight},
  year      = {2025},
}
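To give a rough flavor of the key idea behind this certificate (in my own simplified notation, not the paper's exact formulation): in the wide-network regime captured by the NTK, the test prediction depends on the possibly flipped training labels only through a linear map, so deciding whether any admissible set of flips can change its sign becomes an integer program.

% Simplified sample-wise certificate, assuming a kernel-regression surrogate with
% NTK Gram matrix \Theta, clean labels y \in \{-1,+1\}^n, and flip budget B.
% The surrogate prediction for test node t is linear in the labels:
%   f_t(\tilde{y}) = \Theta_{t,:} (\Theta + \lambda I)^{-1} \tilde{y} =: c^\top \tilde{y}.
\begin{align*}
  p^{*} \;=\; \min_{\tilde{y} \in \{-1,+1\}^n} \; y_t \, c^\top \tilde{y}
  \qquad \text{s.t.} \quad \sum_{i=1}^{n} \frac{1 - y_i \tilde{y}_i}{2} \;\le\; B .
\end{align*}
% If p^{*} > 0, no admissible label flipping changes the prediction on node t.
% Objective and constraint are linear in the binary variables \tilde{y}, so the problem
% is a MILP (for this surrogate even a plain ILP); the paper reduces the bilevel problem
% for the actually trained network to such a program via the NTK.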
Provable robustness of (graph) neural networks against data poisoning and backdoor attacks
Lukas Gosch*, Mahalakshmi Sabanayagam*, Debarghya Ghoshdastidar, and Stephan Günnemann
NeurIPS 2024 AdvML-Frontiers Workshop (Best Paper Award), 2024
Generalization of machine learning models can be severely compromised by data poisoning, where adversarial changes are applied to the training data, as well as backdoor attacks that additionally manipulate the test data. These vulnerabilities have led to an interest in certifying (i.e., proving) that such changes up to a certain magnitude do not affect test predictions. We, for the first time, certify Graph Neural Networks (GNNs) against poisoning and backdoor attacks targeting the node features of a given graph. Our certificates are white-box and based upon (i) the neural tangent kernel, which characterizes the training dynamics of sufficiently wide networks; and (ii) a novel reformulation of the bilevel optimization problem describing poisoning as a mixed-integer linear program. Consequently, we leverage our framework to provide fundamental insights into the role of graph structure and its connectivity on the worst-case robustness behavior of convolution-based and PageRank-based GNNs. We note that our framework is more general and constitutes the first approach to derive white-box certificates for NNs, which can be of independent interest beyond graph-related tasks.
@article{gosch2024provable,
  title   = {Provable robustness of (graph) neural networks against data poisoning and backdoor attacks},
  author  = {Gosch$*$, Lukas and Sabanayagam$*$, Mahalakshmi and Ghoshdastidar, Debarghya and G{\"u}nnemann, Stephan},
  journal = {NeurIPS 2024 AdvML-Frontiers Workshop (Best Paper Award)},
  year    = {2024},
}
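As background on the NTK component (an illustrative, standard construction, not the graph NTK from the paper): for an infinitely wide one-hidden-layer ReLU network, the NTK has a closed form, and kernel regression with it is linear in the training labels, which is what the MILP reformulation exploits. A minimal NumPy sketch:

import numpy as np

def relu_ntk(X, Z=None):
    """Closed-form NTK of an infinitely wide one-hidden-layer ReLU network.

    Illustrative only: the standard (non-graph) two-layer ReLU NTK, not the
    graph NTK derived in the paper. X: (n, d), Z: (m, d) or None.
    """
    Z = X if Z is None else Z
    nx = np.linalg.norm(X, axis=1, keepdims=True)
    nz = np.linalg.norm(Z, axis=1, keepdims=True)
    inner = X @ Z.T
    u = np.clip(inner / (nx * nz.T), -1.0, 1.0)               # cosine similarities
    theta = np.arccos(u)                                      # angles between inputs
    # NNGP term: E[ReLU(w.x) ReLU(w.z)] for w ~ N(0, I), up to the usual scaling
    sigma = (nx * nz.T) * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi
    # Derivative term: E[1{w.x > 0} 1{w.z > 0}]
    sigma_dot = (np.pi - theta) / np.pi
    return sigma + inner * sigma_dot

# Kernel regression with the NTK mimics training a very wide network:
rng = np.random.default_rng(0)
X_tr, X_te = rng.normal(size=(50, 16)), rng.normal(size=(5, 16))
y_tr = np.sign(rng.normal(size=50))
alpha = np.linalg.solve(relu_ntk(X_tr) + 1e-3 * np.eye(50), y_tr)
pred = relu_ntk(X_te, X_tr) @ alpha     # predictions are linear in the training labels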
Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and New Directions
Lukas Gosch, Simon Geisler, Daniel Sturm, Bertrand Charpentier, Daniel Zügner, and Stephan Günnemann
In Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS), 2023
Despite its success in the image domain, adversarial training did not (yet) stand out as an effective defense for Graph Neural Networks (GNNs) against graph structure perturbations. In the pursuit of fixing adversarial training, (1) we show and overcome fundamental theoretical as well as practical limitations of the adopted graph learning setting in prior work; (2) we reveal that more flexible GNNs based on learnable graph diffusion are able to adjust to adversarial perturbations, while the learned message passing scheme is naturally interpretable; (3) we introduce the first attack for structure perturbations that, while targeting multiple nodes at once, is capable of handling global (graph-level) as well as local (node-level) constraints. With these contributions, we demonstrate that adversarial training is a state-of-the-art defense against adversarial structure perturbations.
@inproceedings{gosch2023adversarial,
  title     = {Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and New Directions},
  author    = {Gosch, Lukas and Geisler, Simon and Sturm, Daniel and Charpentier, Bertrand and Z{\"u}gner, Daniel and G{\"u}nnemann, Stephan},
  booktitle = {Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2023},
}
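For orientation, below is a generic sketch of adversarial training against structure perturbations, written by me purely for illustration: it relaxes edge flips to continuous values in [0, 1], runs a plain PGD inner loop on a dense two-layer GCN, and then updates the model on the perturbed graph. It is not the attack, the constraint handling, or the training scheme from the paper.

import torch
import torch.nn.functional as F

def gcn_forward(W1, W2, A, X):
    """Two-layer GCN on a dense adjacency with symmetric normalization of A + I."""
    A_hat = A + torch.eye(A.shape[0], device=A.device)
    d_inv_sqrt = A_hat.sum(1).rsqrt()
    A_norm = A_hat * d_inv_sqrt.unsqueeze(1) * d_inv_sqrt.unsqueeze(0)
    return A_norm @ F.relu(A_norm @ X @ W1) @ W2

def adversarial_training_step(W1, W2, A, X, y, train_mask, opt,
                              budget=0.05, pgd_steps=10, pgd_lr=0.1):
    """One adversarial-training step against structure perturbations (sketch only).

    Edge flips are relaxed to continuous values in [0, 1] and attacked with PGD;
    the model is then updated on the perturbed graph. The paper's attack handles
    discrete flips with global and local budgets, none of which is modeled here.
    """
    P = torch.zeros_like(A, requires_grad=True)               # relaxed perturbations
    max_flips = budget * A.numel()
    for _ in range(pgd_steps):
        A_pert = torch.clamp(A + (1 - 2 * A) * P, 0, 1)       # P flips each entry of A
        loss = F.cross_entropy(gcn_forward(W1, W2, A_pert, X)[train_mask],
                               y[train_mask])
        grad, = torch.autograd.grad(loss, P)
        with torch.no_grad():
            P += pgd_lr * grad.sign()                          # ascend the training loss
            P.clamp_(0, 1)
            if P.sum() > max_flips:                            # crude global budget
                P *= max_flips / P.sum()
    A_pert = torch.clamp(A + (1 - 2 * A) * P.detach(), 0, 1)
    opt.zero_grad()
    loss = F.cross_entropy(gcn_forward(W1, W2, A_pert, X)[train_mask], y[train_mask])
    loss.backward()
    opt.step()
    return loss.item()

Here W1, W2 would be trainable weight matrices and opt an optimizer over them; the graph A is assumed to be a dense 0/1 float tensor.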
Revisiting Robustness in Graph Machine Learning
Lukas Gosch, Daniel Sturm, Simon Geisler, and Stephan Günnemann
In The Eleventh International Conference on Learning Representations (ICLR), 2023
Many works show that node-level predictions of Graph Neural Networks (GNNs) are not robust to small, often termed adversarial, changes to the graph structure. However, because manual inspection of a graph is difficult, it is unclear if the studied perturbations always preserve a core assumption of adversarial examples: that of unchanged semantic content. To address this problem, we introduce a more principled notion of an adversarial graph, which is aware of semantic content change. Using Contextual Stochastic Block Models (CSBMs) and real-world graphs, our results uncover: i) for a majority of nodes the prevalent perturbation models include a large fraction of perturbed graphs violating the unchanged semantics assumption; ii) surprisingly, all assessed GNNs show over-robustness, that is, robustness beyond the point of semantic change. We find this to be a complementary phenomenon to adversarial examples and show that including the label-structure of the training graph into the inference process of GNNs significantly reduces over-robustness, while having a positive effect on test accuracy and adversarial robustness. Theoretically, leveraging our new semantics-aware notion of robustness, we prove that there is no robustness-accuracy tradeoff for inductively classifying a newly added node.
@inproceedings{gosch2023revisiting,
  title     = {Revisiting Robustness in Graph Machine Learning},
  author    = {Gosch, Lukas and Sturm, Daniel and Geisler, Simon and G{\"u}nnemann, Stephan},
  booktitle = {The Eleventh International Conference on Learning Representations (ICLR)},
  year      = {2023},
  url       = {https://openreview.net/forum?id=h1o7Ry9Zctm},
}
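For readers unfamiliar with CSBMs, here is a minimal sampler (parameters are my own illustrative choices, not the paper's experimental setup): two communities with different intra- and inter-class edge probabilities, plus Gaussian node features whose mean depends on the class, so the semantic content of any perturbed graph is known by construction.

import numpy as np

def sample_csbm(n=200, d=16, p=0.05, q=0.01, mu=1.0, sigma=1.0, rng=None):
    """Sample a two-class Contextual Stochastic Block Model (illustrative parameters).

    Nodes get labels in {-1, +1}; edges appear with probability p within a class
    and q across classes; features are Gaussian with a class-dependent mean.
    """
    rng = np.random.default_rng() if rng is None else rng
    y = rng.choice([-1, 1], size=n)
    same_class = y[:, None] == y[None, :]
    probs = np.where(same_class, p, q)
    upper = np.triu(rng.random((n, n)) < probs, k=1)      # sample each edge once
    A = (upper | upper.T).astype(int)                     # undirected, no self-loops
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                                # shared mean direction
    X = (mu / np.sqrt(d)) * y[:, None] * u[None, :] + sigma * rng.normal(size=(n, d))
    return A, X, y

A, X, y = sample_csbm(rng=np.random.default_rng(0))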
If you want to contact me, just drop me an e-mail :).