Gilad Yehudai
Postdoctoral Associate, New York University
Verified email at weizmann.ac.il - Homepage
Title    Cited by    Year
Proving the lottery ticket hypothesis: Pruning is all you need
E Malach, G Yehudai, S Shalev-Shwartz, O Shamir
International Conference on Machine Learning, 6682-6691, 2020
367    2020
On the power and limitations of random features for understanding neural networks
G Yehudai, O Shamir
Advances in Neural Information Processing Systems, 2019
230    2019
Reconstructing training data from trained neural networks
N Haim, G Vardi, G Yehudai, O Shamir, M Irani
Advances in Neural Information Processing Systems 35, 22911-22924, 2022
189    2022
From Local Structures to Size Generalization in Graph Neural Networks
G Yehudai, E Fetaya, E Meirom, G Chechik, H Maron
arXiv preprint arXiv:2010.08853, 2020
178    2020
Learning a single neuron with gradient methods
G Yehudai, O Shamir
Conference on Learning Theory, 3756-3786, 2020
92    2020
The effects of mild over-parameterization on the optimization landscape of shallow ReLU neural networks
IM Safran, G Yehudai, O Shamir
Conference on Learning Theory, 3889-3934, 2021
41    2021
On the optimal memorization power of ReLU neural networks
G Vardi, G Yehudai, O Shamir
arXiv preprint arXiv:2110.03187, 2021
34    2021
Gradient methods provably converge to non-robust networks
G Vardi, G Yehudai, O Shamir
Advances in Neural Information Processing Systems 35, 20921-20932, 2022
32    2022
From tempered to benign overfitting in ReLU neural networks
G Kornowski, G Yehudai, O Shamir
Advances in Neural Information Processing Systems 36, 58011-58046, 2023
29    2023
Learning a single neuron with bias using gradient descent
G Vardi, G Yehudai, O Shamir
Advances in Neural Information Processing Systems 34, 28690-28700, 2021
25    2021
Deconstructing data reconstruction: Multiclass, weight decay and general losses
G Buzaglo, N Haim, G Yehudai, G Vardi, Y Oz, Y Nikankin, M Irani
Advances in Neural Information Processing Systems 36, 51515-51535, 2023
23    2023
The connection between approximation, depth separation and learnability in neural networks
E Malach, G Yehudai, S Shalev-Shwartz, O Shamir
Conference on Learning Theory, 3265-3295, 2021
23    2021
Width is less important than depth in ReLU neural networks
G Vardi, G Yehudai, O Shamir
Conference on Learning Theory, 1249-1281, 2022
21    2022
When Can Transformers Count to n?
G Yehudai, H Kaplan, A Ghandeharioun, M Geva, A Globerson
arXiv preprint arXiv:2407.15160, 2024
11    2024
Generating collection rules based on security rules
NA Arbel, L Lazar, G Yehudai
US Patent 11,330,016, 2022
10    2022
Adversarial examples exist in two-layer ReLU networks for low dimensional linear subspaces
O Melamed, G Yehudai, G Vardi
Advances in Neural Information Processing Systems 36, 5028-5049, 2023
7*    2023
On size generalization in graph neural networks
G Yehudai, E Fetaya, E Meirom, G Chechik, H Maron
6    2020
Reconstructing training data from real world models trained with transfer learning
Y Oz, G Yehudai, G Vardi, I Antebi, M Irani, N Haim
arXiv preprint arXiv:2407.15845, 2024
5    2024
On the benefits of rank in attention layers
N Amsel, G Yehudai, J Bruna
arXiv preprint arXiv:2407.16153, 2024
2    2024
Reconstructing training data from multiclass neural networks
G Buzaglo, N Haim, G Yehudai, G Vardi, M Irani
arXiv preprint arXiv:2305.03350, 2023
2    2023