Benchmark Designers Should "Train on the Test Set" to Expose Exploitable Non-Visual Shortcuts
New York University
Robust benchmarks are crucial for evaluating Multimodal Large Language Models (MLLMs). Yet we find that models can ace many multimodal benchmarks without strong visual understanding, instead exploiting biases, linguistic priors, and superficial patterns. This is especially problematic for vision-centric benchmarks that are meant to require visual inputs. We adopt a diagnostic principle for benchmark design: if a benchmark can be gamed, it will be. Designers should therefore try to “game” their own benchmarks first, using diagnostic and debiasing procedures to systematically identify and mitigate non-visual biases. Effective diagnosis requires directly “training on the test set”—probing the released test set for its intrinsic, exploitable patterns.
We operationalize this standard with two components. First, we diagnose benchmark susceptibility using a “Test-set Stress-Test” (TsT) methodology. Our primary diagnostic tool involves fine-tuning a powerful Large Language Model via k-fold cross-validation on only the non-visual, textual inputs of the test set to reveal shortcut performance and assign each sample a bias score s(x). We complement this with a lightweight Random Forest-based diagnostic operating on hand-crafted features for fast, interpretable auditing. Second, we debias benchmarks by filtering high-bias samples using an “Iterative Bias Pruning” (IBP) procedure. Applying this framework to four benchmarks—VSI-Bench, CV-Bench, MMMU, and VideoMME—we uncover pervasive non-visual biases. As a case study, we apply our full framework to create VSI-Bench-Debiased, demonstrating reduced non-visual solvability and a wider vision-blind performance gap than the original.
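A minimal sketch of how such a diagnostic could be implemented, assuming a multiple-choice benchmark whose samples are dicts with hypothetical "question", "options", and "answer_idx" fields. The paper's primary TsT diagnostic fine-tunes a Large Language Model; the sketch below instead illustrates the lightweight Random Forest variant on hand-crafted, vision-blind features, plus an IBP-style pruning loop. The feature set, field names, and the 0.5 pruning threshold are illustrative choices, not the paper's exact pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

def featurize(sample):
    """Hand-crafted, vision-blind features computed from text fields only."""
    question, options = sample["question"], sample["options"]
    lengths = [len(o) for o in options]
    return [
        len(question.split()),          # question length in words
        len(options),                   # number of answer options
        float(np.argmin(lengths)),      # index of the shortest option
        float(np.argmax(lengths)),      # index of the longest option
    ]

def tst_bias_scores(samples, n_splits=5, seed=0):
    """Estimate a bias score s(x) per sample: the held-out probability a
    blind model assigns to the correct answer under k-fold cross-validation
    on the test set's non-visual inputs only."""
    X = np.array([featurize(s) for s in samples])
    y = np.array([s["answer_idx"] for s in samples])
    scores = np.zeros(len(samples))
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, held_idx in folds.split(X, y):
        clf = RandomForestClassifier(n_estimators=200, random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        proba = clf.predict_proba(X[held_idx])
        col = {c: i for i, c in enumerate(clf.classes_)}
        for row, i in enumerate(held_idx):
            # Probability assigned to the true answer of each held-out sample.
            scores[i] = proba[row, col[y[i]]] if y[i] in col else 0.0
    return scores

def iterative_bias_pruning(samples, threshold=0.5, max_rounds=5):
    """IBP-style loop: re-estimate bias scores on the surviving samples and
    drop those the blind model can still solve, until the set stabilizes."""
    kept = list(samples)
    for _ in range(max_rounds):
        scores = tst_bias_scores(kept)
        survivors = [s for s, sc in zip(kept, scores) if sc < threshold]
        if len(survivors) == len(kept):
            break
        kept = survivors
    return kept

In this sketch, a sample whose held-out score stays high is one a text-only model can answer without ever seeing the image or video, which is exactly the kind of item the pruning step removes.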
Coming soon!
@article{brown2025benchmark,
  author  = {Brown, Ellis and Yang, Jihan and Yang, Shusheng and Fergus, Rob and Xie, Saining},
  title   = {Benchmark Designers Should ``Train on the Test Set'' to Expose Exploitable Non-Visual Shortcuts},
  journal = {arXiv preprint arXiv:2511.04655},
  year    = {2025},
}

- Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs
- Cambrian-S: Towards Spatial Supersensing in Video
- Thinking in Space: How Multimodal Large Language Models See, Remember and Recall Spaces - Introduces VSI-Bench for evaluating visual-spatial intelligence
- SIMS-V: Simulated Instruction-Tuning for Spatial Video Understanding