Tomas Pfister

Head of AI Research @ Google Cloud
Verified email at google.com
Cited by 16458

Recognising spontaneous facial micro-expressions

T Pfister, X Li, G Zhao… - … conference on computer …, 2011 - ieeexplore.ieee.org
Facial micro-expressions are rapid involuntary facial expressions which reveal suppressed
affect. To the best of the authors' knowledge, there is no previous work that successfully …

Learning from simulated and unsupervised images through adversarial training

A Shrivastava, T Pfister, O Tuzel… - Proceedings of the …, 2017 - openaccess.thecvf.com
With recent progress in graphics, it has become more tractable to train models on synthetic
images, potentially avoiding the need for expensive annotations. However, learning from …

Learning to prompt for continual learning

…, X Ren, G Su, V Perot, J Dy, T Pfister - Proceedings of the …, 2022 - openaccess.thecvf.com
The mainstream paradigm behind continual learning has been to adapt the model parameters
to non-stationary data distributions, where catastrophic forgetting is the central challenge. …

Cutpaste: Self-supervised learning for anomaly detection and localization

CL Li, K Sohn, J Yoon, T Pfister - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
We aim to construct a high-performance model for defect detection that detects unknown
anomalous patterns of an image without anomalous data. To this end, we propose a two-…
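
The snippet is cut off before the method is described; purely as an illustration of the kind of cut-and-paste augmentation the paper's title refers to, here is a minimal NumPy sketch (patch sizes and function name are assumptions, not the paper's code):

```python
import numpy as np

def cutpaste_augment(image, patch_h=16, patch_w=16, rng=None):
    """Cut a random rectangular patch and paste it at another random location
    of the same image. Illustrative sketch only, not the paper's implementation."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    # source and destination top-left corners (assumes the image is larger than the patch)
    sy, sx = rng.integers(0, h - patch_h), rng.integers(0, w - patch_w)
    dy, dx = rng.integers(0, h - patch_h), rng.integers(0, w - patch_w)
    out = image.copy()
    out[dy:dy + patch_h, dx:dx + patch_w] = image[sy:sy + patch_h, sx:sx + patch_w]
    return out

# A binary classifier trained to tell original from augmented images can then serve
# as a self-supervised representation for downstream anomaly detection.
```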

Flowing convnets for human pose estimation in videos

T Pfister, J Charles… - Proceedings of the IEEE …, 2015 - openaccess.thecvf.com
The objective of this work is human pose estimation in videos, where multiple frames are
available. We investigate a ConvNet architecture that is able to benefit from temporal context by …

Temporal fusion transformers for interpretable multi-horizon time series forecasting

B Lim, SÖ Arık, N Loeff, T Pfister - International Journal of Forecasting, 2021 - Elsevier
Multi-horizon forecasting often contains a complex mix of inputs – including static (i.e. time-invariant)
covariates, known future inputs, and other exogenous time series that are only …
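
The entry above lists the three kinds of inputs a multi-horizon forecaster has to combine. As an illustration of that data layout only (field names and shapes are assumptions, not the paper's API), one way to group them:

```python
import numpy as np
from typing import NamedTuple

class ForecastInputs(NamedTuple):
    """Illustrative grouping of multi-horizon forecasting inputs."""
    static: np.ndarray          # (n_series, n_static): time-invariant covariates
    past_observed: np.ndarray   # (n_series, t_past, n_obs): exogenous series observed only up to the forecast start
    known_future: np.ndarray    # (n_series, t_past + t_horizon, n_known): inputs known ahead of time, e.g. calendar features
    target_history: np.ndarray  # (n_series, t_past): past values of the series being forecast

# toy batch: 8 series, 30 past steps, 7-step forecast horizon
batch = ForecastInputs(
    static=np.zeros((8, 4)),
    past_observed=np.zeros((8, 30, 3)),
    known_future=np.zeros((8, 37, 2)),
    target_history=np.zeros((8, 30)),
)
```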

Tabnet: Attentive interpretable tabular learning

SÖ Arık, T Pfister - Proceedings of the AAAI conference on artificial …, 2021 - ojs.aaai.org
We propose a novel high-performance and interpretable canonical deep tabular data
learning architecture, TabNet. TabNet uses sequential attention to choose which features to …
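
The snippet mentions sequential attention for choosing features at each decision step. A minimal NumPy sketch of that idea follows, in which a running prior discourages reusing features already attended to; this is a simplification (softmax in place of the paper's sparsemax, random projections in place of learned transforms), not TabNet's actual implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sequential_feature_masks(features, step_weights, relaxation=1.3):
    """One attention mask per decision step; the prior shrinks for features
    that earlier steps relied on heavily (illustrative sketch only)."""
    n_features = features.shape[-1]
    prior = np.ones(n_features)
    masks = []
    for w in step_weights:
        logits = features @ w                              # (batch, n_features)
        mask = softmax(logits + np.log(prior + 1e-9))      # favour features with high remaining prior
        prior = prior * (relaxation - mask.mean(axis=0))   # down-weight heavily used features
        masks.append(mask)
    return masks

# toy usage: 5 samples, 6 features, 3 decision steps with random projections
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 6))
masks = sequential_feature_masks(x, rng.normal(size=(3, 6, 6)))
```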

Dualprompt: Complementary prompting for rehearsal-free continual learning

…, CY Lee, X Ren, G Su, V Perot, J Dy, T Pfister - … on Computer Vision, 2022 - Springer
Continual learning aims to enable a single model to learn a sequence of tasks without
catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store past …

Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes

…, Y Fujii, A Ratner, R Krishna, CY Lee, T Pfister - arXiv preprint arXiv …, 2023 - arxiv.org
Deploying large language models (LLMs) is challenging because they are memory inefficient
and compute-intensive for practical applications. In reaction, researchers train smaller task-…

A spontaneous micro-expression database: Inducement, collection and baseline

X Li, T Pfister, X Huang, G Zhao… - 2013 10th IEEE …, 2013 - ieeexplore.ieee.org
Micro-expressions are short, involuntary facial expressions which reveal hidden emotions.
Micro-expressions are important for understanding humans' deceitful behavior. Psychologists …