User profiles for Tomas Pfister
Tomas Pfister, Head of AI Research @ Google Cloud. Verified email at google.com. Cited by 16458.
Recognising spontaneous facial micro-expressions
Facial micro-expressions are rapid involuntary facial expressions which reveal suppressed
affect. To the best knowledge of the authors, there is no previous work that successfully …
Learning from simulated and unsupervised images through adversarial training
With recent progress in graphics, it has become more tractable to train models on synthetic
images, potentially avoiding the need for expensive annotations. However, learning from …
Learning to prompt for continual learning
The mainstream paradigm behind continual learning has been to adapt the model parameters
to non-stationary data distributions, where catastrophic forgetting is the central challenge. …
CutPaste: Self-supervised learning for anomaly detection and localization
We aim at constructing a high performance model for defect detection that detects unknown
anomalous patterns of an image without anomalous data. To this end, we propose a two-…
Flowing convnets for human pose estimation in videos
The objective of this work is human pose estimation in videos, where multiple frames are
available. We investigate a ConvNet architecture that is able to benefit from temporal context by …
Temporal fusion transformers for interpretable multi-horizon time series forecasting
Multi-horizon forecasting often contains a complex mix of inputs – including static (i.e. time-invariant)
covariates, known future inputs, and other exogenous time series that are only …
TabNet: Attentive interpretable tabular learning
We propose a novel high-performance and interpretable canonical deep tabular data
learning architecture, TabNet. TabNet uses sequential attention to choose which features to …
DualPrompt: Complementary prompting for rehearsal-free continual learning
Continual learning aims to enable a single model to learn a sequence of tasks without
catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store past …
Distilling step-by-step! Outperforming larger language models with less training data and smaller model sizes
Deploying large language models (LLMs) is challenging because they are memory inefficient
and compute-intensive for practical applications. In reaction, researchers train smaller task-…
A spontaneous micro-expression database: Inducement, collection and baseline
Micro-expressions are short, involuntary facial expressions which reveal hidden emotions.
Micro-expressions are important for understanding humans' deceitful behavior. Psychologists …