Tomoya Sakai

IBM Research - Tokyo
Verified email at ibm.com
Cited by 723

Semi-supervised classification based on classification from positive and unlabeled data

T Sakai, MC Plessis, G Niu… - … conference on machine …, 2017 - proceedings.mlr.press
Most of the semi-supervised classification methods developed so far use unlabeled data for
regularization purposes under particular distributional assumptions such as the cluster …
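
For context on how this line of work avoids the usual distributional assumptions, the risk of a classifier g can be estimated from positive and unlabeled data alone. A minimal LaTeX sketch of such an unbiased PU risk estimator, assuming a known class prior \pi = p(y = +1) and a loss \ell; the estimator actually combined with labeled data in this paper may differ in detail.

    % The negative-class term of the risk is rewritten via the marginal
    % decomposition p(x) = \pi p_{+}(x) + (1 - \pi) p_{-}(x), so only
    % positive (P) and unlabeled (U) samples appear in the estimate.
    \widehat{R}_{\mathrm{PU}}(g)
      = \frac{\pi}{n_{\mathrm{P}}} \sum_{i=1}^{n_{\mathrm{P}}} \ell\bigl(g(x_i^{\mathrm{P}}), +1\bigr)
      + \frac{1}{n_{\mathrm{U}}} \sum_{j=1}^{n_{\mathrm{U}}} \ell\bigl(g(x_j^{\mathrm{U}}), -1\bigr)
      - \frac{\pi}{n_{\mathrm{P}}} \sum_{i=1}^{n_{\mathrm{P}}} \ell\bigl(g(x_i^{\mathrm{P}}), -1\bigr)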

Fast spectral clustering with random projection and sampling

T Sakai, A Imiya - International Workshop on Machine Learning and Data …, 2009 - Springer
This paper proposes a fast spectral clustering method for large-scale data. In the present
method, random projection and random sampling techniques are adopted for reducing the data …
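
A minimal Python sketch of the general recipe described here (random projection to reduce dimensionality, random sampling to shrink the point set fed to spectral clustering, then label propagation to the remaining points); function and parameter names are illustrative, not the paper's implementation.

    import numpy as np
    from sklearn.random_projection import GaussianRandomProjection
    from sklearn.cluster import SpectralClustering
    from sklearn.neighbors import NearestNeighbors

    def fast_spectral_clustering(X, n_clusters=10, n_components=50, n_samples=2000, seed=0):
        """Illustrative sketch: project, subsample, cluster, then extend labels."""
        rng = np.random.default_rng(seed)

        # 1) Random projection reduces the dimensionality of every point.
        Z = GaussianRandomProjection(n_components=n_components,
                                     random_state=seed).fit_transform(X)

        # 2) Random sampling keeps the affinity matrix small.
        idx = rng.choice(len(Z), size=min(n_samples, len(Z)), replace=False)
        sample_labels = SpectralClustering(n_clusters=n_clusters, affinity="rbf",
                                           random_state=seed).fit_predict(Z[idx])

        # 3) Every remaining point inherits the label of its nearest sampled point.
        nn = NearestNeighbors(n_neighbors=1).fit(Z[idx])
        _, nearest = nn.kneighbors(Z)
        return sample_labels[nearest.ravel()]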

Do we need zero training loss after achieving zero training error?

T Ishida, I Yamane, T Sakai, G Niu… - arXiv preprint arXiv …, 2020 - arxiv.org
Overparameterized deep networks have the capacity to memorize training data with zero training
error. Even after memorization, the training loss continues to approach …
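
The remedy this paper proposes is "flooding": instead of driving the training loss to zero, keep it oscillating around a small constant. A minimal PyTorch-style sketch, assuming a user-chosen flood level b; the training setup used in the paper may differ.

    import torch

    def flooded_loss(loss: torch.Tensor, flood_level: float = 0.01) -> torch.Tensor:
        # Above the flood level b the loss is unchanged; below b its gradient
        # flips sign, so optimization hovers around b rather than reaching zero.
        return (loss - flood_level).abs() + flood_level

Used as loss = flooded_loss(criterion(model(x), y)) inside an otherwise unchanged training loop.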

Theoretical comparisons of positive-unlabeled learning against positive-negative learning

G Niu, MC Du Plessis, T Sakai, Y Ma… - Advances in neural …, 2016 - proceedings.neurips.cc
In PU learning, a binary classifier is trained from positive (P) and unlabeled (U) data without
negative (N) data. Although N data is missing, it sometimes outperforms PN learning (i.e., …
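
The comparison is between the ordinary PN risk, which needs negative samples, and a PU counterpart in which the negative-class term is eliminated. A sketch of the identity behind this, assuming a class prior \pi = p(y = +1):

    % PN risk (requires N data):
    R_{\mathrm{PN}}(g) = \pi\, \mathbb{E}_{p_{+}}\bigl[\ell(g(x), +1)\bigr]
                       + (1 - \pi)\, \mathbb{E}_{p_{-}}\bigl[\ell(g(x), -1)\bigr]
    % The marginal decomposition p(x) = \pi p_{+}(x) + (1 - \pi) p_{-}(x) yields
    (1 - \pi)\, \mathbb{E}_{p_{-}}\bigl[\ell(g(x), -1)\bigr]
      = \mathbb{E}_{p}\bigl[\ell(g(x), -1)\bigr]
      - \pi\, \mathbb{E}_{p_{+}}\bigl[\ell(g(x), -1)\bigr]
    % so the risk can be estimated from P and U samples alone.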

A surprisingly simple approach to generalized few-shot semantic segmentation

T Sakai, H Qiu, T Katsuki, D Kimura… - Advances in …, 2024 - proceedings.neurips.cc
The goal of generalized few-shot semantic segmentation (GFSS) is to recognize novel-class
objects through training with a few annotated examples and the base-class model that …

Semi-supervised AUC optimization based on positive-unlabeled learning

T Sakai, G Niu, M Sugiyama - Machine Learning, 2018 - Springer
Maximizing the area under the receiver operating characteristic curve (AUC) is a standard
approach to imbalanced classification. So far, various supervised AUC optimization methods …
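
For reference, AUC maximization is usually cast as minimizing a pairwise ranking risk over positive-negative pairs; a sketch, assuming a score function g and a surrogate loss \ell replacing the zero-one indicator (the semi-supervised, PU-based estimator in the paper builds on this pairwise form).

    % AUC of a scorer g: probability that a positive is ranked above a negative.
    \mathrm{AUC}(g) = \mathbb{E}_{x^{+} \sim p_{+}}\, \mathbb{E}_{x^{-} \sim p_{-}}
                      \bigl[ \mathbb{1}\bigl(g(x^{+}) > g(x^{-})\bigr) \bigr]
    % Maximizing AUC corresponds to minimizing the pairwise surrogate risk
    R_{\mathrm{AUC}}(g) = \mathbb{E}_{x^{+} \sim p_{+}}\, \mathbb{E}_{x^{-} \sim p_{-}}
                          \bigl[ \ell\bigl(g(x^{+}) - g(x^{-})\bigr) \bigr]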

[BOOK] Machine learning from weak supervision: An empirical risk minimization approach

M Sugiyama, H Bao, T Ishida, N Lu, T Sakai - 2022 - books.google.com
… In this book Masashi Sugiyama, Han Bao, Takashi Ishida, Nan Lu, Tomoya Sakai and Gang
Niu present theory and algorithms for weakly supervised learning, a paradigm of machine …

Covariate shift adaptation on learning from positive and unlabeled data

T Sakai, N Shimizu - Proceedings of the AAAI conference on artificial …, 2019 - aaai.org
The goal of binary classification is to identify whether an input sample belongs to positive or
negative classes. Usually, supervised learning is applied to obtain a classification rule, but …
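
The standard device for covariate shift, which this work adapts to the positive-unlabeled setting, is to reweight the training risk by the density ratio between test and training input distributions; a sketch, with the caveat that the PU-specific estimator in the paper differs in how the weights and the risk are formed.

    % Importance-weighted risk under covariate shift
    % (p_{\mathrm{tr}}, p_{\mathrm{te}}: training and test input densities):
    R_{\mathrm{te}}(g) = \mathbb{E}_{(x, y) \sim p_{\mathrm{tr}}}
      \bigl[ w(x)\, \ell\bigl(g(x), y\bigr) \bigr],
      \qquad w(x) = \frac{p_{\mathrm{te}}(x)}{p_{\mathrm{tr}}(x)}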

Convex formulation of multiple instance learning from positive and unlabeled bags

H Bao, T Sakai, I Sato, M Sugiyama - Neural Networks, 2018 - Elsevier
Multiple instance learning (MIL) is a variation of traditional supervised learning problems
where data (referred to as bags) are composed of sub-elements (referred to as instances) and …
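
A minimal Python sketch of the bag/instance structure and the standard MIL assumption (a bag is positive when at least one of its instances is positive), scored here by max pooling over instance scores; the names are illustrative and this is not the convex formulation proposed in the paper.

    import numpy as np

    def bag_score(instances: np.ndarray, w: np.ndarray, b: float) -> float:
        # A bag's score is the maximum score of its instances, so a bag is
        # predicted positive iff at least one instance scores above threshold.
        instance_scores = instances @ w + b
        return float(np.max(instance_scores))

    # A dataset is a list of bags: each bag is an (n_instances, n_features)
    # array carrying a single bag-level label (+1 / -1, or unlabeled in the
    # positive-unlabeled setting this paper considers).
    rng = np.random.default_rng(0)
    bags = [rng.standard_normal((5, 3)), rng.standard_normal((8, 3))]
    w, b = rng.standard_normal(3), 0.0
    predictions = [1 if bag_score(B, w, b) > 0 else -1 for B in bags]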

Multi-level optimization of matrix multiplication for GPU-equipped systems

K Matsumoto, N Nakasato, T Sakai, H Yahagi… - Procedia Computer …, 2011 - Elsevier
This paper presents results of our study on double-precision general matrix-matrix multiplication
(DGEMM) for GPU-equipped systems. We applied further optimization to utilize the …
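
For reference, DGEMM is the double-precision level-3 BLAS routine computing the update below; the optimizations studied in the paper concern how this computation is blocked and scheduled on GPU-equipped systems.

    % BLAS DGEMM (double precision):
    C \leftarrow \alpha\, \mathrm{op}(A)\, \mathrm{op}(B) + \beta\, C,
    \qquad \mathrm{op}(X) \in \{\, X,\; X^{\mathsf{T}} \,\}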