User profiles for Tomoya Sakai
Tomoya Sakai, IBM Research - Tokyo. Verified email at ibm.com. Cited by 723
Semi-supervised classification based on classification from positive and unlabeled data
Most of the semi-supervised classification methods developed so far use unlabeled data for
regularization purposes under particular distributional assumptions such as the cluster …
Fast spectral clustering with random projection and sampling
T Sakai, A Imiya - International Workshop on Machine Learning and Data …, 2009 - Springer
This paper proposes a fast spectral clustering method for large-scale data. In the present
method, random projection and random sampling techniques are adopted for reducing the data …
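As a rough illustration of the two speed-ups this entry names, here is a minimal sketch of spectral clustering accelerated by random projection and random sampling; all function and parameter names below are illustrative, not the paper's notation or algorithm.

```python
import numpy as np

def fast_spectral_clustering(X, k, d=10, m=50, sigma=1.0, seed=0):
    """Sketch: spectral clustering sped up by (1) random projection of
    the features and (2) affinities computed only against a random
    landmark subset. Parameter names are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n, D = X.shape
    # Random projection: reduce dimensionality with a Gaussian matrix.
    Z = X @ (rng.standard_normal((D, d)) / np.sqrt(d))
    # Random sampling: Gaussian affinities to m landmark points only.
    idx = rng.choice(n, size=min(m, n), replace=False)
    d2 = ((Z[:, None, :] - Z[idx][None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2.0 * sigma**2))
    # Spectral embedding from the top-k left singular vectors of the
    # (n x m) affinity matrix, then a few Lloyd iterations of k-means.
    E = np.linalg.svd(A, full_matrices=False)[0][:, :k]
    centers = E[rng.choice(n, size=k, replace=False)].copy()
    for _ in range(20):
        labels = ((E[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = E[labels == j].mean(0)
    return labels
```

The point of both tricks is that the expensive objects (pairwise affinities, eigendecomposition) shrink from n x n to n x m in a d-dimensional space.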
Do we need zero training loss after achieving zero training error?
Overparameterized deep networks have the capacity to memorize training data with zero training
error. Even after memorization, the training loss continues to approach …
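The remedy this paper proposes, flooding, can be sketched in one line; the flood level b below is a hypothetical choice, not a value from the paper.

```python
def flooding(loss: float, b: float = 0.1) -> float:
    # Flooding: instead of driving the training loss to zero, keep it
    # hovering near a flood level b. Above b the loss is unchanged;
    # below b the gradient sign flips, pushing the loss back up to b.
    return abs(loss - b) + b
```

In training one would apply this transform to the mini-batch loss before backpropagation, so optimization performs gradient descent above b and gradient ascent below it.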
Theoretical comparisons of positive-unlabeled learning against positive-negative learning
In PU learning, a binary classifier is trained from positive (P) and unlabeled (U) data without
negative (N) data. Although N data is missing, it sometimes outperforms PN learning (i.e., …
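The PU setting this entry compares against PN learning rests on an unbiased risk rewrite that replaces the missing negative data with unlabeled data. A minimal sketch, using the hinge loss as an illustrative surrogate (function and argument names are ours):

```python
import numpy as np

def pu_risk(scores_p, scores_u, pi,
            loss=lambda z: np.maximum(0.0, 1.0 - z)):
    # Unbiased PU risk estimate:
    #   R(f) = pi * E_P[l(f(x))] + E_U[l(-f(x))] - pi * E_P[l(-f(x))]
    # where pi is the class prior of the positive class. The last term
    # corrects for the positives hidden inside the unlabeled data.
    r_p_pos = loss(scores_p).mean()
    r_u_neg = loss(-scores_u).mean()
    r_p_neg = loss(-scores_p).mean()
    return pi * r_p_pos + r_u_neg - pi * r_p_neg
```

Note the estimate can go negative on finite samples because of the subtracted correction term, which is precisely the kind of behavior theoretical comparisons of PU versus PN learning must account for.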
A surprisingly simple approach to generalized few-shot semantic segmentation
The goal of generalized few-shot semantic segmentation (GFSS) is to recognize novel-class
objects through training with a few annotated examples and the base-class model that …
Semi-supervised AUC optimization based on positive-unlabeled learning
Maximizing the area under the receiver operating characteristic curve (AUC) is a standard
approach to imbalanced classification. So far, various supervised AUC optimization methods …
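The reason AUC maximization suits imbalanced classification is that AUC is a pairwise ranking criterion rather than a pointwise accuracy. A minimal sketch of the empirical AUC that such methods optimize a surrogate of (names are ours):

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    # AUC equals the fraction of (positive, negative) pairs in which
    # the positive sample receives the higher score (ties count 1/2).
    # AUC optimization methods replace the 0/1 pair comparison with a
    # differentiable surrogate over the same pairs.
    diff = scores_pos[:, None] - scores_neg[None, :]
    return float(((diff > 0) + 0.5 * (diff == 0)).mean())
```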
[BOOK] Machine learning from weak supervision: An empirical risk minimization approach
… In this book Masashi Sugiyama, Han Bao, Takashi Ishida, Nan Lu, Tomoya Sakai and Gang
Niu present theory and algorithms for weakly supervised learning, a paradigm of machine …
Covariate shift adaptation on learning from positive and unlabeled data
T Sakai, N Shimizu - Proceedings of the AAAI conference on artificial …, 2019 - aaai.org
The goal of binary classification is to identify whether an input sample belongs to positive or
negative classes. Usually, supervised learning is applied to obtain a classification rule, but …
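Covariate shift adaptation typically reweights the training loss by the density ratio w(x) = p_test(x) / p_train(x). A toy sketch with unit-variance Gaussian train/test input densities, where the ratio has a closed form; this is an illustrative setting, not the paper's estimator:

```python
import numpy as np

def importance_weights(x, mu_train=0.0, mu_test=1.0):
    # Toy covariate-shift setting: train inputs ~ N(mu_train, 1),
    # test inputs ~ N(mu_test, 1). The density ratio
    # p_test(x)/p_train(x) then simplifies to a closed form.
    return np.exp((mu_test - mu_train) * x
                  + (mu_train**2 - mu_test**2) / 2.0)

def weighted_risk(losses, weights):
    # Importance-weighted empirical risk: approximates the test risk
    # using training samples reweighted by w(x). In practice the
    # weights must themselves be estimated (density-ratio estimation).
    return float(np.average(losses, weights=weights))
```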
Convex formulation of multiple instance learning from positive and unlabeled bags
Multiple instance learning (MIL) is a variation of traditional supervised learning problems
where data (referred to as bags) are composed of sub-elements (referred to as instances) and …
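The bag/instance structure described here is usually paired with the classic MIL assumption that a bag is positive iff at least one of its instances is. A minimal sketch of that reduction (illustrative only; the paper's convex PU-bag formulation is more involved):

```python
import numpy as np

def bag_scores(bags):
    # Classic MIL reduction: score each bag by the maximum score of
    # its instances, so one positive instance makes the bag positive.
    return np.array([np.max(b) for b in bags])
```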
Multi-level optimization of matrix multiplication for GPU-equipped systems
K Matsumoto, N Nakasato, T Sakai, H Yahagi… - Procedia Computer …, 2011 - Elsevier
This paper presents results of our study on double-precision general matrix-matrix multiplication
(DGEMM) for GPU-equipped systems. We applied further optimization to utilize the …
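A core idea behind optimized DGEMM kernels like the ones this paper tunes is blocking: operating on sub-matrices that fit in fast memory. A minimal sketch of cache-blocked matrix multiplication (the tile size is illustrative; real GPU kernels add register tiling, shared memory, and vectorization on top):

```python
import numpy as np

def blocked_dgemm(A, B, tile=32):
    # Cache-blocked double-precision matrix multiply C = A @ B.
    # Each (tile x tile) sub-block of C accumulates products of
    # matching sub-blocks of A and B, improving data reuse.
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile])
    return C
```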