University of Texas at San Antonio, TX, US
Stars
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
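A minimal sketch of Ray's task API (the `square` function below is a made-up example, not part of the library):

```python
import ray

ray.init()  # start a local Ray runtime

@ray.remote
def square(x):
    # an ordinary Python function turned into a distributed task
    return x * x

futures = [square.remote(i) for i in range(4)]   # launch tasks in parallel
print(ray.get(futures))                          # [0, 1, 4, 9]
```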
Graph Neural Network Library for PyTorch
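For illustration, a two-layer GCN on a toy graph sketched with PyTorch Geometric's `GCNConv` (the graph and layer sizes here are arbitrary):

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# toy graph: 3 nodes with 4-dim features, two undirected edges
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
data = Data(x=torch.randn(3, 4), edge_index=edge_index)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(4, 16)
        self.conv2 = GCNConv(16, 2)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

out = GCN()(data.x, data.edge_index)  # per-node logits, shape [3, 2]
```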
PyTorch implementations of deep reinforcement learning algorithms and environments
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
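As a sketch of the evasion-attack workflow, wrapping a stand-in (untrained, purely illustrative) PyTorch model in an ART classifier and running FGSM against random inputs:

```python
import numpy as np
import torch
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# stand-in for a trained classifier
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))

classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
)

x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)  # adversarial examples, same shape as x_test
```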
PyTorch implementation of DQN, AC, ACER, A2C, A3C, PG, DDPG, TRPO, PPO, SAC, TD3 and ....
Code and hyperparameters for the paper "Generative Adversarial Networks"
FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs o…
An open-source framework for machine learning and other computations on decentralized data.
Library for training machine learning models with privacy for training data
PyTorch for Semantic Segmentation
A PyTorch Implementation of Federated Learning
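The core of federated averaging is a data-size-weighted mean of client weights; a minimal sketch of that aggregation step (client state dicts and sample counts are assumed inputs):

```python
import copy

def fedavg(client_state_dicts, client_sizes):
    """Weighted average of client model weights (FedAvg aggregation step)."""
    total = sum(client_sizes)
    global_state = copy.deepcopy(client_state_dicts[0])
    for key in global_state:
        global_state[key] = sum(
            sd[key] * (n / total)
            for sd, n in zip(client_state_dicts, client_sizes)
        )
    return global_state
```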
This is the official implementation for the paper 'Deep forest: Towards an alternative to deep neural networks'
Semi-supervised learning with graph embeddings
Neural Graph Collaborative Filtering, SIGIR2019
A collection of Google research projects related to Federated Learning and Federated Analytics.
RSTutorials: A curated list of algorithms for traditional and social recommender systems.
Federated Optimization in Heterogeneous Networks (MLSys '20)
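FedProx modifies the local client objective by adding a proximal term (mu/2)·||w − w_global||² that keeps local updates close to the current global model; a sketch of that local loss in PyTorch (mu and the model/parameter arguments are placeholders):

```python
def fedprox_loss(task_loss, local_model, global_params, mu=0.01):
    # proximal term: (mu / 2) * || w - w_global ||^2, summed over all parameters
    prox = sum(((w - w_g.detach()) ** 2).sum()
               for w, w_g in zip(local_model.parameters(), global_params))
    return task_loss + 0.5 * mu * prox
```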
Code and docs for the EMNLP paper "DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning"
Simulate a federated setting and run differentially private federated learning.
Graph Neural Networks for Social Recommendation, WWW'19
Image segmentation - general superpixel segmentation & center detection & region growing
Efficient implementation of Generative Stochastic Networks
Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and Privacy 2019.
Source code for paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459)
Fair Resource Allocation in Federated Learning (ICLR '20)
[ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
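The communication savings in Deep Gradient Compression come from sending only the largest-magnitude gradient entries each step while accumulating the rest locally; a bare-bones top-k sparsification sketch, omitting the paper's momentum correction and other refinements:

```python
import torch

def topk_sparsify(grad, ratio=0.001):
    # keep the top `ratio` fraction of entries by magnitude, zero out the rest;
    # in DGC the zeroed residual would be accumulated locally for later rounds
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, idx = torch.topk(flat.abs(), k)
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.view_as(grad)
```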
Implementation of the paper "Adversarial Attacks on Neural Networks for Graph Data".