ibrahimsharaf.github.io
Cairo, Egypt (UTC+02:00)

Stars
Sarasra / models
Forked from tensorflow/models. Models and examples built with TensorFlow.
Fast Python port of Arc90's Readability tool, updated to match the latest readability.js!
deepspeedai / Megatron-DeepSpeed
Forked from NVIDIA/Megatron-LM. Ongoing research training transformer language models at scale, including BERT & GPT-2.
tloen / llama-int8
Forked from meta-llama/llama. Quantized inference code for LLaMA models.
dask / fastparquet
Forked from jcrobak/parquet-python. Python implementation of the Parquet columnar file format.
spring-media / ForwardTacotron
Forked from fatchord/WaveRNN. ⏩ Generating speech in a single forward pass without any attention!
sanjeevanahilan / nanoChatGPT
Forked from karpathy/nanoGPT. A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick.
google / pyink
Forked from psf/black. Pyink, pronounced pī-ˈiŋk, is a Python formatter forked from Black with a few different formatting behaviors.
graykode / ALBERT-Pytorch
Forked from dhlee347/pytorchic-bert. PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations).
renatoviolin / xlnet
Forked from zihangdai/xlnet. XLNet: fine-tuning on an RTX 2080 GPU (8 GB).
cybertronai / transformer-xl
Forked from kimiyoung/transformer-xl. Training Transformer-XL on 128 GPUs.
nyu-mll / spinn
Forked from stanfordnlp/spinn. NYU ML² work on sentence encoding with tree structure and dynamic graphs.
qwopqwop200 / gptqlora
Forked from artidoro/qlora. GPTQLoRA: Efficient finetuning of quantized LLMs with GPTQ.
limenlp / verl
Forked from volcengine/verl. AdaRFT: Efficient reinforcement finetuning via adaptive curriculum learning.
huggingface / bert-syntax
Forked from yoavg/bert-syntax. Assessing the syntactic abilities of BERT.
pytorch-tpu / fairseq
Forked from facebookresearch/fairseq. Facebook AI Research sequence-to-sequence toolkit written in Python.
IvoBrink / RACDH-old
Forked from oneal2000/MIND. Real-time attribution classification to detect hallucinations.