- University of Delaware
- Newark, DE
- (UTC -05:00)
- https://www.linkedin.com/in/logan-hallee/
- https://orcid.org/0000-0002-0426-3508
- @Logan_Hallee
- https://www.synthyra.com/
- https://www.gleghornlab.com/
Stars
Pytorch implementation of U-Net, R2U-Net, Attention U-Net, and Attention R2U-Net.
This repository includes the official implementation of TransUNet, presented in the paper: TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation.
Build, personalize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join the Discord community: https://discord.gg/T…
[ICLR 2025] From anything to mesh like human artists. Official impl. of "MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers"
[ECCVW 2022] The codes for the work "Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation"
Use of Attention Gates in a Convolutional Neural Network / Medical Image Classification and Segmentation
Muon is an optimizer for hidden layers in neural networks
Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in pytorch
[TMLR 2025] Latte: Latent Diffusion Transformer for Video Generation.
PyTorch Tutorials from my YouTube channel
Chai-1, SOTA model for biomolecular structure prediction
Implementation of Alphafold 3 from Google Deepmind in Pytorch
A powerful and flexible machine learning platform for drug discovery
A Data Streaming Library for Efficient Neural Network Training
A trainable PyTorch reproduction of AlphaFold 3.
Using Diffusion Models to Segment/Reconstruct Organs from Medical Images [AAAI Most Influential Paper]
A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries.
Minimal and annotated implementations of key ideas from modern deep learning research.
AlphaGo Moment for Model Architecture Discovery.
Bio-Computing Platform Featuring Large-Scale Representation Learning and Multi-Task Deep Learning ("PaddleHelix" bio-computing toolkit)
Helpful tools and examples for working with flex-attention
Fast & Simple repository for pre-training and fine-tuning T5-style models
🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention"
User friendly and accurate binder design pipeline