Southeast University
Shenzhen (UTC +08:00)
https://formind.netlify.app/
Lists (20)
- algorithm discovery: SOTA algorithm discovery based on LLMs.
- alu: Arithmetic logic unit, for verification purposes (may later be used for PPA optimization).
- courses: course materials.
- decision-making: some solvers, heuristics, etc.
- EDA: EDA infrastructure.
- formal: formal techniques.
- hardware trojan: hardware trojan detection.
- HLS
- LLM Agents
- LLM Code
- LLM Inference
- LLMs
- mlir: uses of MLIR.
- Parser
- Program Representation
- RISC-V
- Scale
- simu/emu: some simulation/emulation techniques.
- synthesis
- verification: list for hardware verification.
Starred repositories
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
📚 A summary of the fundamentals for C/C++ technical interviews, covering the language, standard libraries, data structures, algorithms, operating systems, networking, and linking/loading, along with interview experience, recruiting, and referral information.
Cross-platform, customizable ML solutions for live and streaming media.
Collection of various algorithms in mathematics, machine learning, computer science and physics implemented in C++ for educational purposes.
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning …
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI.
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
OpenVINO™ is an open source toolkit for optimizing and deploying AI inference
High-speed Large Language Model Serving for Local Deployment
Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
Implementation of popular deep learning networks with TensorRT network definition API
A lightweight, standalone C++ inference engine for Google's Gemma models.
A Python-embedded modeling language for convex optimization problems.
TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop, and server. TNN is distinguished by several outstanding features, including its…
Tengine is a lightweight, high-performance, modular inference engine for embedded devices.
Lightning fast C++/CUDA neural network framework
Fast inference engine for Transformer models
A graphical processor simulator and assembly editor for the RISC-V ISA
TengineKit - Free, Fast, Easy, Real-Time Face Detection & Face Landmarks & Face Attributes & Hand Detection & Hand Landmarks & Body Detection & Body Landmarks & Iris Landmarks & Yolov5 SDK On Mobile.