Stars
This repository contains LLM (Large Language Model) interview questions asked at top companies like Google, Nvidia, Meta, Microsoft, and other Fortune 500 companies.
Code examples for Robotics, Vision & Control 3rd edition in Python
A lightweight Text-to-Image Retrieval model [Web App]
Astro template to help you build an interactive project page for your research paper
A list of 99 machine learning projects for anyone interested in learning by coding and building projects
Wrapper of 37+ image matching models with a unified interface
TorchGeo: datasets, samplers, transforms, and pre-trained models for geospatial data
Contains company-wise questions sorted by frequency and all time
Learn System Design concepts and prepare for interviews using free resources.
An open source SDK for logging, storing, querying, and visualizing multimodal and multi-rate data
An Open-source Deep Learning Framework for Visual Place Recognition
📊 Re-usable, easy interface JavaScript chart library based on D3.js
anthonix/llm.c (forked from karpathy/llm.c): LLM training in simple, raw C/HIP for AMD GPUs
Refine high-quality datasets and visual AI models
Code for the paper "How Attentive are Graph Attention Networks?" (ICLR'2022)
BoQ: A Place is Worth a Bag of learnable Queries (CVPR 2024)
Benchmarking and evaluation framework for place recognition methods, featuring SuperPoint+SuperGlue, LoGG3D-Net, Scan Context, DBoW2, MixVPR, STD
The top 500 highest paying companies based on median software engineer total comp on levels.fyi as of 12/1/23.
A VS Code extension to quickly handle print operations via shortcuts, such as printing a variable's value, attributes, or functions
Open source implementation of "Spreading Vectors for Similarity Search"
Transformers w/o Attention, based fully on MLPs
Official repository for BMVC 2022 paper: Global Proxy-based Hard Mining for Visual Place Recognition
MixVPR: Feature Mixing for Visual Place Recognition (WACV 2023)
GSV-Cities: a large-scale dataset for visual place recognition
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch