📚 Master networking basics and advanced topics through practical assignments and projects in this comprehensive course.
Updated Nov 6, 2025
🗓️ Build responsive web applications with ease using React, HTML, CSS, and JavaScript, focusing on user-centered design and performance optimization.
Train, Evaluate, Optimize, Deploy Computer Vision Models via OpenVINO™
Automatically updated paper list.
The collection of pre-trained, state-of-the-art AI models for ailia SDK
🎓 Automatically updates CV papers daily using GitHub Actions (updated every 12 hours).
Code to reproduce a zero-shot action recognition approach on the Epic-Kitchens dataset.
A physics-based video search engine using Meta's V-JEPA 2 world model to find videos with similar motion dynamics.
Computer vision that understands temporal relationships and causality in video sequences.
An action recognition and scoring system for the Qilin (Kylin) Dance of Jingkou Village.
PyTorch implementation of a Relational Attention-Based Model for Group Activity Recognition — inspired by "A Hierarchical Deep Temporal Model for Group Activity Recognition (arXiv:1511.06040v2)"
[CCS'24] Official Implementation of "Fisher Information guided Purification against Backdoor Attacks"
Surgical phase recognition on the Cataract-101 dataset — reproducible data pipeline, EDA, and two experimental baselines (clip-based classification vs. dense temporal segmentation) with training/evaluation scripts and visualization
Uses PySlowFast, the official video understanding framework from Facebook AI Research (FAIR), to train, evaluate, and reproduce state-of-the-art video models on the UCF24 action detection dataset. Supports customizable training pipelines, model fine-tuning, and evaluation for video-based action recognition and spatio-temporal localization tasks.
DLC2Action is an action segmentation package that makes it easy to run and track machine learning experiments.
A curated list of resources on Human-Object Interaction learning.
Curated catalog of 40+ Human Activity Recognition (HAR) and action recognition datasets: vision, wearable sensors, skeleton/mocap, multimodal. Includes benchmarks, baselines, and citations.
A curated paper list of awesome skeleton-based action recognition.
PyTorch implementation of "BDC-CLIP: Brownian Distance Covariance for Adapting CLIP to Action Recognition" (ICML 2025).
A sport-tailored, pose-enhanced action recognition framework