Welcome to the RoboCup RM25 custom victim detection repository. This project is centered around a lightweight TinyML model designed for the OpenMV H7 camera, capable of real-time letter ('H', 'S', 'U') and color (red, yellow, green) victim detection, as required by the RoboCup Rescue Maze 2025 competition.
Important:
This repo does not include the original dataset or model used on our OpenMV camera or robot. They were removed to keep the competition fair, so you won't find the model or data online. This is a public copy of our internal repo, cleaned for release.
You still get our scripts, STM communication, and project logic, and you can train your own model with the same tools we used. If you're working on victim recognition and need a hint, contact the SERŠ Team.
RM25-Model/
├── Main/ # Main program for OpenMV H7
│ └── main.py # ✔️ Current working detection script (upload to OpenMV)
├── Scripts/ # Examples and experiments
├── Training/ # Model training pipeline and instructions
├── requirements.txt # For dataset prep and model training
└── README.md # You're reading it

The primary code can be found in `Main/main.py`.
This is the working OpenMV MicroPython script uploaded to the OpenMV H7 camera. It performs:
- Real-time object detection using a custom-trained model.
- Victim classification based on color or letter.
- GPIO output using `P4` and `P5`.
- An interrupt signal via `P6` (e.g., for STM32 communication).
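For orientation, here is a minimal sketch of that pin-signaling logic in OpenMV MicroPython. The pin roles match the list above, but the helper name and the 2-bit code layout are assumptions for illustration, not the exact code in `Main/main.py`:

```python
# Minimal sketch of the P4/P5/P6 output signaling (not the exact Main/main.py code).
import time
from pyb import Pin

# Configure output pins: P4/P5 carry the victim code, P6 is the interrupt line.
p4 = Pin("P4", Pin.OUT_PP)
p5 = Pin("P5", Pin.OUT_PP)
p6 = Pin("P6", Pin.OUT_PP)

def signal_victim(code):
    """Put a 2-bit victim code on P4/P5, then pulse P6 HIGH for 20 ms."""
    p4.value(code & 0x01)          # low bit of the victim code
    p5.value((code >> 1) & 0x01)   # high bit of the victim code
    p6.high()                      # rising edge wakes the STM32 interrupt handler
    time.sleep_ms(20)              # hold the pulse for 20 ms
    p6.low()
```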
The detector is a custom-trained FOMO (Faster Objects, More Objects) model tailored for:
- RoboCup Rescue Maze rules
- Detecting letters
- Recognizing colored squares
It’s built for OpenMV H7, but works with any OpenMV camera that supports TFLite models.
🟢 Achieves ~90% detection accuracy in good conditions
⚡ Runs at ~60 FPS on H7 — highly optimized and lightweight!
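If you are wiring up your own model, the detection loop on OpenMV typically looks something like the sketch below. It uses the older `tf` module API (newer firmware exposes an `ml` module instead), and the model file name, label order, and confidence threshold are placeholders rather than our settings:

```python
# Sketch of FOMO-style detection on OpenMV using the older `tf` API.
# File name, labels, and threshold are placeholders -- adapt them to your model.
import sensor, time, math, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))   # square crop, matching a typical FOMO input
sensor.skip_frames(time=2000)

net = tf.load("trained.tflite", load_to_fb=True)
labels = ["background", "H", "S", "U", "red", "yellow", "green"]
min_confidence = 0.5

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # detect() returns one list of detections per class; index 0 is the background class.
    for i, detections in enumerate(net.detect(img, thresholds=[(math.ceil(min_confidence * 255), 255)])):
        if i == 0 or not detections:
            continue
        for d in detections:
            img.draw_rectangle(d.rect())
            print(labels[i], d.rect())
    print(clock.fps(), "fps")
```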
- 🔽 Download the OpenMV IDE: https://openmv.io/pages/download
- ⚙️ Install the custom firmware for model support → follow the instructions and get the firmware from: Firmware-via-wmiuns
- 📂 Upload `main.py` to your OpenMV cam using the IDE.
- ✅ Done! The camera will now detect victims and send outputs via its pins:
  - `P4` and `P5`: encode the victim type (e.g., color or letter)
  - `P6`: sends a HIGH pulse (20 ms) as an interrupt signal to the STM32
  - LED indicators also signal detections visually
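One possible way to tie detections to those outputs is sketched below. The 2-bit codes and LED choice are illustrative only (two pins give only four states, so this example covers the letters), and the actual mapping in `Main/main.py` may differ:

```python
# Illustrative only: map a detected class to a 2-bit P4/P5 code and flash an LED.
from pyb import LED

VICTIM_CODES = {"H": 0b01, "S": 0b10, "U": 0b11}  # 0b00 reserved for "no victim"
green_led = LED(2)  # OpenMV on-board LEDs: 1 = red, 2 = green, 3 = blue

def report(label):
    """Return the P4/P5 code for a label and light the green LED on a hit."""
    code = VICTIM_CODES.get(label, 0b00)
    if code:
        green_led.on()
    else:
        green_led.off()
    return code  # drive P4/P5 with this code and pulse P6 as described above
```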
Look inside the Scripts/ folder for experimental code and small demos. Useful for:
- Tuning thresholds
- Trying out display options
- Debugging color vs. letter inference
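As an example of the kind of experiment that fits in Scripts/, here is a quick color-blob check for threshold tuning. The LAB thresholds below are rough placeholders; tune them with the OpenMV IDE's Threshold Editor:

```python
# Quick color-blob check for tuning thresholds (placeholder LAB values).
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# (L_min, L_max, A_min, A_max, B_min, B_max) -- tune with the IDE's Threshold Editor.
RED    = (30, 70, 30, 80, 10, 60)
YELLOW = (60, 95, -20, 20, 40, 80)
GREEN  = (30, 80, -70, -20, 10, 60)

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    for name, thr in (("red", RED), ("yellow", YELLOW), ("green", GREEN)):
        for blob in img.find_blobs([thr], pixels_threshold=200, area_threshold=200, merge=True):
            img.draw_rectangle(blob.rect())
            print(name, blob.cx(), blob.cy())
    print(clock.fps(), "fps")
```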
If you want to retrain:
- Go into the `Training/` folder
- Follow the TensorFlow workflow
- Use `requirements.txt` to set up your environment: `pip install -r requirements.txt`
- Upload the `.tflite` model back to the OpenMV cam using the IDE.
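If your training run ends with a Keras model, the hand-off to the camera is typically an int8 TFLite conversion along these lines. The file names, input shape, and calibration data here are placeholders; follow the instructions in `Training/` for the real pipeline:

```python
# Sketch: convert a trained Keras model to an int8 .tflite file for the OpenMV cam.
# Paths, input shape, and calibration data are placeholders -- see Training/ for the real steps.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("fomo_model.h5")  # placeholder path

def representative_data():
    # Replace with a few hundred real, preprocessed training images.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]  # placeholder 96x96 grayscale input

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("trained.tflite", "wb") as f:
    f.write(converter.convert())
```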
| Metric | Value |
|---|---|
| Accuracy | ~90% |
| FPS on H7 | ~60 |
| Flash Usage | <500 KB |
| RAM Usage | <1 MB |
Created by the Maj Korent | RM25 RoboCup Team
For the RoboCup Rescue Maze 2025 technical challenge
Firmware tools and deployment inspired by OpenMV docs.
If you like this, star the repo and share your implementation with the community!