(video: lr.with.tech.stack_720p.mp4)
Whether you're an AI/ML developer, an aspiring designer of the next Roomba, or simply interested in delving into robotics, there's no need to invest thousands of dollars in a physical robot and attempt to train it in your living space. With Lucky, you can master the art of training your robot, simulate its behavior with up to 90% accuracy, and then reach out to the robot's manufacturer to launch your new company.
Lucky Robots feeds the robot's camera streams to your Python application, and you can control the robot using your end-to-end AI models. Details below.
(Note for the repository: our Unreal repo got too big for GitHub (250GB+!), so we moved it to a local Perforce server, which is a major bummer for collaboration. We're setting up read-only FTP access to the files. If you need access or have ideas to work around this, give us a shout and I'll hook you up with FTP access for now.)
To begin using Lucky Robots:

- If you want to run the examples in this repository (optional):

      git clone https://github.com/luckyrobots/luckyrobots.git
      cd luckyrobots/examples

- Create an environment with your favorite package manager (optional):

      conda create -n lr python=3.10
      conda activate lr

- Install the package using pip:

      pip install luckyrobots

- Run one of the following examples:

      python basic_usage.py
      python yolo_example.py
      python yolo_mac_example.py

The example will download the simulator binary and run it for you.
Lucky Robots provides several event listeners for interacting with the simulated robot and receiving updates on its state (a usage sketch follows this list):

- `@lr.on("robot_output")`: receives robot output, including RGB and depth images, and coordinates. Example output:

      {
        "body_pos": {"Time": "1720752411", "rx": "-0.745724", "ry": "0.430001", "rz": "0.007442", "tx": "410.410786", "ty": "292.086556", "tz": "0.190011", "file_path": "/.../4_body_pos.txt"},
        "depth_cam1": {"file_path": "/.../4_depth_cam1.jpg"},
        "depth_cam2": {"file_path": "/.../4_depth_cam2.jpg"},
        "hand_cam": {"Time": "1720752411", "rx": "-59.724758", "ry": "-89.132507", "rz": "59.738461", "tx": "425.359645", "ty": "285.063092", "tz": "19.006545", "file_path": "/.../4_hand_cam.txt"},
        "head_cam": {"Time": "1720752411", "rx": "-0.749195", "ry": "0.433544", "rz": "0.010893", "tx": "419.352843", "ty": "292.814832", "tz": "59.460736", "file_path": "/.../4_head_cam.txt"},
        "rgb_cam1": {"file_path": "/.../4_rgb_cam1.jpg"},
        "rgb_cam2": {"file_path": "/.../4_rgb_cam2.jpg"}
      }

- `@lr.on("message")`: decodes messages from the robot to understand its internal state.
- `@lr.on("start")`: triggered when the robot starts, allowing for initialization tasks.
- `@lr.on("tasks")`: manages the robot's task list.
- `@lr.on("task_complete")`: triggered when the robot completes a task.
- `@lr.on("batch_complete")`: triggered when the robot completes a batch of tasks.
- `@lr.on("hit_count")`: tracks the robot's collisions.
To control the robot, send commands using the `lr.send_message()` function. For example, to make the robot's main wheels turn 10 times:

    commands = [["W 3600 1"]]  # 360 per revolution, so 3600 makes the main wheels turn 10 times
To send multiple commands and know when a particular one ends, assign an `id` field to your command:

    commands = [[{"id": 1234, "code": "W 18000 1"}]]
If you want to send a whole set of instructions, add multiple arrays. Each array waits until the previous array finishes. Commands inside one array execute simultaneously, which allows smoother movements such as lifting the arms while moving forward, or turning the head while placing an object.

    commands = [["W 1800 1", "a 30"], ["a 0", "W 1800 1"]]
Commands in one list override previous commands if they conflict. For instance, if you instruct your robot to turn its wheels 20 times and, on the 5th turn, instruct it again to turn 3 times, the robot will travel a total of 8 revolutions and stop.
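As a sketch of this overriding behavior, using the wheel command documented below:

```python
lr.send_message([["W 7200 1"]])   # start turning the wheels 20 times (20 x 360)
# ... later, while the robot is still on its 5th revolution ...
lr.send_message([["W 1080 1"]])   # overrides: 3 more revolutions, then stop
```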
To know when a particular batch finishes, give it an ID and listen for that ID:

    commands = [
        ["RESET"],
        {"commands": [{"id": 123456, "code": "W 5650 1"}, {"id": 123457, "code": "a 30 1"}], "batchID": "123456"},
        ["A 0 1", "W 18000 1"]
    ]
    lr.send_message(commands)
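You can pair these IDs with the `task_complete` and `batch_complete` listeners above. A sketch, assuming each handler receives the corresponding ID:

```python
@lr.on("task_complete")
def on_task_complete(task_id):
    # Assumed: receives the "id" field of the finished command (e.g. 123456).
    print(f"command {task_id} finished")

@lr.on("batch_complete")
def on_batch_complete(batch_id):
    # Assumed: receives the "batchID" of the finished batch.
    print(f"batch {batch_id} finished")
```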
`[DIRECTION] [DISTANCE] [SPEED]`

Example: `W 50 1`

- `[DIRECTION]`: `W` is forward, `S` is backward
- `[DISTANCE]`: travel distance; 360 is one full revolution of the wheels, 3600 is 10 revolutions
- `[SPEED]`: speed at which the motor will react, in km/h
- Send via API: `lr.send_message([["W 360 1"]])`
`[DIRECTION] [DEGREE]`

Example: `A 30`

- `[DIRECTION]`: `A` is left, `D` is right
- `[DEGREE]`: spin rotation in degrees
- Or: `lr.send_message([["A 30"]])`
- Remember, the back wheel only requires an angle adjustment. To turn the robot, set this angle and then command it to move forward. When you want the robot to stop turning, set the angle back with `A 0`.
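Putting that together, a simple left turn could look like this sketch:

```python
# Set the rear-wheel angle, drive forward while it is applied, then
# straighten the wheel. Each inner list waits for the previous one to finish.
lr.send_message([["A 30"], ["W 1800 1"], ["A 0"]])
```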
- `RESET`: resets all positions and rotations to the zero pose
- Or: `lr.send_message([["RESET"]])`
- `[JOINT][DISTANCE]`
  Example: `EX1 30`
  - `EX1 10` (extend the 1st joint 10cm outwards)
  - `EX2 -10` (retract the 2nd joint 10cm inwards)
  - `EX3 10` (extend the 3rd joint 10cm outwards)
  - `EX4 10` (extend the 4th joint 10cm outwards)
  - Or: `lr.send_message([["EX1 10"]])`, `lr.send_message([["EX2 -10"]])`, etc.
- `U 10` (up) - Or: `lr.send_message([["U 10"]])`
- `U -10` (down) - Or: `lr.send_message([["U -10"]])`
- Gripper: `G 5` or `G -10` - Or: `lr.send_message([["G 5"]])` or `lr.send_message([["G -10"]])`
- Hand cam angle: `R1 10` - Or: `lr.send_message([["R1 10"]])`; `R2 -30` (turn cam) - Or: `lr.send_message([["R2 -30"]])`
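As a sketch, several of the commands above can be combined in one inner list so they execute simultaneously, as described earlier:

```python
# Extend the first joint, raise the lift, and open the gripper at the same
# time; once those finish, retract the joint again (retraction via a negative
# distance is an assumption based on the EX2 -10 example above).
lr.send_message([["EX1 10", "U 10", "G 5"], ["EX1 -10"]])
```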
- `[JOINT][DEGREE]`
  Example: `EX1 30`
  - `EX1 20` (rotate the 1st joint 20 degrees)
  - `EX2 -10` (rotate the 2nd joint -10 degrees)
  - `EX3 10` (rotate the 3rd joint 10 degrees)
  - `EX4 10` (rotate the 4th joint 10 degrees)
  - Or: `lr.send_message([["EX1 20"]])`, `lr.send_message([["EX2 -10"]])`, etc.
- `U 10` (up) - Or: `lr.send_message([["U 10"]])`
- `U -10` (down) - Or: `lr.send_message([["U -10"]])`
- Gripper: `G 5` or `G -10` - Or: `lr.send_message([["G 5"]])` or `lr.send_message([["G -10"]])`
- Hand cam angle: `R 10` - Or: `lr.send_message([["R 10"]])`
To start the robot simulation, use:

    lr.start(binary_path, sendBinaryData=False)
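Putting it all together, a minimal end-to-end script might look like the sketch below; the binary location and handler signatures are assumptions, not confirmed API:

```python
import luckyrobots as lr

@lr.on("start")
def on_start():
    # Zero the pose, then drive one full wheel revolution forward.
    lr.send_message([["RESET"], ["W 360 1"]])

@lr.on("robot_output")
def on_robot_output(output):
    # Assumed: "output" is the dict of camera/pose files shown earlier.
    print("got frame:", output["rgb_cam1"]["file_path"])

binary_path = "./LuckyWorld"  # assumed path to the downloaded simulator binary
lr.start(binary_path, sendBinaryData=False)
```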
- Releasing our first basic end-to-end model
- Drone!!!
- Scan your own room
- Import URDFs
- (your idea?)
** UPDATE 3/19/24 **
The first Lucky World Ubuntu build is complete: https://drive.google.com/drive/folders/15iYXzqFNEg1b2E6Ft1ErwynqBMaa0oOa
** UPDATE 3/6/24 **
We have designed the Stretch 3 robot and are working on adding it to our world.

** UPDATE 2/15/24 **
Luck-e World's second release is out (Windows only; we're working on the Linux build next)!

** UPDATE 2/8/24 **
We are now writing prompts against the 3D environment we have reconstructed using point clouds...

(video: prompts.2024-02-08.at.15.58.23_aa3e14cb.mp4)

** UPDATE 2/6/24 **
Lucky's first release: https://drive.google.com/file/d/1qIbkez1VGU1WcIpqk8UuXTbSTMV7VC3R/view
Now you can run the simulation on your Windows machine and run your AI models against it. If you run into issues, please submit an issue.

** UPDATE 1/15/24 **
Luck-e is starting to understand the world around us and navigate accordingly!

(video: VIDEO-2024-01-14-06-13-11.mov)

** UPDATE 1/13/24 **
We are able to construct a 3D world using a single camera, thanks to @niconielsen32. (This is not a 3D room generated by a game engine; this is what we generate from what we're seeing through a camera in the game!)

(video: 2024-01-13.16-13-nico.mp4)

** UPDATE 1/6/24 **
We got our first test Linux build (not the actual world; that's still being built), tested on Ubuntu 22.04:
https://drive.google.com/file/d/1_OCMwn8awKZHBfCfc9op00y6TvetI18U/view?usp=sharing
** UPDATE 12/29/23 **
We are now flying! Look at these environments; can you tell they're not real?

** UPDATE 12/27/23 **
Lucky now has a drone, like the Mars rover! When it's activated, the camera feed switches to it automatically.

(video: Lucky.Drone.mov)

** UPDATE 12/5/23 **
Completed our first depth map using the MiDaS monocular depth estimation model.

(video: MiDas.depth.estimation.mp4)
(video: Where.We.re.At.with.Everything.mp4)
- Realistic Training Environments: Train your robots in various scenarios and terrains crafted meticulously in Unreal Engine.
- Python Integration: The framework integrates seamlessly with Python 3.10, enabling developers to write training algorithms and robot control scripts in Python.
- Safety First: No physical wear and tear on robots during training. Virtual training ensures that our robotic friends remain in tip-top condition.
- Modular Design: Easily extend and modify the framework to suit your specific requirements or add new training environments.
For any queries, issues, or feature requests, please refer to our issues page.
We welcome contributions! Please read our contributing guide to learn about our development process, how to propose bugfixes and improvements, and how to build and test your changes to Lucky Robots.
Absolutely! Show us a few cool things and/or contribute a few PRs -- let us know!
Lucky Robots Training Framework is released under the MIT License.
Happy training! Remember, be kind to robots. 🤖