I don't really see how the vacuum can effectively clean a whole room or flat using only a CNN on the current image in front of the robot. That would help detect obstacles, but a bumper sensor would do that as well.
All but the most basic vacuum robots map their work area and devise plans for cleaning it systematically. The others just bump into obstacles, rotate a random amount, and continue forward.
Don't get me wrong, I love this project and the idea to build it yourself. I just feel like that (huge) part is missing from the article?
Not saying that it's viable here to build a world map, since things like furniture can move, but some systems, e.g. warehouse robots, do use things like lights to triangulate, on the assumption that the lights on the tall ceiling are fixed and consistent.
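As a toy illustration of that triangulation idea: if the robot knows its heading (say, from a gyro) and the fixed positions of a couple of ceiling lights, each observed bearing constrains the robot to a line through that light, and the lines intersect at the robot's position. The landmark coordinates and numbers below are invented for the example:

```python
import numpy as np

def locate(landmarks, bearings, heading):
    """Estimate robot (x, y) from bearings to known ceiling lights.

    landmarks: known (x, y) light positions in the world frame
    bearings:  bearing to each light in radians, relative to robot heading
    heading:   robot heading in radians from the gyro (world frame)

    Each sighting constrains the robot to the line through the landmark
    at world angle a = heading + bearing:
        x*sin(a) - y*cos(a) = lx*sin(a) - ly*cos(a)
    which is linear in (x, y); two or more lights give a solvable system.
    """
    A, b = [], []
    for (lx, ly), brg in zip(landmarks, bearings):
        a = heading + brg
        A.append([np.sin(a), -np.cos(a)])
        b.append(lx * np.sin(a) - ly * np.cos(a))
    (x, y), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x, y

# Robot at (1, 2) facing +x, lights at (0, 4) and (3, 5):
print(locate([(0, 4), (3, 5)],
             [np.arctan2(2, -1), np.arctan2(3, 2)], 0.0))  # ~(1.0, 2.0)
```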
The classic Roombas from a decade or so ago worked without any sort of mapping or camera at all -- they basically did a version of the "run and tumble" algorithm used by many bacteria: go in one direction until you can't anymore, then go off in a random new one. It may not be efficient, but it does work for covering territory.
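That run-and-tumble loop is only a few lines; a sketch, assuming a hypothetical `bot` object with bumper and drive primitives:

```python
import random

def run_and_tumble(bot):
    """Bacteria-style coverage: drive straight, tumble on contact."""
    while bot.battery_ok():
        bot.forward()
        if bot.bumper_pressed():                 # hit something
            bot.stop()
            bot.rotate(random.uniform(90, 270))  # pick a random new heading
```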
Sounds like it would only work for a single room with not too many obstacles.
I guess the mapping capabilities vary greatly between vendors. I had a first-gen Mi Robot vacuum and it was amazing. It would map the entire floor with all the rooms, then go room by room in a zigzag pattern, then repeat each room, having no issues going from one room to another and avoiding obstacles. It also made sure not to fall down the stairs. Then it broke and I bought a no-name model, and despite having a lidar tower, it didn't perform as well as the Xiaomi did. It worked for a single room, but anything more and it would get lost.
Eh, it worked fine in my multiroom apartment - again, this is how all first-generation robot vacuums worked. Mine eventually died and I got a new one with lidar, and the main advantage is that with mapping I can specify areas to avoid, like a chair whose base tends to trap robot vacuums.
I think the only reason for mapping is to be able to block off 'no go' areas (no escaping out the front door!) and to be able to go home to the charger. For the actual cleaning, random works great.
My previous robot vacuum did not do any mapping, but it did always manage to find its way back to the charger. It'd just follow the walls until it saw the charger's IR beacon.
Clever design if you ask me. Doing a lot with a little.
https://news.ycombinator.com/item?id=46472930
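A sketch of that wall-follow-then-dock behavior, again with a made-up `bot` API:

```python
def go_home(bot):
    """Docking without a map: hug the wall until the charger's IR
    beacon comes into view, then steer onto it.
    (Hypothetical sensor/drive API, for illustration only.)"""
    while not bot.ir_beacon_visible():
        bot.follow_wall(side="right")   # interior walls form a closed loop,
                                        # so this eventually passes the dock
    while not bot.docked():
        bot.steer_toward(bot.ir_beacon_bearing())
```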
Apart from just detecting obstacles, we wanted to build a robot that's intelligent enough to take in semantic cues, like "this is a doorway, so I can go through it" or "this is a kitchen, so I should clean it this way", and so on.
It may or may not be useful for you, but I've been working for a while on converting ORBSLAM3 into a self-contained standalone program that doesn't need ROS to be useful.
The "UI" for saving/loading the map and calibrating the camera is exposed through a built-in crude webserver. Visualization is done via threejs instead of having a dependency on pangolin.
If your robot can expose the camera feed as anything OpenCV can ingest (e.g. MJPEG via HTTP), you could just point it there and then receive the pose stream via HTTP/SSE.
The whole thing is distributed as an AppImage, so you just run it and connect to it.
https://github.com/mgschwan/ORBSlammer_LocalizationService
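For anyone wanting to consume that pose stream, a minimal client sketch; the endpoint path and payload shape below are guesses, so check the repo for the actual routes the built-in webserver exposes:

```python
import json
import requests

# Hypothetical SSE endpoint -- verify the real route in the repo.
POSE_URL = "http://localhost:8080/pose"

with requests.get(POSE_URL, stream=True) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        # SSE frames arrive as "data: {...}" lines
        if line and line.startswith("data:"):
            pose = json.loads(line[len("data:"):])
            print(pose)  # e.g. camera translation + rotation
```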
Can you please design a version for kids to ride on?
With a seat and handle similar to "wooden bee ride on" by b. toys?
I want a vacuum that kids can actually drive, ride on, and do real vacuuming with, and that has some baseline safety measures: turning it over halts the vacuum, stairs/ledges are avoided, there are no rollers or parts that could snare a kid's hair, etc.
There may be benefits to fusing the child's input signals with the vacuum's supervisory route goals (a rough sketch of that blending follows below). It would be age dependent; older kids would want full manual control, I think.
Kids like to do real jobs, and as a parent I prefer purchasing real items for my kids rather than toy versions if practical.
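As a rough sketch of the input blending mentioned above (illustrative only -- real kid-carrying hardware needs far more rigorous safety engineering than this):

```python
def arbitrate(manual_cmd, auto_cmd, tilt_ok, cliff_ok, autonomy=0.5):
    """Blend the kid's joystick command with the supervisor's command.

    manual_cmd, auto_cmd: (linear, angular) velocity tuples
    autonomy: 0.0 = full manual (older kids), 1.0 = full supervisor
    Safety checks always win: any violation halts drive and vacuum.
    """
    if not (tilt_ok and cliff_ok):
        return (0.0, 0.0), False  # stop motors, switch vacuum off
    blended = tuple((1 - autonomy) * m + autonomy * a
                    for m, a in zip(manual_cmd, auto_cmd))
    return blended, True
```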
Here is a thought: this is a fixed 3D environment, and you lack training data, or at least an algorithm to train with. Why not use RL to learn good trajectories?
Like, build a 3D environment of your home/room and generate images and trajectories in a game engine to pretrain/train it. Then, for each real run, hand-label only the promising trajectories, i.e. the ones where the robot actually cleaned better. That might make it a good RL exercise. You could also place some physical flags in the room that reward the robot when the camera gets close enough, to automate those trajectory rewards.
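A sketch of that flag-based reward shaping; the flag positions and radius below are made up, and the rollout loop is whatever RL framework you train with:

```python
import numpy as np

FLAGS = [np.array([1.0, 0.5]), np.array([3.0, 2.0])]  # flag positions (m), invented
REACH = 0.3                                           # "close enough" radius (m)

def flag_reward(robot_xy, visited):
    """+1 the first time the robot comes within REACH of each flag,
    so promising trajectories get rewarded without hand labeling."""
    r = 0.0
    for i, flag in enumerate(FLAGS):
        if i not in visited and np.linalg.norm(robot_xy - flag) < REACH:
            visited.add(i)
            r += 1.0
    return r

# Inside the rollout loop, e.g.:
#   reward = flag_reward(estimated_xy, visited_flags) - 0.01  # small step cost
```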
I would begin in one room to practice this.
If mass produced, no part of a robot vacuum is expensive. Blower fans are ~$1. Camera is $1. Cheap wifi MCU with a little ML accelerator + 8 Mbytes of ram is $1. Gyro is $1. Drive motors+gearboxes together are $1. AC charger $2. Plastic case $2. Batteries are the most expensive bit (~$3), but you can afford to have a battery life of just 10 mins if you can return to base frequently.
The hard part is the engineering hours to make it all work well. But you can recoup those as long as you can sell 100 million units across every nation in the world.
Cool project! That validation loss curve screams training-set memorization without generalization ability.
Too little training data, and/or data of insufficient quality. Maybe let the robot run autonomously with an (expensive) VLM operating it to bootstrap a larger training dataset without needing to annotate it yourself.
Or maybe the problem itself is poorly specified, or intractable with your chosen network architecture. But if you see that a vision LLM can pilot the bot, at least you know you have a fighting chance.
Very cool project though!
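A minimal sketch of that bootstrapping loop; `query_vlm` and the `bot` drive primitives are placeholders for whatever hosted model and hardware API you'd actually use, not a real library:

```python
import json

def bootstrap_dataset(bot, query_vlm, steps=1000, out="vlm_labels.jsonl"):
    """Let an expensive VLM drive while we record (image, action) pairs
    as training data for the small onboard CNN.

    query_vlm(image_path) -> one of "forward" / "left" / "right" / "stop"
    """
    with open(out, "a") as f:
        for t in range(steps):
            img_path = bot.save_camera_frame(f"frame_{t:05d}.jpg")
            action = query_vlm(img_path)
            f.write(json.dumps({"image": img_path, "action": action}) + "\n")
            bot.execute(action)  # hypothetical drive primitive
```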
Check out monocular depth estimation models, like Apple's Depth Pro (https://github.com/apple/ml-depth-pro), and use the depth map to predict a path?
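A sketch of that idea, following the usage shown in the Depth Pro README (verify against the repo, since the API may have changed); the last few lines are a naive "steer toward the deepest third of the view" rule:

```python
import depth_pro  # https://github.com/apple/ml-depth-pro

# Model setup, per the repo's README.
model, transform = depth_pro.create_model_and_transforms()
model.eval()

image, _, f_px = depth_pro.load_rgb("frame.jpg")
prediction = model.infer(transform(image), f_px=f_px)
depth = prediction["depth"].cpu().numpy()  # metric depth in meters

# Naive path choice: split the view into thirds and steer toward the
# one whose average depth is largest (obstacles farthest away).
h, w = depth.shape
thirds = [depth[:, :w // 3], depth[:, w // 3:2 * w // 3], depth[:, 2 * w // 3:]]
action = ["left", "forward", "right"][max(range(3), key=lambda i: thirds[i].mean())]
print(action)
```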
(Lidar can of course also be echolocation.)
It could easily understand so much about the environment with even a small multimodal model.
I want to work really hard to be too lazy to bother with the grass.