2024 brought about the first robots that had computer vision (some say earlier, but I say 2024). The two I worked with were Novabot (now BestMow) and Sunseeker. Computer vision and LiDAR offer the possibility of creating visual maps for positioning. Remember that for an autonomous mower to work, the robot has to have two things – positioning/localization and sensing. Positioning (also called localization) allows the mower to know where it is in space – and it needs centimeter-level precision. Sensing allows the mower to see what is around it so it can avoid objects. Computer vision (some people call it artificial intelligence, but I don’t quite know if it is actually learning) can do both positioning and sensing. So instead of using the camera just to avoid objects, the visual map can help the mower navigate as well.
Here is the issue – I think that computer vision cannot do it alone; it really needs to be seen as an additional layer of positioning and sensing. Cameras can get covered in dust/dirt/clippings, can fog up, get distorted by moisture, etc. So, for positioning, it is best to have GPS/RTK combined with computer vision. Further, I think it is best to also have some ultrasonic sensors on the mowers. Why not? As I understand it, they are cheap and easy to replace.
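To make the "additional layer" idea concrete, here is a toy sketch in Python of what layered positioning might look like. This is not any vendor's actual code – the names, confidence threshold, and weighting scheme are all my own assumptions – but it shows the basic logic: each source (RTK and vision) reports a position with a confidence, and when one layer degrades (fogged lens, lost RTK fix under a tree), the other carries the load.

```python
from dataclasses import dataclass

@dataclass
class PositionEstimate:
    x: float           # meters east of the map origin (hypothetical frame)
    y: float           # meters north of the map origin
    confidence: float  # 0.0 (no fix) to 1.0 (full confidence)

def fuse_position(rtk: PositionEstimate, vision: PositionEstimate,
                  min_conf: float = 0.3) -> PositionEstimate:
    """Blend RTK and visual position estimates, weighted by confidence.

    A layer below min_conf (e.g. a camera covered in clippings) is
    ignored entirely; if neither layer is usable, the safe move is
    to stop the mower rather than guess.
    """
    usable = [e for e in (rtk, vision) if e.confidence >= min_conf]
    if not usable:
        raise RuntimeError("no positioning layer has a usable fix -- stop the mower")
    total = sum(e.confidence for e in usable)
    x = sum(e.x * e.confidence for e in usable) / total
    y = sum(e.y * e.confidence for e in usable) / total
    return PositionEstimate(x, y, max(e.confidence for e in usable))

# Camera fogged up (confidence 0.1): the fused position falls back to RTK.
foggy = fuse_position(PositionEstimate(1.0, 1.0, 0.9),
                      PositionEstimate(5.0, 5.0, 0.1))
```

Real mowers use far more sophisticated filters (Kalman filters and the like), but the architectural point is the same one I am making above: no single sensor is trusted alone.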