Abstract:
This research explores the application of visual simultaneous localisation and mapping (VSLAM) to a stationary robot arm used in online lab sessions to observe control panels and perform similar tasks. To reduce hardware costs, the arm does not include encoders; hence, VSLAM is used to localise the arm's end. The primary camera is mounted at the arm's end, and a second camera is mounted on another joint lying on the same horizontal plane as the arm's end, so both cameras move on that horizontal plane. Both cameras are connected to the same VSLAM algorithm, in which the primary camera initialises the local map. Then, both cameras localise their positions independently, acting as each other's backup.
Because two independent cameras had to be connected to the same VSLAM algorithm and share the positions derived from it, modifications to the VSLAM algorithm were required. The initialisation phase uses only the primary camera to generate the 3D map, rotating the entire robot arm 360° around the vertical axis. The next phase is localisation, in which both cameras operate independently using the generated map.
Simulations were carried out in 3 DoF. First, the entire robot arm was rotated towards the observable direction around the vertical axis, followed by fine adjustments, provided sufficient map points were within the view range. The results show that significant errors occurred in the initial state during mapping, and these were corrected considerably after loop closing. In the localisation phase, both cameras gave reasonably acceptable results; however, designing a loop-closing algorithm for the localisation phase would be a considerable improvement. Installing the camera radially with respect to the vertical axis gives more accurate results than installing it tangentially. According to the results, the localisation error remained under 0.15 metres, provided the camera velocity stayed below the limit the algorithm allowed.