Navigating into the mystique
Mars rovers, humanoid robots, quadrupeds, self-driving cars: all robots share one fundamental challenge, navigating the unknown, especially in environments that are poorly understood or constantly changing. The ability to determine a robot's position and orientation, and to build a map of its environment, is essential for autonomous operation.
This article highlights some of the most widely used navigation techniques in robotics. Each technique has its own advantages and challenges, and the right choice depends on the specific requirements of the project, the environment the robot will operate in, and the level of autonomy required. It's important to weigh the needs and constraints of the project against the capabilities and limitations of each technique to choose the most appropriate one for the task at hand.
The following are some of the techniques currently used in navigation modules:
Simultaneous Localization and Mapping (SLAM) estimates a robot's position and builds a map of the environment in real time from sensor data. SLAM tackles the two core problems of robotic navigation at once: localization, determining the robot's position and orientation in the environment, and mapping, constructing a model of that environment. SLAM systems draw on a variety of sensors, such as LIDAR, cameras, IMUs, and wheel encoders, to gather data about the environment and the robot's motion. The main advantage of SLAM is that it can provide a consistent, accurate estimate of the robot's position together with a detailed map. However, it can be computationally expensive and sensitive to sensor noise and errors.
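The estimation machinery behind SLAM can be illustrated with a deliberately tiny example: a one-dimensional EKF-SLAM filter whose state holds both the robot's position and a single landmark's position, cycling through a motion-model predict step and a measurement update. All values here (noise variances, motion step, landmark location) are invented for the sketch; a real SLAM system tracks full poses and many landmarks.

```python
import numpy as np

def ekf_slam_step(mu, P, u, z, Q=0.1, R=0.05):
    """One predict/update cycle. mu = [robot x, landmark x]."""
    # predict: robot moves by odometry u, landmark is static
    mu = mu + np.array([u, 0.0])
    P = P + np.diag([Q, 0.0])          # motion noise affects robot only

    # update: measurement model z = x_landmark - x_robot
    H = np.array([[-1.0, 1.0]])
    y = z - (mu[1] - mu[0])            # innovation
    S = H @ P @ H.T + R                # innovation covariance (1x1)
    K = P @ H.T / S                    # Kalman gain
    mu = mu + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return mu, P

mu = np.array([0.0, 4.0])              # deliberately wrong landmark guess
P = np.eye(2)
true_robot, true_landmark = 0.0, 5.0
for _ in range(20):                    # robot steps forward and re-observes
    true_robot += 0.5
    mu, P = ekf_slam_step(mu, P, u=0.5, z=true_landmark - true_robot)
# mu[1] - mu[0] converges to the true robot-to-landmark offset
```

With only relative measurements, the filter cannot pin down absolute positions (that is odometry's job here), but the robot-to-landmark offset converges quickly, which is the essence of jointly estimating pose and map.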
FastSLAM and GraphSLAM are two prominent variants of SLAM. FastSLAM uses a Rao-Blackwellized particle filter: each particle represents one hypothesis of the robot's trajectory and carries its own set of small Kalman filters, one per landmark, to estimate the map. GraphSLAM instead formulates SLAM as a graph of robot poses and landmarks connected by constraints from odometry and observations, and solves for the most likely configuration via nonlinear least-squares optimization. Both can handle large-scale environments, but they remain computationally demanding and are sensitive to sensor noise and incorrect data associations.
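The particle-filter idea at the heart of FastSLAM can be sketched in one dimension: a cloud of particles represents hypotheses of the robot's position; each motion step perturbs them, and each measurement re-weights and resamples them. The landmark position, noise levels, and motion values below are invented for illustration, and the per-particle landmark filters of full FastSLAM are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500
particles = rng.uniform(0.0, 10.0, N)   # unknown start: anywhere in [0, 10]
landmark = 7.0                          # assumed-known landmark position
true_pos = 2.0

for _ in range(15):
    # motion update: robot moves +0.3 per step with Gaussian noise
    true_pos += 0.3
    particles += 0.3 + rng.normal(0.0, 0.05, N)

    # measurement update: noisy signed displacement to the landmark
    z = (landmark - true_pos) + rng.normal(0.0, 0.1)
    expected = landmark - particles
    weights = np.exp(-0.5 * ((z - expected) / 0.1) ** 2) + 1e-300
    weights /= weights.sum()

    # resample: particles that explain the measurement survive
    particles = particles[rng.choice(N, size=N, p=weights)]

estimate = particles.mean()             # posterior mean of robot position
```

Each iteration concentrates the particle cloud around positions consistent with both the motion model and the measurements, which is exactly the behaviour FastSLAM exploits per trajectory hypothesis.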
Visual odometry uses visual information from cameras to estimate the motion and position of the robot. This technique uses feature detection and matching to track the movement of the camera and estimate the motion of the robot based on the motion of the features in the image. Visual odometry can work well in environments with rich visual features and provides a relatively low-cost solution. However, it can be sensitive to lighting changes and can struggle to track features in low-textured environments.
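The geometric core of that tracking step can be sketched as follows: given feature points matched between two frames, a least-squares rigid alignment (the Kabsch algorithm) recovers the camera's rotation and translation. A real visual-odometry pipeline would first extract and match descriptors (e.g. ORB) and reject outliers with RANSAC; here the matches are synthetic and exact.

```python
import numpy as np

def estimate_rigid_motion(pts_a, pts_b):
    """Find R (2x2) and t (2,) such that pts_b ~= pts_a @ R.T + t."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)          # cross-covariance of matches
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# synthetic check: rotate features by 10 degrees and shift by (1, 2)
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts_a = np.random.default_rng(1).uniform(-5, 5, (30, 2))
pts_b = pts_a @ R_true.T + np.array([1.0, 2.0])
R, t = estimate_rigid_motion(pts_a, pts_b)     # recovers R_true and (1, 2)
```

Chaining these frame-to-frame motions over time yields the odometry estimate; the drift mentioned above comes from small per-frame errors accumulating along the chain.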
Real-Time Kinematic (RTK) GPS uses a network of ground-based reference stations to broadcast corrections that refine a rover's satellite position fix to centimetre-level accuracy. Its main advantage is highly accurate positioning in open environments. However, RTK GPS requires a clear view of the sky, and satellite signals can be blocked or degraded by tall buildings, trees, and other obstacles; it also needs a data link to a reference-station network covering the operating area, which can be unavailable in some remote or rural regions. In addition, RTK infrastructure can be relatively expensive to set up and maintain. It's worth noting that recent systems mitigate short signal outages, for example by fusing RTK fixes with inertial dead reckoning.
Autonomous navigation system:
An autonomous navigation system combines several of these techniques. They can be summarised as follows:
Sensors: a combination of cameras, radar, ultrasonic sensors, and GPS gathers data about the robot's surroundings.
Visual Odometry: the robot's onboard computer uses feature detection and matching algorithms to track the movement of the camera and estimates the robot's position and orientation from the motion of those features across images.
Inertial Measurement: data from accelerometers and gyroscopes is integrated to estimate the robot's linear and angular velocity, as well as its orientation.
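A minimal sketch of that integration, assuming a planar robot and simple Euler integration (real IMU data is biased and noisy, which is why pure dead reckoning drifts and must be fused with other sensors):

```python
import numpy as np

def integrate_imu(pose, vel, gyro_z, accel_body, dt):
    """One Euler-integration step of a planar IMU model.

    pose: [x, y, heading], vel: [vx, vy] in the world frame
    gyro_z: yaw rate (rad/s), accel_body: [ax, ay] in the body frame
    """
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    # rotate body-frame acceleration into the world frame
    accel_world = np.array([c * accel_body[0] - s * accel_body[1],
                            s * accel_body[0] + c * accel_body[1]])
    vel = vel + accel_world * dt
    pose = np.array([x + vel[0] * dt, y + vel[1] * dt, th + gyro_z * dt])
    return pose, vel

pose, vel = np.zeros(3), np.zeros(2)
dt = 0.01
for _ in range(100):   # 1 s of constant forward acceleration, no turning
    pose, vel = integrate_imu(pose, vel, gyro_z=0.0,
                              accel_body=np.array([1.0, 0.0]), dt=dt)
```

After one second at 1 m/s² the integrated velocity is 1 m/s; the position carries a small Euler-integration overshoot, a reminder that even noise-free integration has error that compounds over time.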
Sensor Fusion: data from the different sensor modules, including visual odometry and inertial measurements, is combined, typically with a Kalman filter or a particle filter, to produce a more accurate estimate of the robot's position and orientation and a comprehensive picture of its state.
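A sketch of such a fusion filter: a 1D constant-velocity Kalman filter that combines a position fix (as visual odometry might supply) with a velocity reading (as wheel encoders or an IMU might supply). All matrices and noise values are illustrative assumptions.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.eye(2)                            # we observe [position, velocity]
Q = np.diag([1e-4, 1e-3])                # process noise
R = np.diag([0.05, 0.02])                # measurement noise

x = np.zeros(2)                          # state: [position, velocity]
P = np.eye(2)

rng = np.random.default_rng(2)
true_pos, true_vel = 0.0, 1.0
for _ in range(50):
    true_pos += true_vel * dt
    z = np.array([true_pos + rng.normal(0, 0.05),   # noisy position fix
                  true_vel + rng.normal(0, 0.02)])  # noisy velocity reading
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
```

The fused estimate tracks the true state more tightly than either noisy measurement alone, because the motion model and both sensors each constrain the state.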
High Definition Maps: high-definition maps provide precise information about the location, the layout of the roads, and the positions of static objects such as traffic signals, road signs, and buildings. This information is used to plan the path and to detect and respond to changing conditions.
Computer Vision: computer vision algorithms analyze the data from the cameras and detect objects and obstacles. The system uses this information to understand the environment and plan a safe path to follow.
Machine Learning: machine learning algorithms help to improve understanding of the environment and the ability to respond to changing conditions. The system uses data from previous missions to learn about different scenarios and to improve its ability to make decisions.
Motion Planning: motion planning algorithms are used to plan the path based on the data from the sensors, the high-definition maps, and the machine learning models. The system uses this information to plan a safe and efficient path to follow.
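As one concrete (and deliberately simple) instance of motion planning, here is A* search on a small occupancy grid with a Manhattan-distance heuristic. Production planners typically work in continuous configuration spaces with algorithms such as RRT or lattice planners, but grid A* captures the core idea of heuristic-guided path search.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a grid; cells with 1 are obstacles."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                heapq.heappush(open_set, (g + 1 + h((r, c)), g + 1,
                                          (r, c), path + [(r, c)]))
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],    # a wall forces a detour through column 3
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

The heuristic never overestimates the remaining cost, so the first time the goal is popped, the returned path is optimal; in a real system the grid would come from the fused sensor map described above.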
In summary, an autonomous navigation system uses various techniques which work together to provide a comprehensive understanding of the robot's environment, plan a safe path to follow, and respond to changing conditions.
Technology at Kanan: This note is an attempt to briefly summarise different technologies involved in an autonomous mobile robot. If you are passionate about working on such technologies, Kanan is the place where you will get all the freedom to explore different avenues of these technologies for a greater societal cause.
My promise: your great-grandchildren will remember and recount your contribution to this incredible cause with pride! I believe in building new territories which will make the world a better place.
P.S.: If you are thinking of reaching out to me, I'd suggest doing it only if the scale and magnitude of the impact impress you. ;)
Thank you for reading.


