Autonomous vehicles require several integrated systems to operate safely and efficiently in their environment. This article will briefly cover the fundamentals of the guidance, navigation and control systems (sometimes referred to as GNC) of an autonomous vehicle.
This is an example of a map generated by a LiDAR, a laser sensor typically used to identify objects in the vehicle's environment.
The navigation system is arguably one of the most critical components of an autonomous vehicle, and it also happens to be one of the most complex. To put it simply, it is the eyes and ears of the vehicle. Its purpose is to collect data from the on-board sensors and process it so that it can generate an accurate virtual map of all the objects around the vehicle. It also uses sensor information to work out where the vehicle is on this map in relation to the objects around it.
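As a rough sketch of the mapping idea (not the actual system described here), the snippet below turns a set of simulated laser range readings into a simple occupancy grid: each beam that hits something marks the corresponding grid cell as occupied. The function name, cell size and scan format are all illustrative assumptions.

```python
import math

def update_occupancy_grid(grid, pose, ranges, angle_step, max_range, cell=0.5):
    """Mark grid cells hit by range returns as occupied.

    grid: dict mapping (ix, iy) cell indices -> occupied flag
    pose: (x, y, heading) of the vehicle in metres / radians
    ranges: one distance reading per beam; max_range means "no hit"
    """
    x, y, heading = pose
    for i, r in enumerate(ranges):
        if r >= max_range:                   # nothing detected along this beam
            continue
        angle = heading + i * angle_step
        ox = x + r * math.cos(angle)         # obstacle position in the world frame
        oy = y + r * math.sin(angle)
        grid[(int(ox // cell), int(oy // cell))] = True
    return grid

# A single simulated scan: an obstacle 2 m directly ahead of a robot at the origin
grid = update_occupancy_grid({}, (0.0, 0.0, 0.0), [2.0, 10.0, 10.0], math.pi / 2, 10.0)
print(grid)   # {(4, 0): True}
```

Real systems fuse many such scans over time (and handle sensor noise), but the core idea of accumulating detections into a map of the surroundings is the same.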
The control system takes the information about the world and the vehicle's position from the navigation system, as well as the waypoints the guidance system outputs, and calculates the forces it needs to exert on the vehicle to achieve its objective. It then works out what signals to send to the vehicle's motors to move the vehicle through those waypoints, and executes those commands. Overall, all of these systems need to work together in a loop, passing information to one another, so that the vehicle can not only react to new information from its environment but also correct any errors that occur in the above processes.
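To make the control step concrete, here is a minimal proportional controller for a ground robot, sketched under assumed names and gains (this is an illustration, not the controller of any particular vehicle): given the current pose and the next waypoint, it outputs a forward speed and a turn rate.

```python
import math

def waypoint_controller(pose, waypoint, k_lin=0.5, k_ang=2.0):
    """Compute (forward speed, turn rate) steering the vehicle toward a waypoint.

    pose: (x, y, heading) in metres / radians; waypoint: (x, y).
    k_lin and k_ang are illustrative proportional gains.
    """
    x, y, heading = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    distance = math.hypot(dx, dy)                    # how far to the waypoint
    heading_error = math.atan2(dy, dx) - heading     # how far off we are pointing
    # Wrap the error to [-pi, pi] so the robot takes the shorter turn
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    return k_lin * distance, k_ang * heading_error   # speed and turn-rate commands

# Robot at the origin facing +x, waypoint 3 m straight ahead: drive forward, no turning
v, w = waypoint_controller((0.0, 0.0, 0.0), (3.0, 0.0))
print(v, w)   # 1.5 0.0
```

Running this in a loop (recompute commands from the latest navigation estimate, send them to the motors, repeat) is exactly the feedback cycle described above: each pass corrects whatever error crept in on the previous one.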
This image is a representation of what a guidance system does. The black circles are obstacles, the bottom-left position (Xi, Yi) is the robot's initial position, and WP4 represents the robot's end goal.
The guidance system takes the desired behaviour of the vehicle (i.e. getting from point A to point B) and works out the safest, most efficient way to guide the vehicle to its objective. It takes the information the navigation system outputs (a map of the real world and where the vehicle is on that map) and, using pre-programmed behaviour, works out the steps the vehicle needs to take to get to point B. It usually breaks this down into a set of waypoints the vehicle can follow towards the objective. This is very similar to the pathfinding algorithms used in games like Age of Empires that move a unit from one part of the map to another when you click on the screen. The guidance system is also responsible for higher-level decisions, such as deciding what to do if an obstacle suddenly appears in the vehicle's path.
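A classic planner used for this kind of game-style pathfinding is A* search. The sketch below (an assumed, simplified grid world, not any production planner) finds a list of waypoints around obstacles from a start cell to a goal cell.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid; 1 = obstacle, 0 = free cell."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]               # (f, cost, node, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path                       # the waypoint list from start to goal
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None                               # goal unreachable

# A wall (the middle column) forces the path to detour through the bottom row
world = [[0, 1, 0],
         [0, 1, 0],
         [0, 0, 0]]
print(astar(world, (0, 0), (0, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
```

The returned cell sequence plays the role of the waypoints WP1..WP4 in the figure above: the control system then drives the vehicle from one to the next.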
Together, these components act as a 'brain' for an autonomous vehicle and, once programmed and configured correctly, can perform tasks without the need for human operation or supervision. As these systems become more sophisticated, autonomous vehicles will be able to perform more and more complex tasks. Tune in next time as we examine one of these components in depth.
"How Does Autonomous Technology Work?" by Luke De Bono, 2018-05-15