How to Train Your Dragon Autonomous Vehicle?
The ability to make decisions on its own is the core concept behind an autonomous vehicle. To make those decisions, the vehicle must understand its current environment, determine the action required in real-life scenarios, and anticipate the outcomes of a decision before acting on it. A vehicle achieves this by integrating machine learning and artificial intelligence (AI) into its onboard brain.
AI works efficiently with a predefined set of rules: what actions should be taken, when should they be taken, and how should they be prioritized? These decisions are critical if an autonomous vehicle is to avoid unforeseen circumstances, let alone endanger a life. Computer chess is a good analogy for teaching a machine to make individual decisions: a chess engine follows a hard-coded set of rules and is preprogrammed to make certain decisions. Machine learning, by contrast, is a tool for learning intricate patterns and making decisions from preset examples, such as detecting pedestrians in camera images. Such a system relies on key sensor technologies (LIDAR, cameras, and radar) to measure distances precisely.
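The rule-based side of this can be sketched in a few lines. The rules, priorities, and sensor readings below are illustrative assumptions, not a real driving stack; the point is only to show "what action, when, and with what priority" expressed as code.

```python
# A minimal sketch of rule-based decision making, in the spirit of the
# chess-engine analogy. Each rule is (priority, condition, action);
# a lower priority number means a more urgent action.

def decide(readings):
    """Return the highest-priority action whose condition fires."""
    rules = [
        (0, lambda r: r["pedestrian_ahead"], "emergency_brake"),
        (1, lambda r: r["distance_to_lead_m"] < 10, "brake"),
        (2, lambda r: r["speed_bump_ahead"], "slow_down"),
        (3, lambda r: True, "maintain_speed"),  # default fallback
    ]
    # Evaluate rules in priority order; the first match wins.
    for priority, condition, action in sorted(rules, key=lambda t: t[0]):
        if condition(readings):
            return action

print(decide({"pedestrian_ahead": False,
              "distance_to_lead_m": 8,
              "speed_bump_ahead": True}))  # → brake
```

The priority ordering matters: both the close lead vehicle and the speed bump conditions are true here, but braking outranks slowing down, so the more urgent action is chosen.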
Driver safety and assistance systems provide a certain level of autonomy, but they do not paint the complete picture of autonomous driving. Autonomy will instead arrive gradually, one part of the system at a time. Automo expects that it will take another decade for fully autonomous vehicles to become a reality.
To understand autonomous driving, it helps to distinguish between different cases: driving on the highway and urban mobility are completely different scenarios, and the algorithms involved, along with the computer's processing time, change accordingly. With a growing number of sensors and connectivity hardware, every vehicle is becoming part of a fleet, able to connect with other vehicles. Autonomous driving is ultimately about operating in a complex environment and reasoning about situations appropriately. Much like a human driver, the vehicle must anticipate activity on the street: pedestrians crossing, ongoing maintenance work, or a speed bump ahead that calls for reduced speed.
If a crossing pedestrian makes eye contact with the moving vehicle, he or she will usually stop. In another scenario, a pedestrian crosses the street while looking at a mobile phone; a human driver will recognize the situation and apply the brakes. Situations like these are very difficult for a vehicle to interpret and handle appropriately. This is fundamentally a human behavioral problem, which makes it very hard to teach an autonomous vehicle about real-world scenarios.
The real challenge is the mixed environment. Suppose the vehicle supports vehicle-to-vehicle communication; that solves the problem only for the similarly equipped vehicles currently in operation. In real life, however, the road is shared with vehicles that lack communication features, with reckless motorcycle riders, and of course with pedestrians. To commercialize the autonomous vehicle, it must therefore be trained in a mixed environment.
A child becomes an adult at 18 years of age, and by that time he or she has gathered a lifetime of cultural data about humans and their cognitive behavior. Imagine the amount of data it would take for a car to make decisions like an adult driver. The upside is that once someone has collected data that lets vehicles decide autonomously, that data can be shared with other operators.
So the first thing needed to train the vehicle is a large amount of human annotation to feed into the computer. If we want the computer to see two pedestrians in an image, we must annotate them so it can understand what two pedestrians crossing the street look like. Once every pedestrian in an image is marked, the computer knows what to detect and how to detect it to reach the desired outcome. Gradually, automakers will arrive at unsupervised learning, where the computer no longer needs human assistance for annotation.
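The annotate-then-learn loop described above can be illustrated with a toy example. The feature representation here is a deliberate simplification we invented for the sketch: each image crop is reduced to two made-up numbers (an aspect ratio and a symmetry score), and humans have labeled each crop "pedestrian" or "background". A real detector would learn from raw pixels with a deep network; the nearest-centroid rule below only shows how labeled examples drive the decision.

```python
# Toy supervised learning from human annotations: fit one centroid per
# label, then classify new crops by the nearest centroid.

def centroid(points):
    """Component-wise mean of a list of feature tuples."""
    return tuple(sum(c) / len(points) for c in zip(*points))

def train(annotated):
    """Group annotated examples by label and compute each label's centroid."""
    by_label = {}
    for features, label in annotated:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def classify(model, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))

# Human annotations: (aspect_ratio, symmetry) -> label. Values are invented.
annotated = [
    ((2.5, 0.9), "pedestrian"), ((2.2, 0.8), "pedestrian"),
    ((0.6, 0.3), "background"), ((0.8, 0.2), "background"),
]
model = train(annotated)
print(classify(model, (2.4, 0.85)))  # → pedestrian
```

The key point is that the computer's notion of "pedestrian" comes entirely from the human-marked examples; more (and more varied) annotations directly improve what it can detect.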
Culture tends to shape itself around new technology: people learn about the advancement and change their perception of it. Once autonomous vehicles are commercialized, there will be a certain way in which humans behave around them and pedestrians cross the road. We at Automo expect that pedestrian movements will become more deliberate and careful, which will in turn make it easier for autonomous vehicles to move through urban areas. Conversely, an autonomous vehicle can also teach people about itself by making its decisions more obvious. Automo expects that external interfaces should be built for autonomous vehicles, with acoustic, visual, and light-based cues. These will help pedestrians, and anyone riding alongside the vehicle, understand its current decision and drive or cross the road with due caution.
One of the good things is that the car will not inherit an Indian mindset of driving, in which flashing the headlights asks others to wait and insists it is the flashing driver's turn to pass; other countries attach a completely different meaning to a flash of the headlights.