The solution uses a module that incorporates more data to perceive the surrounding environment.
A research team from the Toyohashi University of Technology has developed an artificial intelligence model for self-driving vehicles that combines both the perception and control modules and runs them simultaneously.
Most self-driving vehicles have several subsystems, each handling a specific task. However, according to the research team behind this AI model, that approach can be costly and inefficient, and the manual parameter adjustment it requires can lead to information loss.
One of the issues the team ran into was deciding what data to feed the control module, as the sensors can collect and identify a great deal of inconsequential data while driving. The solution was to provide the perception module with more data about its surrounding environment and to add a sensor fusion technique to improve performance.
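The study does not detail its fusion method, but a common feature-level approach is to normalize each sensor's features and concatenate them before passing them to the perception module. The sketch below illustrates that idea only; the function names, shapes, and the choice of a second "depth" sensor are assumptions, not drawn from the paper.

```python
import numpy as np

def normalize(features: np.ndarray) -> np.ndarray:
    """Scale a feature vector to zero mean and unit variance
    so no single sensor dominates by raw magnitude."""
    std = features.std()
    return (features - features.mean()) / (std if std > 0 else 1.0)

def fuse(camera_features: np.ndarray, depth_features: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate normalized per-sensor features
    into one vector for the downstream perception module."""
    return np.concatenate([normalize(camera_features), normalize(depth_features)])

# Hypothetical per-sensor feature vectors (illustrative values only)
camera = np.array([0.2, 0.8, 0.5])   # e.g. image-derived features
depth = np.array([12.0, 3.5, 7.1])   # e.g. range-derived features
fused = fuse(camera, depth)
print(fused.shape)  # (6,)
```

Normalizing before concatenation matters because raw sensor outputs live on very different scales (pixel intensities versus meters, for instance).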
Another issue the team noted was imbalanced learning during training, caused by the two modules running simultaneously. To address this, the team built the AI model in an end-to-end, multi-task manner, ensuring that all tasks are weighted equally and no single module is over-prioritized.
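In multi-task training, the usual way to keep one task from dominating is to combine the per-task losses with explicit weights. The minimal sketch below assumes equal weighting of a perception loss and a control loss; the paper's actual loss terms and weights are not specified here.

```python
def multi_task_loss(perception_loss: float,
                    control_loss: float,
                    w_perception: float = 0.5,
                    w_control: float = 0.5) -> float:
    """Combine per-task losses into one training objective.
    Equal weights (assumed here) keep both modules learning at
    a similar rate, so neither is over-prioritized."""
    return w_perception * perception_loss + w_control * control_loss

# Illustrative loss values for one training step
total = multi_task_loss(perception_loss=0.8, control_loss=0.4)
print(total)  # 0.6
```

Because the combined loss is optimized end to end, gradients flow through both modules jointly rather than each being tuned in isolation.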
So far, the research team has only evaluated the combined AI model in simulation. With the publication of this study, the team hopes to add a lidar sensor to gather more data. The sensor would also allow tests to be conducted in all weather conditions, a known limitation of camera-only sensing.
The team also plans to test the AI model in real-world driving in the near future. Japan is ahead of the US and Europe in the general use of self-driving vehicles for testing and commercial sale, with autonomous vehicles rated at “level 3” already permitted on roads. In May 2023, a new law will permit level 4 vehicles, which can operate fully autonomously in most road conditions.