Abstract
Navigation is known to be a hard sequential decision-making problem that attracts attention from many fields, such as Artificial Intelligence and Robotics. In this work, we approach the problem of partially observable navigation with a reactive system trained by model-free Reinforcement Learning. The main advantage of this learned approach is a reduction in engineering effort, at the cost of more computing power during training. We design an agent and an environment with a focus on navigating independently of the map. We use well-tested, general-purpose Reinforcement Learning algorithms without any hyper-parameter tuning and achieve promising results: several of these algorithms reach the target in our navigation setup in more than 85% of the episodes. Hence, such algorithms may provide a significant step towards autonomous navigation systems.
| Field | Value |
|---|---|
| International | Yes |
| Congress | 2019 27th European Signal Processing Conference (EUSIPCO) |
| Place | A Coruña |
| Reviewers | Yes |
| ISBN/ISSN | 978-9-0827-9703-9 |
| DOI | 10.23919/EUSIPCO.2019.8902933 |
| Start Date | 02/09/2019 |
| End Date | 06/09/2019 |
| From page | 1 |
| To page | 5 |
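The record above describes the approach only at the level of the abstract; no code accompanies it. As a rough, hypothetical sketch of the kind of setup the abstract describes (a partially observable navigation task solved by an off-the-shelf model-free algorithm with default hyper-parameters), the snippet below trains PPO from Stable-Baselines3 on a toy grid environment where the agent sees only a local window rather than the full map. The environment, its 3x3 observation window, the reward shaping, and the choice of PPO are illustrative assumptions, not the authors' actual agent, environment, or algorithms.

```python
# Minimal sketch (not the paper's code): a partially observable grid
# navigation environment plus off-the-shelf model-free RL training.
# Assumes gymnasium and stable-baselines3 are installed; the layout,
# observation window, and reward shaping are illustrative choices.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class PartialGridNav(gym.Env):
    """Agent observes only a 3x3 window around itself, not the full map."""

    def __init__(self, size=8, max_steps=100):
        super().__init__()
        self.size, self.max_steps = size, max_steps
        self.action_space = spaces.Discrete(4)                 # up, down, left, right
        self.observation_space = spaces.Box(0.0, 1.0, (3, 3), np.float32)

    def _obs(self):
        # Local 3x3 window: 1.0 marks the goal cell if it is visible.
        window = np.zeros((3, 3), dtype=np.float32)
        dy, dx = self.goal - self.pos
        if abs(dy) <= 1 and abs(dx) <= 1:
            window[dy + 1, dx + 1] = 1.0
        return window

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = self.np_random.integers(0, self.size, size=2)
        self.goal = self.np_random.integers(0, self.size, size=2)
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        moves = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]])
        self.pos = np.clip(self.pos + moves[action], 0, self.size - 1)
        self.steps += 1
        reached = np.array_equal(self.pos, self.goal)
        reward = 1.0 if reached else -0.01                     # small step penalty
        truncated = self.steps >= self.max_steps
        return self._obs(), reward, reached, truncated, {}


if __name__ == "__main__":
    env = PartialGridNav()
    model = PPO("MlpPolicy", env, verbose=0)                   # default hyper-parameters
    model.learn(total_timesteps=50_000)

    # Query the trained reactive policy from a fresh local observation.
    obs, _ = env.reset()
    action, _ = model.predict(obs, deterministic=True)
```

The design choice this sketch tries to mirror is the abstract's emphasis on map independence: the policy only ever receives a local observation, so the same agent can in principle be dropped into a differently sized or shaped map without retraining the observation interface.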