Topology Optimization In Neural Networks For Deep Reinforcement Learning

This work proposes the use of the SAE CollabNet network as a Deep Q-Network (DQN) agent for controlling robots in obstacle-filled environments characterized by partial information. SAE CollabNet is employed to extract latent representations of the robot’s state from sensory inputs, including proximity sensors, partial maps, and positioning information, functioning as an approximation of the action-state value function. The network’s constructive architecture allows the incremental insertion of new layers, promoting modular and adaptive learning, with each branch able to focus on a different aspect of the state, such as obstacles, trajectory, or relative position to the destination. The resulting agent learns to select actions that maximize the accumulated reward, guiding the robot to the destination efficiently while avoiding collisions. Experiments conducted in simulated environments show that the approach improves learning efficiency and policy stability compared to traditional DQNs. The results indicate that latent representations learned in a modular way may be a promising strategy for the autonomous control of robots in complex navigation tasks.
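The abstract describes a Q-network whose branches are inserted incrementally, with each branch specializing on one aspect of the state and the agent acting greedily on the combined Q-values. A minimal sketch of that idea is below; SAE CollabNet's internals are not given in the abstract, so the `ConstructiveQNet` class, its layer sizes, and the random-feature branches are illustrative assumptions — only the general scheme (summing the outputs of incrementally added branches to approximate Q-values, with epsilon-greedy action selection) follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

class ConstructiveQNet:
    """Illustrative Q-network whose hidden branches are added incrementally.

    Each branch maps the robot state (e.g. proximity-sensor readings,
    partial-map features, positioning information) to Q-values; branch
    outputs are summed, so a newly inserted branch can specialize on a
    different aspect of the state (obstacles, trajectory, goal position).
    """

    def __init__(self, state_dim, n_actions):
        self.state_dim = state_dim
        self.n_actions = n_actions
        self.branches = []          # list of (W_hidden, W_out) pairs
        self.add_branch(hidden=16)  # start with a single branch

    def add_branch(self, hidden=16):
        # Hidden weights are random features; the output weights start at
        # zero so inserting a branch does not disturb the current policy.
        W_h = rng.normal(scale=0.1, size=(self.state_dim, hidden))
        W_o = np.zeros((hidden, self.n_actions))
        self.branches.append((W_h, W_o))

    def q_values(self, state):
        # Sum the Q-value contributions of all branches.
        q = np.zeros(self.n_actions)
        for W_h, W_o in self.branches:
            h = np.tanh(state @ W_h)
            q += h @ W_o
        return q

    def act(self, state, epsilon=0.1):
        # Epsilon-greedy selection over the combined Q-values.
        if rng.random() < epsilon:
            return int(rng.integers(self.n_actions))
        return int(np.argmax(self.q_values(state)))
```

In a full DQN training loop the branch weights would be fitted to the temporal-difference target; here the class only shows the constructive insertion and action-selection structure.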

Ataniel Silva Santos Segundo
Federal Institute of Maranhão
Brazil

Almir Souza E Silva
Federal Institute of Maranhão
Brazil

Francisco Dos Santos Viana
Federal Institute of Maranhão
Brazil

Areolino De Almeida Neto
Federal University of Maranhão
Brazil

Clisman Alvino Lopes
Federal Institute of Maranhão
Brazil