This repository contains the source code for the paper: Reinforcement Learning for Active Perception in Autonomous Navigation.
A video demonstrating the work is available here.
- **Install Isaac Gym and the Aerial Gym Simulator**

Follow the instructions provided in the respective repositories.
Before installing the Aerial Gym Simulator, you must modify the Isaac Gym installation. The argument parser in Isaac Gym may interfere with additional arguments required by other learning frameworks. To resolve this, modify line 337 of the `gymutil.py` file located in the `isaacgym` folder. Change the following line:

```python
args = parser.parse_args()
```

to:

```python
args, _ = parser.parse_known_args()
```
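The effect of this change can be seen in a minimal sketch (the `--sim_device` and `--experiment` flags below are illustrative, not the simulator's actual argument set): `parse_args()` aborts on flags it does not recognize, while `parse_known_args()` returns them separately so another framework's parser can consume them.

```python
import argparse

# Stand-in for Isaac Gym's parser; the flag names here are hypothetical.
parser = argparse.ArgumentParser()
parser.add_argument("--sim_device", default="cuda:0")

# parse_args() would exit with "unrecognized arguments" on --experiment;
# parse_known_args() instead splits off the flags it does not know.
args, unknown = parser.parse_known_args(
    ["--sim_device", "cuda:1", "--experiment", "testExperiment"]
)
print(args.sim_device)  # cuda:1
print(unknown)          # ['--experiment', 'testExperiment']
```

The unknown arguments are simply passed through, which is why the training commands later in this README can add their own `--env` and `--experiment` flags without crashing Isaac Gym's parser.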
- **Set up the environment**

Once the installation is successful, activate the `aerialgym` environment:

```bash
cd ~/workspaces/ && conda activate aerialgym
```
- **Clone this repository**

Clone the repository by running the following command:

```bash
git clone git@github.com:ntnu-arl/active-perception-RL-navigation.git
```
- **Install active-perception-RL-navigation**

Navigate to the cloned repository and install it using the following commands:

```bash
cd ~/workspaces/active-perception-RL-navigation/
pip install -e .
```
The standalone examples, along with a pre-trained RL policy, are available in the `examples` directory.
The ready-to-use policy (used in the work described in *Reinforcement Learning for Active Perception in Autonomous Navigation*) can be found under `examples/pre-trained_network`. These examples illustrate policy inference in a corridor-like environment at different levels of complexity, specifically with 10, 20, and 30 obstacles.
Run the following:

```bash
cd ~/workspaces/active-perception-RL-navigation/examples/
conda activate aerialgym
bash example_10obstacles.sh
```

You should now be able to observe the trained policy in action, performing a navigation task with an actively actuated camera sensor in the environment:

Demo: `simplescreenrecorder-2025-09-29_15.05.15.mp4`
Run the following:

```bash
cd ~/workspaces/active-perception-RL-navigation/examples/
conda activate aerialgym
bash example_20obstacles.sh
```

Demo: `simplescreenrecorder-2025-09-29_15.07.28.mp4`
Run the following:

```bash
cd ~/workspaces/active-perception-RL-navigation/examples/
conda activate aerialgym
bash example_30obstacles.sh
```

Demo: `simplescreenrecorder-2025-09-29_15.15.37.mp4`
To train your first active perception RL navigation policy, run:

```bash
conda activate aerialgym
cd ~/workspaces/active-perception-RL-navigation/
python -m rl_training.train_aerialgym --env=navigation_active_camera_task --experiment=testExperiment
```

By default, the number of environments is set to 1024. If your GPU cannot handle this load, reduce it by adjusting the `num_envs` parameter in `/src/config/task/navigation_active_camera_task_config.py`:

```python
num_envs = 1024
```

By default, the training environment contains 38 obstacles. You can modify this by editing the `num_assets` parameter in `/src/config/assets/env_object_config.py`:
```python
class object_asset_params(asset_state_params):
    num_assets = 35
```

and

```python
class panel_asset_params(asset_state_params):
    num_assets = 3
```

To load a trained checkpoint and perform inference only (no training), follow these steps:
- For clear visualization (to avoid rendering overhead), reduce the number of environments (e.g., to 16) and enable the viewer by modifying `/src/config/task/navigation_active_camera_task_config.py`.

From:

```python
num_envs = 512
use_warp = True
headless = True
```

To:

```python
num_envs = 16
use_warp = True
headless = False
```
- For a better view during inference, consider excluding the top wall from the corridor-like environments by modifying the `/src/config/env/env_active_camera_with_obstacles.py` file:

```python
"top_wall": False,  # excluding top wall
```
- Finally, execute the inference script with the following commands:

```bash
conda activate aerialgym
cd ~/workspaces/active-perception-RL-navigation/
python -m rl_training.enjoy_aerialgym --env=navigation_active_camera_task --experiment=testExperiment
```
The default viewer is set to follow the agent. To disable this feature and inspect other parts of the environment, press `F` on your keyboard.
If you use or reference this work in your research, please cite the following paper:
G. Malczyk, M. Kulkarni and K. Alexis, "Reinforcement Learning for Active Perception in Autonomous Navigation", Accepted to the IEEE International Conference on Robotics and Automation (ICRA) 2026
```bibtex
@article{malczyk2025reinforcement,
  title={Reinforcement Learning for Active Perception in Autonomous Navigation},
  author={Malczyk, Grzegorz and Kulkarni, Mihir and Alexis, Kostas},
  journal={arXiv preprint arXiv:2602.01266},
  year={2026}
}
```

For inquiries, feel free to reach out to the authors:
- Grzegorz Malczyk
- Mihir Kulkarni
- Kostas Alexis
This research was conducted at the Autonomous Robots Lab, Norwegian University of Science and Technology (NTNU).
For more information, visit our website.
This material was supported by the Research Council of Norway under Award NO-338694 and the Horizon Europe Grant Agreement No. 101119774.
Additionally, this repository incorporates code and helper scripts from the Aerial Gym Simulator.
