The package provides AI-based control of a crane system using reinforcement learning, developed at DNV AS.
The primary goal is to solve the anti-pendulum problem: training an agent to dampen (or start) the swing of a load hanging from a mobile crane, using only horizontal crane acceleration as the control input.
- `AntiPendulumEnv` — the main environment. A mobile crane with a swinging load modelled via real crane physics (the `crane-controller` library). The agent controls horizontal crane acceleration and must either start or stop the pendulum motion.
  - Observation: crane x-position, crane x-velocity, load polar angle, load x-velocity
  - Actions: Discrete(3) — accelerate left / coast / accelerate right
  - Modes: start (build pendulum energy) or stop (dampen swing)
- `ControlledCraneEnv` — a more general mobile crane environment for future work.
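To build intuition for the anti-pendulum task, here is a dependency-free sketch of a pendulum hanging from a horizontally accelerating support, driven by a bang-bang "start" policy. The equation of motion, the Euler integrator, and all constants are illustrative assumptions; the real dynamics come from the `crane-controller` library.

```python
import math

def pendulum_step(theta, omega, a_crane, dt=0.01, g=9.81, length=10.0):
    """One explicit-Euler step for a load on a cable whose support
    accelerates horizontally by a_crane; theta is measured from vertical."""
    alpha = -(g / length) * math.sin(theta) - (a_crane / length) * math.cos(theta)
    return theta + omega * dt, omega + alpha * dt

# Bang-bang "start" policy: accelerate against the load's angular
# velocity, which pumps energy into the swing and grows the amplitude.
theta, omega, peak = 0.05, 0.0, 0.0
for _ in range(1000):
    a = -math.copysign(1.0, omega) if omega else 1.0
    theta, omega = pendulum_step(theta, omega, a)
    peak = max(peak, abs(theta))
```

Flipping the sign of the policy (accelerating with the angular velocity) dampens the swing instead, which is essentially the idea behind the "stop" mode.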
Three RL algorithms are implemented, each as a self-contained agent class:
- PPO (`ppo_agent.py`) — Proximal Policy Optimization via `stable-baselines3`. Supports vectorized environments for faster training. Models are saved as `.zip` files.
- Q-Learning (`q_agent.py`) — tabular Q-learning with epsilon-greedy exploration. Uses a discretized observation space. Q-tables are saved/loaded as JSON for incremental training.
- AlgorithmAgent (`algorithm.py`) — brute-force search over all 81 hand-coded strategies (3⁴ combinations). Useful as a baseline.
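For reference, the tabular Q-learning update used by agents of this kind can be sketched in a few lines. The table layout, hyperparameters, and state keys below are illustrative assumptions, not the actual `q_agent.py` internals.

```python
import random
from collections import defaultdict

ACTIONS = (0, 1, 2)  # accelerate left / coast / accelerate right

def make_table():
    # one row of action-values per discretized state
    return defaultdict(lambda: [0.0] * len(ACTIONS))

def epsilon_greedy(Q, state, epsilon, rng):
    """Explore with probability epsilon, otherwise pick the greedy action."""
    if rng.random() < epsilon:
        return rng.randrange(len(ACTIONS))
    return max(ACTIONS, key=lambda a: Q[state][a])

def q_update(Q, s, a, r, s_next, done, alpha=0.1, gamma=0.99):
    """Q(s,a) <- Q(s,a) + alpha * (TD target - Q(s,a))."""
    target = r if done else r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (target - Q[s][a])

Q = make_table()
q_update(Q, "s0", 2, 1.0, "s1", done=True)             # Q["s0"][2] moves toward 1.0
best = epsilon_greedy(Q, "s0", 0.0, random.Random(0))  # greedy pick
```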
Generic Gymnasium wrappers (from the Farama Foundation examples) are included for reference:
- `ClipReward` — clips immediate rewards to a valid range
- `DiscreteActions` — restricts the action space to a finite subset
- `RelativePosition` — computes relative position between agent and target
- `ReacherRewardWrapper` — weights multiple reward terms
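The clipping idea is simple enough to show without gymnasium installed. Below is a dependency-free stand-in that mirrors the shape of a gymnasium reward wrapper; the real `ClipReward` in this repo follows the Farama examples instead.

```python
class ClipReward:
    """Stand-in reward wrapper: clips each immediate reward to a range."""

    def __init__(self, env, min_reward=-1.0, max_reward=1.0):
        self.env = env
        self.min_reward = min_reward
        self.max_reward = max_reward

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        clipped = max(self.min_reward, min(self.max_reward, reward))
        return obs, clipped, terminated, truncated, info

class FakeEnv:
    """Minimal env returning a deliberately out-of-range reward."""

    def step(self, action):
        return None, 5.0, False, False, {}

wrapped = ClipReward(FakeEnv())
_, reward, *_ = wrapped.step(0)  # reward is clipped from 5.0 down to 1.0
```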
Two classic Gymnasium environments were used as stepping stones when developing this project:
- GridWorldEnv — minimal grid navigation, ideal for learning the Gymnasium API. See the environment creation tutorial and the Gymnasium examples repo.
- CartPoleEnv — cart-pole balancing, useful for verifying RL algorithms before applying them to the crane. Available via `gymnasium.make("CartPole-v1")`.
Install the crane physics library:

```
pip install crane-controller
```

Install dependencies and run the test suite with uv:
```
uv run pytest tests/ -v
```

Test files are organised by algorithm:

- `tests/test_environment.py` — environment and observation space tests
- `tests/test_algorithm.py` — brute-force algorithm tests
- `tests/test_q.py` — Q-learning smoke and analysis tests
- `tests/test_ppo.py` — PPO training, VecNormalize, and inference tests
Tests are suitable for CI/CD — no plot windows are produced.
PPO:

```
uv run python scripts/train_ppo.py
```

Key options:

- `--steps N` — total training timesteps (default: 100 000)
- `--n-envs N` — number of parallel environments (default: 4)
- `--save-path PATH` — where to write the trained model (default: `models/ppo_AntiPendulumEnv.zip`)
- `--resume-from PATH` — continue training from a saved checkpoint; preserves VecNormalize statistics and the learning rate schedule
- `--dry-run` — run 1 000 steps with a live reward-tracking plot and no model saved
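Stable-baselines3 accepts a callable for its `learning_rate` parameter, taking `progress_remaining` (1.0 at the start of training, 0.0 at the end), which is presumably the schedule form that `--resume-from` has to preserve. A minimal linear schedule looks like:

```python
def linear_schedule(initial_lr):
    """Return a stable-baselines3-style schedule: the learning rate decays
    linearly from initial_lr (progress_remaining=1.0) to 0 (progress_remaining=0.0)."""
    def schedule(progress_remaining):
        return progress_remaining * initial_lr
    return schedule

lr = linear_schedule(3e-4)
start, middle, end = lr(1.0), lr(0.5), lr(0.0)
```

Plain Python closures serialize poorly, which is one reason resuming has to reconstruct the schedule rather than load it from the checkpoint.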
Q-learning:

```
uv run python scripts/train_q.py
```

Key options:

- `--episodes N` — total training episodes (default: 10 000)
- `--v0 F` — initial crane speed; negative = stop mode, positive = start mode (default: `-1.0`)
- `--reward-limit F` — per-episode termination threshold (default: `-0.05`)
- `--save-path PATH` — where to write the Q-table (default: `models/q_AntiPendulumEnv.json`)
- `--trained PATH` — continue training from an existing Q-table JSON
- `--intervals N` — run interval training: N rounds of 10 episodes each
- `--dry-run` — run 50 episodes with a reward plot and no model saved
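Saving a Q-table as JSON requires string keys, since JSON objects cannot be keyed by tuples. A sketch of the round-trip behind `--save-path` and `--trained` (the actual on-disk format of `train_q.py` may differ):

```python
import json
import os
import tempfile

def save_q(q_table, path):
    """Serialize a tuple-keyed Q-table as JSON with comma-joined string keys."""
    with open(path, "w") as f:
        json.dump({",".join(map(str, k)): v for k, v in q_table.items()}, f)

def load_q(path):
    """Parse the string keys back into integer tuples."""
    with open(path) as f:
        raw = json.load(f)
    return {tuple(int(x) for x in k.split(",")): v for k, v in raw.items()}

q = {(0, 1, 2, 0, 1): [0.0, 0.5, -0.2]}
path = os.path.join(tempfile.mkdtemp(), "q.json")
save_q(q, path)
restored = load_q(path)  # identical table, ready for incremental training
```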
Run a trained agent visually. Both scripts accept `--render-mode` with the following options:

- `plot` — 4-panel figure per episode (load angle, crane position/speed, rewards)
- `play-back` — animated crane trajectory after each episode
- `reward-tracking` — live reward line plot updating every step
PPO (default render-mode: `play-back`):

```
uv run python scripts/play_ppo.py --model-path models/ppo_AntiPendulumEnv.zip
uv run python scripts/play_ppo.py --model-path models/ppo_AntiPendulumEnv.zip --render-mode plot --episodes 3
```

Q-learning (default render-mode: `plot`):

```
uv run python scripts/play_q.py --model-path models/q_AntiPendulumEnv.json
uv run python scripts/play_q.py --model-path tests/anti-pendulum.json --render-mode play-back --episodes 3
```

Inspect a trained Q-table without running the environment:
```
uv run python scripts/analyse_q.py --model-path tests/anti-pendulum.json
```

Prints per-pos/speed average Q-values for a quick sanity check. To drill into specific states, use `--obs` with 5 integers (use `-1` as a wildcard):

```
uv run python scripts/analyse_q.py --model-path tests/anti-pendulum.json --obs -1 0 0 -1 -1
```

The five observation dimensions are: `[energy, pos, speed, distance, sector]`.
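The `-1` wildcard amounts to a simple element-wise match over the five discretized dimensions. A hypothetical sketch of how such filtering might work (not the actual `analyse_q.py` code):

```python
def matches(state, pattern):
    """True if each pattern entry equals the state entry or is -1 (wildcard)."""
    return all(p == -1 or p == s for s, p in zip(state, pattern))

# states are [energy, pos, speed, distance, sector] tuples
states = [(0, 0, 0, 1, 2), (1, 0, 0, 3, 0), (1, 1, 0, 0, 0)]
pattern = (-1, 0, 0, -1, -1)  # any energy, pos=0, speed=0, any distance/sector
selected = [s for s in states if matches(s, pattern)]
```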
This project uses uv as its package manager.
If you haven't already, install uv, preferably using its standalone installer:

..on Windows:

```
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```

..on macOS and Linux:

```
curl -LsSf https://astral.sh/uv/install.sh | sh
```

(See docs.astral.sh/uv for all alternative installation methods.)
Once installed, you can update uv to its latest version at any time by running:

```
uv self update
```
Clone the crane-controller repository into your local development directory:
```
git clone https://github.com/dnv-opensource/crane-controller path/to/your/dev/crane-controller
```

Change into the project directory after cloning:

```
cd crane-controller
```

Run `uv sync -U` to create a virtual environment and install all project dependencies into it:

```
uv sync -U
```

Note: Using `--no-dev` will omit installing development dependencies.

Explanation: The `-U` option stands for `--upgrade`. It forces uv to fetch and install the latest versions of all dependencies, ensuring that your environment is up-to-date.

Note: uv will create a new virtual environment called `.venv` in the project root directory when running `uv sync -U` the first time. Optionally, you can create your own virtual environment using e.g. `uv venv` before running `uv sync -U`.
When using uv, there is in almost all cases no longer a need to manually activate the virtual environment.
uv will find the .venv virtual environment in the working directory or any parent directory, and activate it on the fly whenever you run a command via uv inside your project folder structure:
```
uv run <command>
```

However, you can still manually activate the virtual environment if needed. When developing in an IDE, for instance, this can in some cases be necessary depending on your IDE settings. To manually activate the virtual environment, run one of the "known" legacy commands:

..on Windows:

```
.venv\Scripts\activate.bat
```

..on Linux:

```
source .venv/bin/activate
```

The .pre-commit-config.yaml file in the project root directory contains a configuration for pre-commit hooks.
To install the pre-commit hooks defined therein in your local git repository, run:
```
uv run pre-commit install
```

All pre-commit hooks configured in .pre-commit-config.yaml will now run each time you commit changes.
pre-commit can also be invoked manually at any time using:

```
uv run pre-commit run --all-files
```

To skip the pre-commit validation on commits (e.g. when intentionally committing broken code), run:

```
uv run git commit -m <MSG> --no-verify
```

To update the hooks configured in .pre-commit-config.yaml to their newest versions, run:

```
uv run pre-commit autoupdate
```

To test that the installation works, run pytest in the project root folder:
```
uv run pytest
```

Copyright (c) 2026 DNV AS. All rights reserved.
Siegfried Eisinger - @LinkedIn - siegfried.eisinger@dnv.com
Aleksandar Babic - @LinkedIn - aleksandar.babic@dnv.com
Distributed under the MIT license. See LICENSE for more information.
https://github.com/dnv-opensource/crane-controller
- Fork it (https://github.com/dnv-opensource/crane-controller/fork/)
- Create an issue in your GitHub repo
- Create your branch based on the issue number and type (`git checkout -b issue-name`)
- Evaluate and stage the changes you want to commit (`git add -i`)
- Commit your changes (`git commit -am 'place a descriptive commit message here'`)
- Push to the branch (`git push origin issue-name`)
- Create a new Pull Request in GitHub
For your contribution, please make sure you follow the STYLEGUIDE before creating the Pull Request.