
For my master’s thesis I want to work on further developing the PyPSA-Eur energy system model for the Baltic Sea Region. Before I can work on the development I have to successfully run the model. I am running the PyPSA-Eur model by cloning the code from GitHub and running it in an Ubuntu WSL environment using Visual Studio Code, which solved the permission errors I had before. I have not made any changes to the existing code, and I installed all packages according to the environment.yaml file. The PyPSA-Eur repository on GitHub:

https://github.com/PyPSA/pypsa-eur

The error I keep encountering is an Error in rule build_renewable_profiles when running snakemake --cores all in the terminal. The error message indicates that the process running the build_renewable_profiles rule in my Snakemake pipeline was killed due to high memory usage, most likely because the worker process exceeded its memory limit.

I have tried increasing the memory limit by adding a script in the Snakefile to increase the memory limit of the Dask worker. Additionally, I tried to increase the memory limit of the WSL environment by adding a .wslconfig file in the user directory, specifying 8GB of memory. However, neither of these approaches has solved the error. My system has a total of 16GB of memory.
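For reference, a minimal .wslconfig of this kind looks roughly as follows; it belongs in the Windows user profile directory (%UserProfile%) and only takes effect after running wsl --shutdown:

[wsl2]
# cap on RAM available to the WSL2 VM; 8GB as described above
memory=8GB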

I kindly ask for help on how I could solve this type of error, as shown in the provided images. Solving this error would enable me to successfully run the PyPSA-Eur model and to continue working on developing the PyPSA-Eur energy system model for my master’s thesis.

Many thanks,
Stijn

First image of the full terminal message after running snakemake:
[image]

Second image of the full terminal message after running snakemake, including the error message:
[image]

The part of the terminal that includes the error message after running snakemake --cores all:

INFO:__main__:Calculate landuse availabilities...
INFO:__main__:Completed availability calculation (95.49s)
INFO:atlite.convert:Convert and aggregate 'pv'.
2023-12-27 14:44:34,179 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 1.38 GiB -- Worker memory limit: 1.91 GiB
2023-12-27 14:44:36,119 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 1.36 GiB -- Worker memory limit: 1.91 GiB
2023-12-27 14:44:37,196 - distributed.worker.memory - WARNING - Worker is at 81% memory usage. Pausing worker.  Process memory: 1.55 GiB -- Worker memory limit: 1.91 GiB
[Wed Dec 27 14:44:38 2023]
Error in rule build_renewable_profiles:
    jobid: 10
    input: resources/networks/base.nc, data/bundle/corine/g250_clc06_V18_5.tif, resources/natura.tiff, resources/country_shapes.geojson, resources/offshore_shapes.geojson, resources/regions_onshore.geojson, cutouts/europe-2013-sarah.nc
    output: resources/profile_solar.nc
    log: logs/build_renewable_profile_solar.log (check log file(s) for error details)
    conda-env: /home/cfl/pypsa-balticsea/.snakemake/conda/a193a967e4b3183c6023115ed840b879_

RuleException:
CalledProcessError in file /home/cfl/pypsa-balticsea/rules/build_electricity.smk, line 309:
Command 'set -euo pipefail;  /home/cfl/miniconda3/envs/pypsa-eur/bin/python3.11 /home/cfl/pypsa-balticsea/.snakemake/scripts/tmpwdpwcq1a.build_renewable_profiles.py' died with <Signals.SIGKILL: 9>.
  File "/home/cfl/pypsa-balticsea/rules/build_electricity.smk", line 309, in __rule_build_renewable_profiles
  File "/home/cfl/miniconda3/envs/pypsa-eur/lib/python3.11/concurrent/futures/thread.py", line 58, in run
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
/home/cfl/miniconda3/envs/pypsa-eur/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 24 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
Complete log: .snakemake/log/2023-12-27T144217.073372.snakemake.log

2 Answers


  1. The rule build_renewable_profiles can be quite resource-intensive and may run multiple times in parallel (for solar, onshore wind, offshore wind, etc.).

    You could try again with:

    snakemake -j1
    

    This will execute only one rule at a time, minimising memory consumption.

  2. Fabian’s approach will throttle the rule using large amounts of memory but also remove any parallelism. You have a few better options:

    • Add a custom resource to the large-memory rule(s) to limit them to one running instance at a time. This will take effect regardless of other system properties (e.g. if you move to another machine or use Slurm):

    rule big_mem:
        resources:
            lots_of_memory=1  # arbitrary custom resource; the name is up to you
    

    Invoke with snakemake --resources lots_of_memory=1. This allows only one instance of the rule to run, while other jobs can still run at the same time. It may still cause crashes if those other jobs use lots of memory.

    • Set the number of threads (the threads: directive) for the offending rule to a large number, e.g. 100. This limits your machine to running only one instance of the large-memory rule, but once it finishes, multiple other jobs can run. A quick hack, but it won’t translate well to clusters.
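    A minimal sketch of that hack, assuming build_renewable_profiles is the offending rule; only the threads: directive matters here, the rest of the rule stays unchanged:

    rule build_renewable_profiles:
        # ... existing input/output/script directives unchanged ...
        threads: 100  # Snakemake caps this at --cores, so one instance occupies the whole machine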

    • Assign mem_mb usage to each rule and execute snakemake under the constraint of your system: snakemake --resources mem_mb=16000. Snakemake will take care of the accounting but will not enforce the limits; if you say a rule uses 10 GB but it actually uses 12 GB, Snakemake will not kill it. This approach is more time-consuming and may still have problems, but it lets you transition easily to a cluster/cloud if you need to.
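    A sketch under the assumption that 10 GB is a reasonable estimate for the heavy rule (note that PyPSA-Eur rules may already declare mem_mb, in which case only the --resources flag is needed):

    rule build_renewable_profiles:
        resources:
            mem_mb=10000  # assumed peak usage; measure the real value and adjust

    Invoke with snakemake --cores all --resources mem_mb=16000 so that the jobs scheduled at any one time never claim more than the 16 GB available on the machine.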
