PAYU issues on Leonardo

Hello to everyone,

From what I found here, this question should probably be addressed to @Aidan @john_reilly @angus-g @dale.roberts @harshula.

I have made some progress with payu and with porting ACCESS-OM2 to the Leonardo supercomputer.

Right now I am at the payu run stage.

Long story short:

  1. The *.exe files are compiled with local ACCESS-NRI modules (built with spack)
  2. The RYF JRA-55 forcing files were calculated locally on Leonardo
  3. The initial conditions (transferred to Leonardo) and forcing fields are specified in config.yaml and atmosphere/forcing.json

payu setup produced the manifests, which can be found in my repo: GitHub - VanuatuN/1deg_jra55_ryf (1 degree ACCESS-OM2 experiment with JRA55 RYF atmospheric forcing).

The questions are:

  • How do I force payu to use the local modules that were compiled in the first stage?
    I know this will be slower than with the system modules, but I want to get it
    working first.
  • Where exactly in the payu/*.py files should I set the rest of the SLURM-specific flags for Leonardo?

Example of a batch script:
#!/bin/bash
#SBATCH --job-name=benchmark_test
#SBATCH --output=benchmark_test.out
#SBATCH --error=benchmark_test.err
#SBATCH --nodes=1
#SBATCH --cpus-per-task=32
#SBATCH -A ICT24_MHPC
#SBATCH --time=00:30:00
#SBATCH --partition=boost_usr_prod

Thank you!

The current output from payu run:

02:48 $ payu run 
payu: warning: Job request includes 47 unused CPUs.
payu: warning: CPU request increased from 241 to 288
sbatch -A ICT24_MHPC --time=10800 --ntasks=288 --wrap="/leonardo/prod/spack/5.2/install/0.21/linux-rhel8-icelake/
gcc-8.5.0/anaconda3-2023.09-0-zcre7pfofz45c3btxpdk5zvcicdq5evx/bin/
python /leonardo/home/userexternal/ntilinin/.local/bin/payu-run" --export="PAYU_PATH=/leonardo/home/userexternal/ntilinin/.local/bin,MODULESHOME
=/leonardo/prod/spack/03/install/0.19/linux-rhel8-icelake/gcc-8.5.0/
environment-modules-5.2.0-rz47odw4phlhzhhbz7b65nv5s5othgmi,MODULES_CMD=/leonardo/prod/spack/03/install/
0.19/linux-rhel8-icelake/gcc-8.5.0/environment-modules-5.2.0-rz47odw4phlhzhhbz7b65nv5s5othgmi/libexec/modulecmd.tcl,MODULEPATH=
/leonardo/prod/spack/03/install/0.19/linux-rhel8-icelake/gcc-8.5.0/environment-modules-5.2.0-rz47odw4phlhzhhbz7b65nv5s5othgmi/modulefiles:/leonardo/prod/opt/modulefiles/
profiles:/leonardo/prod/opt/modulefiles/base/archive:/leonardo/prod/opt/modulefiles/
base/dependencies:/leonardo/prod/opt/modulefiles/base/data:/leonardo/prod/opt/
modulefiles/base/environment:/leonardo/prod/opt/modulefiles/base/libraries:/leonardo/
prod/opt/modulefiles/base/tools:/leonardo/prod/opt/modulefiles/base/compilers:/leonardo/prod/opt/modulefiles/base/applications"
sbatch: error: no partition specified, using default partition lrd_all_serial
sbatch: error: no gres:tmpfs specified, using default: gres:tmpfs:10g
sbatch: error: Batch job submission failed: More processors requested than permitted
Traceback (most recent call last):
  File "/leonardo/home/userexternal/ntilinin/.local/bin/payu", line 10, in <module>
    sys.exit(parse())
             ^^^^^^^
  File "/leonardo/home/userexternal/ntilinin/.local/lib/python3.11/site-packages/payu/cli.py", line 42, in parse
    run_cmd(**args)
  File "/leonardo/home/userexternal/ntilinin/.local/lib/python3.11/site-packages/payu/subcommands/run_cmd.py", line 108, in runcmd
    cli.submit_job('payu-run', pbs_config, pbs_vars)
  File "/leonardo/home/userexternal/ntilinin/.local/lib/python3.11/site-packages/payu/cli.py", line 156, in submit_job
    subprocess.check_call(shlex.split(cmd))
  File "/leonardo/prod/spack/5.2/install/0.21/linux-rhel8-icelake/gcc-8.5.0/anaconda3-2023.09-0-zcre7pfofz45c3btxpdk5zvcicdq5evx/lib/python3.11/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sbatch', '-A', 'ICT24_MHPC', '--time=10800', '--ntasks=288',
 '--wrap=/leonardo/prod/spack/5.2/install/0.21/linux-rhel8-icelake/gcc-8.5.0/
anaconda3-2023.09-0-zcre7pfofz45c3btxpdk5zvcicdq5evx/bin/python 
/leonardo/home/userexternal/ntilinin/.local/bin/payu-run', '--export=PAYU_PATH=/leonardo/home/userexternal/ntilinin/.local/
bin,MODULESHOME=/leonardo/prod/spack/03/install/0.19/linux-rhel8-icelake/
gcc-8.5.0/environment-modules-5.2.0-rz47odw4phlhzhhbz7b65nv5s5othgmi,MODULES_CMD=
/leonardo/prod/spack/03/install/0.19/linux-rhel8-icelake/gcc-8.5.0/
environment-modules-5.2.0-rz47odw4phlhzhhbz7b65nv5s5othgmi/libexec/modulecmd.tcl,
MODULEPATH=/leonardo/prod/spack/03/install/0.19/linux-rhel8-icelake/gcc-8.5.0/environment-modules-5.2.0-rz47odw4phlhzhhbz7b65nv5s5othgmi/modulefiles:
/leonardo/prod/opt/modulefiles/profiles:/leonardo/prod/opt/modulefiles/
base/archive:/leonardo/prod/opt/modulefiles/base/dependencies:/leonardo/prod/opt/modulefiles/base/data:/leonardo/prod/opt/modulefiles/base/environment:
/leonardo/prod/opt/modulefiles/base/libraries:/leonardo/prod/opt/modulefiles/base/tools:/leonardo/prod/opt/modulefiles/base/compilers:/leonardo/prod/opt/modulefiles/base/applications']' returned non-zero exit status 1.

Hi Natalia,

I'm no expert on this stuff and just picked up where @angus-g and @ChrisC28 got to with our SLURM-based HPC, but hopefully this helps.

Not sure about the first question, sorry, but for the SLURM-specific flags we put them at the start of the config.yaml file in the run directory. If you look at the slurm.py file in payu/schedulers/, it should become clearer how payu reads these flags in.

Here's an example of one of our config.yaml files:

scheduler: slurm
project: pawsey0410
walltime: 02:20:00
jobname: eac_sthpac-forced_v3
ncpus: 1804
nnodes: 15
runspersub: 1

shortpath: /scratch/pawsey0410
model: mom6
input:
    - /scratch/pawsey0410/jreilly/mom6-inputs/eac_sthpac-forced_v2/
    - /scratch/pawsey0410/jreilly/jra_padded/2016/
    - /scratch/pawsey0410/jreilly/mom6/archive/eac_sthpac-forced_v3/restart305
#    - /g/data/ua8/JRA55-do/RYF/v1-3/
#    - /g/data/ik11/inputs/JRA-55/RYF/v1-3/
# release exe
exe: /software/projects/pawsey0410/cc7576/mom6-cmake/coupler/MOM6-SIS2
  #exe: /software/projects/pawsey0410/jreilly/mom6-cmake/coupler/MOM6-SIS2
  #  /software/projects/pawsey0410/cc7576/mom6-cmake/coupler/MOM6-SIS2

stacksize: unlimited

collate: false
runlog: false

mpi:
  runcmd: srun

Hi Natalia

This looks like where it failed:

It looks like the number of nodes is hardcoded to 1, and then payu is requesting 288 cores. I don't know how many cores per node the Leonardo hardware has; for us it's 48, so setting the number of nodes to 6 would be correct for us. I would try setting ncpus and nnodes in the config.yaml per John's code snippet.

There are these two lines in the payu output:

payu: warning: Job request includes 47 unused CPUs.
payu: warning: CPU request increased from 241 to 288

I think there might be 32 cores per node for you, so I would try:

ncpus: 256
nnodes: 8
npernode: 32

For our normal Gadi scheduler we don't specify the number of nodes, so it's possible payu hasn't been tested very well in these cases.

Re: modules

You can set it up similarly to this config:

When you have modules: use: and load: lines in the config.yaml, you should be able to access the binaries without a path.

For example, the exe: entry could just become yatm.exe.

I would test this at the command prompt first by doing a module use and a module load and checking that the executables are available as commands.
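
As a minimal sketch (assuming payu's modules: section works this way; the path and module name below are placeholders for wherever your spack build puts its modulefiles), that could look something like:

modules:
    use:
        - /path/to/your/spack/modulefiles   # placeholder, point this at your spack-generated modulefiles
    load:
        - access-om2/2024.03.0              # placeholder module name/version

# with the module loaded, the exe: entries should be able to become bare binary names, e.g. exe: yatm.exe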


Hi @john_reilly, very much appreciated!
I didn't know that SLURM flags can be specified directly in config.yaml.

Will try to implement it.

Hi @anton!

Thank you, all clear for the moment!
My time zone forces a delay in replying.

Very useful information; I will do that and let you know soon.

Fingers crossed.


Happy to help. If the instructions don't make sense, I can make a pull request into your fork of the OM2 configurations.

It worked for the job submission, but it failed to pick up the modules.
Leonardo has 32 cores per node, you were right.
I modified the slurm.py and config.yaml files.

payu run gives:

payu run 
sbatch -A ICT24_MHPC --time=00:30:00 --ntasks=256 --partition=boost_usr_prod 
--wrap="/leonardo/prod/spack/5.2/install/0.21/linux-rhel8-icelake/gcc-8.5.0/
anaconda3-2023.09-0-zcre7pfofz45c3btxpdk5zvcicdq5evx/bin/
python /leonardo/home/userexternal/ntilinin/.local/bin/payu-run" --export="PAYU_PATH=/leonardo/home/userexternal/ntilinin/.local/bin,
MODULESHOME=/leonardo/prod/spack/03/install/0.19/linux-rhel8-icelake/
gcc-8.5.0/environment-modules-5.2.0-rz47odw4phlhzhhbz7b65nv5s5othgmi,MODULES_CMD=/leonardo/prod/spack/03/
install/0.19/linux-rhel8-icelake/gcc-8.5.0/environment-modules-5.2.0-rz47odw4phlhzhhbz7b65nv5s5othgmi/libexec/modulecmd.tcl,MODULEPATH=
/leonardo_scratch/large/userexternal/ntilinin/ACCESS-NRI/release/
modules/linux-rhel8-x86_64:/leonardo/prod/spack/03/install/0.19/
linux-rhel8-icelake/gcc-8.5.0/environment-modules-5.2.0-rz47odw4phlhzhhbz7b65nv5s5othgmi/modulefiles:/leonardo/prod/opt/modulefiles/
profiles:/leonardo/prod/opt/modulefiles/base/archive:/leonardo/prod/opt/modulefiles/
base/dependencies:/leonardo/prod/opt/modulefiles/base/data:/leonardo/prod/opt/
modulefiles/base/environment:/leonardo/prod/opt/modulefiles/base/libraries:/leonardo/
prod/opt/modulefiles/base/tools:/leonardo/prod/opt/modulefiles/base/compilers:
/leonardo/prod/opt/modulefiles/base/applications"

But the output from payu run is still:

laboratory path:  ./ntilinin/access-om2
binary path:  ./ntilinin/access-om2/bin
input path:  ./ntilinin/access-om2/input
work path:  ./ntilinin/access-om2/work
archive path:  ./ntilinin/access-om2/archive
nruns: 1 nruns_per_submit: 1 subrun: 1
Loading input manifest: manifests/input.yaml
Loading restart manifest: manifests/restart.yaml
Loading exe manifest: manifests/exe.yaml
Setting up atmosphere
Setting up ocean
Setting up ice
Setting up access-om2
Checking exe and input manifests
Updating full hashes for 3 files in manifests/exe.yaml
Creating restart manifest
Writing manifests/restart.yaml
Writing manifests/exe.yaml
payu: Found modules in /leonardo/prod/spack/03/install/0.19/linux-rhel8-icelake/gcc-8.5.0/environment-modules-5.2.0-rz47odw4phlhzhhbz7b65nv5s5othgmi
Traceback (most recent call last):
  File "/leonardo/home/userexternal/ntilinin/.local/bin/payu-run", line 10, in <module>
    sys.exit(runscript())
             ^^^^^^^^^^^
  File "/leonardo/home/userexternal/ntilinin/.local/lib/python3.11/site-packages/payu/subcommands/run_cmd.py", line 132, in runscript
    expt.run()
  File "/leonardo/home/userexternal/ntilinin/.local/lib/python3.11/site-packages/payu/experiment.py", line 517, in run
    mpi_module = envmod.lib_update(
                 ^^^^^^^^^^^^^^^^^^
  File "/leonardo/home/userexternal/ntilinin/.local/lib/python3.11/site-packages/payu/envmod.py", line 114, in lib_update
    mod_name, mod_version = fsops.splitpath(lib_path)[2:4]
    ^^^^^^^^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 2, got 0)

The modified slurm.py file:

The modified config.yaml file (with nodes, etc. specified):

I will try to work around the modules issue in the coming days, but I
would be very grateful for any hints on where to go next.

Thank you!!!

It looks like payu is trying to check that the MPI version linked by the model executable is the version loaded, but for whatever reason the formatting or the check is failing.

I would try adding these lines to your config.yaml and setting them to the modules used by your executables. (You might be able to confirm the path to the MPI version using ldd.)

mpi:
    modulepath:
    module:
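
Filled in, that might look something like this (the path and module name are purely illustrative placeholders; point them at the spack-built OpenMPI your executables actually link against):

mpi:
    modulepath: /path/to/your/spack/modulefiles   # placeholder, wherever the spack-generated modulefiles live
    module: openmpi/4.1.5                         # placeholder, the MPI module the executables were built with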

See this section in the docs:

https://payu.readthedocs.io/en/stable/config.html#miscellaneous

Pinging @Aidan, as he has more experience with this than I do!


Yes, this was always quite NCI-specific, and with the spack-built executables it is no longer strictly necessary.

Can you try updating your version of payu, as there is now logic that isolates this check to NCI systems by matching the library path:

If you have made local changes you can fetch the latest payu and git rebase your changes on top of them.

Update:

Before, I was using the 'pawsey' branch of the payu repo (found somewhere in the issues or here on the forum). I have now switched to the 'master' branch and made a few corrections.

The job now gets submitted, which is good news.

My OpenMPI module does not recognise --chdir, so I commented out that line and kept -wdir:

What I'm not sure about is whether payu uses the correct version, as the command still looks like:

 ~/access-om2/control/1deg_jra55_ryf [master ↑·13|…28] 
15:57 $ payu run 
/leonardo/home/userexternal/ntilinin/.local/lib/python3.11/site-packages/payu/fsops.py:77: UserWarning: Duplicate key found in config.yaml: key 'jobname' with value 'access_om2_ryf'. This overwrites the original value: '1deg_jra55_ryf'
/leonardo/home/userexternal/ntilinin/.local/lib/python3.11/site-packages/payu/fsops.py:77: UserWarning: Duplicate key found in config.yaml: key 'queue' with value 'boost_usr_prod'. This overwrites the original value: 'boost_usr_prod'
sbatch -A ICT24_MHPC --time=00:30:00 --ntasks=256 --partition=boost_usr_prod 
--wrap="/leonardo/prod/spack/5.2/install/0.21/linux-rhel8-icelake/
gcc-8.5.0/anaconda3-2023.09-0-zcre7pfofz45c3btxpdk5zvcicdq5evx/
bin/python /leonardo/home/userexternal/ntilinin/.local/bin/payu-run" 
--export="PAYU_PATH=/leonardo/home/userexternal/ntilinin/.local/bin,
MODULESHOME=/leonardo/prod/spack/03/install/0.19/linux-rhel8-icelake/gcc-8.5.0/environment-modules-5.2.0-rz47odw4phlhzhhbz7b65nv5s5othgmi,
MODULES_CMD=/leonardo/prod/spack/03/install/0.19/linux-rhel8-icelake/gcc-8.5.0/environment-modules-5.2.0-rz47odw4phlhzhhbz7b65nv5s5othgmi/libexec/modulecmd.tcl,
MODULEPATH=/leonardo_scratch/large/userexternal/ntilinin/ACCESS-NRI/release/modules/linux-rhel8-x86_64:
/leonardo/prod/spack/03/install/0.19/linux-rhel8-icelake/gcc-8.5.0/environment-modules-5.2.0-rz47odw4phlhzhhbz7b65nv5s5othgmi/modulefiles:
/leonardo/prod/opt/
modulefiles/profiles:
/leonardo/prod/opt/modulefiles/base/archive:
/leonardo/prod/opt/modulefiles/base/dependencies:
/leonardo/prod/opt/modulefiles/base/data:
/leonardo/prod/opt/modulefiles/base/environment:
/leonardo/prod/opt/modulefiles/base/libraries:
/leonardo/prod/opt/modulefiles/base/tools:
/leonardo/prod/opt/modulefiles/base/compilers:
/leonardo/prod/opt/modulefiles/base/applications"
Submitted batch job 9594255

It still sees MODULES_CMD and uses the systemwide modulecmd.tcl as well as the systemwide MODULESHOME. I'm not sure whether that affects anything, but still.

However MODULEPATH is updated to the proper location.

The Slurm output looks reasonable; I hope it picks up not the default systemwide OpenMPI but the first one in the list:

Another issue: I'm doing something wrong with resource allocation.
I don't know how payu distributes submodels across nodes; here is the error that I'm getting now:

And the config.yaml:

I'll try to work around it, but I would be grateful for any advice, as usual :slight_smile:

Many thanks!!!

Hi Natalia

I would try commenting out these lines:

For reasons that are not clear to me, payu is requesting 16 tasks per available "socket", when it's probably only possible to have one. Maybe this is a Gadi-specific detail for some specific case. You might be able to remove the -map-by argument entirely. I think it will take some experimentation.
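
As a sketch of the config change (this assumes the -map-by mapping is driven by the npernode entry suggested earlier, so it is worth experimenting):

ncpus: 256
nnodes: 8
# npernode: 32   # try omitting this; it appears to drive the -map-by ppr:<npernode/2>:socket mapping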


Hi Anton,

Will try, thank you.
It could be that each Gadi node has 3 sockets, each with 16 cores; that would make sense: 16*3=48 cores.

Hi @anton

I have an update and a question again.

For the last month I've been struggling to make payu run executables built from the old COSIMA repo with spack-built modules (from ACCESS-NRI). I rewrote the parts of payu that check modules and add libraries, but I couldn't make it use the proper MPI. I gave up on that.

Now I have cloned the latest version of payu and am trying to run spack-built executables with spack-built model components from the cloned config repo GitHub - ACCESS-NRI/access-om2-configs at release-1deg_jra55_ryf. The problem I'm facing now is this one:

#--------------------------------------------------------------------------
mpirun was unable to launch the specified application as it could not access
or execute an executable:

Executable: ./ntilinin/access-om2/work/1deg_jra55_ryf-expt-3216c7cb/atmosphere/yatm.exe
Node: lrdn3421

while attempting to start process rank 0.
#--------------------------------------------------------------------------

"which mpirun" points to the proper module prebuilt with spack.
The symlink points to the proper location of yatm.exe.

I do have a feeling that it again uses systemwide mpirun (just a guess).

I found the same issue raised by @Aidan here: ACCESS-OM2 Restart Reproducibility: Bitwise Reproducibility Testing

Maybe there is something specific you and @Aidan can advise on for this issue?

Many thanks as usual!

Also an update on nodes/sockets/cores:

It should be 2 sockets per node on Gadi.

And it is 1 socket with 32 cores on each Leonardo node, so there is no need to divide by 2. Payu was configured to divide 32 by 2 when 'npernode' in config.yaml is an even number.

This is solved now.

The problem was these parts of the code:

And here:

When using the -wdir argument with Slurm and MPI (probably with the Leonardo version of Slurm, but more likely in general), the line resulting from run_cmd looks like:

mpirun --mca io ompio --mca io_ompio_num_aggregators 1 -wdir ./ntilinin/access-om2/work/1deg_jra55_ryf-expt-a3822e12/atmosphere -n 1 ./ntilinin/access-om2/work/1deg_jra55_ryf-expt-a3822e12/atmosphere/yatm.exe

and after the working directory has been changed to:

/ntilinin/access-om2/work/1deg_jra55_ryf-expt-a3822e12/atmosphere

mpirun looks for the paths to the *.exe files starting from there, so the relative path won't work.
I've changed line 629 in experiment.py to

model_prog.append(os.path.abspath(os.path.join(model.work_path, model.exec_name)))

And mpirun picked up the file.

Now I'm facing this output in access-om2.err:

--------------------------------------------------------------------------
ORTE has lost communication with a remote daemon.

  HNP daemon   : [[55189,0],0] on node lrdn0001
  Remote daemon: [[55189,0],1] on node lrdn0009

This is usually due to either a failure of the TCP network
connection to the node, or possibly an internal failure of
the daemon itself. We cannot recover from this failure, and
therefore will terminate the job.
--------------------------------------------------------------------------
forrtl: error (78): process killed (SIGTERM)
Image              PC                Routine            Line        Source             
fms_ACCESS-OM.x    0000000001D5078B  Unknown               Unknown  Unknown
libpthread-2.28.s  000014F4C28ECCF0  Unknown               Unknown  Unknown
libopen-rte.so.40  000014F4BE89CD30  orte_dt_init          Unknown  Unknown
libopen-rte.so.40  000014F4BE8E4BB9  orte_ess_base_std     Unknown  Unknown
libopen-rte.so.40  000014F4BE8E8AA2  Unknown               Unknown  Unknown
libopen-rte.so.40  000014F4BE95CF0B  orte_init             Unknown  Unknown
libmpi.so.40.30.4  000014F4C3101CE1  ompi_mpi_init         Unknown  Unknown
libmpi.so.40.30.4  000014F4C2F2F40D  MPI_Init              Unknown  Unknown
libmpi_mpifh.so.4  000014F4C3440787  PMPI_Init_f08         Unknown  Unknown
fms_ACCESS-OM.x    0000000001D23B41  coupler_mod_mp_co          82  coupler.F90
fms_ACCESS-OM.x    000000000041F8A0  MAIN__                    186  ocean_solo.F90
fms_ACCESS-OM.x    00000000004111E2  Unknown               Unknown  Unknown
libc-2.28.so       000014F4C254ED85  __libc_start_main     Unknown  Unknown
fms_ACCESS-OM.x    00000000004110EE  Unknown               Unknown  Unknown
forrtl: error (78): process killed (SIGTERM)

There is still a manual resource allocation in run_cmd.py at lines 36-39, which I changed to the Leonardo specs:

# TODO: Create drivers for servers
platform = pbs_config.get('platform', {})
max_cpus_per_node = platform.get('nodesize', 32)
max_ram_per_node = platform.get('nodemem', 514)

Probably it gets overridden from config.yaml, but still.
I don't know where to dig now for the ORTE error.
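
For what it's worth, that snippet reads a platform block from the config (pbs_config.get('platform', {})), so these values can presumably also be set in config.yaml instead of editing the code; a sketch using the same values as above:

platform:
    nodesize: 32    # cores per Leonardo node
    nodemem: 514    # memory per node, as set in the snippet above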

I would be grateful for any guesses, as usual!

Glad to hear you are making some progress!

Not sure I have much to offer.

Yes, this is strange. Can you confirm which code versions you are using?

It looks like it's failing at MPI_Init for MOM.

I guess the first thing to confirm is that the mpirun command printed from run_cmd has the right number of processors in the -n argument for MOM (the MOM executable is named fms_ACCESS-OM.x), and that the number of processors is consistent with config.yaml.
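
For reference, the per-submodel CPU counts live in the submodels section of config.yaml; for the standard 1-degree configuration they are roughly as below (241 in total, which matches the earlier payu warning, but check against your own file):

submodels:
    - name: atmosphere
      model: yatm       # yatm.exe
      ncpus: 1
    - name: ocean
      model: mom        # fms_ACCESS-OM.x
      ncpus: 216
    - name: ice
      model: cice
      ncpus: 24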

I don't know the specifics of Leonardo, but I did think that SLURM-based systems had to use srun as a "wrapper" for MPI (it actually does a lot of the work of setting up the execution environment), rather than trying to call mpiexec directly. That's what we had to do on Setonix, but I'm not sure if that is still the approach. Apologies if this is a red herring!
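
If you want to try that, John's config earlier in the thread shows the relevant payu setting (this assumes your OpenMPI was built with SLURM support):

mpi:
    runcmd: srun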


I'd definitely be asking your local HPC helpdesk, @Natalia, to see if they can assist with this error. I am sure they'd have seen similar issues and would be able to advise.

Please also note we're doing some exploratory work to port ACCESS models to Setonix, a SLURM-based machine. I can't give you a definite timeline, but I would think we'd have made some progress within a month.


Not a red herring! It makes sense; I tried it and got:

The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM support. This usually happens
when OMPI was not configured --with-slurm and we weren't able
to discover a SLURM installation in the usual places.

Please configure as appropriate and try again.

The thing is that I'm using a spack-built OMPI from the ACCESS-NRI repo,
which was likely built without SLURM support.

which srun points to the systemwide executable even when the ACCESS-NRI module is loaded:

21:23 $ which srun 
/usr/bin/srun

Also, srun does not support flags like --mca io ompio --mca io_ompio_num_aggregators 1; I'm not sure whether they are needed.

From what I read here, 10.7. Launching with Slurm — Open MPI main documentation,
mpirun is the recommended method for launching Open MPI jobs in Slurm jobs (at least now I know :woman_facepalming:).
Thanks!

My guess is that these settings are not essential for the model to run, and that the defaults probably work.

The thing is that I'm using a spack-built OMPI from the ACCESS-NRI repo,
which was likely built without SLURM support.

It's possibly worth revisiting this and using the system-provided OpenMPI. On Gadi, we found the system-provided version performed better than the version we built through spack. If there is someone at your local helpdesk who understands spack, they may have advice?
