The short answer is that they’re both using conda python environments that are not playing well together.
This is already fixed in the version of payu in the payu/dev prerelease module, and we’ll shortly be releasing a new version of payu that doesn’t have this problem either.
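If you want to see where the clash comes from with the older payu modules, checking what each command resolves to is usually enough. These are generic shell diagnostics rather than anything payu-specific, and the paths they print will depend on which modules you have loaded:

$ which -a uqstat                    # every uqstat on PATH; the first hit is the one that runs
$ which -a python3                   # which conda python that first hit will pick up
$ echo $PATH | tr ':' '\n' | head    # shows which module's bin directory was prepended most recently

With the payu/dev module the two environments coexist, as the session below shows: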
$ module use /g/data/xp65/public/modules
$ module load nci-scripts
$ uqstat -h
usage: uqstat [-h] [--historical] [--format {table,csv,json}] [--project PROJECT] [--comment]
Print more detailed information from qstat
Returns the following columns for each job:
project: NCI project the job was submitted under
job_name: Name of the job
queue: Queue the job was submitted to
state: Current state - 'Q' in queue, 'R' running, 'H' held, 'E' finished
ncpus: Number of cpus requested
walltime: Walltime the job has run for so far
su: SU cost of the job so far
mem_pct: Percent of the memory request used
cpu_pct: Percent of time CPUs have been active
qtime: Time the job spent in the queue before starting
If 'mem_pct' is below 80% make sure you're not requesting too much memory (4GB
per CPU or less is fine)
If 'cpu_pct' is below 80% and you're requesting more than one CPU make sure
your job is making proper use of parallelisation
options:
  -h, --help            show this help message and exit
  --historical, -x      Show historical info
  --format {table,csv,json}, -f {table,csv,json}
                        Output format
  --project PROJECT, -P PROJECT
                        Show all jobs in a project
  --comment, -c         Show PBS queue comment
$ module use /g/data/vk83/prerelease/modules/
$ module load payu
Loading payu/dev-20250828T223708Z-0fd643d
Loading requirement: singularity
$ uqstat -h
usage: uqstat [-h] [--historical] [--format {table,csv,json}] [--project PROJECT] [--comment]
Print more detailed information from qstat
Returns the following columns for each job:
project: NCI project the job was submitted under
job_name: Name of the job
queue: Queue the job was submitted to
state: Current state - 'Q' in queue, 'R' running, 'H' held, 'E' finished
ncpus: Number of cpus requested
walltime: Walltime the job has run for so far
su: SU cost of the job so far
mem_pct: Percent of the memory request used
cpu_pct: Percent of time CPUs have been active
qtime: Time the job spent in the queue before starting
If 'mem_pct' is below 80% make sure you're not requesting too much memory (4GB
per CPU or less is fine)
If 'cpu_pct' is below 80% and you're requesting more than one CPU make sure
your job is making proper use of parallelisation
options:
  -h, --help            show this help message and exit
  --historical, -x      Show historical info
  --format {table,csv,json}, -f {table,csv,json}
                        Output format
  --project PROJECT, -P PROJECT
                        Show all jobs in a project
  --comment, -c         Show PBS queue comment
$ payu -h
usage: payu [-h] [--version]
{archive,branch,build,checkout,clone,collate,ghsetup,init,list,profile,push,run,setup,sweep,sync} ...
positional arguments:
  {archive,branch,build,checkout,clone,collate,ghsetup,init,list,profile,push,run,setup,sweep,sync}
options:
  -h, --help  show this help message and exit
  --version   show program's version number and exit
$
I will update this topic when a new released version of payu with this fix is available, but in the meantime you should be fine to use payu/dev if you’re feeling adventurous.