It seems like the beloved gadi_jupyter script has hit a snag and needs an update to deal with the migration to Notebook 7(?). A job will still start when the script is submitted, but the Jupyter notebook session won’t load. I don’t think this script is actively maintained anymore (is that right?), but I know many people still work with it (it’s so great - thanks Scott). Does anyone know if there’s a fix in this case?
Sorry about that. This should be fixed now. You’ll need to run git pull in your nci_scripts directory to get the latest version of the gadi_jupyter script, then try again.
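In case it’s useful, the update step is just the following (a sketch; ~/nci_scripts is an assumed clone location, so substitute wherever you checked out the repo):

# Refresh your checkout of the nci_scripts repository
cd ~/nci_scripts    # assumed location of your clone
git pull
# Then relaunch the notebook job as usual
./gadi_jupyter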
Is this fix backwards compatible for the stable conda environment? Or should folks only update if they plan to use the unstable conda environment?
So the issue was that the gadi_jupyter script was running jupyter notebook, which, as of the 2023 environments, no longer launches a JupyterLab session. All I’ve done is change it to jupyter lab. This is how ARE launches its underlying conda environments anyway, so it’s definitely compatible with the earlier environments.
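For reference, the change boils down to a one-line swap in the launch command. This is a sketch rather than the script’s exact invocation; the --no-browser and --port flags here are assumptions:

# Before: under the 2023 environments this no longer starts a Lab session
jupyter notebook --no-browser --port "$PORT"
# After: launch JupyterLab directly, as ARE does for the same environments
jupyter lab --no-browser --port "$PORT"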
- Code/analysis originally ran faster using the gadi_jupyter script than on ARE, although I haven’t noticed much of a difference since trying out ARE this morning.
- Being able to see the Dask dashboard with the gadi_jupyter script (when I first used ARE I couldn’t see it, but I think it’s there now).
- More customisable in terms of CPU/memory requests (see the example just after this list).
- Ease of use: just open a terminal and start the script, and a job will start without needing to open a browser and log in to ARE.
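For example, a sketch of a more tailored request, using the -t (walltime) and -n (CPU count) flags that appear in the log below; any other options would be assumptions on my part:

# Request 4 CPUs for a 4-hour session instead of the defaults
./gadi_jupyter -t 4:00:00 -n 4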
I just encountered the same problem and pulled the latest version, but now I get this error:
./gadi_jupyter -t 12:00:00 -n 48
WARNING: Using a large number of CPUs in an interactive session can waste lots of NCI compute time. Try keeping the number of CPUs under 8.
Proceed? [y/N] y
Starting notebook on gadi.nci.org.au...
Working directory: /scratch/e14/cs6673/tmp/runjp
qsub -N jupyter-lab -q 'normal' -l 'ncpus=48,mem=180gb,walltime=12:00:00,jobfs=100gb'
Notebook running as PBS job 87756045.gadi-pbs
Starting tunnel...
..........
Start a Dask cluster in your notebook using the Dask panel of Jupyterlab, or by
running (needs kernel analysis3-20.01 or later):
---------------------------------------------------------------
import climtas.nci
climtas.nci.GadiClient()
---------------------------------------------------------------
Opening http://localhost:8888/lab?token=483c034f-a88c-4d32-a2ff-03d2e068b58b
Traceback (most recent call last):
File "/g/data/hh5/public/apps/nci_scripts/qmonitor", line 26, in <module>
import pandas
File "/g/data/hh5/public/apps/miniconda3/envs/analysis3/lib/python3.9/site-packages/pandas/__init__.py", line 138, in <module>
from pandas import testing # noqa:PDF015
File "/g/data/hh5/public/apps/miniconda3/envs/analysis3/lib/python3.9/site-packages/pandas/testing.py", line 6, in <module>
from pandas._testing import (
File "/g/data/hh5/public/apps/miniconda3/envs/analysis3/lib/python3.9/site-packages/pandas/_testing/__init__.py", line 65, in <module>
from pandas._testing._warnings import (
ModuleNotFoundError: No module named 'pandas._testing._warnings'
Closing connections... (Ctrl-C will leave job in the queue)
I’m looking into this now; unfortunately I can’t replicate it. I’ve verified those paths exist and that pandas._testing._warnings imports cleanly. Usually a ModuleNotFoundError comes from a file simply not being there, but /g/data/hh5/public/apps/miniconda3/envs/analysis3/lib/python3.9/site-packages/pandas/_testing/_warnings.py exists too, so at this stage I’m at a loss as to why this is failing for you.
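In the meantime, one generic way to check which pandas your session is actually importing (a standard Python diagnostic, nothing specific to the script; adjust python3.9 to your environment’s version):

# Show the file and version behind 'import pandas'
python -c "import pandas; print(pandas.__file__, pandas.__version__)"
# Look for a copy installed with 'pip install --user' that could shadow the conda env
ls ~/.local/lib/python3.9/site-packages/ | grep -i pandas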
I’ve updated the launcher script that qmonitor uses to exclude packages installed via pip install --user. Can you try again and see if that has made a difference?
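For context, the usual mechanism for this kind of exclusion is Python’s PYTHONNOUSERSITE variable; whether the launcher does exactly this is my assumption, but the idea is:

# With this set, Python skips ~/.local/lib/pythonX.Y/site-packages entirely,
# so 'pip install --user' packages can't shadow the conda environment
export PYTHONNOUSERSITE=1
python /g/data/hh5/public/apps/nci_scripts/qmonitor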