 Docs/source/install/hpc/cori.rst       | 22 ++++++++++++++++++++++
 Docs/source/install/hpc/perlmutter.rst | 13 +++++++++++++
 Docs/source/install/hpc/spock.rst      | 10 ++++++++++
 Docs/source/install/hpc/summit.rst     | 35 +++++++++++++++++++++++++++++++++++
 4 files changed, 80 insertions(+), 0 deletions(-)
diff --git a/Docs/source/install/hpc/cori.rst b/Docs/source/install/hpc/cori.rst
index f1aeec351..c901500e6 100644
--- a/Docs/source/install/hpc/cori.rst
+++ b/Docs/source/install/hpc/cori.rst
@@ -313,3 +313,25 @@ A multi-node batch script template can be found below:
.. literalinclude:: ../../../../Tools/BatchScripts/batch_cori_gpu.sh
:language: bash
+
+
+.. _post-processing-cori:
+
+Post-Processing
+---------------
+
+For post-processing, most users use Python via NERSC's `Jupyter service <https://jupyter.nersc.gov>`__ (`Docs <https://docs.nersc.gov/services/jupyter/>`__).
+
+As a one-time setup, `create your own Conda environment as described in the NERSC docs <https://docs.nersc.gov/services/jupyter/#conda-environments-as-kernels>`__.
+In this manual, we often use the following ``conda create`` line instead of the officially documented one:
+
+.. code-block:: bash
+
+   conda create -n myenv -c conda-forge python mamba ipykernel ipympl matplotlib numpy pandas yt openpmd-viewer openpmd-api h5py fast-histogram
+
+We then follow the `Customizing Kernels with a Helper Shell Script <https://docs.nersc.gov/services/jupyter/#customizing-kernels-with-a-helper-shell-script>`__ section of the NERSC docs to register this Conda environment as a custom Jupyter kernel, as sketched below.
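+
+A minimal sketch of such a helper script, assuming the environment name ``myenv`` from above and a generic ``module load python`` line (adapt both to your setup, and point the kernel's ``kernel.json`` at this script as described in the linked NERSC docs):
+
+.. code-block:: bash
+
+   #!/bin/bash
+   # kernel-helper.sh: activate the Conda environment, then hand off to the Jupyter kernel.
+   module load python
+   source activate myenv
+   # "$@" forwards the kernel arguments (e.g. -f <connection file>) passed via kernel.json
+   exec python -m ipykernel_launcher "$@"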
+
+When opening a Jupyter notebook, select the name you picked for your custom kernel in the top right corner of the notebook.
+
+Additional software can be installed later on, e.g., in a Jupyter cell using ``!mamba install -c conda-forge ...``.
+Software that is not available via conda can be installed via ``!python -m pip install ...``.
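+
+For example, in a Jupyter cell (the package names below are placeholders; substitute whatever you actually need):
+
+.. code-block:: bash
+
+   # add another conda-forge package to the kernel's environment
+   !mamba install -c conda-forge -y scipy
+
+   # add a package from PyPI when it is not available via conda
+   !python -m pip install tqdm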
diff --git a/Docs/source/install/hpc/perlmutter.rst b/Docs/source/install/hpc/perlmutter.rst
index 3efd44cbf..0901bf3b3 100644
--- a/Docs/source/install/hpc/perlmutter.rst
+++ b/Docs/source/install/hpc/perlmutter.rst
@@ -125,3 +125,16 @@ To run a simulation, copy the lines above to a file ``batch_perlmutter.sh`` and
sbatch batch_perlmutter.sh
to submit the job.
+
+
+.. _post-processing-perlmutter:
+
+Post-Processing
+---------------
+
+For post-processing, most users use Python via NERSC's `Jupyter service <https://jupyter.nersc.gov>`__ (`Docs <https://docs.nersc.gov/services/jupyter/>`__).
+
+Please follow the same guidance as for :ref:`NERSC Cori post-processing <post-processing-cori>`.
+
+The Perlmutter ``$PSCRATCH`` file system is not yet available on Jupyter.
+For now, store or copy your data to Cori's ``$SCRATCH`` or to the Community File System (CFS).
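+
+For instance, to copy a diagnostics directory to CFS before opening it in Jupyter (the project name ``m1234`` and the directory names are placeholders for your own):
+
+.. code-block:: bash
+
+   # run on a Perlmutter login node; CFS is mounted under /global/cfs/cdirs/<project>
+   mkdir -p /global/cfs/cdirs/m1234/$USER/my_simulation
+   cp -r $PSCRATCH/my_simulation/diags /global/cfs/cdirs/m1234/$USER/my_simulation/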
diff --git a/Docs/source/install/hpc/spock.rst b/Docs/source/install/hpc/spock.rst
index e28daeb25..78bb08361 100644
--- a/Docs/source/install/hpc/spock.rst
+++ b/Docs/source/install/hpc/spock.rst
@@ -104,3 +104,13 @@ Or in non-interactive runs:
:language: bash
We can currently use up to ``4`` nodes with ``4`` GPUs each (maximum: ``-N 4 -n 16``).
+
+
+.. _post-processing-spock:
+
+Post-Processing
+---------------
+
+For post-processing, most users use Python via OLCF's `Jupyter service <https://jupyter.olcf.ornl.gov>`__ (`Docs <https://docs.olcf.ornl.gov/services_and_applications/jupyter/index.html>`__).
+
+Please follow the same guidance as for :ref:`OLCF Summit post-processing <post-processing-summit>`.
diff --git a/Docs/source/install/hpc/summit.rst b/Docs/source/install/hpc/summit.rst
index 794ec6bb0..55f0c7697 100644
--- a/Docs/source/install/hpc/summit.rst
+++ b/Docs/source/install/hpc/summit.rst
@@ -289,3 +289,38 @@ Known System Issues
For instance, if you compile large software stacks with Spack, make sure to register ``libfabric`` with that exact version as an external module.
If you load the documented ADIOS2 module above, this problem does not affect you, since the correct ``libfabric`` version is chosen for this one.
+
+.. warning::
+
+   Oct 12th, 2021 (OLCFHELP-4242):
+   There is currently a problem with the pre-installed Jupyter extensions, which can lead to dropped connections during long-running analysis sessions.
+
+   Work around this issue by running the following in a single Jupyter cell, before starting your analysis:
+
+   .. code-block:: bash
+
+      !jupyter serverextension enable --py --sys-prefix dask_labextension
+
+
+.. _post-processing-summit:
+
+Post-Processing
+---------------
+
+For post-processing, most users use Python via OLCF's `Jupyter service <https://jupyter.olcf.ornl.gov>`__ (`Docs <https://docs.olcf.ornl.gov/services_and_applications/jupyter/index.html>`__).
+
+On Summit, we usually install our software on the fly.
+When starting a post-processing session, run the following in your first Jupyter cells:
+
+.. code-block:: bash
+
+   # work-around for OLCFHELP-4242
+   !jupyter serverextension enable --py --sys-prefix dask_labextension
+
+   # next Jupyter cell: install a faster & better conda package manager
+   !conda install -c conda-forge -y mamba
+
+   # next cell: the software you want
+   !mamba install -c conda-forge -y openpmd-api openpmd-viewer ipympl ipywidgets fast-histogram yt
+
+   # finally: restart the notebook kernel so that the newly installed packages are picked up
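+
+After the kernel restart, a quick sanity check (this only imports two of the freshly installed packages and prints their versions):
+
+.. code-block:: bash
+
+   !python -c "import openpmd_api, yt; print(openpmd_api.__version__, yt.__version__)"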