From 5f31c42a43f1396a636ff81cbfe2fabedf3f6e67 Mon Sep 17 00:00:00 2001
From: MaxThevenet
Date: Fri, 23 Aug 2019 14:42:31 -0700
Subject: Doc: batch script and description on Summit and Cori KNL

---
 Docs/source/running_cpp/parallelization.rst | 17 -----------------
 1 file changed, 17 deletions(-)

diff --git a/Docs/source/running_cpp/parallelization.rst b/Docs/source/running_cpp/parallelization.rst
index 440c17235..94ac90ca8 100644
--- a/Docs/source/running_cpp/parallelization.rst
+++ b/Docs/source/running_cpp/parallelization.rst
@@ -63,20 +63,3 @@ and MPI decomposition and computer architecture used for the run:
 
 Below is a list of experience-based parameters
 that were observed to give good performance on given supercomputers.
-
-Rule of thumb for 3D runs on NERSC Cori KNL
--------------------------------------------
-
-For a 3D simulation with a few (1-4) particles per cell using the FDTD Maxwell
-solver on Cori KNL for a well load-balanced problem (in our case, a laser
-wakefield acceleration simulation in a boosted frame in the quasi-linear
-regime), the following set of parameters provided good performance:
-
-* ``amr.max_grid_size=64`` and ``amr.blocking_factor=64`` so that the size of
-  each grid is fixed to ``64**3`` (we are not using load-balancing here).
-
-* **8 MPI ranks per KNL node**, with ``OMP_NUM_THREADS=8`` (that is 64 threads
-  per KNL node, i.e. 1 thread per physical core, and 4 cores left to the
-  system).
-
-* **2 grids per MPI rank**, *i.e.*, 16 grids per KNL node.
--
cgit v1.2.3


From 619f6e12c6dc782b7465465525dca80eded1cb72 Mon Sep 17 00:00:00 2001
From: MaxThevenet
Date: Mon, 26 Aug 2019 13:33:03 -0700
Subject: add script to compute domain size and #cells

---
 Docs/source/running_cpp/parallelization.rst |  7 +++++--
 Tools/compute_domain.py                     | 18 ++++++++++++++++++
 2 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/Docs/source/running_cpp/parallelization.rst b/Docs/source/running_cpp/parallelization.rst
index 94ac90ca8..a8c89f340 100644
--- a/Docs/source/running_cpp/parallelization.rst
+++ b/Docs/source/running_cpp/parallelization.rst
@@ -61,5 +61,8 @@ and MPI decomposition and computer architecture used for the run:
 
 * Amount of high-bandwidth memory.
 
-Below is a list of experience-based parameters
-that were observed to give good performance on given supercomputers.
+Because these parameters put additional constraints on the domain size for a
+simulation, it can be cumbersome to calculate the number of cells and the
+physical size of the computational domain for a given resolution. This
+:download:`Python script<../../../Tools/compute_domain.py>` does it
+automatically.

diff --git a/Tools/compute_domain.py b/Tools/compute_domain.py
index a1ba21979..822d776e8 100644
--- a/Tools/compute_domain.py
+++ b/Tools/compute_domain.py
@@ -3,6 +3,24 @@ import numpy as np
 import scipy.constants as scc
 import time, copy
 
+'''
+This Python script helps a user parallelize a WarpX simulation.
+
+The user specifies the minimal size of the physical domain and the resolution
+in each dimension, and the script computes:
+- the number of cells and the physical domain size that satisfy the
+  user-specified minimal domain size and resolution, while making sure that
+  the number of cells along each direction is a multiple of max_grid_size.
+- a starting point on how to parallelize on Cori KNL (number of nodes, etc.).
+
+When running in a boosted frame, the script also has the option to
+automatically compute the number of cells in z to satisfy dx > dz in the
+boosted frame.
+
+Note that the script has no notion of blocking_factor. It is assumed that
+blocking_factor = max_grid_size, and that all boxes have the same size.
+'''
+
 # Update the lines below for your simulation
 # ------------------------------------------
 # 2 elements for 2D, 3 elements for 3D
--
cgit v1.2.3
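
To illustrate the kind of computation this script automates, below is a
minimal, hypothetical sketch (the variable names and example numbers are made
up for illustration and are not taken from ``Tools/compute_domain.py``). It
rounds the number of cells in each direction up to the next multiple of
``max_grid_size``, so that the domain splits evenly into equal-size grids, and
enlarges the physical domain accordingly while keeping the resolution fixed::

    import numpy as np

    def round_up_to_multiple(n, multiple):
        # Smallest integer >= n that is a multiple of `multiple`.
        return int(np.ceil(n / multiple)) * multiple

    # Hypothetical user inputs: minimal number of cells in each direction
    # for the desired resolution, the cell size (m), and AMReX's
    # max_grid_size parameter.
    n_cell_min    = np.array([500, 500, 2000])
    cell_size     = np.array([1.e-6, 1.e-6, .5e-6])
    max_grid_size = 64

    # Round each direction up so the domain tiles exactly into grids of
    # max_grid_size cells (assuming blocking_factor = max_grid_size).
    n_cell = np.array([round_up_to_multiple(n, max_grid_size)
                       for n in n_cell_min])

    # The physical domain grows accordingly; the resolution is unchanged.
    domain_size = n_cell * cell_size

    print('number of cells:', n_cell)                   # [ 512  512 2048]
    print('grids per axis :', n_cell // max_grid_size)  # [ 8  8 32]
    print('domain size (m):', domain_size)

Rounding up rather than down guarantees that the user-specified minimal domain
is covered; only the domain size changes, never the cell size.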