author    ablelly <aurore.blelly@ensta-paristech.fr> 2019-08-30 16:40:38 +0200
committer ablelly <aurore.blelly@ensta-paristech.fr> 2019-08-30 16:40:38 +0200
commit    2f92b3877293bf51282becb6e8e55f06a8052207 (patch)
tree      514cbeeb5e69975ff1f79c83ca87a85e141b96a0 /Docs/source/running_cpp
parent    b1891e46af784e0423cbfda94a121e877c64b9e0 (diff)
parent    0d188ff20e4c13e291e8117295fcabcff6663df9 (diff)
Merge branch 'merged_overlap_pml' of https://github.com/ablelly/WarpX into merged_overlap_pml
Diffstat (limited to 'Docs/source/running_cpp')
-rw-r--r--  Docs/source/running_cpp/parallelization.rst | 24
-rw-r--r--  Docs/source/running_cpp/parameters.rst      | 26
-rw-r--r--  Docs/source/running_cpp/platforms.rst       | 69
-rw-r--r--  Docs/source/running_cpp/running_cpp.rst     |  1
4 files changed, 100 insertions, 20 deletions
diff --git a/Docs/source/running_cpp/parallelization.rst b/Docs/source/running_cpp/parallelization.rst
index 440c17235..a8c89f340 100644
--- a/Docs/source/running_cpp/parallelization.rst
+++ b/Docs/source/running_cpp/parallelization.rst
@@ -61,22 +61,8 @@ and MPI decomposition and computer architecture used for the run:
* Amount of high-bandwidth memory.
-Below is a list of experience-based parameters
-that were observed to give good performance on given supercomputers.
-
-Rule of thumb for 3D runs on NERSC Cori KNL
--------------------------------------------
-
-For a 3D simulation with a few (1-4) particles per cell using FDTD Maxwell
-solver on Cori KNL for a well load-balanced problem (in our case laser
-wakefield acceleration simulation in a boosted frame in the quasi-linear
-regime), the following set of parameters provided good performance:
-
-* ``amr.max_grid_size=64`` and ``amr.blocking_factor=64`` so that the size of
- each grid is fixed to ``64**3`` (we are not using load-balancing here).
-
-* **8 MPI ranks per KNL node**, with ``OMP_NUM_THREADS=8`` (that is 64 threads
- per KNL node, i.e. 1 thread per physical core, and 4 cores left to the
- system).
-
-* **2 grids per MPI**, *i.e.*, 16 grids per KNL node.
+Because these parameters put additional constraints on the domain size for a
+simulation, it can be cumbersome to calculate the number of cells and the
+physical size of the computational domain for a given resolution. This
+:download:`Python script<../../../Tools/compute_domain.py>` does it
+automatically.
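
As a rough sketch of the kind of calculation involved (this is not the content of
``compute_domain.py``; the helper, the numbers, and the assumption that the cell
count along each axis must be a multiple of ``amr.blocking_factor`` are
illustrative only), one can round the requested cell count up to the nearest
allowed value and adjust the box length so the cell size stays fixed::

    import math

    def adjust_axis(length, dx, blocking_factor=64):
        """Illustrative only: round the cell count along one axis up to a
        multiple of the blocking factor, keeping the cell size dx fixed."""
        n_cell = int(round(length / dx))
        n_cell = blocking_factor * math.ceil(n_cell / blocking_factor)
        return n_cell, n_cell * dx   # adjusted cell count and box length

    # Hypothetical 3D box resolved with dx = 0.5 micron along each axis
    for length in (100.e-6, 100.e-6, 200.e-6):
        n_cell, new_length = adjust_axis(length, 0.5e-6)
        print(n_cell, new_length)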
diff --git a/Docs/source/running_cpp/parameters.rst b/Docs/source/running_cpp/parameters.rst
index 4e7ae6678..1c4e477d1 100644
--- a/Docs/source/running_cpp/parameters.rst
+++ b/Docs/source/running_cpp/parameters.rst
@@ -320,6 +320,7 @@ Particle initialization
* ``<species>.plot_vars`` (list of `strings` separated by spaces, optional)
List of particle quantities to write to `plotfiles`. By default, all
quantities are written to file. Choices are
+
* ``w`` for the particle weight,
* ``ux`` ``uy`` ``uz`` for the particle momentum,
* ``Ex`` ``Ey`` ``Ez`` for the electric field on particles,
@@ -336,6 +337,23 @@ Particle initialization
* ``warpx.serialize_ics`` (`0 or 1`)
Whether or not to use OpenMP threading for particle initialization.
+* ``<species>.do_field_ionization`` (`0` or `1`) optional (default `0`)
+ Do field ionization for this species (using the ADK theory).
+
+* ``<species>.physical_element`` (`string`)
+ Only read if `do_field_ionization = 1`. Symbol of chemical element for
+ this species. Example: for Helium, use ``physical_element = He``.
+
+* ``<species>.ionization_product_species`` (`string`)
+ Only read if `do_field_ionization = 1`. Name of species in which ionized
+ electrons are stored. This species must be created as a regular species
+ in the input file (in particular, it must be in `particles.species_names`).
+
+* ``<species>.ionization_initial_level`` (`int`) optional (default `0`)
+ Only read if `do_field_ionization = 1`. Initial ionization level of the
+ species (must be smaller than the atomic number of chemical element given
+ in `physical_element`).
+
Laser initialization
--------------------
@@ -696,7 +714,13 @@ Diagnostics and output
`openPMD <https://github.com/openPMD>`__ format.
When WarpX is compiled with openPMD support, this is ``1`` by default.
-* ``warpx.do_boosted_frame_diagnostic`` (`0 or 1`)
+* ``warpx.openpmd_backend`` (``h5``, ``bp`` or ``json``) optional
+ I/O backend for
+ `openPMD <https://github.com/openPMD>`__ dumps.
+ When WarpX is compiled with openPMD support, this is ``h5`` by default.
+ ``json`` only works with serial/single-rank jobs.
+
+* ``warpx.do_boosted_frame_diagnostic`` (`0` or `1`)
Whether to use the **back-transformed diagnostics** (i.e. diagnostics that
perform on-the-fly conversion to the laboratory frame, when running
boosted-frame simulations)
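
For illustration, the new field-ionization parameters documented above could be
combined in an input deck roughly as follows. The species names ``nitrogen`` and
``electrons`` and the ``plot_vars`` selection are hypothetical; only the
parameter names come from the documentation in this diff::

    # Hypothetical fragment: an ionizable nitrogen species whose ionized
    # electrons are stored in a regular species named "electrons".
    particles.species_names = nitrogen electrons

    nitrogen.do_field_ionization = 1
    nitrogen.physical_element = N
    nitrogen.ionization_initial_level = 0
    nitrogen.ionization_product_species = electrons

    # Optional: restrict which particle quantities are written to plotfiles
    electrons.plot_vars = w ux uy uz Ez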
diff --git a/Docs/source/running_cpp/platforms.rst b/Docs/source/running_cpp/platforms.rst
new file mode 100644
index 000000000..fc4e2b1fb
--- /dev/null
+++ b/Docs/source/running_cpp/platforms.rst
@@ -0,0 +1,69 @@
+Running on specific platforms
+=============================
+
+Running on Cori KNL at NERSC
+----------------------------
+
+The batch script below can be used to run a WarpX simulation on 2 KNL nodes on
+the supercomputer Cori at NERSC. Replace descriptions between chevrons ``<>``
+by relevant values, for instance ``<job name>`` could be ``laserWakefield``.
+
+.. literalinclude:: ../../../Examples/batchScripts/batch_cori.sh
+ :language: bash
+
+To run a simulation, copy the lines above to a file ``batch_cori.sh`` and
+run
+::
+
+ sbatch batch_cori.sh
+
+to submit the job.
+
+For a 3D simulation with a few (1-4) particles per cell using the FDTD Maxwell
+solver on Cori KNL for a well load-balanced problem (in our case, a laser
+wakefield acceleration simulation in a boosted frame in the quasi-linear
+regime), the following set of parameters provided good performance:
+
+* ``amr.max_grid_size=64`` and ``amr.blocking_factor=64`` so that the size of
+ each grid is fixed to ``64**3`` (we are not using load-balancing here).
+
+* **8 MPI ranks per KNL node**, with ``OMP_NUM_THREADS=8`` (that is 64 threads
+ per KNL node, i.e. 1 thread per physical core, and 4 cores left to the
+ system).
+
+* **2 grids per MPI**, *i.e.*, 16 grids per KNL node.
+
+Running on Summit at OLCF
+-------------------------
+
+The batch script below can be used to run a WarpX simulation on 2 nodes on
+the supercomputer Summit at OLCF. Replace descriptions between chevrons ``<>``
+by relevant values, for instance ``<input file>`` could be
+``plasma_mirror_inputs``. Note that the only option so far is to run with one
+MPI rank per GPU.
+
+.. literalinclude:: ../../../Examples/batchScripts/batch_summit.sh
+ :language: bash
+
+To run a simulation, copy the lines above to a file ``batch_summit.sh`` and
+run
+::
+
+ bsub batch_summit.sh
+
+to submit the job.
+
+For a 3D simulation with a few (1-4) particles per cell using the FDTD Maxwell
+solver on Summit for a well load-balanced problem (in our case, a laser
+wakefield acceleration simulation in a boosted frame in the quasi-linear
+regime), the following set of parameters provided good performance:
+
+* ``amr.max_grid_size=256`` and ``amr.blocking_factor=128``.
+
+* **One MPI rank per GPU** (e.g., 6 MPI ranks for the 6 GPUs on each Summit
+ node)
+
+* **Two `128x128x128` grids per GPU**, or **one `128x128x256` grid per GPU**.
+
+A batch script with more options regarding profiling on Summit can be found at
+:download:`Summit batch script<../../../Examples/Tests/gpu_test/script_profiling.sh>`
\ No newline at end of file
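
To make the grid bookkeeping in the rules of thumb above concrete, the short
Python snippet below counts the ``64**3`` grids of the Cori KNL setup for a
hypothetical domain and converts them to nodes at 16 grids per node; it is an
illustration, not a sizing tool::

    # Illustration of the Cori KNL rule of thumb: grids of 64**3 cells,
    # 8 MPI ranks per node and 2 grids per rank, i.e. 16 grids per node.
    n_cell = (512, 512, 1024)   # hypothetical number of cells per axis
    grid_size = 64              # amr.max_grid_size = amr.blocking_factor = 64
    grids_per_node = 8 * 2

    n_grids = 1
    for n in n_cell:
        assert n % grid_size == 0, "each axis should be a multiple of 64"
        n_grids *= n // grid_size

    print(n_grids, "grids ->", n_grids // grids_per_node, "KNL nodes")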
diff --git a/Docs/source/running_cpp/running_cpp.rst b/Docs/source/running_cpp/running_cpp.rst
index 7d82e55f1..31cecb12f 100644
--- a/Docs/source/running_cpp/running_cpp.rst
+++ b/Docs/source/running_cpp/running_cpp.rst
@@ -9,3 +9,4 @@ Running WarpX as an executable
parameters
profiling
parallelization
+ platforms
\ No newline at end of file