author    | 2020-07-16 09:28:58 -0700
----------|---------------------------
committer | 2020-07-16 18:28:58 +0200
commit    | 02d59e100674803542a3f99b38d4d25d5b34de9a (patch)
tree      | af3a897ba12fc877d502f8d2aa4e2ab493ff1021 /Source/Particles/WarpXParticleContainer.cpp
parent    | 6afd46fff1f71fb2f0f348d27bfca85fca9420fe (diff)
Default: abort_on_out_of_gpu_memory = 1 (#1164)
* Default: abort_on_out_of_gpu_memory = 1
Change the default of the AMReX input parameter
`amrex.abort_on_out_of_gpu_memory` from false (`0`) to true (`1`).
We set this by default to prevent users from unknowingly running super-slow
GPU simulations when they exceed GPU memory. Users who want the previous
behavior now have to set this option explicitly.
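For context, below is a minimal sketch of how such an application-side default can be injected via AMReX's `ParmParse`, assuming it runs before AMReX consumes the parameter (e.g. from the parameter-override callback passed to `amrex::Initialize`); the function name is illustrative and not part of the actual patch.
```cpp
// Sketch only (illustrative, not the actual WarpX change): override the
// AMReX runtime default for amrex.abort_on_out_of_gpu_memory in code.
#include <AMReX_ParmParse.H>

void OverrideDefaultAbortOnOutOfGpuMemory ()
{
    amrex::ParmParse pp_amrex("amrex");

    // Respect an explicit user choice from the inputs file or command line;
    // otherwise inject the new default (1 = abort on GPU memory exhaustion
    // instead of silently falling back to slow managed-memory paging).
    if (!pp_amrex.contains("abort_on_out_of_gpu_memory")) {
        pp_amrex.add("abort_on_out_of_gpu_memory", 1);
    }
}
```
A user who prefers the old oversubscription behavior can still opt back in by setting `amrex.abort_on_out_of_gpu_memory = 0` in the inputs file or on the command line.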
In my opinion, this is only an intermediate solution, since what we
actually want on an out-of-GPU-memory event is to either:
- finish the current simulation step and trigger a load balance, or
- write a checkpoint and shut down cleanly,
- so that the user can then manually restart with more resources.
We want to address the opposite case, a user under-utilizing the GPU,
with a warning for now.
Ref.:
- https://amrex-codes.github.io/amrex/docs_html/GPU.html#inputs-parameters
* abort_on_out_of_gpu_memory: review
Address review comments.
Co-authored-by: L. Diana Amorim <LDianaAmorim@lbl.gov>
Diffstat (limited to 'Source/Particles/WarpXParticleContainer.cpp')
0 files changed, 0 insertions, 0 deletions