Age | Commit message | Author | Files | Lines
- keep openPMD explicitly in the list as long as GNUmake
is used for `run_tests.sh` CI
- add more info on BLAS++/LAPACK++ with Brew (manual)
* [skip-ci] Update code for electrostatic boundaries
* [skip-ci] Add new solver in the Cartesian part of the code
* Update electrostatic solver init
* [ci_skip] Update call to electrostatic solver
* Fix bug in electrostatic solver
* Remove blank line
* Update Source/FieldSolver/ElectrostaticSolver.cpp
Co-authored-by: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com>
* Adapt code for latest amrex input
* Update test example
* Update automated test
* Add documentation and test
* Add benchmark
* Update test benchmark
* Set boundary potential values
Co-authored-by: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com>
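The runtime-set boundary potentials mentioned above can be time dependent through a mathematical expression in t. A minimal Python sketch of that idea, assuming a restricted expression evaluator (the function name and parser here are illustrative stand-ins, not the actual WarpX input parser):

```python
import math

# Hedged sketch: compile a boundary-potential expression in 't' into a
# callable. Only 't' and a few math symbols are exposed; the real code
# uses the input-file expression parser, not Python eval.
def make_potential(expression):
    symbols = {"sin": math.sin, "cos": math.cos, "pi": math.pi}
    def phi(t):
        return eval(expression, {"__builtins__": {}}, dict(symbols, t=t))
    return phi

# Example: a sinusoidally driven Dirichlet boundary potential.
phi = make_potential("150.0 * sin(2 * pi * 1.0e6 * t)")
```

Each Dirichlet boundary can then be assigned its own callable and re-evaluated at every step with the current simulation time.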
* Infrastructure for interacting particles with embedded boundary walls
* remove debug prints
* protect with AMREX_USE_EB
* fix for 2D XZ
* also update level set when regridding
* rename level set to 'DistanceToEB'
* add docstring for scrape particles.
* add assertion on maxLevel() since EB does not work with mesh refinement right now.
* m_eb_if_parser no longer exists
* add test for particle absorption at embedded boundaries
* fix bug I introduced while refactoring
* add new test to suite
* fix test names
* fix 2D
* rookie python error
* fix filename in test
* fix script
* fix unused
* make sure we turn EB on in test
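The scraping described above can be sketched in a few lines, with the assumed convention that the 'DistanceToEB' level set is negative inside the wall (all names here are illustrative, not the WarpX API):

```python
# Hedged sketch of scraping particles against an embedded boundary (EB):
# particles whose signed distance to the EB is negative are absorbed.
def scrape_particles(particles, distance_to_eb):
    """Keep only particles outside (or on) the embedded boundary."""
    return [p for p in particles if distance_to_eb(p) >= 0.0]

# Example: a cylindrical wall of radius 1 around the z-axis.
def wall_distance(p):
    x, y, _z = p
    return 1.0 - (x * x + y * y) ** 0.5

kept = scrape_particles([(0.5, 0.0, 0.0), (2.0, 0.0, 0.0)], wall_distance)
```

In the actual code the distance field is interpolated from the mesh to each particle position rather than evaluated analytically.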
* Replace injection_style = python with injection_style = none
* Update Docs
* Add Default injection_style = none
Compile with `-Werror` to avoid new warnings slipping into the code base.
* Add New Spectral Index Class
* Cleaning
* Use New Spectral Index Class in PML
* Cleaning
* Reuse Available Data for divE
* Allocate Rho Data Only when Necessary
* Cleaning
* Fix Bug in RZ Geometry
* Revert Commits for Allocation of Rho Data
* Cleaning
* Update Forward Declaration Header
* Do Not Include Unnecessary Header Files
* Doxygen
* Do Not Use Separate div() Cleaning Flags
* SpectralFieldIndex: Add Missing param to Doxygen
* Remove Unused getRequiredNumberOfFields
Allows restarting workflows at a finer granularity if network issues occur.
* add functions for getting External Fields
* Revert "Fix a bug in Update Monte Carlo Collisions (#2085) (#2086)"
This reverts commit e94122ce9a9d61c7d22ea25b593cbf04d0f5bf8b.
* Update Ascent path on summit
* revert
* remove the part of external field read development for future commit
* Fix unrelated code
* accidentally removed, add back
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Warn if a build type passed by the user is not one known to the project build.
```
CMake Warning at cmake/WarpXFunctions.cmake:76 (message):
CMAKE_BUILD_TYPE 'adsads' is not one of
Release;Debug;MinSizeRel;RelWithDebInfo. Is this a typo?
Call Stack (most recent call first):
CMakeLists.txt:94 (set_default_build_type)
```
Summary:
```
Build type: adsads (unknown type, check warning)
```
This helps if one accidentally writes `debug` instead of `Debug`,
for instance. The result of an undefined custom type is that no
compiler flags are set at all...
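The check behind the warning quoted above can be sketched in a few lines. This is a hedged Python stand-in for illustration; the real implementation lives in cmake/WarpXFunctions.cmake:

```python
import warnings

KNOWN_BUILD_TYPES = ("Release", "Debug", "MinSizeRel", "RelWithDebInfo")

def check_build_type(build_type):
    """Return True if build_type is known; otherwise warn. This is what
    catches a lowercase 'debug' typo before it silently drops all flags."""
    if build_type in KNOWN_BUILD_TYPES:
        return True
    warnings.warn(f"CMAKE_BUILD_TYPE '{build_type}' is not one of "
                  f"{';'.join(KNOWN_BUILD_TYPES)}. Is this a typo?")
    return False
```

Because CMake compares build-type strings case-sensitively when choosing flags, an exact-match check like this is the safest way to surface typos early.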
`IntelLLVM` is now a recognized CMake compiler, so we don't need to
add a Clang-ish identification anymore.
Forgot to reset the pointer in MCCProcess::Executor to the device pointer.
The incorrect code could still run because we use pinned memory for the
host version, but it would have performance issues.
* amrex::Parser
Replace WarpXParser with amrex::Parser. Roundoff errors are expected because
of additional optimization in amrex::Parser.
* Reset the Langmuir_multi_psatd_single_precision benchmark due to change in single precision parser
* enable Intel oneAPI CI again
* Update Source/EmbeddedBoundary/WarpXInitEB.cpp
* Replace hard-coded number in ParticleDiag with a constexpr
* Update Monte Carlo Collisions
This addresses a number of issues in the Monte Carlo collision code.
* `MCCProcess` is not trivially copyable because it contains
`ManagedVector`. Therefore, it cannot be captured by a GPU device lambda.
* The use of managed memory may have performance issues.
* There are memory leaks because some raw pointers allocated by `new` are
never `delete`d.
* `BackgroundMCCCollision` derives from a virtual base class, but the
compiler-generated destructor is not automatically virtual.
* Apply suggestions from code review
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* Apply the suggestion from @PhilMiller to get rid of unique_ptr
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* RZ FDTD: Add ASSERT for Lower Bound of Radial Coordinate
* RZ PSATD: Add ASSERT for Lower Bound of Radial Coordinate
* Added file_min_digits input parameter
* Set default m_file_min_digits = 5
* Implemented file_min_digits in OpenPMD
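The file_min_digits idea above is simple zero-padding of the iteration number in diagnostics file names (default of 5 digits) so that files sort in time order. A hedged sketch with illustrative names:

```python
# Hedged sketch: pad the iteration number to a minimum digit count.
# The function and prefix names are illustrative, not the WarpX API.
def diag_file_name(prefix, step, file_min_digits=5):
    """'diag', 42 -> 'diag00042'; padding is a minimum, so larger
    step numbers keep all of their digits."""
    return f"{prefix}{step:0{file_min_digits}d}"
```

Without the padding, lexicographic listings interleave steps (e.g. 'diag10' before 'diag2'), which is what the parameter fixes.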
and face multifabs. (#2079)
* Fix: CUDA -O3 with CMake
`-O3` was only set for CUDA if we ran CMake at least twice. This was
an initialization logic bug: the flag should be set from the beginning,
but we initialized CUDA too late / applied the transform too early.
We now change the default build type to `Release` to match common
defaults and avoid further surprises.
* Docs: Update Default Build Type (Release)
* Update copyright notices
* allow specification of boundary potentials at runtime when using Dirichlet boundary conditions in the electrostatic solver (labframe)
* added parsing to boundary potentials specified at runtime to allow time dependence through a mathematical expression with t (time)
* updated to picmistandard 0.0.14 in order to set the electrostatic solver convergence threshold
* update docs
* various changes requested during PR review
* fixed issue causing old tests to break and added a new test for time varying boundary potentials
* possibly a fix for the failed time varying boundary condition test
* changed permission on the analysis file for the time varying BCs test
* switched to using yt for data analysis since h5py is not available
* made changes compatible with PR#1730; changed potential boundary setting routine to use the ParallelFor construct and set all boundaries in a single call
* fixed typo in computePhiRZ
* updated docs and fixed other minor typos
* fixed bug in returning from loop over dimensions when setting boundary potentials rather than continuing
* changed to setting potentials on domain boundaries rather than tilebox boundaries and changed picmi.py to accept boundary potentials
* now using domain.surroundingNodes() to get the proper boundary cells for the physical domain
* fixed typo in variable name specifying z-boundary potential
* Initial commit of MCC development. Collision type background_mcc handles collisions with a neutral background species
* added back scattering and started expanding the multiple scattering processes functionality some
* added charge exchange collision handling
* added CrossSectionHandler class to install collision process cross-section calculators
* added file reading for cross-section data
* added input parameter for energy lost during inelastic collisions and changed how secondary species are passed for ionization events
* added ionization - requires work to add to the amrex::ParallelForRNG loop
* switched the MCC ionization handling to use the same workflow as other particle creation processes i.e. using the FilterCopyTransform functionality
* updated the docs with the input parameters needed to include MCC in a run
* added test for MCC and a function to ensure that cross-section data is provided with equal energy steps
* fixed the build failure when USE_OMP=TRUE and some of the naming issues in Examples/Physics_applications/capacitive_discharge, but I am not sure what to do about the other files in that directory
* Improve file name construction to be strictly C++ compliant
* WIP GPU Support
* Fix QED Build (CUDA 10.0)
Replace capture of a host-side array with unnamed members for E & B
field transport with a nicely named struct that transports the
Array4's as members.
This is harder to mix up and thus more self-documenting and solves an
issue with NVCC 10.0 of the form:
```
nvcc_internal_extended_lambda_implementation: In instantiation of '__nv_dl_wrapper_t<Tag, F1, F2, F3, F4, F5>::__nv_dl_wrapper_t(F1, F2, F3, F4, F5) [with Tag = __nv_dl_tag<int (*)(amrex::ParticleTile<0, 0, 4, 0, amrex::ArenaAllocator>&, amrex::ParticleTile<0, 0, 4, 0, amrex::ArenaAllocator>&, amrex::Box, const amrex::Array4<const double> (&)[6], int, int, const SchwingerFilterFunc&, const SmartCreate&, const SmartCreate&, const SchwingerTransformFunc&), filterCreateTransformFromFAB<1, amrex::ParticleTile<0, 0, 4, 0, amrex::ArenaAllocator>, amrex::Array4<const double> [6], int, const SchwingerFilterFunc&, const SmartCreate&, const SmartCreate&, const SchwingerTransformFunc&>, 1>; F1 = amrex::Array4<double>; F2 = const SchwingerFilterFunc; F3 = const amrex::Array4<const double> [6]; F4 = const amrex::Box; F5 = int*]':
/home/ubuntu/repos/WarpX/Source/Particles/ParticleCreation/FilterCreateTransformFromFAB.H:174:28: required from 'Index filterCreateTransformFromFAB(DstTile&, DstTile&, amrex::Box, const FABs&, Index, Index, FilterFunc&&, CreateFunc1&&, CreateFunc2&&, TransFunc&&) [with int N = 1; DstTile = amrex::ParticleTile<0, 0, 4, 0, amrex::ArenaAllocator>; FABs = amrex::Array4<const double> [6]; Index = int; FilterFunc = const SchwingerFilterFunc&; CreateFunc1 = const SmartCreate&; CreateFunc2 = const SmartCreate&; TransFunc = const SchwingerTransformFunc&]'
/home/ubuntu/repos/WarpX/Source/Particles/MultiParticleContainer.cpp:1169:167: required from here
nvcc_internal_extended_lambda_implementation:70:103: error: invalid initializer for array member 'const amrex::Array4<const double> __nv_dl_wrapper_t<__nv_dl_tag<int (*)(amrex::ParticleTile<0, 0, 4, 0, amrex::ArenaAllocator>&, amrex::ParticleTile<0, 0, 4, 0, amrex::ArenaAllocator>&, amrex::Box, const amrex::Array4<const double> (&)[6], int, int, const SchwingerFilterFunc&, const SmartCreate&, const SmartCreate&, const SchwingerTransformFunc&), filterCreateTransformFromFAB<1, amrex::ParticleTile<0, 0, 4, 0, amrex::ArenaAllocator>, amrex::Array4<const double> [6], int, const SchwingerFilterFunc&, const SmartCreate&, const SmartCreate&, const SchwingerTransformFunc&>, 1>, amrex::Array4<double>, const SchwingerFilterFunc, const amrex::Array4<const double> [6], const amrex::Box, int*>::f3 [6]'
```
* CUDA: Quiet numerous warnings about unused-variable-warning suppressions being unreachable statements
* Compiles on GPU; may even run as intended
* Delete overwrought attempt at polymorphic implementation
* Fix compilation error from nvcc being stupid
* fixed improper input file for MCC test and updated reference results; a statistical test of the MCC routine would be better, so that reference results need not change with changes in the RNG
* Runs on CPU and GPU now
* Clean up GPU-related memory/allocation management and function usage
* Try inlining MCCProcess::getCrossSection to appease HIP and SYCL compilers
* Fix up style/formatting issues
* Typedef to make stuff cleaner and simpler
* MCC: Make helper functions static
* MCC: Pull parsing out to a helper
* MCC: Name member variables according to convention
* MCC: Pull out part of constructor
* MCC: Add constructor that will take any iterable source for energies/cross sections
* MCC: Overload operator new/delete to allocate in managed memory, to make later use more straightforward
* MCC: Add process type for ionization
* MCC: Expose a method for adding processes programmatically
* MCC: Follow convention of all types being 'class', which keeps grep easy
* MCC: Fix a formatting silliness
* added a check that the collision cross-section is zero at the energy penalty for the collision, to ensure that no collision can happen for a particle with insufficient energy to pay the energy cost
* updated MCC input files to new standard inputs
* reverted incorrect changes that were messed up during various upstream and branch merges
* moved the MCC benchmark results to the Examples section in the documentation, which allows us to meet the style requirements - the tests are ongoing and the results will be provided in a following commit
* Add GPU synchronization after collisions
* added benchmark results and updated test results with the refined cross-sections needed to accurately calculate the benchmark cases
* removed example input files for benchmarks since the style requirement prohibits input files not included in an automated test; also updated the reference results for the MCC test which changed slightly after merging upstream development and updated amrex
* CLean up indentation and bit of commented out code
* Inline addProcess method and refactor
* Apply suggestions from code review
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* switched MCC files copyright to Modern Electron
* Remove sync calls, which are unnecessary on modern supported hardware
* removed He collision cross-sections and instead use the new warpx-data repository to access those files; also added a call in run_test.sh to clone the new repository during tests
* Apply suggestions from code review
Co-authored-by: David Grote <dpgrote@lbl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* cleaned up the MCC documentation a bit
* added include statements now needed by the MCC (after recent PR merges) and updated the MCC test reference values which changed slightly due to changing the value of Boltzmann's constant
* added plot results for 3rd benchmark case from literature and changed documentation to reference uploaded image rather than local image in repo
* updated MCC test file to match earlier execution which changed due to the new warpx use_filter default value being 1
* Apply suggestions from code review
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
* Apply suggestions from code review
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* added warpx-data repository clone command to docs
* fix breaking change from earlier commit
Co-authored-by: Peter Scherpelz <peter.scherpelz@modernelectron.com>
Co-authored-by: Phil Miller <phil@intensecomputing.com>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Phil Miller <phil.miller@intensecomputing.com>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-31-245.us-east-2.compute.internal>
Co-authored-by: David Grote <dpgrote@lbl.gov>
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
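Two of the MCC table requirements above — equally spaced cross-section energy points, and a vanishing cross section at the collision's energy cost — can be sketched as a validation step. All names are hypothetical; this is not the WarpX reader:

```python
# Hedged sketch of MCC cross-section table validation. A uniform energy
# grid lets lookups use simple index arithmetic instead of a search, and
# a zero cross section at the energy cost guarantees no collision can
# occur below threshold.
def validate_cross_section(energies, sigmas, energy_cost=0.0, rtol=1e-9):
    dE = energies[1] - energies[0]
    for i in range(1, len(energies)):
        if abs((energies[i] - energies[i - 1]) - dE) > rtol * dE:
            raise ValueError("energy grid is not equally spaced")
    if energy_cost > 0.0:
        # index arithmetic lookup at the threshold energy
        idx = round((energy_cost - energies[0]) / dE)
        if 0 <= idx < len(sigmas) and sigmas[idx] != 0.0:
            raise ValueError("cross section must vanish at the energy cost")
    return dE
```

Returning the step dE lets the collision kernel later map an energy to a table index with a single divide, which is GPU-friendly.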
* interface silvermueller with refactored boundary interface
* add interface in silver mueller input files
* define first and second half for EvolveB
* add do_pml parsing since RZ needs do_pml to be set to false
* Silver-Mueller boundary condition in docs
* add firsthalf in ApplyBfieldBoundary within PushPSATD, as only the first half is used to apply silvermueller
* CallSilverMueller once for all boundaries
* remove unused do silvermueller flag
* fix typo in input file
* Apply suggestions from code review
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
* abort message if silver-mueller is not selected on all valid boundaries
* fix typo
* fix eol
* remove ifdef from inside the Assert message
* check silver-mueller selection after reading all boundaries
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
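The "all boundaries or none" check added above can be sketched as follows. The condition string and data layout are illustrative, not the actual WarpX parameter handling:

```python
# Hedged sketch: if any field boundary selects the Silver-Mueller
# absorbing condition, all of them must; mixing it with other boundary
# conditions aborts, mirroring the assert described above.
def check_silver_mueller(boundaries):
    conds = list(boundaries.values())
    n_sm = sum(c == "absorbing_silver_mueller" for c in conds)
    if 0 < n_sm < len(conds):
        raise RuntimeError(
            "silver-mueller must be selected on all valid boundaries")
    return n_sm == len(conds)
```

Running the check only after all boundaries are read (as the last commit above does) avoids false aborts while the input is still being parsed.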
* option to randomize azimuthal coordinate of plasma particles
* typo
* do not capture this
* add documentation
* default for random azimuth is true
* Update Source/Particles/WarpXParticleContainer.H
* changes from code review
* minor doc, and avoid calling the RNG in Cartesian geometry
* minor doc
* reset non-psatd benchmarks
* deactivate random azimuth for RZ PSATD CI
* Reset benchmark
* Reset Benchmark of Python_Langmuir_rz_multimode
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
Co-authored-by: Edoardo Zoni <ezoni@lbl.gov>
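Randomizing the azimuthal coordinate amounts to drawing a uniform angle for each particle while preserving its radius. A hedged sketch (illustrative names, not the WarpX particle container API):

```python
import math
import random

# Hedged sketch: place a particle of radius r at a uniformly random
# azimuthal angle, returning Cartesian (x, y, z). The radius (and hence
# the RZ charge distribution) is preserved.
def randomize_azimuth(r, z, rng):
    theta = rng.uniform(0.0, 2.0 * math.pi)
    return (r * math.cos(theta), r * math.sin(theta), z)

rng = random.Random(42)  # seeded for reproducibility
x, y, z = randomize_azimuth(2.0, 0.5, rng)
```

As the commits above note, the RNG call is skipped entirely in Cartesian geometry, and the feature is deactivated in the RZ PSATD CI tests so benchmarks stay deterministic.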
* Add Cost Calculations to ApplyFilter
* Add Cost Calculations to ApplyStencil
* Update Doxygen
* For RZ, changed the sign of the density corrections near the axis
* Further fixes for deposition correction near axis
* Yet one more sign fix for charge density
* For RZ spectral solver, filled in the guard cells below the radial axis
* Fix white space at end of line
* In RZ spectral backtransform, ensure box is valid
* For RZ inverse volume scaling, fixed use of nGrow to use nGrowVect
* Temporary fix adding damped cells in the domain interior
* Bug fix for RZ PSATD scalar backward transform
* Fixes for damping of the fields in the z-guards
* Bug fix in DampFieldsInGuards
* Bug fix in DampFieldsInGuards (for tiling)
* Added warpx_amr_check_input input parameter
* Removed unneeded damp and zero_in_domain input
* Removed damping related code from picmi
* Improved some comments in code copying field to the radial guard cells
* Update Source/FieldSolver/SpectralSolver/SpectralFieldDataRZ.cpp
Simplify the expression for the sign
Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com>
* Updated benchmarks
* Updated tolerance for Langmuir analysis script
* Updated CI test galilean_rz_psatd_current_correction
Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com>
Update AMReX to AMReX-Codes/amrex@cd1f5430be29aed0a85348669394d7fbb8355dba
No changes in PICSAR since last update.
```
./Tools/Release/updatePICSAR.py
./Tools/Release/updateAMReX.py
```
* Added explanatory comments in timestepper.py
* Add comments to WarpInterface.py
* Added comments to WarpXPIC.py
* Added functionality to pass mpi comm from python script to amrex during initialization
* Fixed missing _ in MPI._sizeof()
* Added functions to get the current processor's rank and total number of processors
* Renamed MPI_Comm to _MPI_Comm_type and defined _MPI_Comm_type in except statement
* Updated comment to explain why mpi4py needs to be imported before loading libwarpx
* Removed ifdef flags that prevent amrex_init_with_inited_mpi from being declared when MPI is off
* Changed amrex_init_with_inited_mpi to be declared even when not using mpi, but will be defined to be functionally the same as amrex_init
* Defined MPI = None to signify whether MPI is used, to add another check when initializing amrex
* Changed ifdef blocks in WarpXWrappers.cpp/h to fix compile errors.
Added ifdef block to conditionally declare amrex_init_with_inited_mpi in WarpXWrappers.h to prevent compile error when not using MPI. Removed ifdef block to declare/define same function in WarpXWrappers.cpp since function needs to be declared even when MPI is not used, but will never be called in that case.
* Changed BL_USE_MPI to AMREX_USE_MPI and removed incorrect MPI=None statement
* Changed BL_USE_MPI to AMREX_USE_MPI
* Added test to verify correct passing of MPI communicator to amrex
* Added ability to pass mpi_comm to sim.step
* Change test to check for differing outputs when passed different inputs
* Removed obsolete comments; refactored program to use more shared code
* Refactored comments
* Updated description to match test
* Removed unnecessary imports and updated comments
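The `_MPI_Comm_type` work above hinges on an ABI detail: an MPI_Comm handle is an integer on MPICH-family implementations but an opaque pointer on Open MPI, and mpi4py's `MPI._sizeof(MPI.Comm)` reveals which. A hedged sketch of choosing the matching ctypes type (the helper name is hypothetical, not the pywarpx code):

```python
import ctypes

# Hedged sketch: pick the ctypes type matching sizeof(MPI_Comm).
# In pywarpx the size would come from mpi4py's MPI._sizeof(MPI.Comm);
# here it is just an argument so the logic is testable without MPI.
def mpi_comm_ctype(comm_sizeof):
    if comm_sizeof == ctypes.sizeof(ctypes.c_int):
        return ctypes.c_int      # integer handle (MPICH family)
    if comm_sizeof == ctypes.sizeof(ctypes.c_void_p):
        return ctypes.c_void_p   # opaque pointer handle (Open MPI)
    raise ValueError(f"unexpected MPI_Comm size: {comm_sizeof}")
```

This is also why mpi4py must be imported before libwarpx is loaded: the communicator type has to be fixed before the wrapper declares `amrex_init_with_inited_mpi`'s argument.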
* Fix Warnings for RZ Builds
* Add Cost Calculation to RZ Spectral Solver
This adds support for FFTW search with OpenMP support if FFTW was
installed with CMake. This is, for instance, the case with EasyBuild
based installs on Juelich's JUWELS Booster cluster.
Fixes:
- the FFTW install does NOT define CMake targets for their sub-
targets, so we need to manually find and link the `_omp` lib
- the library dir hint with CMake was empty because of a differing
spelling (upper vs lowercase of FFTW from pkg-config check module)
Tested:
- installed with pkg-config, SP and DP, picked up with `*.pc`
- installed with CMake, SP and DP, picked up with `*config.cmake`
- installed with CMake, SP and DP, picked up with `*.pc`
BinaryCollision class (#2057)
* Refactor collisions: replace PairWiseCoulombCollision class with BinaryCollision class
* Make BinaryCollision a templated class with the functor type as parameter
* Update README.md
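The refactor's design — a generic binary-collision driver with the physics supplied as a functor type parameter — can be illustrated with a duck-typed Python analogue of the C++ template (toy names, not the WarpX classes):

```python
# Hedged sketch: pairing/bookkeeping lives in a generic driver, while
# the collision physics is delegated to a functor applied to each pair
# (the Python analogue of BinaryCollision templated on the functor type).
class BinaryCollision:
    def __init__(self, functor):
        self.functor = functor

    def apply(self, particles):
        # pair up consecutive particles and apply the physics functor
        for a, b in zip(particles[0::2], particles[1::2]):
            self.functor(a, b)

# Toy functor that just records which pairs were formed.
pairs = []
BinaryCollision(lambda a, b: pairs.append((a, b))).apply([1, 2, 3, 4, 5])
```

Swapping in a Coulomb functor versus a nuclear-fusion functor then reuses the same pairing machinery, which is the point of the refactor.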
* Fix small issue when adding moving window upon restart
* Move do_moving_window at the end of WarpXheader
* Do Not Always Fill Guard Cells with Inverse FFTs
* Query psatd.fill_guards from Inputs
* Clean Up and Reduce Style Changes
* Fix Bug for Periodic Single Box
* Clean Up and Reduce Style Changes
* Fix Bug for RZ PSATD
* Remove Input Parameter, Default 0 Unless Damping
* Fix CI Tests (2D)
* Fix CI Tests (3D)
Add a few more useful modules.
Move the calculation of initial space charge fields further down
in `WarpX::InitData` and out of `InitFromScratch`.
This call already runs MLMG routines that rely on a filtered rho,
whose stencils are not initialized if called too early.
Install BLAS++/LAPACK++ only for the Python main and RZ tests,
which both build & perform RZ tests with PSATD support.
This saves a minute of download & build time for the other test matrix entries.
The latest yt release (4.0.0) adds a new unconditional `nbody` key,
which breaks our test checksums.
* Add possibility to start and stop moving window
* Update Benchmark laserInjection_2d
* Update Source/Diagnostics/BTDiagnostics.cpp
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* modification of the MoveWindow function in the python interface
* False to True for move j in python function
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* fix readability-container-size-empty warning raised by clang-tidy
* fix bug
* Added ParticleBoundaries and reflecting boundary conditions
* Added ParticleBoundaries::AllNone
* Allowed different particle boundary conditions on each side of the domain
* Updated the documentation for particle boundaries
* Fix end of line space in Docs/source/running_cpp/parameters.rst
* Updated the reflecting BC to use boundary input group
* Fixes to reflective boundary conditions
* Bug fix in AsStored
* Added particle boundaries regression test particle_boundaries_3d
* Fixed particle_boundaries_3d.json
* Minor updates
* Added algo.particle_shape to test case
* Remove do_pml from test case
Co-authored-by: Revathi Jambunathan <41089244+RevathiJambunathan@users.noreply.github.com>
* Need to explicitly turn off pml in CI test
* Re-add include
* Fixed includes
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
Co-authored-by: Revathi Jambunathan <41089244+RevathiJambunathan@users.noreply.github.com>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* Added warpx_solver_verbosity input parameter
- This input parameter is for the electrostatic solver to pass into
MLMG::setVerbose(int)
- Originally this value was hardcoded to 2
* re-added accidentally deleted line
* Handle the default solver_verbosity value on the C++ side
* Verbosity parameter now works the same as warpx.self_fields_max_iters
- The input parameter for mlmg verbosity is now warpx.self_fields_verbosity
- It still has a default value of 2.
* fixed missing comma
* added missing parameter to function call
* Added documentation entry for warpx.self_fields_verbosity
* corrected documentation
* fixed formatting mistake
(#2046)