author      2023-08-12 11:17:38 -0700
committer   2023-08-12 11:17:38 -0700
commit      6c93d9fc13830d574c69ac7b166f5fbdb0809731 (patch)
tree        8742df6045aa2bfdccb5a7991eae436e886e47d1 /Source/Python/WarpX.cpp
parent      f6760c8e6d64605f73476f9bc8292dc9d85df454 (diff)
Transition to pyAMReX (#3474)
* pyAMReX: Build System
* CI Updates (Changed Options)
* Callback modernization (#4)
* refactor callbacks.py
* added binding code in `pyWarpX.cpp` to add or remove keys from the callback dictionary
* minor PR cleanups
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
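For illustration, a minimal sketch of the modernized callback flow, assuming the `installafterstep`/`uninstallafterstep` helpers exposed by `callbacks.py` (the underlying add/remove-key bindings live in `pyWarpX.cpp`):
```python
# Minimal sketch, assuming pywarpx's callbacks module exposes
# install/uninstall helpers for the "afterstep" hook.
from pywarpx import callbacks

def after_step_hook():
    print("callback fired after a step")

callbacks.installafterstep(after_step_hook)    # add key to the callback dict
# ... stepping the simulation now invokes the hook each step ...
callbacks.uninstallafterstep(after_step_hook)  # remove the key again
```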
* Added Python level reference to fetch the multifabs (#3)
* pyAMReX: Build System
* Added Python level reference to Ex_aux
* Now uses the multifab map
* Fix typo
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
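For illustration, a hedged sketch of fetching a field through the multifab map from Python; the `multifab` binding and name format appear in the diff at the bottom, while the accessor path is an assumption:
```python
# Hedged sketch: fetch a MultiFab by name through the multifab map.
from pywarpx import libwarpx  # import path assumed

warpx = libwarpx.libwarpx_so.get_instance()    # binding from pyWarpX
ex_aux = warpx.multifab("Efield_aux[x][l=0]")  # name format from the binding
```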
* Add initialization and finalize routines (#5)
A basic PICMI input file will now run to completion.
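As an illustration, a minimal PICMI script of the kind that now runs to completion (grid parameters are arbitrary placeholders):
```python
# Minimal PICMI sketch: initialize, run 10 steps, finalize.
from pywarpx import picmi

grid = picmi.Cartesian2DGrid(
    number_of_cells=[32, 32],
    lower_bound=[0.0, 0.0], upper_bound=[1.0, 1.0],
    lower_boundary_conditions=["periodic", "periodic"],
    upper_boundary_conditions=["periodic", "periodic"],
)
solver = picmi.ElectromagneticSolver(grid=grid, cfl=0.9)
sim = picmi.Simulation(solver=solver, max_steps=10)
sim.step(10)  # runs initialization, the steps, and finalization
```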
* Regression Tests: WarpX_PYTHON=ON
* Update Imports to nD pyAMReX
* IPO/LTO Control
Although pybind11 relies heavily on IPO/LTO to create low-latency,
small-binary bindings, some compilers have trouble with it.
Thus, we add a compile-time option to disable it when needed.
* Fix: Link Legacy WarpXWrappers.cpp
* Wrap WarpX instance and start multi particle container
* Fix test Python_pass_mpi_comm
* Start wrapper for multiparticle container
* Add return policy
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* Update fields to use MultiFabs directly
Remove EOL white space
Removed old routines accessing MultiFabs
Update to use "node_centered"
* Fix compilation with Python
* Update fields.py to use modified MultiFab tag names
* Remove incorrect, unused code
* Add function to extract the WarpX MultiParticleContainer
* Complete class WarpXParticleContainer
* Wrap functions getNprocs / getMyProc
* restore the `install___` callback API - we could remove it later, but it preserves backward compatibility for now
* add `gett_new` and `getistep` function wrappers; fix typos in `callbacks.py`; avoid an error when getting `rho` from `fields.py`
* Update callback call and `getNproc`/`getMyProc` function
* Replace function add_n_particles
* Fix setitem in fields.py for 1d and 2d
* also update `gett_new()` in `_libwarpx.py` in case we want to keep that API
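A hedged usage sketch of these wrappers; the underlying `getistep`/`gett_new` bindings (per refinement level) appear in the diff at the bottom:
```python
# Hedged sketch: query the current step and time on level 0.
from pywarpx import libwarpx  # import path assumed

warpx = libwarpx.libwarpx_so.get_instance()
step = warpx.getistep(0)   # current iteration on level 0
t_new = warpx.gett_new(0)  # simulation time on level 0
print(f"step {step}, t = {t_new} s")
```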
* added binding for `WarpXParIter` - needed to port `libwarpx.depositChargeDensity()` which is an ongoing effort
* Wrap function num_real_comp
* added binding for `TotalNumberOfParticles` and continue progress on enabling 1d MCC test to run
* add `SyncRho()` binding and create helper function in `libwarpx.depositChargeDensity` to manage scope of the particle iter
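A hedged sketch of the binding side of this: `sync_rho` is exposed on the wrapped instance (see the diff at the bottom), while `libwarpx.depositChargeDensity` keeps the particle-iterator scope on the Python side:
```python
# Hedged sketch: sync the charge density across tiles after deposition.
from pywarpx import libwarpx  # import path assumed

warpx = libwarpx.libwarpx_so.get_instance()
warpx.sync_rho()  # sync rho across tiles and apply boundary conditions
```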
* Clean up issues in fields.py
* update bindings for `get_particle_structs`
* Fix setitem in fields.py
* switch order of initialization for particle container and particle iterator
* switch deposit_charge loop to C++ code; bind `ApplyInverseVolumeScalingToChargeDensity`
* move `WarpXParticleContainer.cpp` and `MultiParticleContainer.cpp` to new Particles folder
* added binding for `ParticleBoundaryBuffer`
* More fixes for fields.py
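For context, a hedged sketch of the `fields.py` wrapper usage these fixes target; wrapper and argument names follow pywarpx's `fields` module but are assumptions here:
```python
# Hedged sketch: read/write field data through a fields.py wrapper.
from pywarpx import fields

Ex = fields.ExWrapper(include_ghosts=True)  # wrapper name assumed
Ex[...] = 0.0    # __setitem__ broadcast over the decomposed domain
data = Ex[:, :]  # __getitem__ gathers a numpy array (cupy on GPU)
```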
* Fix: Backtraces from Python
Add the Python executable name with an absolute path, so backtraces
in AMReX work. See linked AMReX issue for details.
* Cleaning
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix: Backtraces from Python Part II
Do not add the Python script name - it confuses AMReX's `ParmParse`
when it builds its table.
* Fix: CMake Dependencies for Wheel
This fixes a race condition during `pip_install`: it was possible
that not all dimensions were yet built by pybind before we
started packing them into the wheel for pip install.
* MCC Test: Install Callbacks before Run
Otherwise it hangs while acquiring the GIL during shutdown.
* addition of `Python/pywarpx/particle_containers.py` and various associated bindings
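A hedged sketch of the new particle container wrapper; the class and method names follow `particle_containers.py`, with signatures assumed:
```python
# Hedged sketch: wrap a species' particle container and add a particle.
from pywarpx import particle_containers

electrons = particle_containers.ParticleContainerWrapper("electrons")
electrons.add_particles(
    x=[0.0], y=[0.0], z=[0.0],     # positions
    ux=[0.0], uy=[0.0], uz=[0.0],  # momenta
    w=[1.0],                       # weights
)
```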
* Fix: CMake Superbuild w/ Shared AMReX
We MUST build AMReX as a shared (so/dll/dylib) library, otherwise
all the global state in it will cause split-brain situations, where
our Python modules operate on different stacks.
* add `clear_all()` to callbacks in order to remove all callbacks at finalize
* add `-DWarpX_PYTHON=ON` to CI tests that failed to build
* add `get_comp_index` and continue to port particle data bindings
* Add AMReX Module as `libwarpx_so.amr`
Attribute that passes through the already loaded AMReX module with
the dimensionality matching the simulation.
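An illustrative sketch of the pass-through attribute (module layout assumed):
```python
# Hedged sketch: reach the dimension-matched pyAMReX module.
from pywarpx import libwarpx  # import path assumed

amr = libwarpx.libwarpx_so.amr  # the already loaded AMReX module
print(amr.Config.spacedim)      # e.g. 3 in a 3D build
```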
* Fix for fields accounting for guard cells
* Fix handling of ghost cells in fields
* Update & Test: Particle Boundary Scraping
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* CI: Python Updates
- modernize Python setups
- drop CUDA 11.0 for good and go 11.3+ as documented already
```
Error #3246: Internal Compiler Error (codegen): "there was an error in verifying the lgenfe output!"
```
* CI: Python Updates (chmod)
* Add support for cupy in fields.py
* Add MultiFab reduction routines
* CI: CUDA 11.3 is <= Ubuntu 20.04
* changed `AddNParticles` to take `amrex::Vector` arguments
* setup.py: WarpX_PYTHON=ON
* update various 2d and rz tests with new APIs
* add `-DWarpX_PYTHON_IPO=OFF` to compile string for 2d and 3d Python CI tests to speed up linking
* CI: -DpyAMReX_IPO=OFF
* CI: -DpyAMReX_IPO=OFF
actually adding `=OFF`
* CI: Intel Python
* CI: macOS Python Executable
Ensure we always use the same `python3` executable, as specified
by the `PATH` priority.
* CMake: Python Multi-Config Build
Add support for multi-config generators, especially on Windows.
* __init__.py: Windows DLL Support
Python 3.8+ on Windows: DLL search paths for dependent
shared libraries
Refs.:
- https://github.com/python/cpython/issues/80266
- https://docs.python.org/3.8/library/os.html#os.add_dll_directory
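The referenced CPython mechanism in a short sketch (the DLL path is a placeholder):
```python
# Python 3.8+ on Windows: dependent DLL directories must be
# registered explicitly before importing the extension module.
import os
import sys

if sys.platform == "win32" and hasattr(os, "add_dll_directory"):
    os.add_dll_directory(r"C:\path\to\amrex\bin")  # placeholder path
```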
* CI: pywarpx Update
Our setup.py cannot yet install pyAMReX as a dependency.
* ABLASTR: `ablastr/export.H`
Add a new header to export public globals that are not covered by
`WINDOWS_EXPORT_ALL_SYMBOLS`.
https://stackoverflow.com/questions/54560832/cmake-windows-export-all-symbols-does-not-cover-global-variables/54568678#54568678
* WarpX: EXPORT Globals in `.dll` files
WarpX still uses a lot of globals:
- `static` member variables
- `extern` global variables
These globals cannot be auto-exported with CMake's
`WINDOWS_EXPORT_ALL_SYMBOLS` helper and thus we need to mark them
manually for DLL export (and import) via the new ABLASTR
`ablastr/export.H` helper macros.
This starts to export symbols in the:
- WarpX and particle container classes
- callback hook database map
- ES solver
* CI: pywarpx Clang CXXFLAGS Down
Move CXXFLAGS (`-Werror ...`) down until deps are installed.
* GNUmake: Generate `ablastr/export.H`
* CMake: More Symbol Exports for Windows
* `WarpX-tests.ini`: Simplify Python no-IPO
Also avoids subtle differences in compilation that increase
compile time.
* Update PICMI_inputs_EB_API.py for embedded_boundary_python_API CI test
* Fix Python_magnetostatic_eb_3d
* Update: Python_restart_runtime_components
New Python APIs
* Windows: no dllimport for now
* CI: Skip `PYINSTALLOPTIONS` For Now
* CMake: Dependency Bump Min-Versions
for external packages picked up by `find_package`.
* Fix F and G_fp names in fields.py
* Tests: Disable `Python_pass_mpi_comm`
* Wrappers: Cleanup
* pyWarpX: Include Cleaning
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fields.py: Fix F and G Wrappers
Correct MultiFab names (w/o components).
* Remove unused in fields.py
* Windows: New Export Headers
- ABLASTR: `ablastr/export.H`
- WarpX: `Utils/export.H`
* removed `WarpInterface.py` since that functionality is now in `particle_containers.py`; removed parts of `WarpXWrappers.cpp` that have been ported to pyamrex
* CMake: Link OBJECT Target PRIVATE
* CMake: Remove OBJECT Target
Simplify and make `app` link `lib` (default: static). Remove OBJECT
target.
* Fix in fields.py for the components index
* Update get_particle_id/cpu
As implemented in pyAMReX with
https://github.com/AMReX-Codes/pyamrex/pull/165
* WarpX: Update for Private Constructor
* Import AMReX Before pyd DLL Call
Importing AMReX registers, via `add_dll_directory`, the location of a
potentially shared AMReX DLL on Windows.
* Windows CI: Set PATH to amrex_Nd.dll
* CMake: AMReX_INSTALL After Python
In superbuild, Python can modify `AMReX_BUILD_SHARED_LIBS`.
* Clang Win CI: Manually Install requirements
Sporadic error is:
```
...
Installing collected packages: pyparsing, numpy, scipy, periodictable, picmistandard
ERROR: Could not install packages due to an OSError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\hostedtoolcache\\windows\\Python\\3.11.4\\x64\\Lib\\site-packages\\numpy\\polynomial\\__init__.py'
Consider using the `--user` option or check the permissions.
```
* Hopefully final fixes to fields.py
* Update getProbLo/getProbHi
* Set plasma lens strength
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
* Fix fields method to remove CodeQL notice
* Update Comments & Some Finalize
* Move: `set_plasma_lens_strength` to MPC
---------
Co-authored-by: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com>
Co-authored-by: David Grote <dpgrote@lbl.gov>
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
Co-authored-by: Dave Grote <grote1@llnl.gov>
Co-authored-by: Roelof Groenewald <regroenewald@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Diffstat (limited to 'Source/Python/WarpX.cpp')
-rw-r--r-- | Source/Python/WarpX.cpp | 201
1 file changed, 201 insertions, 0 deletions
diff --git a/Source/Python/WarpX.cpp b/Source/Python/WarpX.cpp
new file mode 100644
index 000000000..9ef5281b3
--- /dev/null
+++ b/Source/Python/WarpX.cpp
@@ -0,0 +1,201 @@
+/* Copyright 2021-2022 The WarpX Community
+ *
+ * Authors: Axel Huebl
+ * License: BSD-3-Clause-LBNL
+ */
+#include "pyWarpX.H"
+
+#include <WarpX.H>
+// see WarpX.cpp - full includes for _fwd.H headers
+#include <BoundaryConditions/PML.H>
+#include <Diagnostics/MultiDiagnostics.H>
+#include <Diagnostics/ReducedDiags/MultiReducedDiags.H>
+#include <EmbeddedBoundary/WarpXFaceInfoBox.H>
+#include <FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H>
+#include <FieldSolver/FiniteDifferenceSolver/MacroscopicProperties/MacroscopicProperties.H>
+#include <FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.H>
+#ifdef WARPX_USE_PSATD
+#   include <FieldSolver/SpectralSolver/SpectralKSpace.H>
+#   ifdef WARPX_DIM_RZ
+#       include <FieldSolver/SpectralSolver/SpectralSolverRZ.H>
+#       include <BoundaryConditions/PML_RZ.H>
+#   else
+#       include <FieldSolver/SpectralSolver/SpectralSolver.H>
+#   endif // RZ ifdef
+#endif // use PSATD ifdef
+#include <FieldSolver/WarpX_FDTD.H>
+#include <Filter/NCIGodfreyFilter.H>
+#include <Particles/MultiParticleContainer.H>
+#include <Particles/ParticleBoundaryBuffer.H>
+#include <AcceleratorLattice/AcceleratorLattice.H>
+#include <Utils/TextMsg.H>
+#include <Utils/WarpXAlgorithmSelection.H>
+#include <Utils/WarpXConst.H>
+#include <Utils/WarpXProfilerWrapper.H>
+#include <Utils/WarpXUtil.H>
+
+#include <AMReX.H>
+#include <AMReX_ParmParse.H>
+#include <AMReX_ParallelDescriptor.H>
+
+#if defined(AMREX_DEBUG) || defined(DEBUG)
+#   include <cstdio>
+#endif
+#include <string>
+
+
+//using namespace warpx;
+
+namespace warpx {
+    struct Config {};
+}
+
+void init_WarpX (py::module& m)
+{
+    // Expose the WarpX instance
+    m.def("get_instance",
+        [] () { return &WarpX::GetInstance(); },
+        "Return a reference to the WarpX object.");
+
+    m.def("finalize", &WarpX::Finalize,
+        "Close out the WarpX related data");
+
+    py::class_<WarpX> warpx(m, "WarpX");
+    warpx
+        // WarpX is a Singleton Class with a private constructor
+        // https://github.com/ECP-WarpX/WarpX/pull/4104
+        // https://pybind11.readthedocs.io/en/stable/advanced/classes.html?highlight=singleton#custom-constructors
+        .def(py::init([]() {
+            return &WarpX::GetInstance();
+        }))
+        .def_static("get_instance",
+            [] () { return &WarpX::GetInstance(); },
+            "Return a reference to the WarpX object."
+        )
+        .def_static("finalize", &WarpX::Finalize,
+            "Close out the WarpX related data"
+        )
+
+        .def("initialize_data", &WarpX::InitData,
+            "Initializes the WarpX simulation"
+        )
+        .def("evolve", &WarpX::Evolve,
+            "Evolve the simulation the specified number of steps"
+        )
+
+        // from AmrCore->AmrMesh
+        .def("Geom",
+            //[](WarpX const & wx, int const lev) { return wx.Geom(lev); },
+            py::overload_cast< int >(&WarpX::Geom, py::const_),
+            py::arg("lev")
+        )
+        .def("DistributionMap",
+            [](WarpX const & wx, int const lev) { return wx.DistributionMap(lev); },
+            //py::overload_cast< int >(&WarpX::DistributionMap, py::const_),
+            py::arg("lev")
+        )
+        .def("boxArray",
+            [](WarpX const & wx, int const lev) { return wx.boxArray(lev); },
+            //py::overload_cast< int >(&WarpX::boxArray, py::const_),
+            py::arg("lev")
+        )
+        .def("multifab",
+            [](WarpX const & wx, std::string const multifab_name) {
+                if (wx.multifab_map.count(multifab_name) > 0) {
+                    return wx.multifab_map.at(multifab_name);
+                } else {
+                    throw std::runtime_error("The MultiFab '" + multifab_name + "' is unknown or is not allocated!");
+                }
+            },
+            py::arg("multifab_name"),
+            py::return_value_policy::reference_internal,
+            "Return MultiFabs by name, e.g., 'Efield_aux[x][l=0]', 'Efield_cp[x][l=0]', ..."
+        )
+        .def("multi_particle_container",
+            [](WarpX& wx){ return &wx.GetPartContainer(); },
+            py::return_value_policy::reference_internal
+        )
+        .def("get_particle_boundary_buffer",
+            [](WarpX& wx){ return &wx.GetParticleBoundaryBuffer(); },
+            py::return_value_policy::reference_internal
+        )
+
+        // Expose functions used to sync the charge density multifab
+        // across tiles and apply appropriate boundary conditions
+        .def("sync_rho",
+            [](WarpX& wx){ wx.SyncRho(); }
+        )
+#ifdef WARPX_DIM_RZ
+        .def("apply_inverse_volume_scaling_to_charge_density",
+            [](WarpX& wx, amrex::MultiFab* rho, int const lev) {
+                wx.ApplyInverseVolumeScalingToChargeDensity(rho, lev);
+            },
+            py::arg("rho"), py::arg("lev")
+        )
+#endif
+
+        // Expose functions to get the current simulation step and time
+        .def("getistep",
+            [](WarpX const & wx, int lev){ return wx.getistep(lev); },
+            py::arg("lev")
+        )
+        .def("gett_new",
+            [](WarpX const & wx, int lev){ return wx.gett_new(lev); },
+            py::arg("lev")
+        )
+
+        .def("set_potential_on_eb",
+            [](WarpX& wx, std::string potential) {
+                wx.m_poisson_boundary_handler.setPotentialEB(potential);
+            },
+            py::arg("potential")
+        )
+    ;
+
+    py::class_<warpx::Config>(m, "Config")
+//        .def_property_readonly_static(
+//            "warpx_version",
+//            [](py::object) { return Version(); },
+//            "WarpX version")
+        .def_property_readonly_static(
+            "have_mpi",
+            [](py::object){
+#ifdef AMREX_USE_MPI
+                return true;
+#else
+                return false;
+#endif
+            })
+        .def_property_readonly_static(
+            "have_gpu",
+            [](py::object){
+#ifdef AMREX_USE_GPU
+                return true;
+#else
+                return false;
+#endif
+            })
+        .def_property_readonly_static(
+            "have_omp",
+            [](py::object){
+#ifdef AMREX_USE_OMP
+                return true;
+#else
+                return false;
+#endif
+            })
+        .def_property_readonly_static(
+            "gpu_backend",
+            [](py::object){
+#ifdef AMREX_USE_CUDA
+                return "CUDA";
+#elif defined(AMREX_USE_HIP)
+                return "HIP";
+#elif defined(AMREX_USE_DPCPP)
+                return "SYCL";
+#else
+                return py::none();
+#endif
+            })
+    ;
+}