author Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com> 2023-06-12 15:40:45 -0700
committer GitHub <noreply@github.com> 2023-06-12 15:40:45 -0700
commit d81503dd97a4c8154feaec5a9fe2738bc8492cab (patch)
tree 2d1a71a49344055a0d2d4d0fc329923099ad39d1 /Source/FieldSolver/FiniteDifferenceSolver
parent 2289f4a24e6d0d6a5957f76dd6eed19f129860e6 (diff)
Ohm's law solver (hybrid kinetic-fluid extension) (#3665)
* Add "None" as an option for the Maxwell solver * fixed some of the reasons for failing CI tests * no longer pass `do_electrostatic` to `GuardCellManager` * renamed `MaxwellSolverAlgo` to `ElectromagneticSolverAlgo` * rename `do_electrostatic` to `electrostatic_solver_id` * rename `maxwell_solver_id` to `electromagnetic_solver_id` * start of hybrid solver logic * changes requested during PR review * remove `do_no_deposit` from tests without field evolution * added `HybridSolveE.cpp` * bulk of the hybrid solver implementation * mostly reproduce 1d cold ion mirror results * ion Bernstein modes reproduced with this version * fix bug with reduced diagnostic FieldProbe in 1d * added hybrid solver installation to PICMI and added example script generating ion-Bernstein modes * enable the use of `FieldProbe` default parameter values * use default field-probe values * steady progress * add `do_not_push` flag to picmi * possibly use nodal fields * added extra multifab for current calculated from curl B * added `CalculateTotalCurrent` functions * rewrote implementation to calculate J x curl B on a nodal grid first * fill boundary for auxiliary MFs used during hybrid E solve * properly handle nonzero resistivity * updated hybrid model example * clean up example scripts * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fixed invalid memory access for GPU and other code cleanups * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * refinements on the example scripts * added ion beam instability example * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * added EM modes and ion beam examples to CI test suite * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * started docs section on the hybrid model * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * more progress on documentation * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * added ion Landau damping verification test / example * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * add checksum benchmark for Landau damping example * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * added fields.py wrapper to access total current density in hybrid case * refactored the charge deposition fix to be performed with the field data rather than individual particles * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * also correct current density at PEC boundary * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * made resistivity a parsed function of `rho` * work on PEC boundary condition * corrections pointed out during code review * fix build fails due to unused variables * fix issue with GPU builds * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * actually apply rho boundary correction in EM case * take one sided derivative at PEC boundary when calculating div Pe * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * actually apply rho boundary correction in EM case * removed specific treatment of E-field on PEC boundary for Ohm's law 
solver * first round of CI fixes * second round of CI fixes * added description of deposition logic with PEC boundaries to documentation * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * third round of CI fixing * move J and rho boundary handling to after `SyncRho` and `SyncCurrent` calls * properly order the application of boundary conditions on rho, for electrostatic simulations * fourth round of CI fixing * moved calculation of total current (Ampere's law) to seperate function * add random seed specification to `picmi` * code clean-up -> renamed hybrid model to hybrid-PIC model * added magnetic reconnection example * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * code cleanup & benchmark updates * update PICMI class name for hybrid solver to `HybridPICSolver` * don't apply J field boundary in * don't apply J field boundary in `MultiParticleContainer::DepositCurrent` * apply changes requested during code review * Apply suggestions from code review Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com> * Loosen tolerance on failing CI test * Removed unused variable * code cleanup: make use of `MultiParticleContainer::DepositCurrent` in `AddSpaceChargeFieldLabFrame` * switch to using a rho_fp_temp multifab for old and averaged charge density field, also no longer require particles to move only one cell per step * use `ablastr::coarsen::sample` namespace in `HybridPICSolveECartesian` * switched to using `MultiFab::LinComb` instead of self written GPU kernels to calculated averaged or extrapolated current density * add verbosity flag for the Ohm solver tests * deal with fine versus coarse patches * add theoretical instability growth / damping rates to hybrid-PIC examples * update ion-Bernstein mode plot in documentation * move the `ApplyRhofieldBoundary` call to after `SumBoundary` * use a uniform calculation for the number of cells a given index is from the boundary * remove unused variable * limit number of ghost cells updated during PEC BC application * the number of ghost cells to consider depends on whether the field is nodal or not * attempt 1 to fix failing CI tests * attempt 2 to fix failing CI tests and code cleanup * attempt 3 to fix failing CI tests * attempt 4 to fix failing CI tests and docs cleanup * switched to using bibtex citations * move hybrid solver input parameters documentation to its own section * clean up ion beam instability analysis script * Apply suggestions from code review Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Apply suggestions from code review Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com> * add inline comments describing the meaning of each argument for the `amrex::MultiFab::LinComb` calls used * make `HybridPICSolver` a child class of `picmistandard.base._ClassWithInit` * apply changes requested during code review * add warning about using hybrid-PIC solver with Esirkepov current deposition * add Stanier 2020 reference to recommend linear particles with hybrid-PIC * add call to FillBoundary for `current_fp` - needed for accurate interpolation to nodal grid * changes requested from code review * Apply suggestions from code review Co-authored-by: Remi Lehe <remi.lehe@normalesup.org> * include physics accuracy check for ion beam instability; switch CI tests to use direct current 
deposition * reset benchmark values after switching to direct current deposition * update ion beam instability benchmark * minor changes requested during code review * remove guard cells for `enE_nodal_mf` as well as corresponding `FillBoundary` call * refactor: moved hybrid-PIC specific multifabs and `CalculateElectronPressure()` to `HybridPICModel` * add assert that load balancing is not used with the hybrid-PIC solver since the new multifabs are not yet properly redistributed * move `CalculateCurrentAmpere` to `HybridPICModel` * refactor: moved `HybridPICSolveE` to `HybridPICModel` class --------- Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com> Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
Diffstat (limited to 'Source/FieldSolver/FiniteDifferenceSolver')
-rw-r--r-- Source/FieldSolver/FiniteDifferenceSolver/CMakeLists.txt | 2
-rw-r--r-- Source/FieldSolver/FiniteDifferenceSolver/EvolveB.cpp | 7
-rw-r--r-- Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H | 83
-rw-r--r-- Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.cpp | 7
-rw-r--r-- Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/CMakeLists.txt | 7
-rw-r--r-- Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.H | 181
-rw-r--r-- Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp | 338
-rw-r--r-- Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel_fwd.H | 15
-rw-r--r-- Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/Make.package | 3
-rw-r--r-- Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp | 511
-rw-r--r-- Source/FieldSolver/FiniteDifferenceSolver/Make.package | 2
11 files changed, 1154 insertions, 2 deletions
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/CMakeLists.txt b/Source/FieldSolver/FiniteDifferenceSolver/CMakeLists.txt
index cf29954ea..6fbb86439 100644
--- a/Source/FieldSolver/FiniteDifferenceSolver/CMakeLists.txt
+++ b/Source/FieldSolver/FiniteDifferenceSolver/CMakeLists.txt
@@ -12,9 +12,11 @@ foreach(D IN LISTS WarpX_DIMS)
EvolveG.cpp
EvolveECTRho.cpp
FiniteDifferenceSolver.cpp
+ HybridPICSolveE.cpp
MacroscopicEvolveE.cpp
ApplySilverMuellerBoundary.cpp
)
endforeach()
+add_subdirectory(HybridPICModel)
add_subdirectory(MacroscopicProperties)
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/EvolveB.cpp b/Source/FieldSolver/FiniteDifferenceSolver/EvolveB.cpp
index 81a569461..1a50dc236 100644
--- a/Source/FieldSolver/FiniteDifferenceSolver/EvolveB.cpp
+++ b/Source/FieldSolver/FiniteDifferenceSolver/EvolveB.cpp
@@ -69,6 +69,9 @@ void FiniteDifferenceSolver::EvolveB (
if (m_fdtd_algo == ElectromagneticSolverAlgo::Yee){
ignore_unused(Gfield, face_areas);
EvolveBCylindrical <CylindricalYeeAlgorithm> ( Bfield, Efield, lev, dt );
+ } else if (m_fdtd_algo == ElectromagneticSolverAlgo::HybridPIC) {
+ ignore_unused(Gfield, face_areas);
+ EvolveBCylindrical <CylindricalYeeAlgorithm> ( Bfield, Efield, lev, dt );
#else
if(m_grid_type == GridType::Collocated || m_fdtd_algo != ElectromagneticSolverAlgo::ECT){
amrex::ignore_unused(face_areas);
@@ -82,6 +85,10 @@ void FiniteDifferenceSolver::EvolveB (
EvolveBCartesian <CartesianYeeAlgorithm> ( Bfield, Efield, Gfield, lev, dt );
+ } else if (m_fdtd_algo == ElectromagneticSolverAlgo::HybridPIC) {
+
+ EvolveBCartesian <CartesianYeeAlgorithm> ( Bfield, Efield, Gfield, lev, dt );
+
} else if (m_fdtd_algo == ElectromagneticSolverAlgo::CKC) {
EvolveBCartesian <CartesianCKCAlgorithm> ( Bfield, Efield, Gfield, lev, dt );
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H b/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H
index aecf4ed9e..aec2bd330 100644
--- a/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H
+++ b/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H
@@ -12,6 +12,8 @@
#include "FiniteDifferenceSolver_fwd.H"
#include "BoundaryConditions/PML_fwd.H"
+#include "Evolve/WarpXDtType.H"
+#include "HybridPICModel/HybridPICModel_fwd.H"
#include "MacroscopicProperties/MacroscopicProperties_fwd.H"
#include <AMReX_GpuContainers.H>
@@ -129,6 +131,46 @@ class FiniteDifferenceSolver
std::array< amrex::MultiFab*, 3 > const Efield,
amrex::Real const dt );
+ /**
+ * \brief E-update in the hybrid PIC algorithm as described in
+ * Winske et al. (2003) Eq. 10.
+ * https://link.springer.com/chapter/10.1007/3-540-36530-3_8
+ *
+ * \param[out] Efield vector of electric field MultiFabs updated at a given level
+ * \param[in] Jfield vector of total current MultiFabs at a given level
+ * \param[in] Jifield vector of ion current density MultiFabs at a given level
+ * \param[in] Bfield vector of magnetic field MultiFabs at a given level
+ * \param[in] rhofield scalar ion charge density MultiFab at a given level
+ * \param[in] Pefield scalar electron pressure MultiFab at a given level
+ * \param[in] edge_lengths length of edges along embedded boundaries
+ * \param[in] lev level number for the calculation
+ * \param[in] hybrid_pic_model instance of the hybrid-PIC model
+ */
+ void HybridPICSolveE ( std::array< std::unique_ptr<amrex::MultiFab>, 3>& Efield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3>& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& Jifield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Bfield,
+ std::unique_ptr<amrex::MultiFab> const& rhofield,
+ std::unique_ptr<amrex::MultiFab> const& Pefield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& edge_lengths,
+ int lev, HybridPICModel const* hybrid_pic_model,
+ DtType a_dt_type );
+
+ /**
+ * \brief Calculation of total current using Ampere's law (without
+ * displacement current): J = (∇ × B) / mu0.
+ *
+ * \param[out] Jfield vector of current MultiFabs at a given level
+ * \param[in] Bfield vector of magnetic field MultiFabs at a given level
+ * \param[in] edge_lengths length of edges along embedded boundaries
+ * \param[in] lev level number for the calculation
+ */
+ void CalculateCurrentAmpere (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3>& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Bfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& edge_lengths,
+ int lev );
+
private:
int m_fdtd_algo;
@@ -184,6 +226,27 @@ class FiniteDifferenceSolver
void ComputeDivECylindrical (
const std::array<std::unique_ptr<amrex::MultiFab>,3>& Efield,
amrex::MultiFab& divE );
+
+ template<typename T_Algo>
+ void HybridPICSolveECylindrical (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3>& Efield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Jifield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Bfield,
+ std::unique_ptr<amrex::MultiFab> const& rhofield,
+ std::unique_ptr<amrex::MultiFab> const& Pefield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& edge_lengths,
+ int lev, HybridPICModel const* hybrid_pic_model,
+ DtType a_dt_type );
+
+ template<typename T_Algo>
+ void CalculateCurrentAmpereCylindrical (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 >& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& Bfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& edge_lengths,
+ int lev
+ );
+
#else
template< typename T_Algo >
void EvolveBCartesian (
@@ -267,6 +330,26 @@ class FiniteDifferenceSolver
void EvolveFPMLCartesian ( amrex::MultiFab* Ffield,
std::array< amrex::MultiFab*, 3 > const Efield,
amrex::Real const dt );
+
+ template<typename T_Algo>
+ void HybridPICSolveECartesian (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3>& Efield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Jifield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Bfield,
+ std::unique_ptr<amrex::MultiFab> const& rhofield,
+ std::unique_ptr<amrex::MultiFab> const& Pefield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& edge_lengths,
+ int lev, HybridPICModel const* hybrid_pic_model,
+ DtType a_dt_type );
+
+ template<typename T_Algo>
+ void CalculateCurrentAmpereCartesian (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 >& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& Bfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& edge_lengths,
+ int lev
+ );
#endif
};
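For reference, the E-field update declared above (Winske et al. 2003, Eq. 10) is, in continuous form, the generalized Ohm's law below. This is a sketch of what the Cartesian implementation in HybridPICSolveE.cpp (further down in this diff) evaluates, assuming quasi-neutrality (e n_e = rho, the ion charge density):

```latex
% Generalized Ohm's law solved for E, with J the total current from
% Ampere's law (displacement current neglected) and J_i the ion current:
\vec{E} \;=\; \frac{\left(\vec{J} - \vec{J}_i\right)\times\vec{B} \;-\; \nabla p_e}{\rho}
\;+\; \eta(\rho)\,\vec{J},
\qquad
\vec{J} \;=\; \frac{\nabla\times\vec{B}}{\mu_0},
\qquad
p_e \;=\; n_0 T_e \left(\frac{n_e}{n_0}\right)^{\gamma}
```

Note that in the kernels below the resistive term eta(rho)*J is only added when the resulting E value is used to advance B (a_dt_type != DtType::Full).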
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.cpp b/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.cpp
index 26da8352c..851ff1a1a 100644
--- a/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.cpp
+++ b/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.cpp
@@ -45,7 +45,8 @@ FiniteDifferenceSolver::FiniteDifferenceSolver (
m_dr = cell_size[0];
m_nmodes = WarpX::GetInstance().n_rz_azimuthal_modes;
m_rmin = WarpX::GetInstance().Geom(0).ProbLo(0);
- if (fdtd_algo == ElectromagneticSolverAlgo::Yee) {
+ if (fdtd_algo == ElectromagneticSolverAlgo::Yee ||
+ fdtd_algo == ElectromagneticSolverAlgo::HybridPIC ) {
CylindricalYeeAlgorithm::InitializeStencilCoefficients( cell_size,
m_h_stencil_coefs_r, m_h_stencil_coefs_z );
m_stencil_coefs_r.resize(m_h_stencil_coefs_r.size());
@@ -67,7 +68,9 @@ FiniteDifferenceSolver::FiniteDifferenceSolver (
CartesianNodalAlgorithm::InitializeStencilCoefficients( cell_size,
m_h_stencil_coefs_x, m_h_stencil_coefs_y, m_h_stencil_coefs_z );
- } else if (fdtd_algo == ElectromagneticSolverAlgo::Yee || fdtd_algo == ElectromagneticSolverAlgo::ECT) {
+ } else if (fdtd_algo == ElectromagneticSolverAlgo::Yee ||
+ fdtd_algo == ElectromagneticSolverAlgo::ECT ||
+ fdtd_algo == ElectromagneticSolverAlgo::HybridPIC) {
CartesianYeeAlgorithm::InitializeStencilCoefficients( cell_size,
m_h_stencil_coefs_x, m_h_stencil_coefs_y, m_h_stencil_coefs_z );
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/CMakeLists.txt b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/CMakeLists.txt
new file mode 100644
index 000000000..729f58ff5
--- /dev/null
+++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/CMakeLists.txt
@@ -0,0 +1,7 @@
+foreach(D IN LISTS WarpX_DIMS)
+ warpx_set_suffix_dims(SD ${D})
+ target_sources(WarpX_${SD}
+ PRIVATE
+ HybridPICModel.cpp
+ )
+endforeach()
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.H b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.H
new file mode 100644
index 000000000..0793b3b27
--- /dev/null
+++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.H
@@ -0,0 +1,181 @@
+/* Copyright 2023 The WarpX Community
+ *
+ * This file is part of WarpX.
+ *
+ * Authors: Roelof Groenewald (TAE Technologies)
+ *
+ * License: BSD-3-Clause-LBNL
+ */
+
+#ifndef WARPX_HYBRIDPICMODEL_H_
+#define WARPX_HYBRIDPICMODEL_H_
+
+#include "HybridPICModel_fwd.H"
+
+#include "FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H"
+#include "Utils/Parser/ParserUtils.H"
+#include "Utils/WarpXConst.H"
+#include "Utils/WarpXProfilerWrapper.H"
+#include "WarpX.H"
+
+#include <AMReX_Array.H>
+#include <AMReX_REAL.H>
+
+
+/**
+ * \brief This class contains the parameters needed to evaluate hybrid field
+ * solutions (kinetic ions with fluid electrons).
+ */
+class HybridPICModel
+{
+public:
+ HybridPICModel (int nlevs_max); // constructor
+
+ /** Read user-defined model parameters. Called in constructor. */
+ void ReadParameters ();
+
+ /** Allocate hybrid-PIC specific multifabs. Called in constructor. */
+ void AllocateMFs (int nlevs_max);
+ void AllocateLevelMFs (int lev, const amrex::BoxArray& ba, const amrex::DistributionMapping& dm,
+ const int ncomps, const amrex::IntVect& ngJ, const amrex::IntVect& ngRho,
+ const amrex::IntVect& jx_nodal_flag, const amrex::IntVect& jy_nodal_flag,
+ const amrex::IntVect& jz_nodal_flag, const amrex::IntVect& rho_nodal_flag);
+
+ /** Helper function to clear values from hybrid-PIC specific multifabs. */
+ void ClearLevel (int lev);
+
+ void InitData ();
+
+ /**
+ * \brief
+ * Function to calculate the total current based on Ampere's law while
+ * neglecting displacement current (J = (∇ × B) / mu0). Used in the Ohm's law
+ * solver (kinetic-fluid hybrid model).
+ *
+ * \param[in] Bfield Magnetic field from which the current is calculated.
+ * \param[in] edge_lengths Length of cell edges taking embedded boundaries into account
+ */
+ void CalculateCurrentAmpere (
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3>> const& Bfield,
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3>> const& edge_lengths
+ );
+ void CalculateCurrentAmpere (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Bfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& edge_lengths,
+ const int lev
+ );
+
+ /**
+ * \brief
+ * Function to update the E-field using Ohm's law (hybrid-PIC model).
+ */
+ void HybridPICSolveE (
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3>>& Efield,
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3>> const& Jfield,
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3>> const& Bfield,
+ amrex::Vector<std::unique_ptr<amrex::MultiFab>> const& rhofield,
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3>> const& edge_lengths,
+ DtType dt_type);
+ void HybridPICSolveE (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3>& Efield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Bfield,
+ std::unique_ptr<amrex::MultiFab> const& rhofield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& edge_lengths,
+ const int lev, DtType dt_type);
+ void HybridPICSolveE (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3>& Efield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Bfield,
+ std::unique_ptr<amrex::MultiFab> const& rhofield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& edge_lengths,
+ const int lev, PatchType patch_type, DtType dt_type);
+
+ /**
+ * \brief
+ * Function to calculate the electron pressure at a given timestep type
+ * using the simulation charge density. Used in the Ohm's law solver
+ * (kinetic-fluid hybrid model).
+ */
+ void CalculateElectronPressure ( DtType a_dt_type);
+ void CalculateElectronPressure (const int lev, DtType a_dt_type);
+
+ /**
+ * \brief Fill the electron pressure multifab given the kinetic particle
+ * charge density (and assumption of quasi-neutrality) using the user
+ * specified electron equation of state.
+ *
+ * \param[out] Pe scalar electron pressure MultiFab at a given level
+ * \param[in] rhofield scalar ion charge density MultiFab at a given level
+ */
+ void FillElectronPressureMF (
+ std::unique_ptr<amrex::MultiFab> const& Pe,
+ amrex::MultiFab* const& rhofield );
+
+ // Declare variables to hold hybrid-PIC model parameters
+ /** Number of substeps to take when evolving B */
+ int m_substeps = 100;
+
+ /** Electron temperature in eV */
+ amrex::Real m_elec_temp;
+ /** Reference electron density */
+ amrex::Real m_n0_ref = 1.0;
+ /** Electron pressure scaling exponent */
+ amrex::Real m_gamma = 5.0/3.0;
+
+ /** Plasma density floor - if n < n_floor it will be set to n_floor */
+ amrex::Real m_n_floor = 1.0;
+
+ /** Plasma resistivity */
+ std::string m_eta_expression = "0.0";
+ std::unique_ptr<amrex::Parser> m_resistivity_parser;
+ amrex::ParserExecutor<1> m_eta;
+
+ // Declare multifabs specifically needed for the hybrid-PIC model
+ amrex::Vector< std::unique_ptr<amrex::MultiFab> > rho_fp_temp;
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3 > > current_fp_temp;
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3 > > current_fp_ampere;
+ amrex::Vector< std::unique_ptr<amrex::MultiFab> > electron_pressure_fp;
+
+ // Helper functions to retrieve hybrid-PIC multifabs
+ amrex::MultiFab * get_pointer_current_fp_ampere (int lev, int direction) const { return current_fp_ampere[lev][direction].get(); }
+ amrex::MultiFab * get_pointer_electron_pressure_fp (int lev) const { return electron_pressure_fp[lev].get(); }
+
+ /** GpuArray with the index type of the Jx multifab */
+ amrex::GpuArray<int, 3> Jx_IndexType;
+ /** GpuArray with the index type of the Jy multifab */
+ amrex::GpuArray<int, 3> Jy_IndexType;
+ /** GpuArray with the index type of the Jz multifab */
+ amrex::GpuArray<int, 3> Jz_IndexType;
+ /** GpuArray with the index type of the Bx multifab */
+ amrex::GpuArray<int, 3> Bx_IndexType;
+ /** GpuArray with the index type of the By multifab */
+ amrex::GpuArray<int, 3> By_IndexType;
+ /** GpuArray with the index type of the Bz multifab */
+ amrex::GpuArray<int, 3> Bz_IndexType;
+ /** GpuArray with the index type of the Ex multifab */
+ amrex::GpuArray<int, 3> Ex_IndexType;
+ /** GpuArray with the index type of the Ey multifab */
+ amrex::GpuArray<int, 3> Ey_IndexType;
+ /** GpuArray with the index type of the Ez multifab */
+ amrex::GpuArray<int, 3> Ez_IndexType;
+};
+
+/**
+ * \brief
+ * This struct contains only static functions to compute the electron pressure
+ * using the particle density at a given point and the user-provided reference
+ * density and temperature.
+ */
+struct ElectronPressure {
+
+ AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE
+ static amrex::Real get_pressure (amrex::Real const n0,
+ amrex::Real const T0,
+ amrex::Real const gamma,
+ amrex::Real const rho) {
+ return n0 * T0 * pow((rho/PhysConst::q_e)/n0, gamma);
+ }
+};
+
+#endif // WARPX_HYBRIDPICMODEL_H_
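As a units sanity check, here is a minimal standalone sketch mirroring ElectronPressure::get_pressure above; the numeric values (reference density, temperature, charge density) are illustrative assumptions, not WarpX defaults:

```cpp
#include <cmath>
#include <cstdio>

// Standalone mirror of ElectronPressure::get_pressure for illustration:
// p_e = n0 * T0 * ((rho / q_e) / n0)^gamma, with T0 in joules and rho the
// ion charge density (quasi-neutrality gives n_e = rho / q_e).
int main () {
    double const q_e   = 1.602176634e-19; // elementary charge [C]
    double const n0    = 1.0e18;          // reference density [m^-3] (assumed)
    double const T0    = 100.0 * q_e;     // electron temperature: 100 eV in J (assumed)
    double const gamma = 5.0 / 3.0;       // adiabatic exponent (code default)
    double const rho   = 2.0e18 * q_e;    // ion charge density [C m^-3] (assumed)

    double const pe = n0 * T0 * std::pow((rho / q_e) / n0, gamma);
    std::printf("p_e = %.3e Pa\n", pe);   // ~5.09e+01 Pa for these values
    return 0;
}
```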
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp
new file mode 100644
index 000000000..9e04045d0
--- /dev/null
+++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp
@@ -0,0 +1,338 @@
+/* Copyright 2023 The WarpX Community
+ *
+ * This file is part of WarpX.
+ *
+ * Authors: Roelof Groenewald (TAE Technologies)
+ *
+ * License: BSD-3-Clause-LBNL
+ */
+
+#include "HybridPICModel.H"
+
+using namespace amrex;
+
+HybridPICModel::HybridPICModel ( int nlevs_max )
+{
+ ReadParameters();
+ AllocateMFs(nlevs_max);
+}
+
+void HybridPICModel::ReadParameters ()
+{
+ ParmParse pp_hybrid("hybrid_pic_model");
+
+ // The B-field update is subcycled to improve stability - the number
+ // of substeps can be specified by the user (defaults to 100).
+ utils::parser::queryWithParser(pp_hybrid, "substeps", m_substeps);
+
+ // The hybrid model requires an electron temperature, reference density
+ // and exponent to be given. These values will be used to calculate the
+ // electron pressure according to p = n0 * Te * (n/n0)^gamma
+ utils::parser::queryWithParser(pp_hybrid, "gamma", m_gamma);
+ if (!utils::parser::queryWithParser(pp_hybrid, "elec_temp", m_elec_temp)) {
+ Abort("hybrid_pic_model.elec_temp must be specified when using the hybrid solver");
+ }
+ bool n0_ref_given = utils::parser::queryWithParser(pp_hybrid, "n0_ref", m_n0_ref);
+ if (m_gamma != 1.0 && !n0_ref_given) {
+ Abort("hybrid_pic_model.n0_ref should be specified if hybrid_pic_model.gamma != 1");
+ }
+
+ pp_hybrid.query("plasma_resistivity(rho)", m_eta_expression);
+ utils::parser::queryWithParser(pp_hybrid, "n_floor", m_n_floor);
+
+ // convert electron temperature from eV to J
+ m_elec_temp *= PhysConst::q_e;
+}
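For illustration, the parameters read above map to input-deck entries along these lines (the values shown are assumptions for the example, not recommendations):

```
hybrid_pic_model.substeps = 100                  # B-field substeps per E update
hybrid_pic_model.elec_temp = 10.0                # electron temperature in eV (required)
hybrid_pic_model.n0_ref = 1.0e18                 # reference density; required if gamma != 1
hybrid_pic_model.gamma = 1.6666666666666667
hybrid_pic_model.n_floor = 1.0e16                # density floor used in the E solve
hybrid_pic_model.plasma_resistivity(rho) = "1.0e-7"  # may be any parsed function of rho
```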
+
+void HybridPICModel::AllocateMFs (int nlevs_max)
+{
+ electron_pressure_fp.resize(nlevs_max);
+ rho_fp_temp.resize(nlevs_max);
+ current_fp_temp.resize(nlevs_max);
+ current_fp_ampere.resize(nlevs_max);
+}
+
+void HybridPICModel::AllocateLevelMFs (int lev, const BoxArray& ba, const DistributionMapping& dm,
+ const int ncomps, const IntVect& ngJ, const IntVect& ngRho,
+ const IntVect& jx_nodal_flag,
+ const IntVect& jy_nodal_flag,
+ const IntVect& jz_nodal_flag,
+ const IntVect& rho_nodal_flag)
+{
+ // set human-readable tag for each MultiFab
+ auto const tag = [lev]( std::string tagname ) {
+ tagname.append("[l=").append(std::to_string(lev)).append("]");
+ return tagname;
+ };
+
+ auto & warpx = WarpX::GetInstance();
+
+ // The "electron_pressure_fp" multifab stores the electron pressure calculated
+ // from the specified equation of state.
+ // The "rho_fp_temp" multifab is used to store the ion charge density
+ // interpolated or extrapolated to appropriate timesteps.
+ // The "current_fp_temp" multifab is used to store the ion current density
+ // interpolated or extrapolated to appropriate timesteps.
+ // The "current_fp_ampere" multifab stores the total current calculated as
+ // the curl of B.
+ warpx.AllocInitMultiFab(electron_pressure_fp[lev], amrex::convert(ba, rho_nodal_flag),
+ dm, ncomps, ngRho, tag("electron_pressure_fp"), 0.0_rt);
+
+ warpx.AllocInitMultiFab(rho_fp_temp[lev], amrex::convert(ba, rho_nodal_flag),
+ dm, ncomps, ngRho, tag("rho_fp_temp"), 0.0_rt);
+
+ warpx.AllocInitMultiFab(current_fp_temp[lev][0], amrex::convert(ba, jx_nodal_flag),
+ dm, ncomps, ngJ, tag("current_fp_temp[x]"), 0.0_rt);
+ warpx.AllocInitMultiFab(current_fp_temp[lev][1], amrex::convert(ba, jy_nodal_flag),
+ dm, ncomps, ngJ, tag("current_fp_temp[y]"), 0.0_rt);
+ warpx.AllocInitMultiFab(current_fp_temp[lev][2], amrex::convert(ba, jz_nodal_flag),
+ dm, ncomps, ngJ, tag("current_fp_temp[z]"), 0.0_rt);
+
+ warpx.AllocInitMultiFab(current_fp_ampere[lev][0], amrex::convert(ba, jx_nodal_flag),
+ dm, ncomps, ngJ, tag("current_fp_ampere[x]"), 0.0_rt);
+ warpx.AllocInitMultiFab(current_fp_ampere[lev][1], amrex::convert(ba, jy_nodal_flag),
+ dm, ncomps, ngJ, tag("current_fp_ampere[y]"), 0.0_rt);
+ warpx.AllocInitMultiFab(current_fp_ampere[lev][2], amrex::convert(ba, jz_nodal_flag),
+ dm, ncomps, ngJ, tag("current_fp_ampere[z]"), 0.0_rt);
+}
+
+void HybridPICModel::ClearLevel (int lev)
+{
+ electron_pressure_fp[lev].reset();
+ rho_fp_temp[lev].reset();
+ for (int i = 0; i < 3; ++i) {
+ current_fp_temp[lev][i].reset();
+ current_fp_ampere[lev][i].reset();
+ }
+}
+
+void HybridPICModel::InitData ()
+{
+ m_resistivity_parser = std::make_unique<amrex::Parser>(
+ utils::parser::makeParser(m_eta_expression, {"rho"}));
+ m_eta = m_resistivity_parser->compile<1>();
+
+ auto & warpx = WarpX::GetInstance();
+
+ // Get the grid staggering of the fields involved in calculating E
+ amrex::IntVect Jx_stag = warpx.getcurrent_fp(0,0).ixType().toIntVect();
+ amrex::IntVect Jy_stag = warpx.getcurrent_fp(0,1).ixType().toIntVect();
+ amrex::IntVect Jz_stag = warpx.getcurrent_fp(0,2).ixType().toIntVect();
+ amrex::IntVect Bx_stag = warpx.getBfield_fp(0,0).ixType().toIntVect();
+ amrex::IntVect By_stag = warpx.getBfield_fp(0,1).ixType().toIntVect();
+ amrex::IntVect Bz_stag = warpx.getBfield_fp(0,2).ixType().toIntVect();
+ amrex::IntVect Ex_stag = warpx.getEfield_fp(0,0).ixType().toIntVect();
+ amrex::IntVect Ey_stag = warpx.getEfield_fp(0,1).ixType().toIntVect();
+ amrex::IntVect Ez_stag = warpx.getEfield_fp(0,2).ixType().toIntVect();
+
+ // copy data to device
+ for ( int idim = 0; idim < AMREX_SPACEDIM; ++idim) {
+ Jx_IndexType[idim] = Jx_stag[idim];
+ Jy_IndexType[idim] = Jy_stag[idim];
+ Jz_IndexType[idim] = Jz_stag[idim];
+ Bx_IndexType[idim] = Bx_stag[idim];
+ By_IndexType[idim] = By_stag[idim];
+ Bz_IndexType[idim] = Bz_stag[idim];
+ Ex_IndexType[idim] = Ex_stag[idim];
+ Ey_IndexType[idim] = Ey_stag[idim];
+ Ez_IndexType[idim] = Ez_stag[idim];
+ }
+
+ // Below we set all the unused dimensions to have nodal values for J, B & E
+ // since these values will be interpolated onto a nodal grid - if this is
+ // not done the Interp function returns nonsense values.
+#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) || defined(WARPX_DIM_1D_Z)
+ Jx_IndexType[2] = 1;
+ Jy_IndexType[2] = 1;
+ Jz_IndexType[2] = 1;
+ Bx_IndexType[2] = 1;
+ By_IndexType[2] = 1;
+ Bz_IndexType[2] = 1;
+ Ex_IndexType[2] = 1;
+ Ey_IndexType[2] = 1;
+ Ez_IndexType[2] = 1;
+#endif
+#if defined(WARPX_DIM_1D_Z)
+ Jx_IndexType[1] = 1;
+ Jy_IndexType[1] = 1;
+ Jz_IndexType[1] = 1;
+ Bx_IndexType[1] = 1;
+ By_IndexType[1] = 1;
+ Bz_IndexType[1] = 1;
+ Ex_IndexType[1] = 1;
+ Ey_IndexType[1] = 1;
+ Ez_IndexType[1] = 1;
+#endif
+}
+
+void HybridPICModel::CalculateCurrentAmpere (
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3>> const& Bfield,
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3>> const& edge_lengths)
+{
+ auto& warpx = WarpX::GetInstance();
+ for (int lev = 0; lev <= warpx.finestLevel(); ++lev)
+ {
+ CalculateCurrentAmpere(Bfield[lev], edge_lengths[lev], lev);
+ }
+}
+
+void HybridPICModel::CalculateCurrentAmpere (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Bfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& edge_lengths,
+ const int lev)
+{
+ WARPX_PROFILE("WarpX::CalculateCurrentAmpere()");
+
+ auto& warpx = WarpX::GetInstance();
+ warpx.get_pointer_fdtd_solver_fp(lev)->CalculateCurrentAmpere(
+ current_fp_ampere[lev], Bfield, edge_lengths, lev
+ );
+
+ // We do not apply the boundary condition to J here since J = J_i - J_e:
+ // the boundary correction was already applied to J_i, and the B-field
+ // boundary conditions ensure that J itself complies with the boundary conditions.
+ // ApplyJfieldBoundary(lev, Jfield[0].get(), Jfield[1].get(), Jfield[2].get());
+ for (int i=0; i<3; i++) current_fp_ampere[lev][i]->FillBoundary(warpx.Geom(lev).periodicity());
+}
+
+void HybridPICModel::HybridPICSolveE (
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3>> & Efield,
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3>> const& Jfield,
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3>> const& Bfield,
+ amrex::Vector<std::unique_ptr<amrex::MultiFab>> const& rhofield,
+ amrex::Vector<std::array< std::unique_ptr<amrex::MultiFab>, 3>> const& edge_lengths,
+ DtType a_dt_type)
+{
+ auto& warpx = WarpX::GetInstance();
+ for (int lev = 0; lev <= warpx.finestLevel(); ++lev)
+ {
+ HybridPICSolveE(
+ Efield[lev], Jfield[lev], Bfield[lev], rhofield[lev],
+ edge_lengths[lev], lev, a_dt_type
+ );
+ }
+}
+
+void HybridPICModel::HybridPICSolveE (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> & Efield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Bfield,
+ std::unique_ptr<amrex::MultiFab> const& rhofield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& edge_lengths,
+ const int lev, DtType a_dt_type)
+{
+ WARPX_PROFILE("WarpX::HybridPICSolveE()");
+
+ HybridPICSolveE(
+ Efield, Jfield, Bfield, rhofield, edge_lengths, lev,
+ PatchType::fine, a_dt_type
+ );
+ if (lev > 0)
+ {
+ amrex::Abort(Utils::TextMsg::Err(
+ "HybridPICSolveE: Only one level implemented for hybrid-PIC solver."));
+ }
+}
+
+void HybridPICModel::HybridPICSolveE (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> & Efield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Bfield,
+ std::unique_ptr<amrex::MultiFab> const& rhofield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& edge_lengths,
+ const int lev, PatchType patch_type, DtType a_dt_type)
+{
+ auto& warpx = WarpX::GetInstance();
+
+ // Solve E field in regular cells
+ // The first half step uses t=n quantities, the second half t=n+1/2
+ // quantities and the full step uses t=n+1 quantities
+ if (a_dt_type == DtType::FirstHalf) {
+ warpx.get_pointer_fdtd_solver_fp(lev)->HybridPICSolveE(
+ Efield, current_fp_ampere[lev],
+ current_fp_temp[lev], Bfield,
+ rho_fp_temp[lev],
+ electron_pressure_fp[lev],
+ edge_lengths, lev, this, a_dt_type
+ );
+ }
+ else if (a_dt_type == DtType::SecondHalf) {
+ warpx.get_pointer_fdtd_solver_fp(lev)->HybridPICSolveE(
+ Efield, current_fp_ampere[lev],
+ Jfield, Bfield,
+ rho_fp_temp[lev],
+ electron_pressure_fp[lev],
+ edge_lengths, lev, this, a_dt_type
+ );
+ }
+ else {
+ warpx.get_pointer_fdtd_solver_fp(lev)->HybridPICSolveE(
+ Efield, current_fp_ampere[lev],
+ current_fp_temp[lev], Bfield,
+ rhofield,
+ electron_pressure_fp[lev],
+ edge_lengths, lev, this, a_dt_type
+ );
+ }
+
+ warpx.ApplyEfieldBoundary(lev, patch_type);
+}
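To summarize the dispatch above, the low-level solve is fed:

- DtType::FirstHalf: J from current_fp_temp and rho from rho_fp_temp (t = n quantities);
- DtType::SecondHalf: J from Jfield and rho from rho_fp_temp (t = n+1/2 quantities);
- DtType::Full: J from current_fp_temp and rho from rhofield (t = n+1 quantities).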
+
+void HybridPICModel::CalculateElectronPressure(DtType a_dt_type)
+{
+ auto& warpx = WarpX::GetInstance();
+ for (int lev = 0; lev <= warpx.finestLevel(); ++lev)
+ {
+ CalculateElectronPressure(lev, a_dt_type);
+ }
+}
+
+void HybridPICModel::CalculateElectronPressure(const int lev, DtType a_dt_type)
+{
+ WARPX_PROFILE("WarpX::CalculateElectronPressure()");
+
+ auto& warpx = WarpX::GetInstance();
+ // The full step uses rho^{n+1}, otherwise use the old or averaged
+ // charge density.
+ if (a_dt_type == DtType::Full) {
+ FillElectronPressureMF(
+ electron_pressure_fp[lev], warpx.get_pointer_rho_fp(lev)
+ );
+ } else {
+ FillElectronPressureMF(
+ electron_pressure_fp[lev], rho_fp_temp[lev].get()
+ );
+ }
+ warpx.ApplyElectronPressureBoundary(lev, PatchType::fine);
+ electron_pressure_fp[lev]->FillBoundary(warpx.Geom(lev).periodicity());
+}
+
+
+void HybridPICModel::FillElectronPressureMF (
+ std::unique_ptr<amrex::MultiFab> const& Pe_field,
+ amrex::MultiFab* const& rho_field )
+{
+ const auto n0_ref = m_n0_ref;
+ const auto elec_temp = m_elec_temp;
+ const auto gamma = m_gamma;
+
+ // Loop through the grids, and over the tiles within each grid
+#ifdef AMREX_USE_OMP
+#pragma omp parallel if (amrex::Gpu::notInLaunchRegion())
+#endif
+ for ( MFIter mfi(*Pe_field, TilingIfNotGPU()); mfi.isValid(); ++mfi )
+ {
+ // Extract field data for this grid/tile
+ Array4<Real const> const& rho = rho_field->const_array(mfi);
+ Array4<Real> const& Pe = Pe_field->array(mfi);
+
+ // Extract tileboxes for which to loop
+ const Box& tilebox = mfi.tilebox();
+
+ ParallelFor(tilebox, [=] AMREX_GPU_DEVICE (int i, int j, int k) {
+ Pe(i, j, k) = ElectronPressure::get_pressure(
+ n0_ref, elec_temp, gamma, rho(i, j, k)
+ );
+ });
+ }
+}
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel_fwd.H b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel_fwd.H
new file mode 100644
index 000000000..a17fde6eb
--- /dev/null
+++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel_fwd.H
@@ -0,0 +1,15 @@
+/* Copyright 2023 The WarpX Community
+ *
+ * This file is part of WarpX.
+ *
+ * Authors: Roelof Groenewald (TAE Technologies)
+ *
+ * License: BSD-3-Clause-LBNL
+ */
+
+#ifndef WARPX_HYBRIDPICMODEL_FWD_H
+#define WARPX_HYBRIDPICMODEL_FWD_H
+
+class HybridPICModel;
+
+#endif /* WARPX_HYBRIDPICMODEL_FWD_H */
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/Make.package b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/Make.package
new file mode 100644
index 000000000..8145cfcef
--- /dev/null
+++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/Make.package
@@ -0,0 +1,3 @@
+CEXE_sources += HybridPICModel.cpp
+
+VPATH_LOCATIONS += $(WARPX_HOME)/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp
new file mode 100644
index 000000000..24e17d040
--- /dev/null
+++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp
@@ -0,0 +1,511 @@
+/* Copyright 2023 The WarpX Community
+ *
+ * This file is part of WarpX.
+ *
+ * Authors: Roelof Groenewald (TAE Technologies)
+ *
+ * License: BSD-3-Clause-LBNL
+ */
+
+#include "FiniteDifferenceSolver.H"
+
+#ifdef WARPX_DIM_RZ
+# include "FiniteDifferenceAlgorithms/CylindricalYeeAlgorithm.H"
+#else
+# include "FiniteDifferenceAlgorithms/CartesianYeeAlgorithm.H"
+#endif
+#include "HybridPICModel/HybridPICModel.H"
+#include "Utils/TextMsg.H"
+#include "WarpX.H"
+
+#include <ablastr/coarsen/sample.H>
+
+using namespace amrex;
+
+void FiniteDifferenceSolver::CalculateCurrentAmpere (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3>& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3> const& Bfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& edge_lengths,
+ int lev )
+{
+ // Select algorithm (The choice of algorithm is a runtime option,
+ // but we compile code for each algorithm, using templates)
+ if (m_fdtd_algo == ElectromagneticSolverAlgo::HybridPIC) {
+#ifdef WARPX_DIM_RZ
+ CalculateCurrentAmpereCylindrical <CylindricalYeeAlgorithm> (
+ Jfield, Bfield, edge_lengths, lev
+ );
+
+#else
+ CalculateCurrentAmpereCartesian <CartesianYeeAlgorithm> (
+ Jfield, Bfield, edge_lengths, lev
+ );
+
+#endif
+ } else {
+ amrex::Abort(Utils::TextMsg::Err(
+ "CalculateCurrentAmpere: Unknown algorithm choice."));
+ }
+}
+
+// /**
+// * \brief Calculate the total current from Ampere's law, neglecting the
+// * displacement current, i.e. J = (∇ × B) / mu0. The electron current can
+// * then be obtained as J_e = J - J_i using the kinetically tracked ion currents.
+// *
+// * \param[out] Jfield vector of total current MultiFabs at a given level
+// * \param[in] Bfield vector of magnetic field MultiFabs at a given level
+// */
+#ifdef WARPX_DIM_RZ
+template<typename T_Algo>
+void FiniteDifferenceSolver::CalculateCurrentAmpereCylindrical (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 >& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& Bfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& edge_lengths,
+ int lev
+)
+{
+ amrex::ignore_unused(Jfield, Bfield, edge_lengths, lev);
+ amrex::Abort(Utils::TextMsg::Err(
+ "currently hybrid E-solve does not work for RZ"));
+}
+#else
+template<typename T_Algo>
+void FiniteDifferenceSolver::CalculateCurrentAmpereCartesian (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 >& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& Bfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& edge_lengths,
+ int lev
+)
+{
+ // for the profiler
+ amrex::LayoutData<amrex::Real>* cost = WarpX::getCosts(lev);
+
+#ifndef AMREX_USE_EB
+ amrex::ignore_unused(edge_lengths);
+#endif
+
+ // reset Jfield
+ Jfield[0]->setVal(0);
+ Jfield[1]->setVal(0);
+ Jfield[2]->setVal(0);
+
+ // Loop through the grids, and over the tiles within each grid
+#ifdef AMREX_USE_OMP
+#pragma omp parallel if (amrex::Gpu::notInLaunchRegion())
+#endif
+ for ( MFIter mfi(*Jfield[0], TilingIfNotGPU()); mfi.isValid(); ++mfi ) {
+ if (cost && WarpX::load_balance_costs_update_algo == LoadBalanceCostsUpdateAlgo::Timers)
+ {
+ amrex::Gpu::synchronize();
+ }
+ Real wt = amrex::second();
+
+ // Extract field data for this grid/tile
+ Array4<Real> const& Jx = Jfield[0]->array(mfi);
+ Array4<Real> const& Jy = Jfield[1]->array(mfi);
+ Array4<Real> const& Jz = Jfield[2]->array(mfi);
+ Array4<Real const> const& Bx = Bfield[0]->const_array(mfi);
+ Array4<Real const> const& By = Bfield[1]->const_array(mfi);
+ Array4<Real const> const& Bz = Bfield[2]->const_array(mfi);
+
+#ifdef AMREX_USE_EB
+ amrex::Array4<amrex::Real> const& lx = edge_lengths[0]->array(mfi);
+ amrex::Array4<amrex::Real> const& ly = edge_lengths[1]->array(mfi);
+ amrex::Array4<amrex::Real> const& lz = edge_lengths[2]->array(mfi);
+#endif
+
+ // Extract stencil coefficients
+ Real const * const AMREX_RESTRICT coefs_x = m_stencil_coefs_x.dataPtr();
+ int const n_coefs_x = m_stencil_coefs_x.size();
+ Real const * const AMREX_RESTRICT coefs_y = m_stencil_coefs_y.dataPtr();
+ int const n_coefs_y = m_stencil_coefs_y.size();
+ Real const * const AMREX_RESTRICT coefs_z = m_stencil_coefs_z.dataPtr();
+ int const n_coefs_z = m_stencil_coefs_z.size();
+
+ // Extract tileboxes for which to loop
+ Box const& tjx = mfi.tilebox(Jfield[0]->ixType().toIntVect());
+ Box const& tjy = mfi.tilebox(Jfield[1]->ixType().toIntVect());
+ Box const& tjz = mfi.tilebox(Jfield[2]->ixType().toIntVect());
+
+ Real const one_over_mu0 = 1._rt / PhysConst::mu0;
+
+ // First calculate the total current using Ampere's law on the
+ // same grid as the E-field
+ amrex::ParallelFor(tjx, tjy, tjz,
+
+ // Jx calculation
+ [=] AMREX_GPU_DEVICE (int i, int j, int k){
+#ifdef AMREX_USE_EB
+ // Skip if this cell is fully covered by embedded boundaries
+ if (lx(i, j, k) <= 0) return;
+#endif
+ Jx(i, j, k) = one_over_mu0 * (
+ - T_Algo::DownwardDz(By, coefs_z, n_coefs_z, i, j, k)
+ + T_Algo::DownwardDy(Bz, coefs_y, n_coefs_y, i, j, k)
+ );
+ },
+
+ // Jy calculation
+ [=] AMREX_GPU_DEVICE (int i, int j, int k){
+#ifdef AMREX_USE_EB
+ // Skip if this cell is fully covered by embedded boundaries
+#ifdef WARPX_DIM_3D
+ if (ly(i,j,k) <= 0) return;
+#elif defined(WARPX_DIM_XZ)
+ // In XZ Jy is associated with a mesh node, so we need to check if the mesh node is covered
+ amrex::ignore_unused(ly);
+ if (lx(i, j, k)<=0 || lx(i-1, j, k)<=0 || lz(i, j-1, k)<=0 || lz(i, j, k)<=0) return;
+#endif
+#endif
+ Jy(i, j, k) = one_over_mu0 * (
+ - T_Algo::DownwardDx(Bz, coefs_x, n_coefs_x, i, j, k)
+ + T_Algo::DownwardDz(Bx, coefs_z, n_coefs_z, i, j, k)
+ );
+ },
+
+ // Jz calculation
+ [=] AMREX_GPU_DEVICE (int i, int j, int k){
+#ifdef AMREX_USE_EB
+ // Skip if this cell is fully covered by embedded boundaries
+ if (lz(i,j,k) <= 0) return;
+#endif
+ Jz(i, j, k) = one_over_mu0 * (
+ - T_Algo::DownwardDy(Bx, coefs_y, n_coefs_y, i, j, k)
+ + T_Algo::DownwardDx(By, coefs_x, n_coefs_x, i, j, k)
+ );
+ }
+ );
+
+ if (cost && WarpX::load_balance_costs_update_algo == LoadBalanceCostsUpdateAlgo::Timers)
+ {
+ amrex::Gpu::synchronize();
+ wt = amrex::second() - wt;
+ amrex::HostDevice::Atomic::Add( &(*cost)[mfi.index()], wt);
+ }
+ }
+}
+#endif
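In continuous form, the kernel above evaluates the components of J = (∇ × B) / mu0, with each component staggered like the corresponding E-field component:

```latex
\mu_0 J_x = \partial_y B_z - \partial_z B_y, \qquad
\mu_0 J_y = \partial_z B_x - \partial_x B_z, \qquad
\mu_0 J_z = \partial_x B_y - \partial_y B_x
```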
+
+
+void FiniteDifferenceSolver::HybridPICSolveE (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 >& Efield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 >& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& Jifield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& Bfield,
+ std::unique_ptr<amrex::MultiFab> const& rhofield,
+ std::unique_ptr<amrex::MultiFab> const& Pefield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& edge_lengths,
+ int lev, HybridPICModel const* hybrid_model,
+ DtType a_dt_type )
+{
+
+ WARPX_ALWAYS_ASSERT_WITH_MESSAGE(
+ WarpX::grid_type == GridType::Staggered,
+ "Ohm's law E-solve only works with a staggered (Yee) grid.");
+
+
+ // Select algorithm (The choice of algorithm is a runtime option,
+ // but we compile code for each algorithm, using templates)
+ if (m_fdtd_algo == ElectromagneticSolverAlgo::HybridPIC) {
+#ifdef WARPX_DIM_RZ
+
+ HybridPICSolveECylindrical <CylindricalYeeAlgorithm> (
+ Efield, Jfield, Jifield, Bfield, rhofield, Pefield,
+ edge_lengths, lev, hybrid_model, a_dt_type
+ );
+
+#else
+
+ HybridPICSolveECartesian <CartesianYeeAlgorithm> (
+ Efield, Jfield, Jifield, Bfield, rhofield, Pefield,
+ edge_lengths, lev, hybrid_model, a_dt_type
+ );
+
+#endif
+ } else {
+ amrex::Abort(Utils::TextMsg::Err(
+ "HybridSolveE: The hybrid-PIC electromagnetic solver algorithm must be used"));
+ }
+}
+
+#ifdef WARPX_DIM_RZ
+template<typename T_Algo>
+void FiniteDifferenceSolver::HybridPICSolveECylindrical (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 >& Efield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& Jifield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& Bfield,
+ std::unique_ptr<amrex::MultiFab> const& rhofield,
+ std::unique_ptr<amrex::MultiFab> const& Pefield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& edge_lengths,
+ int lev, HybridPICModel const* hybrid_model,
+ DtType a_dt_type )
+{
+#ifndef AMREX_USE_EB
+ amrex::ignore_unused(edge_lengths);
+#endif
+ amrex::ignore_unused(
+ Efield, Jfield, Jifield, Bfield, rhofield, Pefield, edge_lengths,
+ lev, hybrid_model, a_dt_type
+ );
+ amrex::Abort(Utils::TextMsg::Err(
+ "currently hybrid E-solve does not work for RZ"));
+}
+
+#else
+
+template<typename T_Algo>
+void FiniteDifferenceSolver::HybridPICSolveECartesian (
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 >& Efield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& Jfield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& Jifield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& Bfield,
+ std::unique_ptr<amrex::MultiFab> const& rhofield,
+ std::unique_ptr<amrex::MultiFab> const& Pefield,
+ std::array< std::unique_ptr<amrex::MultiFab>, 3 > const& edge_lengths,
+ int lev, HybridPICModel const* hybrid_model,
+ DtType a_dt_type )
+{
+#ifndef AMREX_USE_EB
+ amrex::ignore_unused(edge_lengths);
+#endif
+
+ // for the profiler
+ amrex::LayoutData<amrex::Real>* cost = WarpX::getCosts(lev);
+
+ using namespace ablastr::coarsen::sample;
+
+ // get hybrid model parameters
+ const auto eta = hybrid_model->m_eta;
+ const auto rho_floor = hybrid_model->m_n_floor * PhysConst::q_e;
+
+ // Index type required for interpolating fields from their respective
+ // staggering to the Ex, Ey, Ez locations
+ amrex::GpuArray<int, 3> const& Ex_stag = hybrid_model->Ex_IndexType;
+ amrex::GpuArray<int, 3> const& Ey_stag = hybrid_model->Ey_IndexType;
+ amrex::GpuArray<int, 3> const& Ez_stag = hybrid_model->Ez_IndexType;
+ amrex::GpuArray<int, 3> const& Jx_stag = hybrid_model->Jx_IndexType;
+ amrex::GpuArray<int, 3> const& Jy_stag = hybrid_model->Jy_IndexType;
+ amrex::GpuArray<int, 3> const& Jz_stag = hybrid_model->Jz_IndexType;
+ amrex::GpuArray<int, 3> const& Bx_stag = hybrid_model->Bx_IndexType;
+ amrex::GpuArray<int, 3> const& By_stag = hybrid_model->By_IndexType;
+ amrex::GpuArray<int, 3> const& Bz_stag = hybrid_model->Bz_IndexType;
+
+ // Parameters for `interp` that maps from Yee to nodal mesh and back
+ amrex::GpuArray<int, 3> const& nodal = {1, 1, 1};
+ // The "coarsening is just 1 i.e. no coarsening"
+ amrex::GpuArray<int, 3> const& coarsen = {1, 1, 1};
+
+ // The E-field calculation is done in 2 steps:
+ // 1) The J x B term is calculated on a nodal mesh in order to ensure
+ // energy conservation.
+ // 2) The nodal E-field values are averaged onto the Yee grid and the
+ // electron pressure & resistivity terms are added (these terms are
+ // naturally located on the Yee grid).
+
+ // Create a temporary multifab to hold the nodal E-field values
+ // Note the multifab has 3 components, for Ex, Ey and Ez, which is possible
+ // here since all three components will be calculated on the same grid.
+ // Also note that enE_nodal_mf does not need to have any guard cells since
+ // these values will be interpolated to the Yee mesh which is contained
+ // by the nodal mesh.
+ auto const& ba = convert(rhofield->boxArray(), IntVect::TheNodeVector());
+ MultiFab enE_nodal_mf(ba, rhofield->DistributionMap(), 3, IntVect::TheZeroVector());
+
+ // Loop through the grids, and over the tiles within each grid for the
+ // initial, nodal calculation of E
+#ifdef AMREX_USE_OMP
+#pragma omp parallel if (amrex::Gpu::notInLaunchRegion())
+#endif
+ for ( MFIter mfi(enE_nodal_mf, TilingIfNotGPU()); mfi.isValid(); ++mfi ) {
+ if (cost && WarpX::load_balance_costs_update_algo == LoadBalanceCostsUpdateAlgo::Timers)
+ {
+ amrex::Gpu::synchronize();
+ }
+ Real wt = amrex::second();
+
+ Array4<Real> const& enE_nodal = enE_nodal_mf.array(mfi);
+ Array4<Real const> const& Jx = Jfield[0]->const_array(mfi);
+ Array4<Real const> const& Jy = Jfield[1]->const_array(mfi);
+ Array4<Real const> const& Jz = Jfield[2]->const_array(mfi);
+ Array4<Real const> const& Jix = Jifield[0]->const_array(mfi);
+ Array4<Real const> const& Jiy = Jifield[1]->const_array(mfi);
+ Array4<Real const> const& Jiz = Jifield[2]->const_array(mfi);
+ Array4<Real const> const& Bx = Bfield[0]->const_array(mfi);
+ Array4<Real const> const& By = Bfield[1]->const_array(mfi);
+ Array4<Real const> const& Bz = Bfield[2]->const_array(mfi);
+
+ // Loop over the cells and update the nodal E field
+ amrex::ParallelFor(mfi.tilebox(), [=] AMREX_GPU_DEVICE (int i, int j, int k){
+
+ // interpolate the total current to a nodal grid
+ auto const jx_interp = Interp(Jx, Jx_stag, nodal, coarsen, i, j, k, 0);
+ auto const jy_interp = Interp(Jy, Jy_stag, nodal, coarsen, i, j, k, 0);
+ auto const jz_interp = Interp(Jz, Jz_stag, nodal, coarsen, i, j, k, 0);
+
+ // interpolate the ion current to a nodal grid
+ auto const jix_interp = Interp(Jix, Jx_stag, nodal, coarsen, i, j, k, 0);
+ auto const jiy_interp = Interp(Jiy, Jy_stag, nodal, coarsen, i, j, k, 0);
+ auto const jiz_interp = Interp(Jiz, Jz_stag, nodal, coarsen, i, j, k, 0);
+
+ // interpolate the B field to a nodal grid
+ auto const Bx_interp = Interp(Bx, Bx_stag, nodal, coarsen, i, j, k, 0);
+ auto const By_interp = Interp(By, By_stag, nodal, coarsen, i, j, k, 0);
+ auto const Bz_interp = Interp(Bz, Bz_stag, nodal, coarsen, i, j, k, 0);
+
+ // calculate enE = (J - Ji) x B
+ enE_nodal(i, j, k, 0) = (
+ (jy_interp - jiy_interp) * Bz_interp
+ - (jz_interp - jiz_interp) * By_interp
+ );
+ enE_nodal(i, j, k, 1) = (
+ (jz_interp - jiz_interp) * Bx_interp
+ - (jx_interp - jix_interp) * Bz_interp
+ );
+ enE_nodal(i, j, k, 2) = (
+ (jx_interp - jix_interp) * By_interp
+ - (jy_interp - jiy_interp) * Bx_interp
+ );
+ });
+
+ if (cost && WarpX::load_balance_costs_update_algo == LoadBalanceCostsUpdateAlgo::Timers)
+ {
+ amrex::Gpu::synchronize();
+ wt = amrex::second() - wt;
+ amrex::HostDevice::Atomic::Add( &(*cost)[mfi.index()], wt);
+ }
+ }
+
+ // Loop through the grids, and over the tiles within each grid again
+ // for the Yee grid calculation of the E field
+#ifdef AMREX_USE_OMP
+#pragma omp parallel if (amrex::Gpu::notInLaunchRegion())
+#endif
+ for ( MFIter mfi(*Efield[0], TilingIfNotGPU()); mfi.isValid(); ++mfi ) {
+ if (cost && WarpX::load_balance_costs_update_algo == LoadBalanceCostsUpdateAlgo::Timers)
+ {
+ amrex::Gpu::synchronize();
+ }
+ Real wt = amrex::second();
+
+ // Extract field data for this grid/tile
+ Array4<Real> const& Ex = Efield[0]->array(mfi);
+ Array4<Real> const& Ey = Efield[1]->array(mfi);
+ Array4<Real> const& Ez = Efield[2]->array(mfi);
+ Array4<Real const> const& Jx = Jfield[0]->const_array(mfi);
+ Array4<Real const> const& Jy = Jfield[1]->const_array(mfi);
+ Array4<Real const> const& Jz = Jfield[2]->const_array(mfi);
+ Array4<Real const> const& enE = enE_nodal_mf.const_array(mfi);
+ Array4<Real const> const& rho = rhofield->const_array(mfi);
+ Array4<Real> const& Pe = Pefield->array(mfi);
+
+#ifdef AMREX_USE_EB
+ amrex::Array4<amrex::Real> const& lx = edge_lengths[0]->array(mfi);
+ amrex::Array4<amrex::Real> const& ly = edge_lengths[1]->array(mfi);
+ amrex::Array4<amrex::Real> const& lz = edge_lengths[2]->array(mfi);
+#endif
+
+ // Extract stencil coefficients
+ Real const * const AMREX_RESTRICT coefs_x = m_stencil_coefs_x.dataPtr();
+ int const n_coefs_x = m_stencil_coefs_x.size();
+ Real const * const AMREX_RESTRICT coefs_y = m_stencil_coefs_y.dataPtr();
+ int const n_coefs_y = m_stencil_coefs_y.size();
+ Real const * const AMREX_RESTRICT coefs_z = m_stencil_coefs_z.dataPtr();
+ int const n_coefs_z = m_stencil_coefs_z.size();
+
+ Box const& tex = mfi.tilebox(Efield[0]->ixType().toIntVect());
+ Box const& tey = mfi.tilebox(Efield[1]->ixType().toIntVect());
+ Box const& tez = mfi.tilebox(Efield[2]->ixType().toIntVect());
+
+ // Loop over the cells and update the E field
+ amrex::ParallelFor(tex, tey, tez,
+
+ // Ex calculation
+ [=] AMREX_GPU_DEVICE (int i, int j, int k){
+#ifdef AMREX_USE_EB
+ // Skip if this cell is fully covered by embedded boundaries
+ if (lx(i, j, k) <= 0) return;
+#endif
+ // Interpolate to get the appropriate charge density in space
+ Real rho_val = Interp(rho, nodal, Ex_stag, coarsen, i, j, k, 0);
+
+ // safety condition since we divide by rho_val later
+ if (rho_val < rho_floor) rho_val = rho_floor;
+
+ // Get the gradient of the electron pressure
+ auto grad_Pe = T_Algo::UpwardDx(Pe, coefs_x, n_coefs_x, i, j, k);
+
+ // interpolate the nodal enE values to the Yee grid
+ auto enE_x = Interp(enE, nodal, Ex_stag, coarsen, i, j, k, 0);
+
+ Ex(i, j, k) = (enE_x - grad_Pe) / rho_val;
+
+ // Add resistivity only if E field value is used to update B
+ if (a_dt_type != DtType::Full) Ex(i, j, k) += eta(rho_val) * Jx(i, j, k);
+ },
+
+ // Ey calculation
+ [=] AMREX_GPU_DEVICE (int i, int j, int k){
+#ifdef AMREX_USE_EB
+ // Skip field solve if this cell is fully covered by embedded boundaries
+#ifdef WARPX_DIM_3D
+ if (ly(i,j,k) <= 0) return;
+#elif defined(WARPX_DIM_XZ)
+ // In XZ Ey is associated with a mesh node, so we need to check if the mesh node is covered
+ amrex::ignore_unused(ly);
+ if (lx(i, j, k)<=0 || lx(i-1, j, k)<=0 || lz(i, j-1, k)<=0 || lz(i, j, k)<=0) return;
+#endif
+#endif
+ // Interpolate to get the appropriate charge density in space
+ Real rho_val = Interp(rho, nodal, Ey_stag, coarsen, i, j, k, 0);
+
+ // safety condition since we divide by rho_val later
+ if (rho_val < rho_floor) rho_val = rho_floor;
+
+ // Get the gradient of the electron pressure
+ auto grad_Pe = T_Algo::UpwardDy(Pe, coefs_y, n_coefs_y, i, j, k);
+
+ // interpolate the nodal enE values to the Yee grid
+ auto enE_y = Interp(enE, nodal, Ey_stag, coarsen, i, j, k, 1);
+
+ Ey(i, j, k) = (enE_y - grad_Pe) / rho_val;
+
+ // Add resistivity only if E field value is used to update B
+ if (a_dt_type != DtType::Full) Ey(i, j, k) += eta(rho_val) * Jy(i, j, k);
+ },
+
+ // Ez calculation
+ [=] AMREX_GPU_DEVICE (int i, int j, int k){
+
+#ifdef AMREX_USE_EB
+ // Skip field solve if this cell is fully covered by embedded boundaries
+ if (lz(i,j,k) <= 0) return;
+#endif
+ // Interpolate to get the appropriate charge density in space
+ Real rho_val = Interp(rho, nodal, Ez_stag, coarsen, i, j, k, 0);
+
+ // safety condition since we divide by rho_val later
+ if (rho_val < rho_floor) rho_val = rho_floor;
+
+ // Get the gradient of the electron pressure
+ auto grad_Pe = T_Algo::UpwardDz(Pe, coefs_z, n_coefs_z, i, j, k);
+
+ // interpolate the nodal enE values to the Yee grid
+ auto enE_z = Interp(enE, nodal, Ez_stag, coarsen, i, j, k, 2);
+
+ Ez(i, j, k) = (enE_z - grad_Pe) / rho_val;
+
+ // Add resistivity only if E field value is used to update B
+ if (a_dt_type != DtType::Full) Ez(i, j, k) += eta(rho_val) * Jz(i, j, k);
+ }
+ );
+
+ if (cost && WarpX::load_balance_costs_update_algo == LoadBalanceCostsUpdateAlgo::Timers)
+ {
+ amrex::Gpu::synchronize();
+ wt = amrex::second() - wt;
+ amrex::HostDevice::Atomic::Add( &(*cost)[mfi.index()], wt);
+ }
+ }
+}
+#endif
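Putting the pieces together, the sketch below shows one hypothetical way the HybridPICModel methods added in this commit combine into a single E-field update. The function name and the field containers passed in are assumptions for illustration; the actual WarpX evolve loop is not part of this diff:

```cpp
#include "FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.H"

// Hypothetical orchestration of one hybrid-PIC E-field update. Efield,
// Jfield, Bfield, rhofield and edge_lengths stand in for the WarpX-owned
// field containers over all levels.
void hybrid_field_update_sketch (
    HybridPICModel& model,
    amrex::Vector<std::array<std::unique_ptr<amrex::MultiFab>, 3>>& Efield,
    amrex::Vector<std::array<std::unique_ptr<amrex::MultiFab>, 3>> const& Jfield,
    amrex::Vector<std::array<std::unique_ptr<amrex::MultiFab>, 3>> const& Bfield,
    amrex::Vector<std::unique_ptr<amrex::MultiFab>> const& rhofield,
    amrex::Vector<std::array<std::unique_ptr<amrex::MultiFab>, 3>> const& edge_lengths,
    DtType dt_type)
{
    // 1) Total current from Ampere's law, J = (curl B) / mu0;
    //    fills model.current_fp_ampere.
    model.CalculateCurrentAmpere(Bfield, edge_lengths);

    // 2) Electron pressure from the equation of state at this step type;
    //    fills model.electron_pressure_fp.
    model.CalculateElectronPressure(dt_type);

    // 3) Ohm's law solve for E, combining J, J_i, B, rho and p_e;
    //    also applies the E-field boundary conditions.
    model.HybridPICSolveE(Efield, Jfield, Bfield, rhofield,
                          edge_lengths, dt_type);
}
```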
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/Make.package b/Source/FieldSolver/FiniteDifferenceSolver/Make.package
index 898b54a2f..b3708c411 100644
--- a/Source/FieldSolver/FiniteDifferenceSolver/Make.package
+++ b/Source/FieldSolver/FiniteDifferenceSolver/Make.package
@@ -10,7 +10,9 @@ CEXE_sources += EvolveBPML.cpp
CEXE_sources += EvolveEPML.cpp
CEXE_sources += EvolveFPML.cpp
CEXE_sources += ApplySilverMuellerBoundary.cpp
+CEXE_sources += HybridPICSolveE.cpp
+include $(WARPX_HOME)/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/Make.package
include $(WARPX_HOME)/Source/FieldSolver/FiniteDifferenceSolver/MacroscopicProperties/Make.package
VPATH_LOCATIONS += $(WARPX_HOME)/Source/FieldSolver/FiniteDifferenceSolver