author Phil Miller <phil@intensecomputing.com> 2021-08-30 16:26:55 -0400
committer GitHub <noreply@github.com> 2021-08-30 13:26:55 -0700
commit e130e57efcee4fb08cae9a5888c67c83db63abb4 (patch)
tree 2e8cc1688c5842ba2d478c04887d8ba214bcd6f6 /Examples/Modules/ParticleBoundaryScrape/analysis_scrape.py
parent c17b786f935a52530e7d559b7bae4c6ab740ae85 (diff)
download WarpX-e130e57efcee4fb08cae9a5888c67c83db63abb4.tar.gz
WarpX-e130e57efcee4fb08cae9a5888c67c83db63abb4.tar.zst
WarpX-e130e57efcee4fb08cae9a5888c67c83db63abb4.zip
Make buffer of scraped particles available to Python code (#2164)
* Added wrapper to get the number of particle species tracked by the scraper. Not sure if this is going to be useful, but it demonstrates a method to get information from the ParticleBoundaryBuffer into Python.
* Stubbed out the main wrapper functions
* Added parameters to wrapper
* Added wrapper for getting the number of particles scraped of a species on a boundary
* Added picmi arguments to scrape particles at the domain boundary
* Added wrapper to get the full particle buffer into Python
* Rearranged the getBuffer properties code a little
* Added docstrings and other suggested changes
* Added num_particles_impacted_boundary docstring
* Fixed mistake in docstring
* Changed boundary parameter to be a string for clarity
* Fixed issue with the boundary parameter for scraping
* Fixed issue with the boundary input for the scraping stats wrapper
* Added demonstration of particle scraping wrapper
* Added analysis.py file
* Fix typo in one of the dimension maps

Co-authored-by: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com>

* Added before esolve to warpx evolve
* Added test for the scraped particle buffer wrappers
* Moved Python PICMI particle boundary scrape test
* Renamed test file to the correct name
* Removed old test
* Added special functionality to get the timestep at which particles were scraped
* Removed debug print
* Added Python wrapper for the clearParticles() function of the scraper buffer
* Added special wrapper function to get the timesteps at which the particles in the boundary buffer were scraped
* Updated test to match the non-PICMI test for the particle scraper buffer
* Fix uncaught rebase mistake
* Re-activated PICMI test of accessing the scraped particle buffers via Python
* Added documentation for the new parameters involved in the scraped particle buffer and fixed a remaining issue with the PICMI test
* Changes requested during code review

Co-authored-by: mkieburtz <michaelkieburtz@gmail.com>
Co-authored-by: Roelof <roelof.groenewald@modernelectron.com>
Co-authored-by: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com>
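As a rough sketch of the workflow the commit message describes, the snippet below reads the scraped-particle buffer from Python after a simulation has advanced. The module path and function names (get_particle_boundary_buffer_size, get_particle_boundary_buffer, clearParticleBoundaryBuffer) and their signatures are assumptions inferred from the bullet points above, not verified against the wrappers added in this commit; the boundary is identified by a string, as the commit message states.

# Sketch only: wrapper names and signatures are assumptions inferred from the
# commit message bullets above, not a verified pywarpx API.
# Assumes a fully configured picmi.Simulation `sim` (grid, solver, species with
# boundary scraping enabled) has already been built and stepped, e.g. sim.step(40).
from pywarpx import _libwarpx

boundary = 'z_hi'  # boundaries are passed as strings, per the commit message

# Number of 'electrons' particles collected in the boundary buffer so far.
n_scraped = _libwarpx.get_particle_boundary_buffer_size('electrons', boundary)

# Full buffer of one component (here the weight 'w') on refinement level 0.
weights = _libwarpx.get_particle_boundary_buffer('electrons', boundary, 'w', 0)

# Time step at which each buffered particle was scraped (the special component
# mentioned above; its exact name here is an assumption).
steps = _libwarpx.get_particle_boundary_buffer('electrons', boundary, 'step_scraped', 0)

# Reset the buffer, mirroring the clearParticles() wrapper.
_libwarpx.clearParticleBoundaryBuffer()

The PICMI test added by this commit exercises these wrappers end to end, so it is the authoritative reference for the real call signatures.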
Diffstat (limited to 'Examples/Modules/ParticleBoundaryScrape/analysis_scrape.py')
-rwxr-xr-x Examples/Modules/ParticleBoundaryScrape/analysis_scrape.py | 13
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/Examples/Modules/ParticleBoundaryScrape/analysis_scrape.py b/Examples/Modules/ParticleBoundaryScrape/analysis_scrape.py
index b970c4933..c325495db 100755
--- a/Examples/Modules/ParticleBoundaryScrape/analysis_scrape.py
+++ b/Examples/Modules/ParticleBoundaryScrape/analysis_scrape.py
@@ -1,6 +1,7 @@
#! /usr/bin/env python
import yt
+from pathlib import Path
# This test shoots a beam of electrons at cubic embedded boundary geometry
# At time step 40, none of the particles have hit the boundary yet. At time
@@ -9,11 +10,19 @@ import yt
# the problem domain yet.
# all particles are still there
-ds40 = yt.load("particle_scrape_plt00040")
+if Path("particle_scrape_plt00040").is_dir():
+ filename = "particle_scrape_plt00040"
+else:
+ filename = "Python_particle_scrape_plt00040"
+ds40 = yt.load(filename)
np40 = ds40.index.particle_headers['electrons'].num_particles
assert(np40 == 612)
# all particles have been removed
-ds60 = yt.load("particle_scrape_plt00060")
+if Path("particle_scrape_plt00060").is_dir():
+ filename = "particle_scrape_plt00060"
+else:
+ filename = "Python_particle_scrape_plt00060"
+ds60 = yt.load(filename)
np60 = ds60.index.particle_headers['electrons'].num_particles
assert(np60 == 0)
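The fallback added above handles the two plotfile prefixes produced by the regular and Python (PICMI) versions of the test. A slightly more general form of the same pattern, offered only as an illustrative sketch and not part of this commit, would pick the first existing candidate directory:

from pathlib import Path

def find_plotfile(step, prefixes=("particle_scrape_plt", "Python_particle_scrape_plt")):
    # Return the first existing plotfile directory for the given step number;
    # the two prefixes mirror the names handled in analysis_scrape.py.
    for prefix in prefixes:
        candidate = Path(f"{prefix}{step:05d}")
        if candidate.is_dir():
            return str(candidate)
    raise FileNotFoundError(f"No plotfile found for step {step}")

# Usage, assuming yt is installed and one of the plotfiles exists:
#   ds40 = yt.load(find_plotfile(40))
#   ds60 = yt.load(find_plotfile(60))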