Conversation

@matt-pharr (Collaborator)

Updated Vacuum using new code provided by M. S. Chance, which is based on the following reference:

M. S. Chance, A. D. Turnbull, and P. B. Snyder, Journal of Computational Physics 221, 330 (2007).

This should improve accuracy in high toroidal mode number cases. Benchmarking plots will follow.

@matt-pharr matt-pharr marked this pull request as draft August 23, 2025 21:48
@matt-pharr matt-pharr self-assigned this Aug 24, 2025
@matt-pharr matt-pharr marked this pull request as ready for review August 24, 2025 22:12
@matt-pharr (Collaborator, Author)

@logan-nc Oops... it looks like the new vacuum Green's function was in our code after all, albeit not in use, lying around in aleg_new.f.

@matt-pharr (Collaborator, Author)

matt-pharr commented Aug 25, 2025

[Three plots per $n$: the log of the difference, the absolute difference, and the new, corrected vacuum matrices.]

Here are three plots showing the log of the difference, the absolute difference, and the new, corrected matrices for four values of $n$, using the DIII-D ideal example case. I also added a flag to tell vacuum to use the original Green's functions instead.

The differences, at least in all the cases seen here, appear relatively minimal as far as DCON is concerned, save for a very large single-element difference in the two higher-$n$ cases, along the upper-right border of each plot.
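
For reference, a minimal sketch (in Python) of how such three-panel comparisons could be generated; `W_old` and `W_new` are hypothetical placeholders for real-valued vacuum matrices read from the respective outputs, not the actual DCON I/O:

```python
# Hypothetical sketch: W_old and W_new stand in for the (real) vacuum
# matrices from the old and new Green's function evaluations; reading
# them from the DCON outputs is omitted.
import numpy as np
import matplotlib.pyplot as plt

def compare_matrices(W_old, W_new, n, eps=1e-30):
    """Three panels: log10 |difference|, absolute difference, new matrix."""
    diff = np.abs(W_new - W_old)
    panels = [
        (np.log10(diff + eps), "log10 |new - old|"),  # eps avoids log10(0)
        (diff, "|new - old|"),
        (W_new, "new (corrected) matrix"),
    ]
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    for ax, (data, title) in zip(axes, panels):
        im = ax.imshow(data, origin="lower")
        ax.set_title(f"{title}, n = {n}")
        fig.colorbar(im, ax=ax)
    fig.tight_layout()
    plt.show()
```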

@matt-pharr (Collaborator, Author)

@JaeBeom1019 You may be interested in this.

@matt-pharr matt-pharr requested a review from logan-nc August 25, 2025 06:22
@JaeBeom1019 (Collaborator)

JaeBeom1019 commented Aug 25, 2025

[Plot: minimum and maximum $\log(n\hat\rho)$ vs $n$ for the devices listed below.]

@matt-pharr Interesting. I've plotted the minimum and maximum values of $\log(n\hat\rho)$ for various devices. The 2007 paper recommends using the new computational method when $n\hat\rho \gg 10$ and the old method when $n\hat\rho \ll 0.1$. I think we need to be particularly cautious when running codes like DCON for small devices, whereas this is less of a concern for larger, DEMO-sized machines. Here I used 200 boundary points.

```python
devices = {
    "KDEMO":  {"R0": 6.8,  "a": 2.1,  "kappa": 2.0,  "delta": 0.625, "color": "blue"},
    "ITER":   {"R0": 6.2,  "a": 2.0,  "kappa": 1.85, "delta": 0.4,   "color": "red"},
    "JET":    {"R0": 2.96, "a": 1.25, "kappa": 1.68, "delta": 0.4,   "color": "magenta"},
    "KSTAR":  {"R0": 1.8,  "a": 0.5,  "kappa": 2.0,  "delta": 0.8,   "color": "green"},
    "DIII-D": {"R0": 1.66, "a": 0.67, "kappa": 1.8,  "delta": 0.6,   "color": "purple"},
    "VEST":   {"R0": 0.4,  "a": 0.3,  "kappa": 1.7,  "delta": 0.2,   "color": "orange"},
}
```

--- $n$ value where maximum $n\hat\rho$ crosses 10 ---
VEST : n = 4.43
DIII-D : n = 9.80
JET : n = 10.75
KDEMO : n = 12.65
KSTAR : n = 13.78
ITER : n = 13.92

--- $n$ value where minimum $n\hat\rho$ crosses 0.1 ---
VEST : n = 8.74
JET : n = 14.03
DIII-D : n = 16.26
ITER : n = 19.43
KDEMO : n = 23.37
KSTAR : n = 34.17
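
For concreteness, here is a minimal sketch of how such a scan could be done, reusing the devices dictionary above. The Miller-style boundary and the normalization $\hat\rho = |x - x'| / \sqrt{R R'}$ are illustrative assumptions on my part; the authoritative definition is the one in the vacuum code.

```python
# Minimal sketch of the scan above (boundary shape and rho_hat
# normalization are assumptions; see the caveats in the text).
import numpy as np

def boundary(R0, a, kappa, delta, mths=200):
    """Miller-like plasma boundary sampled at mths points."""
    theta = np.linspace(0.0, 2.0 * np.pi, mths, endpoint=False)
    R = R0 + a * np.cos(theta + np.arcsin(delta) * np.sin(theta))
    Z = kappa * a * np.sin(theta)
    return R, Z

def rho_hat_range(R, Z):
    """Min/max assumed rho_hat over all distinct boundary point pairs."""
    dR = R[:, None] - R[None, :]
    dZ = Z[:, None] - Z[None, :]
    rho_hat = np.hypot(dR, dZ) / np.sqrt(R[:, None] * R[None, :])
    iu = np.triu_indices(len(R), k=1)  # skip self-pairs (rho_hat = 0)
    return rho_hat[iu].min(), rho_hat[iu].max()

for name, d in devices.items():
    R, Z = boundary(d["R0"], d["a"], d["kappa"], d["delta"], mths=200)
    lo, hi = rho_hat_range(R, Z)
    # n*rho_hat_max crosses 10 at n = 10/hi; n*rho_hat_min crosses 0.1 at n = 0.1/lo
    print(f"{name:7s}: crosses 10 at n = {10.0 / hi:5.2f}, crosses 0.1 at n = {0.1 / lo:6.2f}")
```

Note that in such a scan the minimum pair separation shrinks as the number of points grows, which is why the 0.1 crossing below scales with the point count while the 10 crossing does not.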

@JaeBeom1019 (Collaborator)

For reference, I'm also attaching a plot generated with a different number of points. Of course, the maximum-$n$ limit does not change with the number of points.

[Plot: the same scan with a different number of boundary points.]

--- $n$ value where maximum $n\hat\rho$ crosses 10 ---
VEST : n = 4.43
DIII-D : n = 9.80
JET : n = 10.75
KDEMO : n = 12.65
KSTAR : n = 13.78
ITER : n = 13.92

--- $n$ value where minimum $n\hat\rho$ crosses 0.1 ---
VEST : n = 21.84
JET : n = 35.09
DIII-D : n = 40.65
ITER : n = 48.59
KDEMO : n = 58.42
KSTAR : n = 85.53

For 1000 points:
[Plot: the same scan with 1000 boundary points.]
--- $n$ value where maximum $n\hat\rho$ crosses 10 ---
VEST : n = 4.43
DIII-D : n = 9.80
JET : n = 10.75
KDEMO : n = 12.65
KSTAR : n = 13.78
ITER : n = 13.92
--- $n$ value where minimum $n\hat\rho$ crosses 0.1 ---
VEST : n = 43.69
JET : n = 70.19
DIII-D : n = 81.30
ITER : n = 97.18
KDEMO : n = 116.87
KSTAR : n = 171.05

@parkjk (Contributor)

parkjk commented Aug 25, 2025

Isn't the new routine slow? I recall testing the new vacuum for NSTX at high $n$ and finding no big difference in the results, but noticeably slower computation. I can't find any record (so far), since at that time, before the git migration, we were using an ancient code repository like CVS or SVN. Thanks for this systematic testing, by the way.

@matt-pharr (Collaborator, Author)

@parkjk It is not noticeably slow in my experience running it now. I can benchmark it, but at the very least it is effectively instant next to kinetic DCON. If we find a case where it is particularly slow, we could consider parallelizing it in the Julia version.

@matt-pharr (Collaborator, Author)

And regarding accuracy, you are correct that there is no noticeable difference for low $n$, but in the two high-$n$ cases there are elements with very large ($\gg 100\%$) changes. They are only single elements, but they could definitely change an eigenvalue spectrum. Notably, this was only tried with a wall at infinity.

@logan-nc (Contributor)

@JaeBeom1019 Are the min and max $n\hat\rho$ set by something in the vacuum namelist? Which parameter are you changing when you say "number of points"? My poor understanding is that $\rho$ is the vector from the plasma boundary to a point in vacuum, so it can technically tend to 0 for any plasma, and the minimum values you are seeing must come from some setting in the vacuum code's use of finite integration bounds. Is that right? Or is there truly a fundamental max and min for a given tokamak size/shape?

@logan-nc (Contributor)

@parkjk I think the Chance paper does a very good job of proving that this new technique is necessary for high $n\hat\rho$.

@matt-pharr has all the timings for the runs above in the DCON netCDF files, and will supply a plot comparing runtimes vs $n$ with and without the new technique.
@matt-pharr has also agreed to plot $|\vec{W}_{1,\mathrm{with}} \cdot \vec{W}_{1,\mathrm{without}}|$ vs $n$, which will show how much the least stable eigenmode changes (1 = unchanged, 0 = completely different).
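
For the record, a sketch of what that overlap diagnostic could look like; the matrix names are hypothetical, and it assumes the least stable mode is the eigenvector of the smallest eigenvalue of a Hermitian energy matrix:

```python
# Hypothetical sketch of the proposed overlap diagnostic; W_with and
# W_without stand in for (Hermitian) energy matrices computed with and
# without the new Green's function evaluation.
import numpy as np

def least_stable_mode(W):
    """Unit eigenvector of the smallest eigenvalue (assumed least stable)."""
    vals, vecs = np.linalg.eigh(W)
    v = vecs[:, np.argmin(vals)]
    return v / np.linalg.norm(v)

def eigenmode_overlap(W_with, W_without):
    """|v1_with . v1_without|: 1 = unchanged, 0 = completely different."""
    return abs(np.vdot(least_stable_mode(W_with), least_stable_mode(W_without)))
```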

@logan-nc (Contributor)

> I think we need to be particularly cautious when running codes like DCON for small devices, whereas this is less of a concern for larger, DEMO-sized machines.

I would add to this that we do not want users to have to make this decision themselves. We need the code to automatically select the appropriate method based on $n\hat\rho$ as it goes. Skimming this with @matt-pharr, it looked like there might be some logic that does this already... can anyone confirm?

@JaeBeom1019 (Collaborator)

@logan-nc The number of points corresponds to mths. The term $\hat\rho$ represents the normalized distance between the source point and the observation point of the Green's function; it emerges naturally when you follow the Green's function formula. My plot was made assuming no wall, so it shows the maximum and minimum normalized distance $\hat\rho$ over all pairs of the mths points on the plasma boundary, plotted against $n$. If we consider a scenario with a wall, we must allow a vacuum point to be either a source or an observation point; in that case, the maximum value will increase.
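
Continuing the earlier sketch (same assumed normalization), a with-wall scan would extend the pairwise search to plasma-to-wall point pairs, e.g.:

```python
# Extension of the earlier sketch to a with-wall scan (same assumed
# normalization; Rw, Zw are points on a hypothetical wall contour).
import numpy as np

def rho_hat_range_with_wall(Rp, Zp, Rw, Zw):
    """Min/max assumed rho_hat over plasma-boundary-to-wall point pairs."""
    dR = Rp[:, None] - Rw[None, :]
    dZ = Zp[:, None] - Zw[None, :]
    rho_hat = np.hypot(dR, dZ) / np.sqrt(Rp[:, None] * Rw[None, :])
    return rho_hat.min(), rho_hat.max()
```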

@JaeBeom1019 (Collaborator)

> I would add to this that we do not want users to have to make this decision themselves.

I agree.

@matt-pharr (Collaborator, Author)

[Two plots: runtime difference vs $n$ with and without the new technique, and the least-stable-eigenmode overlap vs $n$.]

First, it seems the difference in runtime is at most 10% with ideal DCON, and it shrinks at higher $n$; in absolute terms this worked out to a second or two at most. The least stable modes, meanwhile, are essentially unchanged.

@logan-nc (Contributor)

Thank you everyone for getting this implemented, documented, and tested so quickly!
My conclusions from our discussions above are as follows:

  • The $n\hat\rho > 0.1$ condition is indeed met for existing tokamaks at "moderate to high" $n \sim 4$-15, so we should and will implement this to handle those points.
  • Even when the max $n\hat\rho > 0.1$, the majority of the $\hat\rho$ values integrated over do not exceed it, so results end up fine within the precision needed for cases like DIII-D $n \sim 12$-14. This is good: it seems that almost everything done with GPEC so far has been fine. But we want to eventually push further, so it is good to be doing it right.
  • The new routine does indeed implement an automatic handoff, `IF ( nloc*rhohat >= 0.1 )` [here], so by defaulting to the new routine users do not need to be aware of this subtlety or change settings depending on the machine, $n$, etc. (see the sketch after this list).
  • The new routine makes our most common DCON runs with $n = 1$-3 slower by ~0.3 s, or ~10-20%, but 1 s vs 1.3 s is not a meaningful change, so I see no issue here. The wall-time difference stays around ~1 s or less even out to high $n$, where the plasma part of DCON starts to take hundreds of seconds, so at high $n$ the new routine becomes both important for accuracy and inconsequential for run time.
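
For readers skimming, the handoff in the third bullet amounts to the following selection logic; this is a schematic Python rendering of the quoted Fortran test, and the two evaluator functions are placeholders, not the actual vacuum routines:

```python
# Schematic of the automatic handoff; the evaluators are placeholders.
def legacy_green(n, rho_hat):
    ...  # original Legendre-function evaluation (placeholder)

def high_n_green(n, rho_hat):
    ...  # Chance (2007) high-n evaluation (placeholder)

def green(n, rho_hat, threshold=0.1):
    """Mirror of IF ( nloc*rhohat >= 0.1 ): pick the high-n form when needed."""
    if n * rho_hat >= threshold:
        return high_n_green(n, rho_hat)
    return legacy_green(n, rho_hat)
```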

So, in summary, this is a great addition: it automatically reproduces the old behavior where we have been running things to date, while enabling our future plans to extend to higher $n$, with little meaningful cost in runtime.

Thanks everyone!

@logan-nc logan-nc merged commit 2d16c27 into develop Aug 27, 2025
4 checks passed