Conversation
…ions, deleting useless functions
…less functions, renaming functions
I have not had the chance to look at this but note that you created a new directory at …
I don't remember the details, but I found that I needed to put the 2-Yukawa support libraries into sasmodels.TwoYukawa rather than sasmodels.models.TwoYukawa. Similarly, the additional quadrature support code cannot be in the sasmodels.models namespace. We may want to create a sasmodels.lib package to contain the support libraries for pure python models. Currently these are in sasmodels.special, the sasmodels.TwoYukawa package, and the new sasmodels.quadrature module. Note: PEP8 says package names should be snake_case. Somehow both the author (me) and the reviewers messed up with TwoYukawa. We can fix that if we move support libraries to lib. I'll create a ticket for this.
I wasn't using the weights because of this line; not sure why I put it there.
pkienzle left a comment
You could convert a lot of the steps to use numpy vector operators. I don't think numba will give a big speedup so I didn't try it. Torch would allow you to compute on the GPU, and that would probably be faster.
I think you are only running this with a small number of vertices. If the number of vertices were significantly larger than the number of q points then you may want to use vector operations across the vertices and loop over the q values. (This is unlikely given that the q points for all |q| shells in the Iq calculation are sent in as one big vector.)
sasmodels/models/nanoprisms.py
for j in range(2):
    coordinates.append((extended_vertices[i+1][j]+extended_vertices[i][j])/2)
edgecenter.append(coordinates)
return edgecenter
with numpy [2 x n] => [2 x n]:
return (vertices + np.roll(vertices, -1, axis=1))/2
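As a quick illustration of the suggested vectorization (a minimal sketch, assuming vertices is a [2 x n] array of polygon corner coordinates and using a hypothetical helper name):

```python
import numpy as np

def edge_centers(vertices):
    # vertices: [2 x n]; np.roll(..., -1, axis=1) shifts the columns so
    # each vertex is paired with its successor (the last wraps to the
    # first), and the mean of each pair is the midpoint of that edge.
    return (vertices + np.roll(vertices, -1, axis=1)) / 2

# unit square, corners listed counterclockwise
square = np.array([[0.0, 1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])
print(edge_centers(square))
# edge midpoints: (0.5, 0), (1, 0.5), (0.5, 1), (0, 0.5)
```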
All the vectorizations were made in the last commit 38207eb. I resolved all the comments in which I applied your suggestions.
sasmodels/models/nanoprisms.py
# Compute intensity
intensity = Iqabc(qa, qb, qc, nsides, Rave, L)  # shape (nq, npoints)
# Uniform average over the sphere
integral = np.sum(w[np.newaxis, :] * intensity, axis=1)
Fibonacci is equal-weighted, so no need for w. If it weren't equal-weighted, then you could use integral = intensity @ w to produce the weighted sum in place rather than first forming the matrix Iqij*wi then summing the rows.
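A minimal sketch of that equivalence, with toy shapes and hypothetical names:

```python
import numpy as np

rng = np.random.default_rng(0)
intensity = rng.random((5, 7))   # toy (nq, npoints) intensity matrix
w = rng.random(7)                # toy quadrature weights

# Forming the weighted matrix and then summing the rows...
a = np.sum(w[np.newaxis, :] * intensity, axis=1)
# ...gives the same weighted sum as a matrix-vector product, which
# avoids materializing the intermediate (nq, npoints) product.
b = intensity @ w
assert np.allclose(a, b)
```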
Indeed, the weights are all the same for the Fibonacci quadrature, so I simply replaced the weighted sum with np.mean directly; done in 38207eb.
indices = np.arange(0, npoints_fibonacci, dtype=float) + 0.5
phi = np.arccos(1.0 - 2.0 * indices / npoints_fibonacci)
theta = 2.0 * np.pi * indices / ((1.0 + 5.0 ** 0.5) / 2.0)
theta and phi here are opposite to what we are using in the models. Make sure the x, y, z equations correspond to the correct a, b, c for the canonical shape orientation (c-axis along the beam, b-axis up, a-axis across).
Depending on the symmetries in your object you may not want to sample the entire sphere. If, for example, you do a scatter plot of (phi mod π/2, theta mod π/2) you will see that the points are no longer nicely distributed.
Swapping theta and phi, you should use theta = np.arccos(1 - indices/npoints) if you only need equator to pole, and change phi from (2πk/φ) mod 2π to (2πk/φ) mod π/2 if you only need a quarter circle (φ is the golden ratio).
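A sketch of those restricted ranges (hypothetical helper name; this only restricts the generated angles, it does not by itself guarantee the points stay well distributed):

```python
import numpy as np

def fib_octant(npoints):
    # Sketch of the suggested restriction: theta limited to the
    # equator-to-pole range, phi folded to a quarter circle.
    k = np.arange(npoints, dtype=float) + 0.5
    golden = (1.0 + np.sqrt(5.0)) / 2.0
    theta = np.arccos(1.0 - k / npoints)                 # [0, pi/2]
    phi = np.fmod(2.0 * np.pi * k / golden, np.pi / 2)   # quarter circle
    return theta, phi

theta, phi = fib_octant(500)
assert theta.max() <= np.pi / 2 and phi.max() < np.pi / 2
```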
The calculation is fast on the entire sphere, so to compute the orientation average I suggest keeping the full-sphere approach.
Things get more complicated if we are interested in specific directions. Since the golden ratio is irrational, the problem becomes case dependent. I believe it is out of scope for the moment.
I suggest the following changes to comply with sasview conventions (swap orientation labels):

golden_ratio = (1.0 + 5.0**0.5) / 2.0
theta = np.arccos(1.0 - 2.0 * indices / npoints)   # polar
phi = 2.0 * np.pi * indices / golden_ratio         # azimuthal
x = np.sin(theta) * np.cos(phi)   # axis a
y = np.sin(theta) * np.sin(phi)   # axis b
z = np.cos(theta)                 # axis c
The problem is that the Fibonacci points on the sphere are not well distributed after you take symmetries into account. If the shape is mirror symmetric across all three dimensions then all points fold into a single quadrant. Within that quadrant they are not well distributed.
import numpy as np
import matplotlib.pyplot as plt

def fib(npoints, quadrant=False, fold=False):
    indices = np.arange(0, npoints, dtype=float) + 0.5
    theta = 1.0 - (1 if quadrant else 2.0) * indices/npoints
    theta = np.arccos(theta)
    golden_ratio = (1.0 + np.sqrt(5.0))/2.0
    phi = 2*np.pi * indices / golden_ratio
    phi = np.fmod(phi, np.pi/2 if quadrant else 2*np.pi)
    theta, phi = np.degrees(theta), np.degrees(phi)
    if fold:
        # theta is mirror symmetric around 90
        theta = 90 - abs(90 - theta)
        # phi is mirror symmetric around 180, and then around 90
        phi = 180 - abs(180 - phi)
        phi = 90 - abs(90 - phi)
    plt.subplot(211)
    plt.scatter(phi, theta, alpha=0.7)
    plt.xlabel("phi")
    plt.ylabel("theta")
    plt.title(f"Fibonacci points {quadrant=} {fold=}")
    plt.subplot(223)
    plt.hist(theta, bins=50)
    plt.xlabel("theta distribution")
    plt.subplot(224)
    plt.hist(phi, bins=50)
    plt.xlabel("phi distribution")

plt.figure()
fib(999, quadrant=False, fold=False)
plt.figure()
fib(999, quadrant=False, fold=True)
plt.figure()
fib(999, quadrant=True)
plt.show()
Hypothesis: A nanoprism model computed with cycle length 2π/nsides for phi and theta in [0, π/2) will be more accurate than the full sphere for the same number of sample points. You will need to weight the sum by 2*nsides/npoints to get the full sphere equivalent from a single quadrant.
What about a 1e-4 relative error instead of 1e-5?
What do you get for the number of integration points?
An adaptive Fibonacci integration scheme might be a nice improvement as well.
Maybe we should discuss it soon during a video call?
Right now, Sara and I are in France, and Nicolas is in Australia, so we need to find a time that works for all of us ... including US time!
What about April 15th or April 22nd?
BTW, here's the error curve for the same model (nsides=3, Rave=25 nm, L=4 nm) with 500 points throughout the q range instead of splitting it into intervals:
$ python -m sasmodels.compare nanoprisms,nanoprisms -double -highq background=0 -nq=40 nsides=3 L=40 Rave=250 npoints=1000000,500
PY[64] t=3170.07 ms, intensity=125290498
PY[64] t=2.11 ms, intensity=125285563
|npoints=1000000.0-npoints=500.0| max:6.310e+02 median:1.857e+01 98%:6.291e+02 rms:2.252e+02 zero-offset:+1.234e+02
|(npoints=1000000.0-npoints=500.0)/npoints=500.0| max:1.695e+01 median:5.404e-04 98%:1.368e+01 rms:3.881e+00 zero-offset:+1.370e+00
We can avoid memory problems by chunking the q values when the number of points is large, keeping the size of the (q_chunk x q_unit) outer product smaller than 10 million or so elements. This is tedious but necessary: I crashed my Mac when using too many Fibonacci points.
These details need to be hidden in a library rather than cluttering up every model that requires spherical integration.
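The chunking itself is simple to sketch (hypothetical helper; f stands in for the intensity kernel evaluated on the q x direction outer product):

```python
import numpy as np

def chunked_mean(q, q_unit, f, max_elements=10_000_000):
    # Evaluate f over the (len(q) x len(q_unit)) outer product in slices
    # of q so the temporary never exceeds roughly max_elements entries.
    chunk = max(1, max_elements // len(q_unit))
    out = np.empty(len(q))
    for start in range(0, len(q), chunk):
        qr = q[start:start + chunk, None] * q_unit[None, :]
        out[start:start + chunk] = f(qr).mean(axis=1)
    return out

# tiny check against the unchunked computation
q = np.linspace(0.1, 1.0, 5)
q_unit = np.linspace(-1.0, 1.0, 11)
full = np.cos(q[:, None] * q_unit[None, :]).mean(axis=1)
chunked = chunked_mean(q, q_unit, np.cos, max_elements=20)
assert np.allclose(full, chunked)
```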
Implementation of orientational average based on adaptive Fibonacci quadrature

Number of points used in orientation averaging

As q increases, the number of quadrature points should be increased, particularly when the form factor starts to oscillate. To account for this, the number of points is set adaptively as

$N(q) = \mathrm{clip}\left((q \cdot R_\mathrm{char})^2,\ N_\mathrm{min},\ N_\mathrm{max}\right)$

where:

- $N_\mathrm{min} = 300$
- $N_\mathrm{max} = (q_\mathrm{max} \cdot R_\mathrm{char})^2$
- $R_\mathrm{char}$ is the characteristic size of the nanoprism
Fix discontinuities in adaptive orientation averaging

The adaptive quadrature currently has a fundamental issue: quadratures for different numbers of points are independent samplings of the sphere. When $N(q)$ jumps from one grid size to the next, $I(q)$ jumps with it, producing visible discontinuities.

Proposed fix

Use linear interpolation between successive grids:

- compute $I(q)$ at $N_{\mathrm{lo}}$ and $N_{\mathrm{hi}}$
- interpolate using the position of $N_{\mathrm{exact}}$ between them

This removes discontinuities in $I(q)$.
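A toy sketch of the interpolation weights (the n_exact values are chosen arbitrarily for illustration):

```python
import numpy as np

N_MIN, N_MAX, N_GRID = 300, 2000, 500
# hypothetical per-q target point counts
n_exact = np.clip(np.array([120.0, 740.0, 1490.0]), N_MIN, N_MAX)
n_lo = np.clip(np.floor(n_exact / N_GRID) * N_GRID, N_MIN, N_MAX)
n_hi = np.clip(np.ceil(n_exact / N_GRID) * N_GRID, N_MIN, N_MAX)
gap = n_hi - n_lo
w_hi = np.where(gap > 0, (n_exact - n_lo) / gap, 0.0)
# I(q) ~ (1 - w_hi) * I_lo + w_hi * I_hi then varies continuously as
# n_exact sweeps from one grid size to the next.
assert np.allclose(w_hi, [0.0, 0.48, 0.98])
```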
Changes in the code
sasmodels/special/fibonacci.py
New function:
import numpy as np

def orientation_average(q, R_char, Iqabc_func, *args,
                        N_MIN=300, N_MAX=None, N_GRID=500,
                        MAX_ELEMENTS=50_000_000):
    """
    Orientation average of a 3-D scattering intensity using adaptive
    Fibonacci quadrature.

    This function can be used by any SAS model whose form factor amplitude
    is available as a function of the three Cartesian components of the
    scattering vector. The number of quadrature points scales as
    N ~ (q * R_char)^2.

    Parameters
    ----------
    q : array_like
    R_char : float
    Iqabc_func : callable
    *args : forwarded to Iqabc_func
    """
    q = np.atleast_1d(q)
    if N_MAX is None:
        n_at_qmax = (float(q.max()) * R_char) ** 2
        N_MAX = max(int(np.ceil(n_at_qmax / N_GRID) * N_GRID), N_MIN)
    n_exact = np.clip((q * R_char) ** 2, N_MIN, N_MAX)
    n_lo = np.clip(
        (np.floor(n_exact / N_GRID) * N_GRID).astype(int), N_MIN, N_MAX)
    n_hi = np.clip(
        (np.ceil(n_exact / N_GRID) * N_GRID).astype(int), N_MIN, N_MAX)
    gap = (n_hi - n_lo).astype(float)
    w_hi = np.where(gap > 0, (n_exact - n_lo) / gap, 0.0)
    all_n = np.union1d(np.unique(n_lo), np.unique(n_hi))
    sphere_cache = {int(n): fibonacci_sphere(int(n))[0] for n in all_n}

    def _eval(n_arr):
        out = np.empty_like(q, dtype=float)
        for n in np.unique(n_arr):
            n = int(n)
            q_unit = sphere_cache[n]
            mask = n_arr == n
            q_group = q[mask]
            chunk_size = max(1, MAX_ELEMENTS // n)
            grp = np.empty(len(q_group), dtype=float)
            for start in range(0, len(q_group), chunk_size):
                q_c = q_group[start:start + chunk_size]
                qa = q_c[:, np.newaxis] * q_unit[:, 0]
                qb = q_c[:, np.newaxis] * q_unit[:, 1]
                qc = q_c[:, np.newaxis] * q_unit[:, 2]
                intensity = Iqabc_func(
                    qa.ravel(), qb.ravel(), qc.ravel(), *args
                ).reshape(qa.shape)
                grp[start:start + len(q_c)] = np.mean(intensity, axis=1)
            out[mask] = grp
        return out

    I_lo = _eval(n_lo)
    if (w_hi == 0.0).all():
        return I_lo
    I_hi = _eval(n_hi)
    return I_lo + w_hi * (I_hi - I_lo)

Key points:
- Adaptive rule: $N(q) = \mathrm{clip}\left((q \cdot R_{\mathrm{char}})^2, N_{\min}, N_{\max}\right)$, where $N_{\min}=300$ and $N_{\max}=(q_{\max} \cdot R_{\mathrm{char}})^2$
- Automatic $N_{\max}$ from $(q_{\max}, R_{\mathrm{char}})$
- Linear interpolation between $N_{\mathrm{lo}}$ and $N_{\mathrm{hi}}$
- Cached Fibonacci spheres (no duplication)
- Chunking for large arrays
sasmodels/models/nanoprisms.py

Updated Iq:

def Iq(q, sld, sld_solvent, nsides: int, Rave, L):
    nsides = int(nsides)
    q = np.atleast_1d(q)
    R_circ = Rave / np.sqrt(np.sinc(2 / nsides))
    R_char = max(R_circ, L / 2)
    I_avg = orientation_average(q, R_char, Iqabc, nsides, Rave, L)
    return I_avg * (sld - sld_solvent) ** 2 * 1e-4

With:
For
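One detail worth noting for reviewers: numpy's np.sinc is the normalized convention, so the R_circ expression behaves as sketched below (toy numbers, hypothetical values for nsides and Rave):

```python
import numpy as np

nsides, Rave = 5, 25.0
x = 2.0 / nsides
# np.sinc(x) == sin(pi*x)/(pi*x): numpy uses the normalized sinc.
assert np.isclose(np.sinc(x), np.sin(np.pi * x) / (np.pi * x))
# sinc(x) < 1 for x > 0, so R_circ is always somewhat larger than Rave,
# approaching it as nsides grows (the prism approaching a cylinder).
R_circ = Rave / np.sqrt(np.sinc(x))
assert R_circ > Rave
```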
Results

- Discontinuities in $I(q)$ removed
- Relative error $\lesssim 1\%$
- Speed-up $\sim 5.9\times$ (depends on particle size)
Summary
This makes adaptive orientation averaging continuous and more robust with minimal overhead. The parameter n_points is no longer needed, as the number of points is now directly constrained by the size parameters of the model.
To avoid memory overflow, the computation is split into chunks of at most MAX_ELEMENTS elements.
Feedback welcome.
Note that I haven't updated the code on the repository. I am not sure of the procedure and don't want to make a mess...
…i with np.mean and move fibonacci.py to special/ + ruff
Code looks okay. Math seems to match the paper. I would prefer to update the fibonacci interface to allow for symmetries before merging. Once we define an interface in …
It would be nice to decide how to control the number of integration points in the model. Ideally this would use the max qr for the enclosing cylinder, r ≈ √(Rave² + L²), rather than making the user decide. The …
For consistency with other models, I suggest using …
Model names are singular (barbell, sphere, ellipsoid) not plural (nanoprisms).
Regarding the model name, we do not have nanospheres or nanocylinders. You could use plain prism instead.
rectangular_prism is already one of the sasmodels models. Maybe regular_prism or simply prism would do? Almost there!
Yes, this set of choices is fine:
I would prefer to keep n_points out of the model and use adaptive integration instead. I only exposed it so I could easily make the comparison plots.


Description
Adding a pure python model for nanoprisms of different cross sections with orientation averaging using Fibonacci quadrature.
Fixes #685
How Has This Been Tested?
Unit tests passed.
This model was compared to another numerical calculation: for a gold nanoprism with a pentagonal cross section, agreement was found with the Debye equation (Debye calculator: https://github.com/FrederikLizakJohansen/DebyeCalculator). The Debye equation is based on atomic positions, while the SasView model is based on analytical expressions.
The model was also used to fit experimental data (AuAg nanoprism of pentagonal cross section, D = 30 nm, L = 117 nm).
Since it's a python model, there is still the issue of support for orientational dispersion in pure python models (#695).
More tests should be made at small q. Indeed, like the previous model (octahedron_truncated), we encounter issues at small q (q < 10^-6 Å⁻¹). More precise mathematical expressions should be used (see PR #694 comments).
Note: the fibonacci quadrature code (sasmodels/quadratures/fibonacci.py) was added to a new directory called "quadratures" because it could also be useful for other models.
To do:
Review Checklist:
[if using the editor, use [x] in place of [ ] to check a box]
Documentation (check at least one)
Installers
Licensing (untick if necessary)