
Adding nanoprisms model (pure python)#699

Open
sara-mokhtari wants to merge 22 commits into master from adding_nanoprisms_model
Conversation

@sara-mokhtari
Contributor

Description

Adding a pure python model for nanoprisms of different cross sections with orientation averaging using Fibonacci quadrature.

Fixes #685

How Has This Been Tested?

The unit tests pass.
The model was also compared against an independent numerical calculation: for a gold nanoprism with a pentagonal cross section, it agrees with the Debye equation (Debye calculator: https://github.com/FrederikLizakJohansen/DebyeCalculator). The Debye equation works from atomic positions, while the SasView model is based on analytical expressions.
The model was also used to fit experimental data (a AuAg nanoprism with pentagonal cross section, D = 30 nm, L = 117 nm).

Since this is a pure python model, the open issue about supporting orientational dispersion in pure python models (#695) still applies.

More tests should be made at small q. As with the previous model (octahedron_truncated), we encounter issues at small q (q < 10^-6 1/Ang); more precise mathematical expressions should be used there (see the comments on PR #694).

Note: the Fibonacci quadrature code (sasmodels/quadratures/fibonacci.py) was added in a new directory called "quadratures" because it could also be useful for other models.

To do:

  • take care of the orientational dispersion for pure python model
  • add numba acceleration (tests have to be made first)
  • take care of the singularities

Review Checklist:

[if using the editor, use [x] in place of [ ] to check a box]

Documentation (check at least one)

  • There is nothing that needs documenting
  • Documentation changes are in this PR
  • There is an issue open for the documentation (link?)

Installers

  • There is a chance this will affect the installers, if so
    • Windows installer (GH artifact) has been tested (installed and worked)
    • MacOSX installer (GH artifact) has been tested (installed and worked)
    • Wheels installer (GH artifact) has been tested (installed and worked)

Licensing (untick if necessary)

  • The introduced changes comply with SasView license (BSD 3-Clause)

@butlerpd
Member

butlerpd commented Mar 9, 2026

I have not had a chance to look at this yet, but note that you created a new directory at sasmodels.quadratures. I thought all our current quadratures reside in sasmodels.models.lib. It may be that we want to move quadrature code into its own directory, but that should be a discussion, I guess? I am also not sure that the higher level (under sasmodels rather than sasmodels.models) is the right answer.

@pkienzle
Contributor

pkienzle commented Mar 9, 2026

I don't remember the details, but I found that I needed to put the 2-Yukawa support libraries into sasmodels.TwoYukawa rather than sasmodels.models.TwoYukawa. Similarly, the additional quadrature support code cannot be in the sasmodels.models namespace.

We may want to create a sasmodels.lib package to contain the support libraries for pure python models. Currently these are in sasmodels.special, the sasmodels.TwoYukawa package, and the new sasmodels.quadrature module.

Note: PEP8 says package names should be snake_case. Somehow both the author (me) and the reviewers messed up with TwoYukawa. We can fix that if we move support libraries to lib.

I'll create a ticket for this.

I wasn't using the weights because of this line; not sure why I put it there.
Contributor

@pkienzle pkienzle left a comment


You could convert a lot of the steps to use numpy vector operators. I don't think numba will give a big speedup, so I didn't try it. Torch would allow you to compute on the GPU, and that would probably be faster.

I think you are only running this with a small number of vertices. If the number of vertices were significantly larger than the number of q points then you might want to use vector operations across the vertices and loop over the q values. (This is unlikely, given that the q points for all |q| shells in the Iq calculation are sent in as one big vector.)

for j in range(2):
    coordinates.append((extended_vertices[i+1][j] + extended_vertices[i][j]) / 2)
edgecenter.append(coordinates)
return edgecenter
Contributor


with numpy [2 x n] => [2 x n]:

return (vertices + np.roll(vertices, -1, axis=1))/2
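For example (a quick editorial check, not part of the PR), with a unit square stored as a [2 x n] array of corner columns, the one-liner reproduces the edge-midpoint loop, including the closing edge:

```python
import numpy as np

# Corners of a unit square, one column per vertex: shape [2 x n].
vertices = np.array([[0.0, 1.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0, 1.0]])

# Midpoint of each edge; np.roll pairs each vertex with its successor,
# wrapping around so the last->first edge is included.
edge_centers = (vertices + np.roll(vertices, -1, axis=1)) / 2
# columns: (0.5, 0), (1, 0.5), (0.5, 1), (0, 0.5)
```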

Contributor Author


All the vectorizations were made in the last commit, 38207eb. I resolved all the comments where I applied your suggestions.

# Compute intensity
intensity = Iqabc(qa, qb, qc, nsides, Rave, L) # shape (nq, npoints)
# Uniform average over the sphere
integral = np.sum(w[np.newaxis, :] * intensity, axis=1)
Contributor


Fibonacci is equal-weighted, so no need for w. If it weren't equal-weighted, then you could use integral = intensity @ w to produce the weighted sum in place rather than first forming the matrix Iqij*wi then summing the rows.
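As an editorial sketch (values hypothetical), the three equivalent reductions:

```python
import numpy as np

rng = np.random.default_rng(0)
nq, npoints = 4, 7
intensity = rng.random((nq, npoints))  # I(q) per (q, orientation) pair
w = np.full(npoints, 1.0 / npoints)    # equal Fibonacci weights

# Form the weighted matrix first, then sum the rows:
integral_a = np.sum(w[np.newaxis, :] * intensity, axis=1)
# Matrix-vector product: same result, no intermediate matrix:
integral_b = intensity @ w
# With equal weights, both reduce to a plain mean over orientations:
integral_c = intensity.mean(axis=1)

assert np.allclose(integral_a, integral_b)
assert np.allclose(integral_a, integral_c)
```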

Contributor Author


Indeed, the weights are all equal for the Fibonacci quadrature, so I simply replaced the sum with np.mean directly; done in 38207eb.

indices = np.arange(0, npoints_fibonacci, dtype=float) + 0.5
phi = np.arccos(1.0 - 2.0 * indices / npoints_fibonacci)
theta = 2.0 * np.pi * indices / ((1.0 + 5.0 ** 0.5) / 2.0)

Contributor

@pkienzle pkienzle Apr 1, 2026


theta and phi here are the opposite of what we use in the models. Make sure the x, y, z equations correspond to the correct a, b, c for the canonical shape orientation (c-axis along the beam, b-axis up, a-axis across).

Depending on the symmetries in your object you may not want to sample the entire sphere. If, for example, you make a scatter plot of (phi mod π/2, theta mod π/2), you will see that the points are no longer nicely distributed.

After swapping theta and phi, you should use theta = np.arccos(1 - indices/npoints) if you only need equator to pole, and change phi = (2π/φ k) mod 2π to phi = (2π/φ k) mod π/2 if you only need a quarter circle (φ is the golden ratio).
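A quick numeric sanity check of the full-sphere lattice (an editorial sketch; npoints is arbitrary): on a uniform cover of the unit sphere the second moments ⟨x²⟩, ⟨y²⟩, ⟨z²⟩ should each be close to 1/3.

```python
import numpy as np

npoints = 1000
indices = np.arange(npoints, dtype=float) + 0.5
golden_ratio = (1.0 + 5.0 ** 0.5) / 2.0
polar = np.arccos(1.0 - 2.0 * indices / npoints)  # angle from the pole
azim = 2.0 * np.pi * indices / golden_ratio       # azimuthal angle

x = np.sin(polar) * np.cos(azim)
y = np.sin(polar) * np.sin(azim)
z = np.cos(polar)

# Uniform coverage of the sphere: each squared coordinate averages to 1/3.
for coord in (x, y, z):
    assert abs(np.mean(coord ** 2) - 1.0 / 3.0) < 1e-2
```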



The calculation is fast on the entire sphere, so to compute the orientation average I suggest we keep working on the entire sphere.
Things get more complicated if we are interested in specific directions: since the golden ratio is irrational, the problem becomes case dependent. I believe it is out of scope for the moment.

I suggest the following changes to comply with the sasview conventions (swap the orientation labels):

golden_ratio = (1.0 + 5.0 ** 0.5) / 2.0
theta = np.arccos(1.0 - 2.0 * indices / npoints)  # polar
phi = 2.0 * np.pi * indices / golden_ratio        # azimuthal

x = np.sin(theta) * np.cos(phi)  # axis a
y = np.sin(theta) * np.sin(phi)  # axis b
z = np.cos(theta)                # axis c

Contributor


The problem is that the Fibonacci points on the sphere are not well distributed after you take symmetries into account. If the shape is mirror symmetric across all three dimensions then all points fold into a single quadrant. Within that quadrant they are not well distributed.

import numpy as np
import matplotlib.pyplot as plt

def fib(npoints, quadrant=False, fold=False):
    indices = np.arange(0, npoints, dtype=float) + 0.5
    theta = 1.0 - (1 if quadrant else 2.0) * indices/npoints
    theta = np.arccos(theta)
    golden_ratio = (1.0 + np.sqrt(5.0))/2.0
    phi = 2*np.pi * indices / golden_ratio
    phi = np.fmod(phi, np.pi/2 if quadrant else 2*np.pi)
    theta, phi = np.degrees(theta), np.degrees(phi)
    if fold:
        # theta is mirror symmetric around 90
        theta = 90 - abs(90 - theta)
        # phi is mirror symmetric around 180, and then around 90
        phi = 180 - abs(180 - phi)
        phi = 90 - abs(90 - phi)

    plt.subplot(211)
    plt.scatter(phi, theta, alpha=0.7)
    plt.xlabel("phi")
    plt.ylabel("theta")
    plt.title(f"Fibonacci points {quadrant=} {fold=}")
    plt.subplot(223)
    plt.hist(theta, bins=50)
    plt.xlabel("theta distribution")
    plt.subplot(224)
    plt.hist(phi, bins=50)
    plt.xlabel("phi distribution")
plt.figure()
fib(999, quadrant=False, fold=False)
plt.figure()
fib(999, quadrant=False, fold=True)
plt.figure()
fib(999, quadrant=True)
plt.show()
[images: Fibonacci point scatter plots and theta/phi histograms]

Contributor


Correction:

...
cycle_length = np.pi/2 if quadrant else 2*np.pi
phi = cycle_length * indices / golden_ratio
phi = np.fmod(phi, cycle_length)
...
[image: corrected quadrant distribution]

And the full sphere for comparison:
[image: full-sphere distribution]

Contributor


Hypothesis: A nanoprism model computed with cycle length 2π/nsides for phi and theta in [0, π/2) will be more accurate than the full sphere for the same number of sample points. You will need to weight the sum by 2*nsides/npoints to get the full sphere equivalent from a single quadrant.

Contributor


What about a 1e-4 relative error instead of 1e-5 ?
What do you get for the number of integration points ?

Contributor


An adaptive Fibonacci integration scheme might be a nice improvement as well.
Maybe we should discuss it soon during a video call?
Right now Sara and I are in France, and Nicolas is in Australia, so we need to find a time that works for all of us ... including US time!
What about April 15th or April 22nd?

Contributor


BTW, here's the error curve for the same model (nsides=3, Rave=25 nm, L=4 nm) with 500 points throughout the q range instead of splitting it into intervals:

$ python -m sasmodels.compare nanoprisms,nanoprisms -double -highq background=0 -nq=40 nsides=3 L=40 Rave=250 npoints=1000000,500                                                  
PY[64] t=3170.07 ms, intensity=125290498
PY[64] t=2.11 ms, intensity=125285563
|npoints=1000000.0-npoints=500.0|                  max:6.310e+02  median:1.857e+01  98%:6.291e+02  rms:2.252e+02  zero-offset:+1.234e+02
|(npoints=1000000.0-npoints=500.0)/npoints=500.0|  max:1.695e+01  median:5.404e-04  98%:1.368e+01  rms:3.881e+00  zero-offset:+1.370e+00
[image: error curve]

We can avoid memory problems by chunking the q values when the number of points is large, keeping the size of the (q_chunk x q_unit) outer product below 10 million or so elements. This is tedious but necessary: I crashed my Mac by using too many Fibonacci points.

These details need to be hidden in a library rather than cluttering up every model that requires spherical integration.



Implementation of the orientational average based on adaptive Fibonacci quadrature

Number of points used in orientation averaging

As q increases, the number of quadrature points should be increased, particularly once the form factor starts to oscillate. The number of quadrature points $N$ is thus related to the size of the nanoprism.

To account for this, the $q$-dependence of $N$ is set according to the nanoprism size parameters:

$$N(q) = \mathrm{clip}((q \cdot R_\mathrm{char})^2,\ N_\mathrm{min},\ N_\mathrm{max})$$

where:

  • $N_\mathrm{min} = 300$
  • $N_\mathrm{max} = (q_\mathrm{max} \cdot R_\mathrm{char})^2$
  • $R_\mathrm{char}$ is the characteristic size of the nanoprism:

$$R_\mathrm{char} = \max(R_\mathrm{circ},\ L/2)$$

$$R_\mathrm{circ} = \frac{R_\mathrm{ave}}{\sqrt{\mathrm{sinc}(2\pi/n)}}$$

[figure: npoints as a function of q]

Fix discontinuities in adaptive orientation averaging

The adaptive quadrature currently has a fundamental issue: quadratures for different $q$ are not nested.

When $N(q)$ changes, the integration grid changes as well. For two nearby $q$ values straddling a jump in $N$, different orientations are sampled, which introduces artificial discontinuities in $I(q)$. This affects Fibonacci, Gauss–Legendre, and Lebedev quadratures alike.

Proposed fix

Use linear interpolation between successive grids:

  • compute $I(q)$ at $N_{\mathrm{lo}}$ and $N_{\mathrm{hi}}$
  • interpolate using the position of $N_{\mathrm{exact}}$ between them

This removes discontinuities in $I(q)$.
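The grid selection and interpolation weight can be sketched numerically (an editorial toy example using the $N_{\min}=300$ floor and 500-point grid spacing proposed here; the q and R_char values are arbitrary):

```python
import numpy as np

N_MIN, N_MAX, N_GRID = 300, 4000, 500
R_char = 250.0                    # characteristic size (Ang)
q = np.array([0.02, 0.05, 0.1])  # scattering vectors (1/Ang)

# Target number of points, clipped to [N_MIN, N_MAX]: (q*R_char)^2.
n_exact = np.clip((q * R_char) ** 2, N_MIN, N_MAX)   # [300, 300, 625]

# Snap to the nearest grid sizes below and above.
n_lo = np.clip((np.floor(n_exact / N_GRID) * N_GRID).astype(int), N_MIN, N_MAX)
n_hi = np.clip((np.ceil(n_exact / N_GRID) * N_GRID).astype(int), N_MIN, N_MAX)

# Interpolation weight for the upper grid; I(q) = I_lo + w_hi*(I_hi - I_lo).
gap = (n_hi - n_lo).astype(float)
w_hi = np.where(gap > 0, (n_exact - n_lo) / gap, 0.0)  # [0.0, 0.0, 0.25]
```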


Changes in the code

sasmodels/special/fibonacci.py

New function (fibonacci_sphere is assumed to be defined in the same module):

import numpy as np

def orientation_average(q, R_char, Iqabc_func, *args,
                        N_MIN=300, N_MAX=None, N_GRID=500,
                        MAX_ELEMENTS=50_000_000):
    """
    Orientation average of a 3-D scattering intensity using adaptive Fibonacci
    quadrature.

    This function can be used by any SAS model whose form factor amplitude is
    available as a function of the three Cartesian components of the scattering
    vector. The number of quadrature points scales as N ~ (q * R_char)^2.

    Parameters
    ----------
    q : array_like
        Scattering vector magnitudes (1/Ang).
    R_char : float
        Characteristic size of the particle (Ang); sets N ~ (q * R_char)^2.
    Iqabc_func : callable
        Intensity as a function of the Cartesian components (qa, qb, qc).
    *args :
        Extra arguments forwarded to Iqabc_func.

    q = np.atleast_1d(q)

    if N_MAX is None:
        n_at_qmax = (float(q.max()) * R_char) ** 2
        N_MAX = max(int(np.ceil(n_at_qmax / N_GRID) * N_GRID), N_MIN)

    n_exact = np.clip((q * R_char) ** 2, N_MIN, N_MAX)

    n_lo = np.clip(
        (np.floor(n_exact / N_GRID) * N_GRID).astype(int), N_MIN, N_MAX)
    n_hi = np.clip(
        (np.ceil(n_exact / N_GRID) * N_GRID).astype(int), N_MIN, N_MAX)

    gap  = (n_hi - n_lo).astype(float)
    w_hi = np.where(gap > 0, (n_exact - n_lo) / gap, 0.0)

    all_n = np.union1d(np.unique(n_lo), np.unique(n_hi))
    sphere_cache = {int(n): fibonacci_sphere(int(n))[0] for n in all_n}

    def _eval(n_arr):
        out = np.empty_like(q, dtype=float)
        for n in np.unique(n_arr):
            n = int(n)
            q_unit = sphere_cache[n]
            mask = n_arr == n
            q_group = q[mask]
            chunk_size = max(1, MAX_ELEMENTS // n)
            grp = np.empty(len(q_group), dtype=float)
            for start in range(0, len(q_group), chunk_size):
                q_c = q_group[start:start + chunk_size]
                qa  = q_c[:, np.newaxis] * q_unit[:, 0]
                qb  = q_c[:, np.newaxis] * q_unit[:, 1]
                qc  = q_c[:, np.newaxis] * q_unit[:, 2]
                intensity = Iqabc_func(
                    qa.ravel(), qb.ravel(), qc.ravel(), *args
                ).reshape(qa.shape)
                grp[start:start + len(q_c)] = np.mean(intensity, axis=1)
            out[mask] = grp
        return out

    I_lo = _eval(n_lo)
    if (w_hi == 0.0).all():
        return I_lo
    I_hi = _eval(n_hi)
    return I_lo + w_hi * (I_hi - I_lo)

Key points:

  • Adaptive rule:
    $N(q) = \mathrm{clip}\left((q \cdot R_{\mathrm{char}})^2,\ N_{\min},\ N_{\max}\right)$
    where $N_{\min}=300$
    and $N_{\max}=(q_{\mathrm{max}} \cdot R_{\mathrm{char}})^2$

  • Automatic $N_{\max}$ from $(q_{\max}, R_{\mathrm{char}})$

  • Linear interpolation between $N_{\mathrm{lo}}$ and $N_{\mathrm{hi}}$

  • Cached Fibonacci spheres (no duplication)

  • Chunking for large arrays
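As a sanity check of the averaging machinery (an editorial sketch with a minimal inline Fibonacci lattice, not the library code): an isotropic integrand such as the sphere form factor must come back unchanged from the orientation average.

```python
import numpy as np

def fibonacci_sphere(n):
    """Return n unit vectors on a Fibonacci lattice, shape (n, 3)."""
    i = np.arange(n, dtype=float) + 0.5
    polar = np.arccos(1.0 - 2.0 * i / n)
    azim = 2.0 * np.pi * i / ((1.0 + 5.0 ** 0.5) / 2.0)
    return np.column_stack([np.sin(polar) * np.cos(azim),
                            np.sin(polar) * np.sin(azim),
                            np.cos(polar)])

def sphere_Iqabc(qa, qb, qc, radius):
    """Sphere form factor: isotropic, depends only on |q|."""
    qr = np.sqrt(qa ** 2 + qb ** 2 + qc ** 2) * radius
    return (3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr ** 3) ** 2

q = np.array([0.01, 0.05, 0.1])  # 1/Ang
u = fibonacci_sphere(300)
qa = q[:, None] * u[:, 0]        # shape (nq, npoints)
qb = q[:, None] * u[:, 1]
qc = q[:, None] * u[:, 2]
I_avg = sphere_Iqabc(qa, qb, qc, radius=50.0).mean(axis=1)

# Isotropic integrand: the average equals the function of |q| itself.
assert np.allclose(I_avg, sphere_Iqabc(q, 0.0, 0.0, radius=50.0))
```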


sasmodels/models/nanoprisms.py

Updated Iq:

def Iq(q, sld, sld_solvent, nsides: int, Rave, L):
    nsides = int(nsides)
    q = np.atleast_1d(q)

    R_circ = Rave / np.sqrt(np.sinc(2 / nsides))
    R_char = max(R_circ, L / 2)

    I_avg = orientation_average(q, R_char, Iqabc, nsides, Rave, L)
    return I_avg * (sld - sld_solvent) ** 2 * 1e-4

With:

$$ R_{\mathrm{char}} = \max\left(R_{\mathrm{circ}}, \frac{L}{2}\right), \quad R_{\mathrm{circ}} = \frac{R_{\mathrm{ave}}}{\sqrt{\mathrm{sinc}(2\pi/n)}} $$

For $n = 3$:

$$ R_{\mathrm{circ}} \approx 1.56\,R_{\mathrm{ave}} $$
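Numerically (an editorial check; note that NumPy's np.sinc is normalized, np.sinc(x) = sin(πx)/(πx), so np.sinc(2/n) equals the unnormalized sinc(2π/n) used above):

```python
import numpy as np

n = 3
factor = 1.0 / np.sqrt(np.sinc(2.0 / n))  # R_circ / R_ave for a triangle
assert abs(factor - 1.555) < 1e-3          # i.e. R_circ ~ 1.56 * R_ave
```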


Results

[image: I(q) comparison before and after the fix]
  • Discontinuities in $I(q)$ removed
  • Relative error $\lesssim 1\%$
  • Speed-up $\sim 5.9\times$ (depends on particle size)

Summary

This makes adaptive orientation averaging continuous and more robust, with minimal overhead. The parameter n_points is no longer exposed; the number of points is now directly constrained by the size parameters of the model.
To avoid memory overflow, the computation is split into chunks of at most MAX_ELEMENTS elements.

Feedback welcome.



Note that I haven't updated the code in the repository. I am not sure of the procedure and don't want to make a mess...

@pkienzle
Contributor

pkienzle commented Apr 3, 2026

Code looks okay. Math seems to match the paper.

I would prefer to update the fibonacci interface to allow for symmetries before merging. Once we define an interface in sasmodels.special we need to maintain backward compatibility indefinitely. If you imagine that we might add other integration rules to the mix, keeping the interfaces similar would be nice, so I suggest n or npoints as the argument rather than npoints_fibonacci.

It would be nice to decide how to control the number of integration points in the model. Ideally this would use the max qr for the enclosing cylinder, r ≈ √(Rave² + L²), rather than making the user decide. The npoints_fibonacci parameter in Iq won't be used by SasView or the rest of sasmodels. I would remove it, but I'm okay with ignoring it until we have a better way to control integration.

For consistency with other models, I suggest using length, radius_average and n_sides. If you decide that the circumradius is a better parameter to report in a paper then the name radius would be good enough. For numbers of things I see n_shells, n_steps, n_aggreg, n_stacking, n, num_pearls, arms (for star polymer) and level (for unified power Rg). Discussion here.

Model names are singular (barbell, sphere, ellipsoid) not plural (nanoprisms).

@pkienzle
Contributor

pkienzle commented Apr 7, 2026

Regarding the model name, we do not have nanospheres or nanocylinders. You could use plain prism instead.

@marimperorclerc
Contributor

Regarding the model name, we do not have nanospheres or nanocylinders. You could use plain prism instead.

rectangular_prism is already one of the sasmodels models.

Maybe regular_prism or simply prism could do it, indeed?

Almost there!

@marimperorclerc
Contributor

For consistency with other models, I suggest using length, radius_average and n_sides. If you decide that the circumradius is a better parameter to report in a paper then the name radius would be good enough. For numbers of things I see n_shells, n_steps, n_aggreg, n_stacking, n, num_pearls, arms (for star polymer) and level (for unified power Rg). Discussion here.

Yes, this set of choices is fine: length, radius_average and n_sides, plus npoints for the number of points on the Fibonacci sphere.

@pkienzle
Contributor

pkienzle commented Apr 8, 2026

I would prefer to keep n_points out of the model and use adaptive integration instead. I only exposed it so I could easily make the comparison plots.
