
feat: Integrate MPSSynapse Component#140

Open
antonvice wants to merge 8 commits into NACLab:main from antonvice:feature/mps-synapse

Conversation

@antonvice

This PR introduces the MPSSynapse component, allowing for Matrix Product State (MPS) compressed synaptic transformations. This enables high-dimensional layers to scale within memory constraints of biological and robotic inference systems. Includes a utility for SVD-based matrix decomposition into MPS cores.
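For readers skimming the diff, the SVD-based decomposition the PR description mentions can be sketched roughly as follows. This is a minimal sketch of the standard tensor-train (matrix product state) factorization of a weight matrix; the function name, argument layout, and core shape convention `(r_prev, i_k, o_k, r_next)` are illustrative assumptions, not the PR's actual API:

```python
import numpy as np

def matrix_to_mps(W, in_dims, out_dims, max_rank=8):
    """Decompose a weight matrix into MPS/tensor-train cores via repeated SVD.

    `in_dims`/`out_dims` factorize the matrix shape, e.g. a 16x16 matrix
    with in_dims=(4, 4) and out_dims=(4, 4). Each core has shape
    (r_prev, i_k, o_k, r_next), with boundary ranks equal to 1.
    (Hypothetical helper, not the PR's actual interface.)
    """
    d = len(in_dims)
    # Reshape to (i1, ..., id, o1, ..., od), then interleave so each core
    # carries one (input, output) index pair: (i1, o1, i2, o2, ...).
    T = W.reshape(*in_dims, *out_dims)
    perm = [k for pair in zip(range(d), range(d, 2 * d)) for k in pair]
    T = T.transpose(perm)
    cores, r_prev = [], 1
    for k in range(d - 1):
        # Split off one (i_k, o_k) pair; truncate the SVD to max_rank.
        T = T.reshape(r_prev * in_dims[k] * out_dims[k], -1)
        U, S, Vt = np.linalg.svd(T, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, in_dims[k], out_dims[k], r))
        T = S[:r, None] * Vt[:r]  # carry the remainder to the next core
        r_prev = r
    cores.append(T.reshape(r_prev, in_dims[-1], out_dims[-1], 1))
    return cores
```

With `max_rank` at or above the true rank, contracting the cores back together reproduces `W` exactly (up to floating-point error); smaller ranks trade reconstruction error for memory, which is where the tolerance discussion further down this thread comes from.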

@ago109 ago109 self-requested a review February 22, 2026 21:51
Member

@ago109 ago109 left a comment


This looks great (and interesting!); thank you for contributing the MPSSynapse!

One small comment / possible minor update: is there a paper reference we could attach to the main doc-string of MPSSynapse?
For example, in MSTDPETSynapse we cite the source of the mathematical model that the synapse represents; its main doc-string contains:
"""

| References:
| Florian, Răzvan V. "Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity."
| Neural computation 19.6 (2007): 1468-1502.
"""
That way, we pay credit to you/your team or the researchers that this synapse embodies.

If there is no paper reference, then a link to the blog-post/tutorial/talk or other source where this was proposed works as well =]

@antonvice
Author

antonvice commented Feb 23, 2026

Hey Alex,
To give credit where it's due, the core math for the compression comes from Stoudenmire/Schwab and Novikov. More recently, Nuijten and Chris Fields have been doing some heavy lifting to show that Active Inference is effectively a tensor network, so I wanted to bring that logic into ngclearn.
I actually need it for my work on T-AIF (Tensor Active Inference Framework). Trying to figure out how to actually bridge these quantum-inspired contractions with local error signals so we can scale up generative world models without blowing out the RAM.
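The memory argument above can be made concrete: once the weights live as MPS cores, the synaptic transform is a chain of small contractions and the dense matrix is never materialized. A minimal sketch, assuming random cores with the shape convention `(r_prev, i_k, o_k, r_next)` (the helper name `mps_matvec` and the core layout are illustrative, not the PR's API):

```python
import numpy as np

rng = np.random.default_rng(0)
in_dims, out_dims, rank = (4, 4), (4, 4), 3
# Two random MPS cores of shape (r_prev, i_k, o_k, r_next); boundary ranks are 1.
cores = [
    rng.standard_normal((1, 4, 4, rank)),
    rng.standard_normal((rank, 4, 4, 1)),
]

def mps_matvec(cores, x, in_dims):
    """Compute y = x @ W, where W is the (never materialized) contraction
    of the MPS cores."""
    res = x.reshape(1, *in_dims)           # prepend a boundary-rank axis
    for core in cores:
        # Contract the running rank axis and the current input index i_k.
        res = np.tensordot(core, res, axes=([0, 1], [0, 1]))
        res = np.moveaxis(res, 0, -1)      # park the output index o_k at the end
    return res.reshape(-1)                 # (1, o1, ..., od) -> flat output
```

Here a dense 16x16 map needs 256 parameters, while the two rank-3 cores hold 1\*4\*4\*3 + 3\*4\*4\*1 = 96; the gap widens sharply for higher-dimensional layers.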

PS
| References:
| Stoudenmire, E. Miles, and David J. Schwab. "Supervised learning with
| quantum-inspired tensor networks." Advances in neural information
| processing systems 29 (2016).
|
| Novikov, Alexander, et al. "Tensorizing neural networks." Advances in
| neural information processing systems 28 (2015).
|
| Nuijten, W. W. L., et al. "A Message Passing Realization of Expected
| Free Energy Minimization." arXiv preprint arXiv:2501.03154 (2025).
|
| Wilson, P. "Performing Active Inference with Explainable Tensor
| Networks." (2024).
|
| Fields, Chris, et al. "Control flow in active inference systems."
| arXiv preprint arXiv:2303.01514 (2023).

@ago109 ago109 assigned ago109 and rxng8 and unassigned ago109 Feb 24, 2026
@rxng8
Member

rxng8 commented Feb 24, 2026

@antonvice Thank you for taking the time to contribute! We really appreciate it!
I have run pytest on your branch, here is the output specifically for the MPSSynapse:

...
>       np.testing.assert_allclose(y_mps, y_dense, atol=1e-5)
E       AssertionError: 
E       Not equal to tolerance rtol=1e-07, atol=1e-05
E       
E       Mismatched elements: 10 / 10 (100%)
E       Max absolute difference among violations: 0.00253725
E       Max relative difference among violations: 0.00251471
E        ACTUAL: array([[-0.096463,  1.658276, -2.307746,  5.297636, -0.901365,  7.161191,
E               -1.484671,  3.614738,  5.318543,  0.788785]], dtype=float32)
E        DESIRED: array([[-0.096226,  1.65959 , -2.30778 ,  5.298367, -0.899104,  7.158653,
E               -1.48596 ,  3.614866,  5.317003,  0.788097]], dtype=float32)

tests/components/synapses/test_mpsSynapse.py:42: AssertionError

As a last resort, you can loosen the tolerance of the numpy assertion slightly, e.g. by increasing rtol and/or atol in assert_allclose.
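To make the suggestion concrete: the mismatch above is on the order of 2.5e-3 (MPS truncation error), which fails the default-ish `atol=1e-5` but passes comfortably with a slightly looser bound. Illustrative values, not the actual test data:

```python
import numpy as np

y_dense = np.array([1.0, 2.0, 3.0])
y_mps = y_dense + 2.5e-3   # drift on the order seen in the failing test

# atol=1e-5 would reject a ~2.5e-3 discrepancy; loosening to atol=1e-2
# (or adding rtol=1e-2) accepts the truncation-induced error.
np.testing.assert_allclose(y_mps, y_dense, rtol=0.0, atol=1e-2)
```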

@ago109
Member

ago109 commented Feb 26, 2026

Hello @antonvice, could you please consider @rxng8's suggestion of loosening the tolerance a tiny bit for the unit-test?

@rxng8
Member

rxng8 commented Mar 1, 2026

The test now works for me!
