# George-Daniel Gherasim

AI Infrastructure & Systems · ENSTA Paris, 2nd year


## What I'm building

**ChaosAI**: a time-series world model trained from scratch.

- 38M-parameter Mamba-2 JEPA encoder, trained on 838M tokens across 8,969 financial assets
- Full JAX/Flax pipeline: FSQ tokenizer → SSM encoder → OT-CFM stochastic predictor → TD-MPC2 RL agent
- Auto-sharding on TPU v6e clusters (GSPMD, 2D mesh, XLA production flags)
- Data lake: raw Parquet → ArrayRecord on GCS, zero idle cost
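The first stage of the pipeline, the FSQ tokenizer, can be sketched in a few lines of JAX. This is an illustrative toy, not ChaosAI's actual code: it assumes odd per-dimension level counts (e.g. `[7, 5, 3]`) so the rounding grid is symmetric, and uses the standard straight-through estimator so gradients flow through the quantizer.

```python
import jax
import jax.numpy as jnp

def fsq_quantize(z, levels):
    """Finite Scalar Quantization: bound each latent dim, round it to a
    small integer grid, and pass gradients straight through."""
    levels = jnp.asarray(levels, dtype=z.dtype)
    half = (levels - 1) / 2                 # e.g. 7 levels -> grid {-3, ..., 3}
    bounded = jnp.tanh(z) * half            # squash each dim into (-half, half)
    quantized = jnp.round(bounded)          # snap to the integer grid
    # Straight-through estimator: forward pass emits `quantized`,
    # backward pass differentiates through `bounded`.
    return bounded + jax.lax.stop_gradient(quantized - bounded)

z = jax.random.normal(jax.random.PRNGKey(0), (4, 3))   # (batch, latent_dim)
codes = fsq_quantize(z, [7, 5, 3])                     # discrete codes, same shape
```

Because the codebook is just a fixed product grid, FSQ avoids the codebook-collapse failure modes of learned vector quantization, which is one reason it pairs well with a long training run.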

The core insight: JEPA (Joint Embedding Predictive Architecture) learns structured latent representations of relationships and context rather than doing next-token prediction. Same philosophy as knowledge graphs for agents.
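The JEPA objective above can be sketched minimally: predict the *embedding* of a target segment from the embedding of its context, and measure the loss in latent space rather than over raw tokens. The linear `encoder`/`predictor` maps below are stand-ins for the real Mamba-2 modules, and the `stop_gradient` on the target branch mimics the frozen/EMA target encoder typical of JEPA training.

```python
import jax
import jax.numpy as jnp

def jepa_loss(params, context, target):
    """Latent-space prediction loss: no token-level reconstruction."""
    ctx_emb = context @ params["encoder"]          # encode the visible context
    tgt_emb = jax.lax.stop_gradient(               # encode the target; no grads,
        target @ params["encoder"])                # as with an EMA target encoder
    pred = ctx_emb @ params["predictor"]           # predict target *embedding*
    return jnp.mean((pred - tgt_emb) ** 2)         # distance in latent space

key = jax.random.PRNGKey(0)
params = {
    "encoder": jax.random.normal(key, (16, 8)) * 0.1,
    "predictor": jax.random.normal(key, (8, 8)) * 0.1,
}
context = jax.random.normal(key, (32, 16))   # (batch, features)
target = jax.random.normal(key, (32, 16))
loss, grads = jax.value_and_grad(jepa_loss)(params, context, target)
```

The point of the exercise: nothing in the loss asks the model to reproduce inputs, only to organize a latent space in which the target is predictable from its context.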


## Stack

**AI/Compute:** JAX, Flax, Optax, PyTorch, XLA, TPU Pod topology
**Infra:** GCP, GCS, Grain/ArrayRecord, Docker, FinOps
**Systems:** Python, C, Bash
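The 2D-mesh auto-sharding mentioned above rests on `jax.sharding` primitives. A minimal sketch, assuming the usual (data, model) axis naming; on a single host this collapses to a 1×1 mesh, while on a TPU v6e slice the same code tiles arrays across the pod:

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Arrange all visible devices into a 2D (data, model) grid.
devices = np.array(jax.devices()).reshape(len(jax.devices()), 1)
mesh = Mesh(devices, axis_names=("data", "model"))

# Shard rows over the "data" axis and columns over the "model" axis;
# GSPMD then propagates this layout through jit-compiled computations.
x = jnp.ones((8, 4))
sharded = jax.device_put(x, NamedSharding(mesh, P("data", "model")))
```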


📫 george-daniel.gherasim@ensta.fr
