[QDP] Add SVHN IQP encoding benchmark with PennyLane baseline and QDP pipeline #1186
Open
ryankert01 wants to merge 3 commits into apache:main from
1. Introduction
This report presents a benchmark comparing two quantum encoding pipelines for a
binary classification task on the SVHN (Street View House Numbers) dataset. The
task discriminates digit 1 vs digit 7 using an IQP (Instantaneous Quantum
Polynomial) encoding followed by a variational quantum classifier.
The two pipelines under comparison are:

- PennyLane baseline (`pennylane_baseline/svhn_iqp.py`) — IQP encoding is embedded inside the quantum circuit and re-executed on every forward/backward pass during training.
- QDP pipeline (`qdp_pipeline/svhn_iqp.py`) — IQP encoding is performed once upfront on GPU via QDP's CUDA kernels. The training circuit loads the pre-encoded state vectors using `StatePrep`.

The central question is: how much wall-clock time does one-shot GPU encoding save compared to re-encoding on every circuit evaluation, given identical quantum states and identical training configurations?
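The contrast between the two pipelines can be sketched in miniature. The `encode` and `evaluate` functions below are hypothetical stand-ins (not the benchmark's actual code): the baseline pays the encoding cost on every circuit evaluation, while the QDP-style path encodes once and reuses cached states.

```python
import numpy as np

def encode(x):
    """Stand-in for IQP encoding: a deterministic state built from features x."""
    phases = np.exp(1j * np.outer(x, x).sum() * np.arange(4))
    return phases / np.linalg.norm(phases)

def evaluate(state, params):
    """Stand-in for the variational circuit's expectation value."""
    return float(np.real(np.vdot(state, params * state)))

X = [np.array([0.1, 0.2]), np.array([0.3, 0.4])]
params = np.linspace(0.5, 1.0, 4)

# Baseline: re-encode inside every circuit evaluation.
baseline = [evaluate(encode(x), params) for x in X]

# QDP-style: encode once upfront, reuse cached states during training.
cache = [encode(x) for x in X]
qdp = [evaluate(s, params) for s in cache]

assert np.allclose(baseline, qdp)  # identical results; encoding cost amortized
```

The outputs are identical by construction; only where the encoding cost is paid differs.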
2. Method
2.1 Dataset and Preprocessing

- Source: `train_32x32.mat` + `test_32x32.mat` (Stanford)
- Filter to digits 1 and 7; subsample `n_samples` from the filtered pool
- `StandardScaler`, then PCA to `n_qubits` dimensions

2.2 IQP Encoding
Both pipelines implement the same IQP circuit:

$$|\psi(x)\rangle = H^{\otimes n}\, U_{\text{phase}}(x)\, H^{\otimes n}\, |0\rangle^{\otimes n}$$

where the diagonal unitary $U_{\text{phase}}(x)$ applies the phase $e^{i\theta(x)}$ to each computational basis state $x$. This matches QDP's CUDA kernel (`iqp.cu`), which computes:

$$|\psi(x)\rangle = H^{\otimes n}\, \mathrm{diag}\!\left(e^{i\theta(x)}\right) H^{\otimes n}\, |0\rangle^{\otimes n}$$

with $\theta(x) = \sum_i x_i \cdot \text{data}_i + \sum_{i<j} x_i \cdot x_j \cdot \text{data}_{ij}$.
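This closed form can be rendered in a few lines of NumPy (an illustrative sketch; `iqp_state` and its data layout are my naming, not QDP's): compute $\theta$ for every basis bitstring, then sandwich the diagonal phases between two Hadamard layers.

```python
import itertools
import numpy as np

def iqp_state(data, n_qubits):
    """Closed-form IQP state: H^{⊗n} · diag(e^{iθ}) · H^{⊗n} |0…0⟩.

    `data` holds one angle per qubit followed by one per qubit pair (i<j),
    mirroring the θ(x) polynomial above.
    """
    dim = 2 ** n_qubits
    singles = data[:n_qubits]
    pairs = data[n_qubits:]

    # θ for each basis bitstring z: Σ z_i·data_i + Σ_{i<j} z_i·z_j·data_ij
    theta = np.zeros(dim)
    for idx, z in enumerate(itertools.product([0, 1], repeat=n_qubits)):
        theta[idx] = sum(z[i] * singles[i] for i in range(n_qubits))
        theta[idx] += sum(
            z[i] * z[j] * pairs[k]
            for k, (i, j) in enumerate(itertools.combinations(range(n_qubits), 2))
        )

    # H^{⊗n}|0…0⟩ is the uniform superposition; a second H^{⊗n} follows the phases.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n_qubits - 1):
        Hn = np.kron(Hn, H)
    return Hn @ (np.exp(1j * theta) * np.full(dim, 1 / np.sqrt(dim)))

state = iqp_state(np.array([0.3, 0.7, 0.2]), n_qubits=2)  # 2 singles + 1 pair angle
assert np.isclose(np.linalg.norm(state), 1.0)  # unitary circuit preserves norm
```

The dense `Hn` matrix is only for readability here; the CUDA kernel avoids materializing it.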
The PennyLane baseline constructs this circuit with explicit PennyLane gates (`Hadamard`, `PhaseShift`, `ControlledPhaseShift`) inside a `@qml.qnode`. It is re-evaluated on every forward and backward pass.
The QDP pipeline calls `QdpEngine.encode(method="iqp")` once on GPU, converts the resulting state vectors to NumPy, and feeds them via `StatePrep` during training.
2.3 Variational Classifier
Both pipelines share the same classifier architecture:
- `num_layers` repetitions of `Rot(theta, phi, omega)` on each qubit + a ring of CNOTs
- Measurement: `expval(PauliZ(0))` + a trainable bias
- Mini-batches of `batch_size` samples per optimization step

2.4 Experimental Configuration
All runs use the following parameters unless stated otherwise:
- CLI parameters: `--n-samples`, `--n-qubits`, `--iters`, `--batch-size`, `--layers`, `--lr`, `--optimizer`, `--seed`, `--test-size`, `--early-stop`
- Hardware: a single NVIDIA GPU (CUDA) for QDP encoding; CPU for PennyLane `default.qubit`

2.5 Fairness Controls
To ensure an apples-to-apples comparison, the following controls are enforced:
- Identical encoding: both pipelines implement the same IQP circuit with the same phase convention.
- Split once: the train/test split is computed in `main()` using `np.random.default_rng(seed).permutation()` before the trial loop.
- Identical batching: inside `run_training()`, the RNG is initialized fresh with `np.random.default_rng(seed)` with no prior `permutation()` call, so both pipelines draw the same mini-batch sequences.
- Honest timing: the QDP encoding time includes the GPU→CPU (`torch.Tensor.cpu().numpy()`) transfer.
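The seeding discipline is easy to verify in isolation (a toy sketch, not the benchmark's actual batching code): a fresh `np.random.default_rng(seed)` yields the same permutation, and hence the same mini-batch index sequences, on every run.

```python
import numpy as np

def batch_indices(seed, n, batch_size):
    """Draw mini-batch index sequences from a freshly seeded Generator,
    with no prior permutation() call — the pattern both pipelines use."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n)
    return [order[i:i + batch_size] for i in range(0, n, batch_size)]

a = batch_indices(seed=42, n=8, batch_size=4)
b = batch_indices(seed=42, n=8, batch_size=4)
assert all(np.array_equal(x, y) for x, y in zip(a, b))  # identical sequences
```

Any extra RNG draw before the permutation would desynchronize the two pipelines, which is why the "no prior `permutation()` call" condition matters.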
3. Results
3.1 Single-Trial Comparison (seed = 42)

The identical train/test accuracies across pipelines confirm the numerical equivalence of the two encoding implementations.
3.2 Multi-Trial Run (3 trials, PennyLane baseline)
Seeds: 42, 43, 44. Same train/test split across all trials.
Aggregate statistics:
The consistent train/test partition across trials (verified by constant
n_train = 160, n_test = 40) confirms the split-once-in-main fix.
3.3 Timing Breakdown (QDP pipeline)

Training with `default.qubit` + Adam dominates the wall-clock time; the IQP encoding is negligible relative to training time for this problem size.
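A breakdown like this can be collected with a plain `time.perf_counter` harness (a generic sketch; the stage functions below are placeholders, not the benchmark's encoding or training code):

```python
import time

def timed(stage_fn, *args):
    """Run one pipeline stage and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = stage_fn(*args)
    return result, time.perf_counter() - start

# Placeholder stages standing in for GPU encoding and default.qubit training.
encode_stage = lambda n: [i * 0.5 for i in range(n)]
train_stage = lambda states: sum(states)

states, t_encode = timed(encode_stage, 64)
_, t_train = timed(train_stage, states)
print(f"encode: {t_encode:.6f}s  train: {t_train:.6f}s")
```

Keeping the timer around each stage separately is what lets the report attribute cost to encoding versus training rather than to the run as a whole.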
4. Discussion
4.1 Encoding Equivalence
The exact match in train/test accuracy (0.6625/0.6750) across pipelines confirms that the PennyLane H-D-H circuit and QDP's CUDA kernel produce identical quantum states. This is expected, since both apply the same Hadamard sandwich around the same diagonal phase polynomial $\theta(x)$.
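For two qubits, the equivalence can be checked directly in NumPy (the angles are arbitrary illustrative values): the gate-by-gate construction with `PhaseShift`/`ControlledPhaseShift`-style diagonals reproduces the closed-form diagonal of $e^{i\theta}$ exactly, because all the phase gates commute.

```python
import numpy as np

# 2-qubit operators; qubit 0 is the most significant bit of the basis index.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
def phase(theta):  # single-qubit phase gate: diag(1, e^{iθ})
    return np.diag([1, np.exp(1j * theta)])

d0, d1, d01 = 0.3, 0.7, 0.2  # per-qubit and pairwise angles

# Gate-by-gate (PennyLane-style): H⊗H, P(d0) on q0, P(d1) on q1, CP(d01), H⊗H.
HH = np.kron(H, H)
cp = np.diag([1, 1, 1, np.exp(1j * d01)])  # controlled phase on |11⟩
gates = HH @ cp @ np.kron(phase(d0), I2) @ np.kron(I2, phase(d1)) @ HH
state_gates = gates @ np.array([1, 0, 0, 0], dtype=complex)

# Closed form (kernel-style): θ(z) = z0·d0 + z1·d1 + z0·z1·d01 per bitstring z.
theta = np.array([0, d1, d0, d0 + d1 + d01])
state_closed = HH @ (np.exp(1j * theta) * (HH @ np.array([1, 0, 0, 0])))

assert np.allclose(state_gates, state_closed)  # identical states
```

The per-basis-state phases accumulated by the individual gates sum to exactly $\theta(z)$, so the two constructions are the same unitary.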
4.2 Performance
At 200 samples and 6 qubits, the state dimension is only $2^6 = 64$, so the encoding cost is negligible in both pipelines. The ~7% speed advantage of QDP comes from not re-running the IQP gates on every forward/backward pass. This advantage is expected to grow with larger qubit counts, larger sample counts, and more training iterations.
4.3 Limitations
- `--iters 200` and `--n-samples 200` are deliberately small; the benchmark targets speed, not classification quality.
- Training uses `default.qubit` (CPU) in both pipelines. A GPU-native training backend would shift the bottleneck.
5. Reproduction