feat: add SVHN Quantum Kernel SVM benchmark #1175
rich7420 wants to merge 3 commits into apache:main from
Conversation
ryankert01
left a comment
LGTM, we just need to be careful when we write an article about it:
- baseline: CPU
- qdp_pipeline: CPU -> GPU (encoding) -> CPU (train)
I think the benchmark will have several scenarios (e.g. 3). We can use different directories to showcase these scenarios (organizing them can be a follow-up). The current 2 scenarios look like this.
Though I think there might be a better way to split the scenarios; this can be discussed further at the next Asia community sync.
guan404ming
left a comment
Left some nits, great one. Thank you!
# limitations under the License.

"""
Quantum Kernel SVM — PennyLane baseline (CPU encoding) — SVHN dataset.
Should we remove the pennylane here as well?
print()

# Step 1: StandardScaler + Encode (GPU)
torch.cuda.synchronize()
This is called before time.perf_counter() starts, but there's no prior GPU work to synchronize at that point (the data is still on CPU). This first synchronize is a no-op.
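The reviewer's point can be illustrated with a small timing helper. This is a sketch, not code from the PR; the `timed` helper and its `sync` parameter are hypothetical, and `torch.cuda.synchronize` is only one example of what could be passed in:

```python
import time

def timed(fn, sync=None):
    """Run fn() and return (result, elapsed_seconds).

    sync, if given, is called AFTER the work (e.g. torch.cuda.synchronize)
    so asynchronously queued GPU kernels finish before the clock stops.
    Calling synchronize only BEFORE starting the timer does nothing when
    no GPU work has been launched yet, which is the no-op noted above.
    """
    start = time.perf_counter()
    result = fn()
    if sync is not None:
        sync()  # flush pending async device work before reading the clock
    return result, time.perf_counter() - start

# CPU-only usage; on a CUDA pipeline pass sync=torch.cuda.synchronize
value, elapsed = timed(lambda: sum(range(1000)))
```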
Related Issues
N/A
Changes
Why
A Quantum Kernel SVM pipeline eliminates the iterative training loop, so encoding becomes ~19% of total pipeline time, making QDP's GPU encoding advantage clearly visible in end-to-end results.
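To see why there is no iterative loop: with amplitude encoding, the kernel matrix can be computed once up front and handed to a classical SVM with a precomputed kernel. A minimal NumPy sketch (assuming real-valued amplitudes and the fidelity kernel K[i, j] = |⟨x_i|x_j⟩|², which is an illustration, not necessarily the exact kernel used in the PR):

```python
import numpy as np

def fidelity_kernel(X):
    """Gram matrix K[i, j] = |<x_i | x_j>|^2 for rows that are already
    L2-normalized amplitude-encoded states (real amplitudes assumed)."""
    G = X @ X.T   # all pairwise inner products in one matmul
    return G ** 2  # squared overlap = state fidelity for real states

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # amplitude-encode (L2-norm)
K = fidelity_kernel(X)
# K is symmetric with unit diagonal; it can be fed to an SVM with a
# precomputed kernel, e.g. sklearn.svm.SVC(kernel="precomputed")
```

Once K is built, fitting the SVM is a single convex solve, so the encoding cost dominates the remaining pipeline.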
How
Added two new benchmark scripts that implement a Quantum Kernel SVM classification pipeline on SVHN:
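The CPU encoding step these scripts perform (L2-norm + zero-pad, per the file list below) can be sketched as follows; `amplitude_encode` is a hypothetical name for illustration, not a function from the PR:

```python
import numpy as np

def amplitude_encode(x):
    """Zero-pad x to the next power of two, then L2-normalize so the
    entries form valid quantum-state amplitudes."""
    n = 1 << (len(x) - 1).bit_length()  # next power of two >= len(x)
    padded = np.zeros(n)
    padded[: len(x)] = x
    return padded / np.linalg.norm(padded)

state = amplitude_encode(np.array([3.0, 4.0, 0.0]))
# 3 features are padded to 4 amplitudes with unit L2 norm
```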
New files
- `pennylane_baseline/svhn_kernel_amplitude.py` — CPU encoding (L2-norm + zero-pad)
- `qdp_pipeline/svhn_kernel_amplitude.py` — QDP GPU encoding (amplitude)

Benchmark results (RTX 3080)
5000 samples, 5-fold stratified CV, binary classification (digit 1 vs 7), C=100
Accuracy: 0.9104 ± 0.0091 (identical for both pipelines)
How to run
Checklist