This commit introduces a new test case to benchmark synthetic dataset training convergence, verifying improvements in mAP@50 and validation loss after training.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

    @@            Coverage Diff             @@
    ##           develop     #638     +/-   ##
    ==========================================
    + Coverage       25%      54%     +29%
    ==========================================
      Files           47       47
      Lines         6342     6342
    ==========================================
    + Hits          1577     3400    +1823
    + Misses        4765     2942    -1823
- Use `tqdm.auto` for better progress bar compatibility.
- Extract synthetic dataset generation logic into a reusable pytest fixture.
- Simplify benchmark test by integrating the fixture.
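As a rough illustration of the extracted fixture, here is a minimal sketch of a session-scoped pytest fixture that writes a tiny COCO-style shape dataset. The fixture name comes from the commit messages, but the image size, sample count, annotation layout, and single flat split are assumptions rather than the PR's actual code (a real fixture would likely also write train/valid splits).

```python
# Hedged sketch only: names, sizes, and dataset layout are assumptions, not the PR's code.
import json
from pathlib import Path

import pytest
from PIL import Image, ImageDraw
from tqdm.auto import tqdm  # tqdm.auto picks a console- or notebook-friendly progress bar


@pytest.fixture(scope="session")
def synthetic_shape_dataset_dir(tmp_path_factory) -> Path:
    """Write a tiny COCO-style dataset of single-rectangle images and return its directory."""
    root = tmp_path_factory.mktemp("synthetic_shapes")
    images, annotations = [], []
    for image_id in tqdm(range(32), desc="generating synthetic samples"):
        image = Image.new("RGB", (320, 320), color="black")
        box = (80, 80, 240, 240)  # x0, y0, x1, y1
        ImageDraw.Draw(image).rectangle(box, fill="white")
        file_name = f"{image_id:04d}.jpg"
        image.save(root / file_name)
        images.append({"id": image_id, "file_name": file_name, "width": 320, "height": 320})
        annotations.append({
            "id": image_id,
            "image_id": image_id,
            "category_id": 1,
            "bbox": [box[0], box[1], box[2] - box[0], box[3] - box[1]],  # COCO xywh
            "area": (box[2] - box[0]) * (box[3] - box[1]),
            "iscrowd": 0,
        })
    coco = {
        "images": images,
        "annotations": annotations,
        "categories": [{"id": 1, "name": "shape", "supercategory": "none"}],
    }
    (root / "_annotations.coco.json").write_text(json.dumps(coco))
    return root
```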
…rgence test
- Introduce `synthetic_color_dataset_dir` for color-based dataset generation.
- Rename `synthetic_dataset_dir` to `synthetic_shape_dataset_dir` for clarity.
- Refine synthetic convergence test with stricter assertions on mAP@50 and validation loss.
…vergence test configuration
- Delete `synthetic_color_dataset_dir` fixture as it is no longer used.
- Update model initialization to disable pretrain weights.
- Increase training epochs in synthetic convergence test.
- Relax mAP@50 assertion threshold and refine validation loss message.
…eneration
- Enable control over the minimum and maximum number of objects per image.
- Update `generate_synthetic_sample` calls and docstrings accordingly.
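A hedged sketch of what a `generate_synthetic_sample` helper with per-image object bounds might look like. The parameter names (`min_objects`, `max_objects`), the signature, and the return format are assumptions; only the helper's name and the idea of bounding the object count come from the commit message.

```python
# Illustrative sketch: bounds the number of objects per image; not the PR's actual helper.
import random
from typing import List, Optional, Tuple

from PIL import Image, ImageDraw


def generate_synthetic_sample(
    width: int = 320,
    height: int = 320,
    min_objects: int = 1,
    max_objects: int = 5,
    seed: Optional[int] = None,
) -> Tuple[Image.Image, List[List[int]]]:
    """Return an image with between min_objects and max_objects rectangles and their xywh boxes."""
    rng = random.Random(seed)
    image = Image.new("RGB", (width, height), color="black")
    draw = ImageDraw.Draw(image)
    boxes: List[List[int]] = []
    for _ in range(rng.randint(min_objects, max_objects)):
        w, h = rng.randint(30, 100), rng.randint(30, 100)
        x0 = rng.randint(0, width - w - 1)
        y0 = rng.randint(0, height - h - 1)
        draw.rectangle((x0, y0, x0 + w, y0 + h), fill="white")
        boxes.append([x0, y0, w, h])  # COCO xywh format
    return image, boxes
```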
… loss tracking
- Add GPU support fallback for test execution.
- Include train dataset evaluation and diagnostics for better loss comparison.
- Refine loss and mAP@50 assertions for improved test accuracy.
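The GPU fallback and loss diagnostics could be as simple as the following sketch; `select_benchmark_device` and `log_loss_diagnostics` are illustrative helper names, not the PR's actual functions.

```python
# Hedged sketch of a GPU fallback and a train/val loss diagnostic for the benchmark test.
import torch


def select_benchmark_device() -> str:
    """Prefer CUDA when available; fall back to CPU so the test still runs on CPU-only machines."""
    return "cuda" if torch.cuda.is_available() else "cpu"


def log_loss_diagnostics(train_loss: float, val_loss: float) -> None:
    """Print train and validation loss side by side so overfitting or regressions are easy to spot."""
    print(f"train_loss={train_loss:.4f}  val_loss={val_loss:.4f}  gap={val_loss - train_loss:+.4f}")
```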
- Update assertion messages to include variable names for better readability.
- Adjust messages to improve debugging context in test failures.
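In practice this amounts to assertion messages that name the compared variables and report their values, roughly as in this illustrative helper (not the PR's exact code):

```python
# Illustrative only: failure messages that name the variables and show their values.
def assert_training_improved(
    map50_before: float,
    map50_after: float,
    val_loss_before: float,
    val_loss_after: float,
) -> None:
    """Assert post-training improvement with self-describing failure messages."""
    assert map50_after > map50_before, (
        f"mAP@50 did not improve: map50_before={map50_before:.4f}, "
        f"map50_after={map50_after:.4f}"
    )
    assert val_loss_after < val_loss_before, (
        f"validation loss did not decrease: val_loss_before={val_loss_before:.4f}, "
        f"val_loss_after={val_loss_after:.4f}"
    )
```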
This pull request adds a new benchmark test to ensure that training on a synthetic dataset improves model performance, specifically mean average precision (mAP@50) and validation loss. The test generates a synthetic COCO-style dataset, evaluates the model before and after training, and asserts that training leads to measurable improvements.
New synthetic benchmark test:
Adds `test_synthetic_training_improves_map50` to `tests/benchmarks/test_synthetic_convergence.py`, which generates the synthetic dataset, measures the `RFDETRNano` model's baseline mAP@50 and validation loss, trains the model, re-evaluates it, asserts measurable improvement, and records the results in `synthetic_benchmark.json`.
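Putting the pieces together, the test could be shaped roughly as follows. This is a sketch under stated assumptions: the `RFDETRNano` import path, the `pretrain_weights` argument, and the `train(...)` parameters are inferred from the commit messages and rfdetr's documented training interface, and the evaluation helper is left as a placeholder because the PR's metric computation is not reproduced here.

```python
# Hedged sketch of the benchmark test's overall shape; not the PR's actual code.
import json
from pathlib import Path
from typing import Tuple

from rfdetr import RFDETRNano  # import path assumed from the model name in the PR


def evaluate_map50_and_val_loss(model, dataset_dir: Path) -> Tuple[float, float]:
    """Placeholder for the PR's COCO evaluation logic (mAP@50 and validation loss)."""
    raise NotImplementedError("evaluation details are project-specific and not reproduced here")


def test_synthetic_training_improves_map50(synthetic_shape_dataset_dir, tmp_path):
    # Start from random weights so any improvement comes from this training run
    # (the commits mention disabling pretrained weights; the argument name is assumed).
    model = RFDETRNano(pretrain_weights=None)

    map50_before, val_loss_before = evaluate_map50_and_val_loss(
        model, synthetic_shape_dataset_dir
    )

    # Parameter names follow rfdetr's documented training API; the values are illustrative.
    model.train(
        dataset_dir=str(synthetic_shape_dataset_dir),
        epochs=10,
        batch_size=4,
        output_dir=str(tmp_path / "runs"),
    )

    map50_after, val_loss_after = evaluate_map50_and_val_loss(
        model, synthetic_shape_dataset_dir
    )

    # Persist the before/after numbers so benchmark runs can be compared over time.
    (tmp_path / "synthetic_benchmark.json").write_text(json.dumps({
        "map50_before": map50_before,
        "map50_after": map50_after,
        "val_loss_before": val_loss_before,
        "val_loss_after": val_loss_after,
    }, indent=2))

    assert map50_after > map50_before, (
        f"mAP@50 did not improve: map50_before={map50_before:.4f}, "
        f"map50_after={map50_after:.4f}"
    )
    assert val_loss_after < val_loss_before, (
        f"validation loss did not decrease: val_loss_before={val_loss_before:.4f}, "
        f"val_loss_after={val_loss_after:.4f}"
    )
```

Writing the before/after metrics to `synthetic_benchmark.json` keeps a machine-readable record that later runs or CI jobs can diff against, which is presumably why the PR saves that file alongside the assertions.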