ENH: Machine-readable validate output with store/reload #1822
yarikoptic merged 23 commits into master from
Conversation
Codecov Report: ❌ Patch coverage is

Additional details and impacted files:

```
@@           Coverage Diff            @@
##           master    #1822    +/-  ##
=======================================
+ Coverage   75.11%   76.27%   +1.16%
=======================================
  Files          85       87       +2
  Lines       11932    12482     +550
=======================================
+ Hits         8963     9521     +558
+ Misses       2969     2961       -8
```

Flags with carried forward coverage won't be shown.
Force-pushed from 4894aa1 to 2ddd5d4
```python
# First produce a JSONL to load
outfile = tmp_path / "input.jsonl"
r = CliRunner().invoke(
```
Code scanning / CodeQL: Variable defined multiple times (Warning, test)

Copilot Autofix (AI, 3 days ago):
In general, when a variable is assigned multiple times and the first assigned value is never read before being overwritten, the earlier assignment should be removed or changed so that the variable is not captured unnecessarily. This keeps the code clear and avoids confusion about which value is actually used.
Here, the best fix is to keep the first CliRunner().invoke(...) call (because we need to produce the JSONL file), but stop assigning its return value to r. The later assignment on line 884 is the only one whose value is actually used (checked via r.exit_code), so we leave that as is. Concretely, in dandi/cli/tests/test_cmd_validate.py, in test_validate_auto_companion_skipped_with_load, change line 874 from r = CliRunner().invoke(...) to just CliRunner().invoke(...). No imports, helper methods, or additional definitions are needed.
```diff
@@ -871,7 +871,7 @@
     """--load suppresses auto-save companion."""
     # First produce a JSONL to load
     outfile = tmp_path / "input.jsonl"
-    r = CliRunner().invoke(
+    CliRunner().invoke(
         main, ["validate", "-f", "json_lines", "-o", str(outfile), str(simple2_nwb)]
     )
     assert outfile.exists()
```
| """ | ||
| # Avoid heavy imports by importing with function: | ||
| from ..upload import upload | ||
| from ..upload import upload as upload_ |
This is better, of course, than clobbering the namespace, but in the long term a name that distinguishes API vs CLI functions (with the CLI ones usually marked private, since it is odd to expose them as a library interface) would still be preferred.

That would be a breaking change -- not saying do it here, just thinking out loud.
yeah -- typically we just clobbered click's interfaces (sorry, didn't interleave... but at least manually pruned some unrelated):

```console
❯ grep -l 'from \.\.\(.*\) import \1' dandi/cli/cmd_* | xargs grep '^def '
dandi/cli/cmd_delete.py:def delete(paths, skip_missing, dandi_instance, force, devel_debug=False):
dandi/cli/cmd_download.py:def download(
dandi/cli/cmd_move.py:def move(
dandi/cli/cmd_organize.py:def organize(
dandi/cli/cmd_upload.py:def upload(
dandi/cli/cmd_validate.py:def validate_bids(
dandi/cli/cmd_validate.py:def validate(
❯ grep 'from \.\.\(.*\) import \1' dandi/cli/cmd_*
dandi/cli/cmd_delete.py: from ..delete import delete
dandi/cli/cmd_download.py: from .. import download
dandi/cli/cmd_move.py: from .. import move as move_mod
dandi/cli/cmd_organize.py: from ..organize import organize
dandi/cli/cmd_upload.py: from ..upload import upload
dandi/cli/cmd_validate.py:from ..validate import validate as validate_
```

but frankly and unfortunately here it doesn't matter much since those click functions are not usable as python interfaces! I wish it was otherwise. So, no point in giving them any special names really.
> but frankly and unfortunately here it doesn't matter much since those click functions are not usable as python interfaces!
How do you mean? Any other library can import them and manipulate the click groups; sometimes that might even be intentional, but I don't think so here
```python
from dandi.cli.command import main
from dandi.cli.cmd_upload import upload  # Implying it is not private and intended to be imported and customized
import click

@main.command("upload2")  # Or even re-register under the same name if attempting some nasty injection
@click.pass_context
def wrapped_original(ctx):
    click.echo("Before original")  # Inject custom code here
    ctx.invoke(upload)
    click.echo("After original")  # Inject more custom code here
```
and what can I now invoke as a python function (with all the options the CLI has)? I have tried but failed -- a complete demo would be appreciated
You would have to use `.main(..., standalone_mode=False)` on the command, or `.invoke(...)` on it, to directly call it like that, which is awkward.

I did not mean to imply that anyone would actually want to do it that way -- only that these commands have no specific or marked naming (such as `_cli_upload`) indicating intent. Their submodule alone doesn't mark them as CLI methods, since the same submodule could also house the API helper each one wraps.

Usually a trailing underscore is a convention to avoid clobbering built-ins or other definitions with nearly the same meaning (such as for input kwargs), not a marker that a method is 'not intended for import'.

Whereas the only reason to expose them for import is the full example I posted above showing CLI manipulation.
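To make the awkwardness concrete, here is a hypothetical minimal sketch of the two invocation patterns mentioned above; the toy `greet` command stands in for a real entry point like `upload` and is not part of dandi-cli:

```python
# Hypothetical toy example: invoking a click command as a Python function.
# `greet` stands in for a real CLI entry point such as `upload`.
import click
from click.testing import CliRunner

@click.command()
@click.option("--name", default="world")
def greet(name):
    """Toy command standing in for a real CLI entry point."""
    click.echo(f"hello {name}")

# Pattern 1: standalone_mode=False avoids click's sys.exit() behavior and
# lets the callback's return value propagate back to the caller.
greet.main(["--name", "dandi"], standalone_mode=False)

# Pattern 2: CliRunner().invoke() captures output instead of printing it.
result = CliRunner().invoke(greet, ["--name", "dandi"])
assert result.exit_code == 0
assert result.output == "hello dandi\n"
```

Both routes work, but neither reads like calling a plain Python API, which is the point being made above.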
dandi/cli/cmd_validate.py (Outdated)

```python
    if not paths:
        paths = (os.curdir,)
    # below we are using load_namespaces but it causes HDMF to whine if there
    # is no cached name spaces in the file. It is benign but not really useful
    # at this point, so we ignore it although ideally there should be a formal
    # way to get relevant warnings (not errors) from PyNWB
    ignore_benign_pynwb_warnings()
```
Again, pointing out that if this PR had been broken into smaller modular PRs, the diffs would have aligned better with the relevant changes, making it easier to catch any drops or alterations from current behavior.
Add severity, id, validator, standard, and dandiset as --grouping options. Uses section headers with counts (e.g. "=== ERROR (5 issues) ===") for human output. Structured output is unaffected (always flat).

Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>
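As a rough illustration (not the PR's actual code), multi-level grouping of a flat result list can be sketched as a recursive bucket-by-key pass; `group_results` and the plain-dict records here are hypothetical stand-ins for `ValidationResult`:

```python
# Hedged sketch of multi-level grouping, as the -g/--grouping option
# does for human output.  Names and record shape are illustrative only.
from collections import defaultdict

def group_results(results, keys):
    """Recursively group flat records by a sequence of field names."""
    if not keys:
        return list(results)
    buckets = defaultdict(list)
    for r in results:
        buckets[r[keys[0]]].append(r)
    return {k: group_results(v, keys[1:]) for k, v in buckets.items()}

records = [
    {"id": "A", "severity": "ERROR"},
    {"id": "B", "severity": "HINT"},
    {"id": "A", "severity": "ERROR"},
]
grouped = group_results(records, ["severity", "id"])
assert len(grouped["ERROR"]["A"]) == 2
assert len(grouped["HINT"]["B"]) == 1
```

Leaves stay flat lists, which is what makes the per-leaf truncation added later straightforward.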
Limit how many results are shown per leaf group (or in the flat list
when no grouping is used). Excess results are replaced by a
TruncationNotice placeholder — a distinct dataclass (not a
ValidationResult) so consumers can isinstance() check.
- TruncationNotice dataclass + LeafItem/TruncatedResults type aliases
- _truncate_leaves() walks the grouped tree, caps leaf lists
- Human output: "... and N more issues" in cyan
- Structured output: {"_truncated": true, "omitted_count": N} sentinel
- Headers show original counts including omitted items
- Works without grouping (flat list) and with multi-level grouping
Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <noreply@anthropic.com>
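A hedged sketch of the leaf-capping idea described in that commit; the names mirror the commit message but this is illustrative, not the PR's implementation:

```python
# Illustrative truncation of a leaf list: cap at `limit` items and
# append a TruncationNotice that consumers can isinstance()-check.
from dataclasses import dataclass

@dataclass
class TruncationNotice:
    """Placeholder for results omitted from a truncated leaf."""
    omitted_count: int

def truncate_leaf(items, limit):
    """Return items capped at `limit`, with a notice for the excess."""
    if limit is None or len(items) <= limit:
        return list(items)
    return list(items[:limit]) + [TruncationNotice(len(items) - limit)]

shown = truncate_leaf(list(range(10)), 3)
assert shown[:3] == [0, 1, 2]
assert isinstance(shown[-1], TruncationNotice)
assert shown[-1].omitted_count == 7
```

Because the placeholder is a distinct type rather than a fake result record, saved/reloaded output cannot confuse it with real findings.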
The _auto_save_sidecar() call was only in the structured-to-stdout branch, so the default human format (the most common usage) never wrote the _validation.jsonl sidecar next to the log file. Move the sidecar write and _exit_if_errors() into a shared path that runs after all rendering branches. The sidecar is now written whenever there are results, unless --output or --load is active.

Also update the validate docstring/help text to document the sidecar behavior, and update the design spec (Phase 1b, Phase 3, testing strategy) to reflect the --validation-log CLI option for upload and proper CLI integration testing via CliRunner through main().

Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
- Rename "human" output format to "text" throughout cmd_validate and tests (Click option, default values, function names _render_human → _render_text, _render_human_grouped → _render_text_grouped, test names, docstrings) - Add field docstring to TruncationNotice.omitted_count - Fix CodeQL warning: remove unused `r =` assignment in test_validate_load - Use match statements in _get_formatter and _group_key - Simplify cmd_upload sidecar path derivation to conditional expression - Merge implicit string concatenation in validate/io.py warning Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
…ave companion unfiltered in cmd_validate

In BIDS, "sidecar" specifically refers to .json files accompanying data files. Rename all internal references to the _validation.jsonl file from "sidecar" to "companion" to avoid confusion:

- validation_sidecar_path() → validation_companion_path()
- _auto_save_sidecar() → _auto_save_companion()
- Variable names, docstrings, comments, and spec prose

The only remaining "sidecar" reference is in validate/types.py where it correctly describes BIDS sidecar JSON files.

Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
Extend grouping test coverage from only severity to all grouping values and composite (multi-level) grouping specs:

- Parametrize text and JSON CLI tests with 8 specs each: 5 single values (severity, id, validator, standard, dandiset) + 3 composite (severity+id, validator+severity, id+validator)
- Parametrize --load and --output tests with single and composite specs
- Add _grouping_opts() helper to compose -g args, reused across tests
- Assert known issue ID (DANDI.NO_DANDISET_FOUND) in output
- Assert nested indentation for composite groupings in text format
- Assert nested dict structure for composite groupings in JSON format

Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
Eliminate duplication between write_validation_jsonl and
append_validation_jsonl by adding a keyword-only `append` parameter
to write_validation_jsonl. The two functions had identical bodies
differing only in the file open mode ("w" vs "a").
Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
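The merged writer described above might look roughly like this; a hedged sketch, assuming a keyword-only `append` flag as in the commit message, with the PR's actual signature possibly differing:

```python
# Illustrative unified JSONL writer: "w" vs "a" is the only varying part.
import json
import tempfile
from pathlib import Path

def write_validation_jsonl(path, records, *, append=False):
    """Write records as JSON Lines, one object per line."""
    with open(path, "a" if append else "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

# Usage: a second call with append=True extends rather than overwrites.
with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "demo_validation.jsonl"
    write_validation_jsonl(p, [{"id": "A"}])
    write_validation_jsonl(p, [{"id": "B"}], append=True)
    assert p.read_text().splitlines() == ['{"id": "A"}', '{"id": "B"}']
```

Collapsing the two functions this way keeps a single body to maintain while preserving both call sites' behavior.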
Force-pushed from 352b7f5 to 9881001
Refactoring of codebase into dandi/validate/ subpackage for larger #1822
Pure git mv to preserve rename tracking. Import updates follow in the next commit.

Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
- Update all import paths to use _io, _core, _types across 16 files
- Add io functions to __init__.py re-exports and __all__
- Change load_validation_jsonl from variadic *paths to Iterable[paths]
- Move record_version check from io loader into ValidationResult.model_post_init (fires regardless of load method)
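The loader-signature change (variadic `*paths` to an `Iterable` of paths) could look like this hedged sketch, which returns plain dicts here instead of validated `ValidationResult` models:

```python
# Illustrative JSONL loader taking an Iterable of paths, concatenating
# records in order.  Real dandi-cli code validates each record further.
import json
import tempfile
from collections.abc import Iterable
from pathlib import Path

def load_validation_jsonl(paths: Iterable[Path]) -> list[dict]:
    """Load and concatenate records from several JSONL files."""
    records: list[dict] = []
    for p in paths:
        with open(p) as f:
            records.extend(json.loads(line) for line in f if line.strip())
    return records

with tempfile.TemporaryDirectory() as d:
    a, b = Path(d) / "a.jsonl", Path(d) / "b.jsonl"
    a.write_text('{"id": "A"}\n')
    b.write_text('{"id": "B"}\n')
    assert load_validation_jsonl([a, b]) == [{"id": "A"}, {"id": "B"}]
```

An `Iterable` parameter lets callers pass generators or glob results directly, which the old variadic form made clumsy.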
Move write_validation_jsonl import in upload.py and load_validation_jsonl import in cmd_validate.py to module level. These had no circular import justification (unlike the validate_bids import in files/bids.py which must stay lazy). Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
Reuse the shared _GROUPING_SPECS list (with composite groupings) in test_validate_grouping_json_cli, replacing the separate hardcoded list. Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
Add TextFormatter to formatter.py following the same Formatter protocol as JSONFormatter, JSONLinesFormatter, YAMLFormatter. Merge _render_text and _render_structured into a unified _render() that handles all formats through a common path. The three-way branch in validate() (text vs output-file vs structured-stdout) collapses to two-way (output-file vs stdout). Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
…rsist validation logs)
=== Do not change lines below ===
{
"chain": [],
"cmd": "yolo -v /home/yoh/proj/dandi/dandi-cli-enh-validators:/home/yoh/proj/dandi/dandi-cli-enh-validators:ro -- 'In /home/yoh/proj/dandi/dandi-cli-enh-validators which is submitted as dandi/dandi-cli#1822 we significantly improved validation interfacing -- we serialize validation outputs and store so we could reload and potentially review with different filtering or use external tools like visidata to navigate. I would like here to improve our https://docs.dandiarchive.org/user-guide-sharing/validating-files/ section with improved documentation, reflecting the state of that PR. We should demonstrate that we store companion validation files during upload so they could be re-reviewed/analyzed. We should show basic use of visidata to quickly review them. Could use bids-examples repo and some sample dandisets (should be sufficiently small) to show how e.g. to compose multiple validation files. Ideally we should script production of example outputs, and/or store/share validation output example for easier access. Do research how other projects using mkdocs produce similar demo walkthroughs, and what we have done so far in this repo. Do research, build plan for content and also implementation details.'",
"exit": 0,
"extra_inputs": [],
"inputs": [],
"outputs": [],
"pwd": "."
}
^^^ Do not change lines above ^^^
…rsist validation logs)
=== Do not change lines below ===
{
"chain": [],
"cmd": "yolo -v /home/yoh/proj/dandi/dandi-cli-enh-validators:/home/yoh/proj/dandi/dandi-cli-enh-validators:ro -- --resume",
"exit": 0,
"extra_inputs": [],
"inputs": [],
"outputs": [],
"pwd": "."
}
^^^ Do not change lines above ^^^
Hard to find any specific comment here now -- @CodyCBakerPhD, you wanted more docs; here is WIP as suggested (not promised ;)):
…rsist validation logs)
=== Do not change lines below ===
{
"chain": [],
"cmd": "yolo -v /home/yoh/proj/dandi/dandi-cli-enh-validators:/home/yoh/proj/dandi/dandi-cli-enh-validators:ro -- 'In /home/yoh/proj/dandi/dandi-cli-enh-validators which is submitted as dandi/dandi-cli#1822 we significantly improved validation interfacing -- we serialize validation outputs and store so we could reload and potentially review with different filtering or use external tools like visidata to navigate. I would like here to improve our https://docs.dandiarchive.org/user-guide-sharing/validating-files/ section with improved documentation, reflecting the state of that PR. We should demonstrate that we store companion validation files during upload so they could be re-reviewed/analyzed. We should show basic use of visidata to quickly review them. Could use bids-examples repo and some sample dandisets (should be sufficiently small) to show how e.g. to compose multiple validation files. Ideally we should script production of example outputs, and/or store/share validation output example for easier access. Do research how other projects using mkdocs produce similar demo walkthroughs, and what we have done so far in this repo. Do research, build plan for content and also implementation details.'",
"exit": 0,
"extra_inputs": [],
"inputs": [],
"outputs": [],
"pwd": "."
}
^^^ Do not change lines above ^^^
…rsist validation logs)
=== Do not change lines below ===
{
"chain": [],
"cmd": "yolo -v /home/yoh/proj/dandi/dandi-cli-enh-validators:/home/yoh/proj/dandi/dandi-cli-enh-validators:ro -- --resume",
"exit": 0,
"extra_inputs": [],
"inputs": [],
"outputs": [],
"pwd": "."
}
^^^ Do not change lines above ^^^
- At this point impractical to do a deeper dive without wasting time
- 'Seems safe enough' as a new feature
- Hopefully doesn't break anything in practice (e.g., trigger new errors during
- Some concerns about docs falling out of date; would be easier IMO to have such tests of output here, but can delegate to followup
ok, let's proceed then.
Summary
Design plan for machine-readable `validate` output with store/reload capability. Adds structured output formats, automatic persistence of validation results alongside log files, and the ability to reload and re-render results with different grouping/filtering options.

Key design decisions:

- All structured formats (json, json_pp, json_lines, yaml) emit a uniform flat list of `ValidationResult` records — no envelope/non-envelope split
- JSONL as the primary interchange format: cat/jq/grep/vd (VisiData) composable
- `_record_version` field on each record for forward-compatible deserialization
- Grouping affects human display only; structured output is always a stable flat schema
- Auto-save `_validation.jsonl` companion next to existing `.log` files
- Refactor `dandi/validate.py` + `dandi/validate_types.py` into a `dandi/validate/` subpackage

Closes #1515 (validate: Add -f|--format option to optionally serialize into json, json_pp, json_lines or yaml)
Closes #1753 (Provide easy means for introspecting upload validation failures)
Largely replaces #1743 (Add filtering of issues by type/ID or by file location)
Enhances #1743, #1737 (upload,validate: Add --validators option), and #1748 (Tidy up the `validate` command function in `cmd_validate.py`)

TODO

- `dandi/validate/` subpackage (`git mv` committed separately from import updates)
- `cmd_validate.py` — extract `_collect_results()`, `_filter_results()`, `_render_results()`
- `_record_version` to `ValidationResult`
- `--format` (`-f`) option: `human|json|json_pp|json_lines|yaml`
- `--output` (`-o`) + auto-save `_validation.jsonl` companion file
- `--summary` flag
- `--load` (multiple paths, mutually exclusive with positional args)
- `dandi upload`
- Grouping: `severity`, `id`, `validator`, `standard`, `dandiset`
- `--max-per-group` truncation — cap results per leaf group with placeholder notice

`--max-per-group` feature (Step 5)

Limits how many results are shown per leaf group (or in the flat list when no grouping is used). Excess results are replaced by a `TruncationNotice` placeholder — a distinct data structure (not a `ValidationResult`), so it won't be confused with real results if the output is saved/reloaded.

Examples (against 147k+ validation results from bids-examples)
Flat truncation — `--max-per-group 5` with no grouping:

Grouped truncation — `-g severity --max-per-group 3`:

(and actually those are colored if output is not redirected)

Multi-level leaf-only truncation — `-g severity -g id --max-per-group 2`:

Structured output — `-g severity -f json_pp --max-per-group 2` emits `_truncated` placeholders:

```json
{
  "ERROR": [
    { "id": "DANDI.NO_DANDISET_FOUND", "severity": "ERROR", ... },
    { "id": "BIDS.NIFTI_HEADER_UNREADABLE", "severity": "ERROR", ... },
    { "_truncated": true, "omitted_count": 9567 }
  ],
  "HINT": [
    { "id": "BIDS.JSON_KEY_RECOMMENDED", "severity": "HINT", ... },
    { "id": "BIDS.JSON_KEY_RECOMMENDED", "severity": "HINT", ... },
    { "_truncated": true, "omitted_count": 138015 }
  ]
}
```
_truncatedsentinel follows the_record_versionnaming convention for metadata fields.Test plan
--formatoutput viaclick.CliRunnerValidationResultJSONL--loadwith multi-file concatenation, mutual exclusivity enforcement--outputis usedUpload companion integration test with Docker Compose fixture-- forgot what I meant--max-per-groupflat truncation, grouped truncation, multi-level, JSON placeholder, no-truncation when under limit_truncate_leaves()helperSome demos
See also
dandi validateacross bids-examples and then using visidata for navigation of composite dump of recordsTODOs
validatewithout-odoes not store the_validation.jsonlGenerated with Claude Code
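The composite-dump workflow from "See also" (concatenating per-path `_validation.jsonl` companions, then browsing with jq or VisiData) can be sketched like this; the file names and records below are fabricated for illustration only:

```shell
# Fabricated sample companions standing in for real validation dumps
mkdir -p demo/ds1 demo/ds2
printf '%s\n' '{"id": "BIDS.X", "severity": "ERROR"}' > demo/ds1/sub-01_validation.jsonl
printf '%s\n' '{"id": "DANDI.Y", "severity": "HINT"}' > demo/ds2/sub-02_validation.jsonl

# JSONL composes with plain cat: one record per line survives concatenation
cat demo/*/*_validation.jsonl > all.jsonl

# Quick triage with grep; `jq` filters or `vd all.jsonl` work on the same file
grep -c '"severity": "ERROR"' all.jsonl
```

This line-oriented composability is why JSONL was chosen as the primary interchange format in the design decisions above.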