Conversation
Summary of Changes

Hello @lazarusA, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request initiates a refactoring of the dashboard plotting logic, specifically targeting LSTM model outputs.
Code Review
This pull request is a work-in-progress that refactors the plotting dashboard for LSTM model outputs. The changes update how plotting observables are initialized and updated during training to correctly handle data with a time dimension.
My review identified a critical issue with an inconsistent spelling of the function name _get_target_ŷ, which would lead to a runtime error. I've also provided several suggestions to improve the code:
- Reinstating the 1:1 reference lines on the prediction-vs-observation scatter plots, which are very helpful for assessing model performance.
- Removing commented-out code left over from refactoring.
- Refactoring a new helper function to be more concise and idiomatic using Julia's generator expressions.
These changes should improve the robustness and maintainability of the new dashboarding code.
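For background on the "plotting observables" mentioned above: Makie plots react to Observables, so a training loop can refresh the dashboard by mutating a wrapped value and notifying its listeners. A minimal, hypothetical sketch of that pattern (standalone Observables.jl, not the PR's actual code):

```julia
using Observables  # Makie's reactive primitive; usable standalone

# The dashboard holds Observables; plots listen to them. Mutating the
# wrapped value and calling notify retriggers every listener.
losses = Observable(Float64[])

on(losses) do ls
    isempty(ls) || println("epoch $(length(ls)): loss = $(round(last(ls); digits = 3))")
end

for epoch in 1:3
    push!(losses[], 1.0 / epoch)  # mutate the wrapped vector in place...
    notify(losses)                # ...then explicitly notify listeners
end
```

This mirrors why the PR updates observables per target during training instead of rebuilding the plots.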
```julia
current_y_train_hat = _get_target_ŷ(current_ŷ_train, current_y_train, t)
current_y_val = _get_target_y(y_val, t)
current_y_val_hat = _get_target_ŷ(current_ŷ_val, current_y_val, t)
```

There was a problem hiding this comment.

The function name used here is inconsistent. The function is imported as _get_target_ŷ on line 9 and called under that name on line 453; here, however, it is called under a visually similar but different spelling, which will cause an UndefVarError. Please use the imported name consistently:

```julia
current_y_val_hat = _get_target_ŷ(current_ŷ_val, current_y_val, t)
```
```diff
  Makie.scatter!(ax_tr, p_tr_sub, o_tr_sub; color = :grey25, alpha = 0.6, markersize = 6)
- Makie.lines!(ax_tr, sort(o_tr), sort(o_tr); color = :black, linestyle = :dash)
+ # Makie.lines!(ax_tr, sort(o_tr), sort(o_tr); color = :black, linestyle = :dash)
```

There was a problem hiding this comment.

The 1:1 reference line for the scatter plot has been commented out. This line is very useful for visualizing model performance, so I recommend adding it back. The original implementation with sort(o_tr), sort(o_tr) was not ideal; a better approach is to use the mn and mx values that are already calculated, drawing the line across the full range of the axes:

```julia
Makie.lines!(ax_tr, [mn, mx], [mn, mx]; color = :black, linestyle = :dash)
```
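As a side note, if mn and mx are not conveniently in scope, Makie also offers ablines!, which draws a line with a given intercept and slope clipped to the current axis limits; the 1:1 line is then just intercept 0 and slope 1. A hedged alternative sketch (assuming the same ax_tr axis as in the diff above):

```julia
# Draws y = 0 + 1x across whatever the axis limits end up being,
# with no need to precompute [mn, mx] endpoints.
Makie.ablines!(ax_tr, 0, 1; color = :black, linestyle = :dash)
```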
```diff
  Makie.scatter!(ax_val, p_val_sub, o_val_sub; color = :tomato, alpha = 0.6, markersize = 6)
- Makie.lines!(ax_val, sort(o_val), sort(o_val); color = :black, linestyle = :dash)
+ # Makie.lines!(ax_val, sort(o_val), sort(o_val); color = :black, linestyle = :dash)
```

There was a problem hiding this comment.

The 1:1 reference line for the validation scatter plot has also been commented out. For consistency with the training plot and to better assess performance, I recommend adding it back using the same mn and mx range as the training plot:

```julia
Makie.lines!(ax_val, [mn, mx], [mn, mx]; color = :black, linestyle = :dash)
```

A later hunk contains the commented-out leftovers from the refactoring:

```julia
# train_preds = to_obs_tuple(init_ŷ_train, target_names)
# val_preds = to_obs_tuple(init_ŷ_val, target_names)
# train_obs = to_tuple(y_train, target_names)
# val_obs = to_tuple(y_val, target_names)
```
```julia
function to_obs_tuple(ŷ, y, target_names)
    # first get observations; they could have a time dimension
    tmp_obs = []
    tmp_pred = []
    for t in target_names
        y_ = _get_target_y(y, t)
        # match the shape of the observations, which could have a time dimension
        ŷ_ = _get_target_ŷ(ŷ, y_, t)
        push!(tmp_obs, t => y_)
        push!(tmp_pred, t => to_obs(ŷ_))
    end
    out_pred = (; tmp_pred...)
    out_obs = (; tmp_obs...)
    return out_pred, out_obs
end
```
This function can be written more idiomatically and concisely using generator expressions to construct the named tuples directly, instead of using temporary arrays and a loop. This avoids intermediate allocations and improves readability.
```julia
function to_obs_tuple(ŷ, y, target_names)
    # first get observations; they could have a time dimension
    out_pred = (; (t => to_obs(_get_target_ŷ(ŷ, _get_target_y(y, t), t)) for t in target_names)...)
    out_obs = (; (t => _get_target_y(y, t) for t in target_names)...)
    return out_pred, out_obs
end
```
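For reviewers unfamiliar with the splatting idiom in this suggestion: a NamedTuple literal `(; ...)` can be built by splatting any iterator of `Symbol => value` pairs. A self-contained toy version with stand-in data (the helper name and data here are illustrative, not the PR's):

```julia
target_names = (:gpp, :reco)

# Stand-in for the PR's _get_target_y helper:
get_y(y, t) = y[t]

y = Dict(:gpp => [1.0, 2.0], :reco => [3.0, 4.0])

# Splat a generator of Pairs straight into a NamedTuple literal --
# no temporary arrays, no push! loop.
out_obs = (; (t => get_y(y, t) for t in target_names)...)

out_obs.gpp  # the pairs become named fields: [1.0, 2.0]
```

This avoids the intermediate untyped arrays of the loop version while producing the same named fields.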
Codecov Report

❌ The patch coverage check failed for this pull request.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #235      +/-   ##
==========================================
- Coverage   50.89%   50.58%   -0.31%
==========================================
  Files          28       28
  Lines        1959     1971      +12
==========================================
  Hits          997      997
- Misses        962      974      +12
```

☔ View full report in Codecov by Sentry.
What do you think @lazarusA - can I test this? It might actually be handy for me to see why the hybrid LSTM will not train much.
Unfortunately, it is not functional yet. I will keep this in mind during the next round of patches.
WIP.
After looking back at the dashboard extension, I think we should refactor using the new Makie SpecApi/compute graph approach. That could help us integrate new components in a more composable way.
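To make the SpecApi idea concrete: SpecApi describes figures as plain data (specs) that Makie diffs against the currently rendered state, which is what would make dashboard components composable. A hedged sketch of the general shape, assuming a recent Makie version with SpecApi; all names and data here are illustrative, not a concrete design proposal:

```julia
import Makie.SpecApi as S

# Dummy stand-in data for illustration
y_tr, ŷ_tr = randn(50), randn(50)
y_val, ŷ_val = randn(20), randn(20)

# A reusable dashboard component is just a function returning an axis spec.
scatter_panel(preds, obs; title = "train") = S.Axis(;
    title,
    xlabel = "observed",
    ylabel = "predicted",
    plots = [S.Scatter(obs, preds; markersize = 6)],
)

# Components compose declaratively into a layout spec.
layout = S.GridLayout([scatter_panel(ŷ_tr, y_tr) scatter_panel(ŷ_val, y_val; title = "validation")])
# Makie.plot(layout)  # re-plotting a new spec later updates the figure in place
```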