
dashboard LSTMs outputs #235

Open

lazarusA wants to merge 2 commits into main from la/update_dashboard

Conversation

@lazarusA (Member) commented Feb 23, 2026

WIP.

After looking back at the dashboard extension, I think we should refactor it using the new Makie SpecApi/compute graph approach. That could help us integrate new components in a more composable way.
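For readers unfamiliar with it, Makie's SpecApi describes a figure as plain data (a spec) that can be held in an Observable and replaced wholesale, with Makie diffing old and new specs to update the scene. A minimal sketch of the pattern, assuming a recent Makie version with SpecApi; this is illustrative, not code from this PR:

```julia
using GLMakie              # any Makie backend works
import Makie.SpecApi as S  # the declarative spec namespace

# Describe the dashboard as data instead of mutating axes in place.
spec = Observable(S.GridLayout([S.Axis(; plots = [S.Scatter(rand(10), rand(10))])]))
fig = plot(spec)

# Updating the dashboard is then just swapping in a new description;
# Makie diffs the old and new specs and updates the existing scene.
spec[] = S.GridLayout([S.Axis(; plots = [S.Lines(1:10, cumsum(randn(10)))])])
```

This is what makes the approach composable: new dashboard components become new spec entries rather than new imperative plotting code.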

@gemini-code-assist

Summary of Changes

Hello @lazarusA, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request initiates a refactoring of the dashboard plotting logic, specifically targeting LSTM model outputs, to leverage Makie's SpecApi/compute_graph approach. The changes aim to enhance the composability and integration of new components within the plotting system by introducing more explicit handling of observation data and predictions, and by refining how plotting observables are initialized and updated.

Highlights

  • Plotting Enabled in Example: The plotting flag in the example_synthetic_lstm.jl tutorial was changed from false to true, enabling visualization for the LSTM training process.
  • New Helper Functions Imported: The EasyHybridMakie.jl extension now imports _get_target_y and _get_target_ŷ from EasyHybrid, indicating new logic for handling target observations and predictions.
  • Dashboard Plotting Refinements: The 1:1 reference lines in the training and validation scatter plots within the dashboard (EasyHybridMakie.jl) were commented out, potentially for a cleaner visual or future re-implementation.
  • Enhanced Plotting Observables Update: The update_plotting_observables function in both EasyHybridMakie.jl and src/train.jl now explicitly accepts y_train and y_val (training and validation observation data) as arguments, providing more context for plotting updates.
  • Refactored Prediction and Observation Handling: A new to_obs_tuple method was added in src/plotrecipes.jl that takes both predictions (ŷ) and observations (y), allowing for more robust handling of data, especially data with time dimensions, and ensuring predictions match observation shapes.
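The shape-matching behavior described in the last bullet can be illustrated with a small self-contained sketch. The names `_get_target_y` and `_get_target_ŷ` come from this PR, but the bodies below are hypothetical stand-ins, not EasyHybrid's actual implementation:

```julia
# Hypothetical stand-ins for the PR's helpers (bodies are assumptions):
# pick one target's observations out of a NamedTuple of targets
_get_target_y(y, t::Symbol) = getproperty(y, t)

# pick one target's predictions and reshape them to the observation
# shape, which may carry an extra time dimension
function _get_target_ŷ(ŷ, y_, t::Symbol)
    ŷ_ = getproperty(ŷ, t)
    return size(ŷ_) == size(y_) ? ŷ_ : reshape(ŷ_, size(y_))
end

y = (; swc = reshape(1.0:12.0, 3, 4))   # 3 time steps × 4 samples
ŷ = (; swc = collect(1.0:12.0))         # flat model output
y_ = _get_target_y(y, :swc)
ŷ_ = _get_target_ŷ(ŷ, y_, :swc)
size(ŷ_) == size(y_)                    # prediction now matches observation shape
```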


Changelog
  • docs/literate/tutorials/example_synthetic_lstm.jl
    • Enabled plotting for the train function call.
  • ext/EasyHybridMakie.jl
    • Imported _get_target_y and _get_target_ŷ from EasyHybrid.
    • Commented out Makie.lines! calls for 1:1 reference lines in training and validation scatter plots.
    • Added y_train and y_val parameters to the update_plotting_observables function signature.
    • Modified prediction update logic to use _get_target_y and _get_target_ŷ for train_preds and val_preds.
  • src/plotrecipes.jl
    • Updated the call to to_obs_tuple in initialize_plotting_observables to pass y_train and y_val.
    • Added a new method for to_obs_tuple that accepts ŷ, y, and target_names, returning both predictions and observations and handling time dimensions.
  • src/train.jl
    • Modified the call to update_plotting_observables to include y_train and y_val arguments.
Activity
  • The pull request is currently marked as 'WIP' (Work In Progress) by the author, lazarusA.
  • No specific review comments or other activities have been recorded yet.

@gemini-code-assist gemini-code-assist bot left a comment
Code Review

This pull request is a work-in-progress that refactors the plotting dashboard for LSTM model outputs. The changes update how plotting observables are initialized and updated during training to correctly handle data with a time dimension.

My review identified a critical issue where two spellings of the function name _get_target_ŷ, which render identically here but differ in their underlying characters, would lead to a runtime error. I've also provided several suggestions to improve the code:

  • Re-instating the 1:1 reference lines on the prediction-vs-observation scatter plots, which are very helpful for assessing model performance.
  • Removing commented-out code left over from refactoring.
  • Refactoring a new helper function to be more concise and idiomatic using Julia's generator expressions.

These changes should improve the robustness and maintainability of the new dashboarding code.

current_y_train_hat = _get_target_ŷ(current_ŷ_train, current_y_train, t)

current_y_val = _get_target_y(y_val, t)
current_y_val_hat = _get_target_ŷ(current_ŷ_val, current_y_val, t)


critical

There is an inconsistent spelling of the function name here. The function is imported as _get_target_ŷ on line 9 and used with that spelling on line 453; here it is called with a variant that renders identically but differs in its underlying characters, which will cause an UndefVarError. Please use the imported spelling consistently.

        current_y_val_hat = _get_target_ŷ(current_ŷ_val, current_y_val, t)


Makie.scatter!(ax_tr, p_tr_sub, o_tr_sub; color = :grey25, alpha = 0.6, markersize = 6)
Makie.lines!(ax_tr, sort(o_tr), sort(o_tr); color = :black, linestyle = :dash)
# Makie.lines!(ax_tr, sort(o_tr), sort(o_tr); color = :black, linestyle = :dash)


medium

The 1:1 reference line for the scatter plot has been commented out. This line is very useful for visualizing model performance. I recommend adding it back. The original implementation sort(o_tr), sort(o_tr) was not ideal. A better approach is to use the mn and mx values that are already calculated to draw the line across the full range of the axes.

        Makie.lines!(ax_tr, [mn, mx], [mn, mx]; color = :black, linestyle = :dash)


Makie.scatter!(ax_val, p_val_sub, o_val_sub; color = :tomato, alpha = 0.6, markersize = 6)
Makie.lines!(ax_val, sort(o_val), sort(o_val); color = :black, linestyle = :dash)
# Makie.lines!(ax_val, sort(o_val), sort(o_val); color = :black, linestyle = :dash)


medium

The 1:1 reference line for the validation scatter plot has been commented out. For consistency with the training plot and to better assess performance, I recommend adding it back using the same mn and mx range as the training plot.

        Makie.lines!(ax_val, [mn, mx], [mn, mx]; color = :black, linestyle = :dash)
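To see why the reviewer prefers `[mn, mx]` over `sort(o_val)`, note that the extrema of the pooled data give a two-point segment spanning the whole axis range. A plain-Julia sketch with made-up values; `mn`/`mx` follow the review's naming:

```julia
o = [0.2, 0.9, 0.4]          # observations
p = [0.25, 0.8, 1.1]         # predictions can exceed the observed range
mn, mx = extrema(vcat(o, p)) # shared range over both series

# A two-point 1:1 line spans the full range of both axes, whereas
# sort(o), sort(o) only covers the observed values and draws one
# vertex per data point just to render a straight line.
line_x = [mn, mx]
line_y = [mn, mx]
```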

Comment on lines +65 to +68
# train_preds = to_obs_tuple(init_ŷ_train, target_names)
# val_preds = to_obs_tuple(init_ŷ_val, target_names)
# train_obs = to_tuple(y_train, target_names)
# val_obs = to_tuple(y_val, target_names)


medium

This block of commented-out code appears to be a remnant of a refactoring. It should be removed to improve code clarity.

Comment on lines +119 to +132
function to_obs_tuple(ŷ, y, target_names)
    # first get observations; they could have a time dimension
    tmp_obs = []
    tmp_pred = []
    for t in target_names
        y_ = _get_target_y(y, t)
        ŷ_ = _get_target_ŷ(ŷ, y_, t) # match the shape of the observations, which may have a time dimension
        push!(tmp_obs, t => y_)
        push!(tmp_pred, t => to_obs(ŷ_))
    end
    out_pred = (; tmp_pred...)
    out_obs = (; tmp_obs...)
    return out_pred, out_obs
end


medium

This function can be written more idiomatically and concisely using generator expressions to construct the named tuples directly, instead of using temporary arrays and a loop. This avoids intermediate allocations and improves readability.

function to_obs_tuple(ŷ, y, target_names)
    # first get observations, they could have a time dimension
    out_pred = (; (t => to_obs(_get_target_ŷ(ŷ, _get_target_y(y, t), t)) for t in target_names)...)
    out_obs = (; (t => _get_target_y(y, t) for t in target_names)...)
    return out_pred, out_obs
end
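The generator-splat idiom in the suggestion is plain Julia and easy to check in isolation. A toy version with a hypothetical getter standing in for the EasyHybrid helpers:

```julia
# Toy version of the suggested pattern: build a NamedTuple directly
# from a generator of Symbol => value pairs, with no temporary arrays.
target_names = (:swc, :gpp)
get_target(data, t) = data[t]            # hypothetical stand-in getter

data = Dict(:swc => [0.1, 0.2], :gpp => [1.5, 1.7])
out = (; (t => get_target(data, t) for t in target_names)...)

out.swc  # fields are accessible by target name, in target_names order
```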

@codecov codecov bot commented Feb 23, 2026

Codecov Report

❌ Patch coverage is 0% with 20 lines in your changes missing coverage. Please review.
✅ Project coverage is 50.58%. Comparing base (813a022) to head (668749c).

Files with missing lines Patch % Lines
src/plotrecipes.jl 0.00% 14 Missing ⚠️
ext/EasyHybridMakie.jl 0.00% 6 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #235      +/-   ##
==========================================
- Coverage   50.89%   50.58%   -0.31%     
==========================================
  Files          28       28              
  Lines        1959     1971      +12     
==========================================
  Hits          997      997              
- Misses        962      974      +12     

☔ View full report in Codecov by Sentry.

@BernhardAhrens (Collaborator)

What do you think @lazarusA - can I test this? It might actually be handy for me to see why the hybrid LSTM will not train much.

@lazarusA (Member, Author) commented Mar 9, 2026

Unfortunately, it is not functional yet. I will keep this in mind during the next round of patches.
