Rework period/vintage indices in limit tables #281
idelder wants to merge 8 commits into TemoaProject:unstable from
Conversation
Walkthrough
Constraint logic, data loading, viable-set filtering, DB schema, migration, and tests were made vintage-aware: constraint signatures, index sets, loader/validator mappings, and SQL schemas now include vintages and tech-group variants; model Sets/Params and loader filters were rewired to use vintage-oriented indices.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Loader as HybridLoader
    participant Manager as CommodityNetworkManager
    participant Model as TemoaModel
    Loader->>Manager: build_filters(tech_groups) -> expanded viable sets (rtv, rtvo, rpt, ...)
    Manager-->>Loader: viable_rtvo, viable_rpt, viable_rtv, ... (filter maps)
    Loader->>Model: load parameters using viable_rtvo/rtv/rggv indices (include vintage)
    Model->>Model: initialize vintage-aware Sets & Params (rtv, rtvo, rggv, rptvo)
    Model-->>Model: construct constraints indexed by vintage (new_capacity, share, annual_capacity_factor)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 7
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@temoa/components/limits.py`:
- Around line 202-214: The current limit_annual_capacity_factor_indices builds
tuples for every candidate output token o without verifying that o is actually
produced by the resolved process (r,t,v) path, causing invalid
v_flow_out/v_flow_out_annual lookups; modify
limit_annual_capacity_factor_indices to only emit (r,p,t,v,o,op) when o is a
valid output of the resolved process — e.g., after expanding groups with
geography.gather_group_regions and technology.gather_group_techs and selecting
period p, check that o appears in the process outputs for that (_r, p, _t, v)
combination (use whatever model mapping holds process outputs) before adding the
tuple; apply the same guard to the analogous index-builder at the other location
mentioned (lines ~494-510) so both only produce indices for real (t,v,o) paths.
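The output-membership guard described above can be sketched in plain Python, with dicts and lambdas standing in for the Temoa model objects; names like `process_outputs`, `regions_of`, and `techs_of` are illustrative assumptions, not the actual model API.

```python
# Minimal sketch: emit (r, p, t, v, o, op) only when o is a real output
# of the resolved (region, period, tech, vintage) process.
def build_capacity_factor_indices(constraint_rtvo, regions_of, techs_of,
                                  periods, process_outputs):
    return {
        (r, p, t, v, o, op)
        for r, t, v, o, op in constraint_rtvo
        for _r in regions_of(r)          # expand region groups
        for _t in techs_of(t)            # expand tech groups
        for p in periods
        if o in process_outputs.get((_r, p, _t, v), ())  # the guard
    }

# Toy data: only ('east', 2025, 'solar', 2020) actually produces 'elc'.
process_outputs = {('east', 2025, 'solar', 2020): {'elc'}}
idx = build_capacity_factor_indices(
    constraint_rtvo={('east', 'solar', 2020, 'elc', '<='),
                     ('east', 'solar', 2020, 'heat', '<=')},
    regions_of=lambda r: [r],
    techs_of=lambda t: [t],
    periods=[2025],
    process_outputs=process_outputs,
)
print(idx)  # only the 'elc' tuple survives; 'heat' is filtered out
```

Without the guard, the `'heat'` tuple would also be emitted and later trigger an invalid flow-variable lookup.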
- Around line 512-519: The RHS of possible_activity_rptvo uses
model.v_capacity_available_by_period_and_tech[_r, p, _t] which is tech-wide;
change it to use the vintage-keyed available-capacity term that matches the LHS
vintage v (e.g. model.v_capacity_available_by_period_and_tech[_r, p, _t, v] or
the appropriately named vintage-indexed variable in the model) so the quicksum
term is vintage-specific; keep the rest of the comprehension (regions, techs,
the process_vintages checks, and model.capacity_to_activity) unchanged and
ensure the index v is used in the available-capacity lookup to prevent borrowing
headroom across vintages.
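The vintage-borrowing problem reduces to a simple lookup distinction, sketched here with a plain dict in place of the Pyomo variable; all names and numbers are illustrative.

```python
# A tech-wide sum lets one vintage "borrow" headroom from another;
# a vintage-keyed lookup does not.
cap_by_vintage = {('east', 2025, 'coal', 2010): 5.0,
                  ('east', 2025, 'coal', 2020): 8.0}

def headroom(r, p, t, v):
    # Tech-wide (wrong for a vintage-specific constraint): sums all vintages.
    tech_wide = sum(val for (rr, pp, tt, _vv), val in cap_by_vintage.items()
                    if (rr, pp, tt) == (r, p, t))
    # Vintage-specific: only the capacity of vintage v is available.
    vintage_specific = cap_by_vintage[(r, p, t, v)]
    return tech_wide, vintage_specific

print(headroom('east', 2025, 'coal', 2010))  # (13.0, 5.0)
```

The 2010 vintage should be limited by its own 5.0 units of capacity, not the 13.0 units available across both vintages.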
In `@temoa/data_io/component_manifest.py`:
- Around line 610-624: The share-table LoadItem entries for
model.limit_capacity_share and model.limit_new_capacity_share (and the similar
entries at 635-640) only validate the numerator group via validation_map and
thus never validate the 'super_group' column; update the loader so both group
columns use the same viable-set validation: either extend the validation_map to
include the index for 'super_group' (e.g., include the second group column
index) or implement a custom loader/second-pass validator that runs the existing
viable_rpt/viable_rtv validator against both group columns and applies the same
viable set to the denominator column before loading; locate the LoadItem
definitions for model.limit_capacity_share, model.limit_new_capacity_share and
the other share LoadItem(s) and make the validation change there so
'super_group' is validated the same way as the numerator group.
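A second-pass check over both group columns might look like the following sketch; the column names follow the review note, but the validator shape is an assumption, not the actual loader code.

```python
# Sketch: validate sub_group AND super_group against the same viable set.
def validate_share_rows(rows, viable_groups):
    """Keep only rows whose sub_group and super_group are both viable."""
    kept, dropped = [], []
    for row in rows:
        if (row['sub_group'] in viable_groups
                and row['super_group'] in viable_groups):
            kept.append(row)
        else:
            dropped.append(row)
    return kept, dropped

rows = [
    {'sub_group': 'solar', 'super_group': 'RPS_common', 'share': 0.3},
    {'sub_group': 'solar', 'super_group': 'bogus_group', 'share': 0.5},
]
kept, dropped = validate_share_rows(rows, viable_groups={'solar', 'RPS_common'})
print(len(kept), len(dropped))  # 1 1
```

Validating only the numerator column would let the `bogus_group` row through silently.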
In `@temoa/data_io/hybrid_loader.py`:
- Around line 459-465: The code currently assumes the optional table
"tech_group_member" exists when building tech_groups (using cur.execute and then
calling self.manager.build_filters to produce filts), causing OperationalError
on older DBs; wrap the select in a safe check or try/except: either first verify
the table exists (e.g., query sqlite_master or information_schema) before
iterating, or catch the database OperationalError around the cur.execute('SELECT
group_name, tech FROM tech_group_member') and treat tech_groups as empty if the
table is absent, then continue to call self.manager.build_filters(tech_groups)
as before.
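The try/except variant of that fix can be sketched as below; the table and column names come from the review comment, while the function itself is illustrative rather than the actual loader method.

```python
# Sketch: tolerate a missing optional table when loading tech groups.
import sqlite3
from collections import defaultdict

def load_tech_groups(cur: sqlite3.Cursor) -> dict[str, set[str]]:
    tech_groups: dict[str, set[str]] = defaultdict(set)
    try:
        rows = cur.execute('SELECT group_name, tech FROM tech_group_member')
    except sqlite3.OperationalError:
        # Older databases predate tech_group_member: treat as no groups.
        return {}
    for group_name, tech in rows:
        tech_groups[group_name].add(tech)
    return dict(tech_groups)

con = sqlite3.connect(':memory:')      # fresh DB, table absent
print(load_tech_groups(con.cursor()))  # {}
```

Alternatively, the existence check can be done up front by querying `sqlite_master` for the table name before issuing the SELECT.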
In `@temoa/db_schema/temoa_schema_v4.sql`:
- Around line 522-535: The CREATE TABLE statement for
limit_annual_capacity_factor lacks the idempotency guard; modify the CREATE
TABLE line for limit_annual_capacity_factor to include IF NOT EXISTS so the
statement becomes CREATE TABLE IF NOT EXISTS limit_annual_capacity_factor,
preserving the existing columns (region, tech, vintage REFERENCES
time_period(period), output_comm REFERENCES commodity(name), operator REFERENCES
operator, factor, notes), PRIMARY KEY and CHECK constraint unchanged to ensure
repeated schema bootstraps do not abort.
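The idempotency the guard buys can be demonstrated with an in-memory database; the DDL below abbreviates the column list and CHECK constraint from the schema described above, so treat the exact constraints as illustrative.

```python
# Sketch: IF NOT EXISTS lets the schema script run more than once.
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS limit_annual_capacity_factor (
    region      TEXT,
    tech        TEXT,
    vintage     INTEGER,
    output_comm TEXT,
    operator    TEXT,
    factor      REAL,
    notes       TEXT,
    PRIMARY KEY (region, tech, vintage, output_comm, operator),
    CHECK (factor >= 0)
);
"""

con = sqlite3.connect(':memory:')
con.executescript(DDL)
con.executescript(DDL)  # second bootstrap: no "table already exists" error
print('bootstrapped twice without error')
```

Without `IF NOT EXISTS`, the second `executescript` call would raise `sqlite3.OperationalError` and abort the bootstrap.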
In `@temoa/utilities/db_migration_v3_to_v3_1.py`:
- Around line 155-160: The migrator currently omits legacy MinNewCapacity*
constraint tables causing silent data loss; update the no_transfer mapping
(variable no_transfer) to either include the MinNewCapacity* table names (e.g.,
"MinNewCapacityByPeriod", "MinNewCapacityByTechnology" — whatever concrete table
keys your schema uses) so the script will emit a manual-migration warning, or
instead route those MinNewCapacity* tables through the same period→vintage
conversion path used for other capacity constraints (the conversion routine used
elsewhere in this file) so they are converted to the new vintage-based
constraint format rather than being dropped. Ensure you reference the
no_transfer dict and the existing period→vintage conversion logic when making
the change so the migrator either warns or properly converts these legacy
constraints.
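The period→vintage conversion path amounts to a column rearrangement, sketched here with illustrative column names (the real migrator's tables and helpers may differ).

```python
# Sketch: rewrite legacy period-indexed rows against a vintage-indexed
# column order, treating the old 'period' value as the new 'vintage'.
def convert_period_to_vintage(old_cols, new_cols, rows):
    rename = {'period': 'vintage'}
    src = {rename.get(c, c): i for i, c in enumerate(old_cols)}
    return [tuple(row[src[c]] for c in new_cols) for row in rows]

old_cols = ['region', 'period', 'tech', 'capacity']
new_cols = ['region', 'tech', 'vintage', 'capacity']
rows = [('east', 2025, 'solar', 4.0)]
print(convert_period_to_vintage(old_cols, new_cols, rows))
# [('east', 'solar', 2025, 4.0)]
```

Routing the MinNewCapacity* tables through logic like this converts them instead of dropping them; the warn-only alternative just adds their names to `no_transfer`.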
In `@tests/testing_data/mediumville.sql`:
- Around line 119-120: The two REPLACE INTO statements targeting
limit_new_capacity_share are missing an explicit column list, causing positional
binding to the table schema (region, period, sub_group, super_group, operator,
share, notes) and resulting in a type mismatch; fix both REPLACE INTO lines by
adding the correct column list that matches the provided VALUES order—e.g.
(region, super_group, sub_group, period, operator, share, notes)—so 'RPS_common'
binds to super_group and 2025 binds to period.
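The positional-binding hazard is easy to reproduce; the schema and values below mirror the review note, executed through Python's sqlite3 for illustration.

```python
# Sketch: without an explicit column list, VALUES bind positionally to the
# table's declared order, so 'RPS_common' would land in sub_group and 2025
# in period's slot would be wrong. The column list fixes the pairing.
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("""
CREATE TABLE limit_new_capacity_share (
    region TEXT, period INTEGER, sub_group TEXT, super_group TEXT,
    operator TEXT, share REAL, notes TEXT)
""")
# Values arrive in (region, super_group, sub_group, period, ...) order,
# so they must be paired with a matching column list:
con.execute(
    "REPLACE INTO limit_new_capacity_share "
    "(region, super_group, sub_group, period, operator, share, notes) "
    "VALUES ('east', 'RPS_common', 'solar', 2025, '>=', 0.3, '')"
)
row = con.execute(
    'SELECT period, super_group FROM limit_new_capacity_share').fetchone()
print(row)  # (2025, 'RPS_common')
```

With the column list, `2025` binds to `period` and `'RPS_common'` to `super_group`, regardless of the table's declared column order.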
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 19bd2384-f0fd-4650-87b8-fc8783f7217d
📒 Files selected for processing (12)
temoa/components/limits.py, temoa/core/model.py, temoa/data_io/component_manifest.py, temoa/data_io/hybrid_loader.py, temoa/db_schema/temoa_schema_v4.sql, temoa/model_checking/commodity_network_manager.py, temoa/tutorial_assets/utopia.sql, temoa/utilities/db_migration_v3_to_v3_1.py, tests/testing_data/mediumville.sql, tests/testing_data/mediumville_sets.json, tests/testing_data/test_system_sets.json, tests/testing_data/utopia_sets.json
Force-pushed from d83a00a to f8938b6
Actionable comments posted: 2
♻️ Duplicate comments (1)
temoa/components/limits.py (1)
202-213: 🧹 Nitpick | 🔵 Trivial: Inline the set comprehension return.
`indices` is only used for the immediate return; this can be simplified.
♻️ Proposed cleanup

```diff
 def limit_annual_capacity_factor_indices(
     model: TemoaModel,
 ) -> set[tuple[Region, Period, Technology, Vintage, Commodity, str]]:
-    indices = {
+    return {
         (r, p, t, v, o, op)
         for r, t, v, o, op in model.limit_annual_capacity_factor_constraint_rtvo
         for _r in geography.gather_group_regions(model, r)
         for _t in technology.gather_group_techs(model, t)
         for p in model.time_optimize
         if o in model.process_outputs.get((_r, p, _t, v), [])
     }
-    return indices
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@temoa/components/limits.py` around lines 202 - 213, Inline the set comprehension inside the return of limit_annual_capacity_factor_indices instead of assigning it to the temporary variable indices; directly return the comprehension that iterates over model.limit_annual_capacity_factor_constraint_rtvo, expands geography.gather_group_regions(model, r) and technology.gather_group_techs(model, t), iterates model.time_optimize, and checks membership against model.process_outputs.get((_r, p, _t, v), []) so no intermediate variable is created.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@temoa/components/limits.py`:
- Around line 562-592: The loop over techs is overwriting activity_rpst each
iteration and the sparse-index access to
model.v_capacity_available_by_period_and_tech is unfiltered; change the tech
loop (symbols: activity_rpst, techs, _t) to accumulate per-tech contributions
(e.g., initialize activity_rpst = 0 before the tech loop and add each quicksum
rather than assign) for both the seasonal branch using model.v_flow_out and the
annual branch using model.v_flow_out_annual so all techs contribute, and when
building possible_activity_rpst only iterate or sum over existing sparse keys
(filter keys by matching (_r, p, _t) in
model.v_capacity_available_by_period_and_tech or use the model's keys iterator)
before indexing model.v_capacity_available_by_period_and_tech[_r, p, _t] and
combining with model.capacity_to_activity[_r, _t] and
model.segment_fraction_per_season[p, s].
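The accumulate-vs-overwrite bug reduces to plain Python; the data here is illustrative, with a dict standing in for the flow variables.

```python
# Sketch: summing per-tech contributions requires adding into the total,
# not reassigning it on each loop iteration.
flows = {('east', 'solar'): 3.0, ('east', 'wind'): 2.0}

# Buggy pattern: the last tech wins.
activity = 0.0
for t in ('solar', 'wind'):
    activity = flows[('east', t)]   # overwrite
assert activity == 2.0               # wind clobbered solar's contribution

# Fixed pattern: every tech contributes.
activity = 0.0
for t in ('solar', 'wind'):
    activity += flows[('east', t)]  # accumulate
print(activity)  # 5.0
```

In the real constraint the same change applies to both the seasonal (`v_flow_out`) and annual (`v_flow_out_annual`) branches.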
In `@temoa/utilities/db_migration_v3_to_v3_1.py`:
- Around line 196-211: The migration fails because the code assumes a "vintage"
column exists for tables listed in period_to_vintage_tables; update the logic in
the block handling period_to_vintage_tables (referencing new_name, old_cols,
new_cols, period_index, vintage_index, data) to first check whether "vintage" is
present in new_cols and only perform the rearrangement when it exists, otherwise
skip the transformation (or remove the specific tables from
period_to_vintage_tables like LimitNewCapacity and LimitNewCapacityShare);
ensure you do not call new_cols.index("vintage") or access vintage_index when
"vintage" is absent.
---
Duplicate comments:
In `@temoa/components/limits.py`:
- Around line 202-213: Inline the set comprehension inside the return of
limit_annual_capacity_factor_indices instead of assigning it to the temporary
variable indices; directly return the comprehension that iterates over
model.limit_annual_capacity_factor_constraint_rtvo, expands
geography.gather_group_regions(model, r) and
technology.gather_group_techs(model, t), iterates model.time_optimize, and
checks membership against model.process_outputs.get((_r, p, _t, v), []) so no
intermediate variable is created.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: ddc7498b-22a6-45f0-8256-514a769f4a56
📒 Files selected for processing (12)
temoa/components/limits.py, temoa/core/model.py, temoa/data_io/component_manifest.py, temoa/data_io/hybrid_loader.py, temoa/db_schema/temoa_schema_v4.sql, temoa/model_checking/commodity_network_manager.py, temoa/tutorial_assets/utopia.sql, temoa/utilities/db_migration_v3_to_v3_1.py, tests/testing_data/mediumville.sql, tests/testing_data/mediumville_sets.json, tests/testing_data/test_system_sets.json, tests/testing_data/utopia_sets.json
♻️ Duplicate comments (1)
temoa/components/limits.py (1)
586-593: ⚠️ Potential issue | 🔴 Critical: Fix inverted sparse-key filter in seasonal capacity-factor RHS.
Line 592 uses `not in`, and then line 587 immediately indexes that missing key, which can raise a `KeyError` during model construction.
🛠️ Proposed fix

```diff
 possible_activity_rpst = quicksum(
     model.v_capacity_available_by_period_and_tech[_r, p, _t]
     * value(model.capacity_to_activity[_r, _t])
     * value(model.segment_fraction_per_season[p, s])
     for _r in regions
     for _t in techs
-    if (_r, p, _t) not in model.v_capacity_available_by_period_and_tech
+    if (_r, p, _t) in model.v_capacity_available_by_period_and_tech
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@temoa/components/limits.py` around lines 586 - 593, The comprehension building possible_activity_rpst uses an inverted filter causing indexing of missing keys: when iterating regions and techs it checks if (_r, p, _t) not in model.v_capacity_available_by_period_and_tech but then immediately indexes model.v_capacity_available_by_period_and_tech[_r, p, _t]; fix by only including terms for keys that exist (change the condition to if (_r, p, _t) in model.v_capacity_available_by_period_and_tech) or iterate directly over model.v_capacity_available_by_period_and_tech.keys() filtered by p and s so possible_activity_rpst, model.v_capacity_available_by_period_and_tech, model.capacity_to_activity and model.segment_fraction_per_season are only accessed for valid keys.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@temoa/components/limits.py`:
- Around line 586-593: The comprehension building possible_activity_rpst uses an
inverted filter causing indexing of missing keys: when iterating regions and
techs it checks if (_r, p, _t) not in
model.v_capacity_available_by_period_and_tech but then immediately indexes
model.v_capacity_available_by_period_and_tech[_r, p, _t]; fix by only including
terms for keys that exist (change the condition to if (_r, p, _t) in
model.v_capacity_available_by_period_and_tech) or iterate directly over
model.v_capacity_available_by_period_and_tech.keys() filtered by p and s so
possible_activity_rpst, model.v_capacity_available_by_period_and_tech,
model.capacity_to_activity and model.segment_fraction_per_season are only
accessed for valid keys.
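The membership check reduces to plain Python; the sparse container below is an illustrative dict, not the actual Pyomo variable.

```python
# Sketch: only index a sparse container for keys that actually exist.
capacity = {('east', 2025, 'solar'): 10.0}  # sparse (r, p, t) -> capacity
regions, techs, p = ['east', 'west'], ['solar'], 2025

# With the filter inverted, ('west', 2025, 'solar') would be selected and
# the lookup would raise KeyError.
total = sum(
    capacity[(_r, p, _t)]
    for _r in regions
    for _t in techs
    if (_r, p, _t) in capacity      # correct: keep only existing keys
)
print(total)  # 10.0
```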
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: b43506a7-e3bb-4c70-bed1-094cd815d257
📒 Files selected for processing (2)
temoa/components/limits.py, temoa/db_schema/temoa_schema_v3_1.sql
Force-pushed from 1febf00 to 418c38a
…pacity and share tables
Force-pushed from 418c38a to b678180
Three tables had period indices where vintage indices were much more sensible:
Also expanded validation sets in loading to include tech groups for tech_or_group limit tables. Applied this filtering to several tables.
Summary by CodeRabbit
- Refactor
- New Features
- Chores
- Tests