
update flepimop2 provider package #12

Merged
jc-macdonald merged 5 commits into main from reduce_introspection
Apr 7, 2026

Conversation

@jc-macdonald
Collaborator

@jc-macdonald jc-macdonald commented Feb 13, 2026

This pull request consolidates and refactors the integration between flepimop2 and op_engine, moving all engine logic and configuration into a single file (__init__.py). It removes the separate config.py and engine.py modules, simplifies dependency management in pyproject.toml, and updates linting rules. The new implementation provides a streamlined, self-contained flepimop2 engine backed by op_engine, with improved configuration validation and runtime checks.

Engine integration and refactoring:

  • Merged all engine logic and configuration into src/flepimop2/engine/op_engine/__init__.py, replacing the previous split between engine.py and config.py. The new implementation defines the OpEngineFlepimop2Engine and its configuration in a single file, with improved runtime validation and operator handling.

Dependency and build configuration:

  • Updated pyproject.toml to reference op-engine directly from its GitHub repository, removing the sibling directory wiring and simplifying dependency management.

Linting and code quality:

  • Added per-file ignore rules in pyproject.toml for the new consolidated __init__.py, disabling specific lint and docstyle checks to accommodate the new implementation.

  • Aligned the provider package with the expectations set up in flepimop2 #118

Closes #6; obsoletes #10 and #11 due to changes introduced in flepimop2 #118.
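The pyproject.toml change described above can be sketched roughly as follows. This is an illustrative fragment, not the PR's exact diff: the repository URL and version pin are assumptions.

```toml
# Hypothetical sketch of the dependency change: instead of wiring up a
# sibling checkout (e.g. a local path or editable install), op-engine is
# pulled directly from its GitHub repository. URL and ref are illustrative.
[project]
name = "flepimop2-op-engine-provider"
dependencies = [
    "op-engine @ git+https://github.com/ACCIDDA/op_engine.git@main",
]
```

A direct VCS reference like this keeps CI and local installs consistent without requiring contributors to clone both repositories side by side.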

@jc-macdonald jc-macdonald self-assigned this Feb 13, 2026
@jc-macdonald jc-macdonald changed the title from "reduce introspection in flepimop2 provider package" to "update flepimop2 provider package" Feb 17, 2026
Collaborator

@TimothyWillard TimothyWillard left a comment


It would be nice to see a bit more DRY, especially for low-hanging fruit like using types from flepimop2 or helpers already defined in op_engine.

Comment on lines +140 to +141
dt_min: float = Field(default=0.0, ge=0.0)
dt_max: float = Field(default=float("inf"), gt=0.0)
Collaborator


Does dt_min == 0.0 mean infer lower bound and dt_max == math.inf mean infer upper bound? Minor, but seems like a more natural choice would be None since that indicates no user statement.

Collaborator Author


I kept dt_min=0.0 and dt_max=inf intentionally because those are the native op_engine DtControllerConfig sentinel values for ‘no additional bound’. Switching to None in the provider would require extra normalization logic and would diverge from core config semantics without changing behavior. If we want None ergonomics, we should introduce it consistently in op_engine itself first, then mirror it here.
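The sentinel semantics being discussed can be illustrated with a minimal sketch. This is not the actual op_engine DtControllerConfig (which is pydantic-based); it is a stdlib-only analogue showing why 0.0 and inf work as "no additional bound" defaults without any None-normalization logic.

```python
import math
from dataclasses import dataclass


@dataclass
class DtBounds:
    """Illustrative analogue of the dt bounds discussed above."""

    dt_min: float = 0.0       # 0.0 acts as "no extra lower bound"
    dt_max: float = math.inf  # inf acts as "no extra upper bound"

    def clamp(self, dt: float) -> float:
        # With the defaults this is a no-op, so "no user statement"
        # needs no special-casing anywhere downstream.
        return min(max(dt, self.dt_min), self.dt_max)


print(DtBounds().clamp(0.5))                        # unchanged: 0.5
print(DtBounds(dt_min=0.1, dt_max=1.0).clamp(5.0))  # clamped: 1.0
```

The trade-off the thread identifies: None would make "no statement" explicit in the schema, but concrete sentinels let the same clamping code run unconditionally.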

Collaborator


Yeah, it seems reasonable to mirror the API of op_engine here. I'll just note that it's a bit weird to use concrete values rather than values that indicate "no statement", but I guess that is outside the scope of this PR then.

Collaborator


This is not a TODO for you @MacdonaldJoshuaCaleb but for me: we probably need to provide some kind of default system/engine as part of the integration testing utilities bundled with flepimop2.

Member

@pearsonca pearsonca left a comment


Minor questions, but seems fine?

return None


def _rhs_from_stepper(
Member


Work for another day: if an adapter is necessary, we should probably be reworking the pipeline definition a bit. I imagine engines may generally need to adapt SystemABC somewhat, but all this re-shaping seems problematic.

Collaborator


This actually seems fine to me, and within the scope of engines. This is taking a stepper function and then vectorizing it because that's the type of "stepper" that op_engine expects. I don't think flepimop2 should get into the business of trying to figure out the correct way(s) to generalize steppers. However, I do think this could be simplified a bit by the introduction of proper "build" mechanics, see ACCIDDA/flepimop2#130, ACCIDDA/flepimop2#129 (comment).

Member


I'm more bothered by the shaping-related checks than by the reshaping itself. I should have said "if this amount of adapter is required" in my earlier comment.

Collaborator


I see, I think this is a consequence of the outputs being unstructured numpy arrays. To address this would require adding more structure on that front. Perhaps ACCIDDA/flepimop2#147 is a first step to that, but definitely more work required beyond that.

Collaborator Author


Agreed, this is work for another day. The reshaping (1-D <-> column-vector) exists because op_engine's CoreSolver expects (n_state, 1) columns while flepimop2 steppers work with 1-D arrays. If the pipeline formalizes a stepper contract for shape, this goes away.
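The adapter being discussed can be sketched as follows. The function name mirrors the `_rhs_from_stepper` seen in the diff context, but the body is an illustrative reconstruction, not the PR's actual implementation.

```python
import numpy as np


def rhs_from_stepper(stepper):
    """Wrap a 1-D flepimop2-style stepper for a column-vector solver.

    Illustrative sketch: flepimop2 steppers take and return 1-D state
    arrays, while op_engine's CoreSolver works with (n_state, 1) columns.
    """

    def rhs(t, y_col):
        y_1d = np.asarray(y_col).reshape(-1)   # column -> 1-D
        out = np.asarray(stepper(t, y_1d))
        return out.reshape(-1, 1)              # 1-D -> column
    return rhs


def decay(t, y):
    # Toy 1-D stepper: dy/dt = -y
    return -y


rhs = rhs_from_stepper(decay)
result = rhs(0.0, np.array([[1.0], [2.0]]))  # (2, 1) in, (2, 1) out
print(result.shape)
```

If the pipeline ever fixes a shape contract for steppers, this thin wrapper (and the shape checks around it) disappears entirely.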

Collaborator Author

@jc-macdonald jc-macdonald left a comment


Why not use op_engine.types.as_float_1d?

As part of the simplifications I got rid of the types.py file.

from flepimop2.exceptions import ValidationIssue
from flepimop2.system.abc import SystemABC, SystemProtocol
from pydantic import BaseModel, ConfigDict, Field, model_validator
from flepimop2.typing import StateChangeEnum # noqa: TC002
Collaborator


Instead of noqaing, why not move this under the type checking imports as the lint rule suggests? See https://docs.astral.sh/ruff/rules/typing-only-third-party-import/.

Collaborator Author


StateChangeEnum is used as a Pydantic field type (state_change: StateChangeEnum on the model class). Pydantic v2 needs the type at module scope at runtime to resolve annotations during model creation, even with from __future__ import annotations. Moving it under TYPE_CHECKING would cause a NameError. The noqa: TC002 is the correct pattern here.
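The runtime-resolution issue described above can be demonstrated with a stdlib-only analogue (no pydantic required). `typing.get_type_hints` performs the same annotation resolution that Pydantic does at model-creation time, so hiding the import under `TYPE_CHECKING` produces the NameError in question:

```python
from __future__ import annotations

import typing

if typing.TYPE_CHECKING:
    # Visible to type checkers only; absent from module globals at runtime.
    from decimal import Decimal


class Model:
    value: Decimal  # stored as the string "Decimal" under PEP 563


# Pydantic resolves annotations at class-creation time, much like
# get_type_hints() here; the name cannot be found, so NameError results.
try:
    typing.get_type_hints(Model)
    resolved = True
except NameError:
    resolved = False

print("annotations resolved:", resolved)
```

This is why the `noqa: TC002` stays: the import must remain at module scope for the annotation to resolve when the Pydantic model class is built.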


"""flepimop2 engine adapter backed by op_engine.CoreSolver."""

module: Literal["flepimop2.engine.op_engine"] = "flepimop2.engine.op_engine"
state_change: StateChangeEnum
Collaborator


Okay, I'm a bit confused about this one. It seems to indicate that op_engine handles multiple kinds of state-change steppers but requires user configuration to do so. If op_engine can handle multiple kinds of state-change steppers, it should just use system.state_change to infer the settings for op_engine. However, it appears that a validation is done and then self.state_change is never used. So can op_engine actually handle steppers with different state changes? From your description of the package I thought it could only handle "flow".

Collaborator Author


Good question. op_engine currently only handles flow — you're right about that. The state_change field on the engine follows flepimop2 convention (engine/wrapper does the same pattern: declare state_change, validate via validate_system()). The config declares what the engine supports so the pipeline catches mismatches at validation time rather than producing silent wrong results.

self.state_change isn't used in run() because validation already ran — by the time run() executes, the system/engine pairing is guaranteed compatible. If op_engine grows to support delta or state steppers in the future, this field already has the right shape. But you're right that for now it's effectively always flow.
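The validate-then-run convention described in this exchange can be sketched as follows. The names mirror the discussion (StateChangeEnum, a validate_system step) but the classes are illustrative, not flepimop2's actual API.

```python
from enum import Enum


class StateChangeEnum(str, Enum):
    FLOW = "flow"
    DELTA = "delta"


class Engine:
    """Hypothetical engine declaring the state-change kind it supports."""

    state_change = StateChangeEnum.FLOW  # op_engine currently: flow only

    def validate_system(self, system_state_change: StateChangeEnum) -> None:
        # Catch engine/system mismatches at validation time instead of
        # producing silently wrong results during run().
        if system_state_change != self.state_change:
            raise ValueError(
                f"engine supports {self.state_change.value!r}, "
                f"system uses {system_state_change.value!r}"
            )


engine = Engine()
engine.validate_system(StateChangeEnum.FLOW)  # compatible: no error
try:
    engine.validate_system(StateChangeEnum.DELTA)
    mismatch_caught = False
except ValueError:
    mismatch_caught = True
print("mismatch caught:", mismatch_caught)
```

Because validation runs before execution, run() never needs to consult state_change again; the field only becomes load-bearing if the engine grows support for more stepper kinds.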

if isinstance(system_axis, str | int):
operator_axis = system_axis

stepper: SystemProtocol = system._stepper # noqa: SLF001
Collaborator


Another argument for pushing ACCIDDA/flepimop2#130 forward in the workplan

tr=operators.get("tr"),
bdf2=operators.get("bdf2"),
)
__all__ = ["OpEngineEngineConfig", "_coerce_operator_specs", "_has_operator_specs"]
Collaborator


Why export _coerce_operator_specs & _has_operator_specs? These seem like private utilities.


[tool.ruff.lint.per-file-ignores]
"tests/**/*" = ["INP001", "S101"]
"src/flepimop2/engine/op_engine/__init__.py" = ["C901", "DOC201", "DOC501", "PLR0912", "PLR0913", "PLR0914", "PLR0915", "RUF067"]
Collaborator


This makes me a bit wary: ignoring so many linting/formatting suggestions is a lot. Perhaps outside the scope of this PR, but I think these rules exist to indicate where there might be refactoring/documentation opportunities. Although, for RUF067 we also need to think about that on the flepimop2 front; the way we've structured the package as an implicit namespace package with dynamic module loading makes it a little annoying to work around this one.

@jc-macdonald
Collaborator Author

Responded to all outstanding second-round review comments. Summary:

  • noqa: TC002: Explained — StateChangeEnum must be at module scope for Pydantic runtime model creation
  • Inline helpers scope: Acknowledged duplication; tracked in Simplify typing in flepimop2 provider package #6 for cleanup once op_engine exposes validation helpers
  • state_change on engine: Follows flepimop2 convention (engine/wrapper does the same); validation-only, not used in run()
  • Carl's adapter/export scope: Acknowledged as future work

No code changes in this round — the feedback was design discussion. All 14 provider tests pass. @TimothyWillard @pearsonca ready for another look when convenient.

@jc-macdonald jc-macdonald force-pushed the reduce_introspection branch from b0e8ec9 to 62c4a79 on April 5, 2026 09:10
@jc-macdonald jc-macdonald added this to the Provider Integration milestone Apr 5, 2026
@jc-macdonald jc-macdonald merged commit d42f227 into main Apr 7, 2026
5 checks passed
@jc-macdonald jc-macdonald deleted the reduce_introspection branch April 7, 2026 06:43

Development

Successfully merging this pull request may close these issues.

Simplify typing in flepimop2 provider package

3 participants