
Commit f4601b8

FBumann, github-actions[bot], renovate[bot], and claude authored
Feature/test math+more (#600)
* Update the CHANGELOG.md
* Update to tsam v3.1.0 and add warnings for preserve_n_clusters=False
* [ci] prepare release v6.0.0
* fix typo in deps
* fix typo in README.md
* Revert citation temporarily
* [ci] prepare release v6.0.0
* Improve json io
* fix: Notebooks using tsam
* Allow manual docs dispatch
* Created: tests/test_clustering/test_multiperiod_extremes.py (test coverage, 56 tests):
  - Multi-period with different time series - TestMultiPeriodDifferentTimeSeries - tests for systems where each period has distinct demand profiles: different cluster assignments per period, optimization with period-specific profiles, correct expansion mapping per period, statistics correctness per period
  - Extreme cluster configurations:
    - TestExtremeConfigNewCluster - tests method='new_cluster': captures peak demand days, can increase cluster count, works with min_value parameter
    - TestExtremeConfigReplace - tests method='replace': maintains requested cluster count, works with multi-period systems
    - TestExtremeConfigAppend - tests method='append': combined with segmentation, objective preserved after expansion
  - Combined multi-period and extremes:
    - TestExtremeConfigMultiPeriod - extremes with multi-period/scenario: requires preserve_n_clusters=True for multi-period, works with periods and scenarios together
    - TestMultiPeriodWithExtremes - combined scenarios: different profiles with extreme capture, extremes combined with segmentation, independent cluster assignments per period
  - Multi-scenario clustering - TestMultiScenarioWithClustering (scenarios with clustering), TestFullDimensionalClustering (full periods + scenarios combinations)
  - IO round-trip - TestMultiPeriodClusteringIO - save/load preservation tests
  - Edge cases - TestEdgeCases - single cluster, many clusters, occurrence sums, mapping validation
* fix: clustering and tsam 3.1.0 issue
* [ci] prepare release v6.0.1
* fix: clustering and tsam 3.1.0 issue
* [ci] prepare release v6.0.1
* ci: remove test
* [ci] prepare release v6.0.1
* chore(deps): update dependency werkzeug to v3.1.5 (#564)
* chore(deps): update dependency ruff to v0.14.14 (#563)
* chore(deps): update dependency netcdf4 to >=1.6.1, <1.7.5 (#583)
* chore(deps): update dependency pre-commit to v4.5.1 (#532)
* fix: Comparison coords (#599)
  - Fix coords concat in comparison.py
  - Fix coords concat in comparison.py
  - Fix coords concat in comparison.py
  - Add 6.0.1 changelog entry
  - Fix coord preservation in Comparison.solution and .inputs: apply the _extract_nonindex_coords pattern to the solution and inputs properties, and add a warning when coordinate mappings conflict during merge
  - Update CHANGELOG.md
  - Update CHANGELOG.md
  - Fix unmapped dimension values in flixopt/comparison.py: on line 83, mapping.get(dv) returns None for unmapped values; change it to mapping.get(dv, dv) so unmapped dimension values fall back to themselves:

        - new_coords[name] = (dim, [mapping.get(dv) for dv in ds.coords[dim].values])
        + new_coords[name] = (dim, [mapping.get(dv, dv) for dv in ds.coords[dim].values])

    This ensures that when the mapping doesn't contain a key for a dimension value (which happens with outer-join additions), the original value dv is preserved instead of inserting None.
  - Update Changelog
* [ci] prepare release v6.0.2
* typo
* Revert "typo" (reverts commit 4a57282)
* Add plan file
* Add comprehensive test_math coverage for multi-period, scenarios, clustering, and validation:
  - Add 26 new tests across 8 files (×3 optimize modes = ~75 test runs)
  - Multi-period: period weights, flow_hours limits, effect limits, linked invest, custom period weights
  - Scenarios: scenario weights, independent sizes, independent flow rates
  - Clustering: basic objective, storage cyclic/intercluster modes, status cyclic mode
  - Storage: relative min/max charge state, relative min/max final charge state, balanced invest
  - Components: transmission startup cost, Power2Heat, HeatPumpWithSource, SourceAndSink
  - Flow status: max_uptime standalone test
  - Validation: SourceAndSink requires size with prevent_simultaneous
* Fix scalar final-charge-state bounds:
  - Fix (flixopt/components.py:1146-1169): in _relative_charge_state_bounds, the scalar else branches now expand the base parameter to regular timesteps only (timesteps_extra[:-1]), then concat with the final-timestep DataArray containing the correct override value. Previously they just broadcast the scalar across all timesteps, silently ignoring relative_minimum_final_charge_state / relative_maximum_final_charge_state.
  - Tests (tests/test_math/test_storage.py): added two new tests, test_storage_relative_minimum_final_charge_state_scalar and test_storage_relative_maximum_final_charge_state_scalar, with scenarios identical to the existing array-based tests but using scalar defaults (the previously buggy path).
* Added TestClusteringExact class with 3 tests asserting exact per-timestep values in clustered systems:
  1. test_flow_rates_match_demand_per_cluster - verifies Grid flow_rate matches demand [10, 20, 30, 40] identically in each cluster, objective = 200.
  2. test_per_timestep_effects_with_varying_price - verifies per-timestep costs [10, 20, 30, 40] reflect price×flow with varying prices [1, 2, 3, 4] and constant demand = 10, objective = 200.
  3. test_storage_cyclic_charge_discharge_pattern - verifies storage with cyclic clustering: charges at cheap timesteps (price=1), discharges at expensive ones (price=100), with exact charge_state trajectory across both clusters, objective = 100.

  Deviation from plan: used equal cluster weights [1.0, 1.0] instead of [1.0, 2.0]/[1.0, 3.0] for tests 1 and 2. This was necessary because cluster_weight is not preserved during NetCDF roundtrip (a pre-existing IO bug), which would cause the save->reload->solve mode to fail. Equal weights produce correct results in all 3 IO modes while still testing the essential per-timestep value correctness.
* More storage tests
* Add multi-period tests
* Add clustering tests and fix issues with user-set cluster weights
* Update CHANGELOG.md
* Mark old tests as stale
* Update CHANGELOG.md
* Mark tests as stale and move to new dir
* Move more tests to stale
* Change fixtures to speed up tests
* Moved files into stale
* Renamed folder
* Reorganize test dir
* Reorganize test dir
* Rename marker
* Notebook and test cleanup:
  2. 08d-clustering-multiperiod.ipynb (cell 29): removed stray <cell_type>markdown</cell_type> from Summary cell
  3. 08f-clustering-segmentation.ipynb (cell 33): removed stray <cell_type>markdown</cell_type> from API Reference cell
  4. flixopt/comparison.py: _extract_nonindex_coords now detects when the same coord name appears on different dims; warns and skips instead of silently overwriting
  5. test_multiperiod_extremes.py: added .item() to mapping.min()/.max() and period_mapping.min()/.max() to extract scalars before comparison
  6. test_flow_status.py: tightened test_max_uptime_standalone assertion from > 50.0 to assert_allclose(..., 60.0, rtol=1e-5), matching the documented arithmetic

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
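The `mapping.get(dv, dv)` fix described in the commit message can be shown in isolation (a minimal sketch; the contributor names are illustrative, not taken from the real systems):

```python
# Merged mapping built while extracting non-index coords: dim value -> coord value.
mapping = {'Boiler(Q_th)': 'Boiler'}

# After an outer join, the dim may contain values absent from the mapping.
dim_values = ['Boiler(Q_th)', 'Storage(Q_in)']

# Old behavior: unmapped values silently become None, corrupting the coordinate.
old = [mapping.get(dv) for dv in dim_values]

# Fixed behavior: unmapped values fall back to themselves.
new = [mapping.get(dv, dv) for dv in dim_values]

print(old)  # ['Boiler', None]
print(new)  # ['Boiler', 'Storage(Q_in)']
```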
2 parents (ac8c22b + f73c346) · commit f4601b8

56 files changed

Lines changed: 3277 additions & 188 deletions


.github/workflows/docs.yaml

Lines changed: 9 additions & 0 deletions
```diff
@@ -12,6 +12,15 @@ on:
       - 'docs/**'
       - 'mkdocs.yml'
   workflow_dispatch:
+    inputs:
+      deploy:
+        description: 'Deploy docs to GitHub Pages'
+        type: boolean
+        default: false
+      version:
+        description: 'Version to deploy (e.g., v6.0.0)'
+        type: string
+        required: false
   workflow_call:
     inputs:
       deploy:
```
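Assuming the GitHub CLI is available, the new `workflow_dispatch` inputs could be exercised manually like this (the version value is illustrative; this is a usage sketch, not part of the diff):

```shell
# Manually dispatch the docs workflow with the new inputs
gh workflow run docs.yaml -f deploy=true -f version=v6.0.0
```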

CHANGELOG.md

Lines changed: 126 additions & 6 deletions
````diff
@@ -52,7 +52,70 @@ If upgrading from v2.x, see the [v3.0.0 release notes](https://github.com/flixOp

 Until here -->

-## [6.0.0] - Upcoming
+## [6.0.3] - Upcoming
+
+**Summary**: Bugfix release fixing `cluster_weight` loss during NetCDF roundtrip for manually constructed clustered FlowSystems.
+
+### 🐛 Fixed
+
+- **Clustering IO**: `cluster_weight` is now preserved during NetCDF roundtrip for manually constructed clustered FlowSystems (i.e. `FlowSystem(..., clusters=..., cluster_weight=...)`). Previously, `cluster_weight` was silently dropped to `None` during `save->reload->solve`, causing incorrect objective values. Systems created via `.transform.cluster()` were not affected.
+
+### 👷 Development
+
+- **New `test_math/` test suite**: Comprehensive mathematical correctness tests with exact, hand-calculated assertions. Each test runs in 3 IO modes (solve, save→reload→solve, solve→save→reload) via the `optimize` fixture:
+    - `test_flow.py` — flow bounds, merit order, relative min/max, on/off hours
+    - `test_flow_invest.py` — investment sizing, fixed-size, optional invest, piecewise invest
+    - `test_flow_status.py` — startup costs, switch-on/off constraints, status penalties
+    - `test_bus.py` — bus balance, excess/shortage penalties
+    - `test_effects.py` — effect aggregation, periodic/temporal effects, multi-effect objectives
+    - `test_components.py` — SourceAndSink, converters, links, combined heat-and-power
+    - `test_conversion.py` — linear converter balance, multi-input/output, efficiency
+    - `test_piecewise.py` — piecewise-linear efficiency, segment selection
+    - `test_storage.py` — charge/discharge, SOC tracking, final charge state, losses
+    - `test_multi_period.py` — period weights, invest across periods
+    - `test_scenarios.py` — scenario weights, scenario-independent flows
+    - `test_clustering.py` — exact per-timestep flow_rates, effects, and charge_state in clustered systems (incl. non-equal cluster weights to cover IO roundtrip)
+    - `test_validation.py` — plausibility checks and error messages
+
+---
+
+## [6.0.2] - 2026-02-05
+
+**Summary**: Patch release which improves `Comparison` coordinate handling.
+
+### 🐛 Fixed
+
+- **Comparison Coordinates**: Fixed `component` coordinate becoming `(case, contributor)` shaped after concatenation in `Comparison` class. Non-index coordinates are now properly merged before concat in `solution`, `inputs`, and all statistics properties. Added warning when coordinate mappings conflict (#599)
+
+### 📝 Docs
+
+- **Docs Workflow**: Added `workflow_dispatch` inputs for manual docs deployment with version selection (#599)
+
+### 👷 Development
+
+- Updated dev dependencies to newer versions
+
+---
+
+## [6.0.1] - 2026-02-04
+
+**Summary**: Bugfix release addressing clustering issues with multi-period systems and ExtremeConfig.
+
+### 🐛 Fixed
+
+- **Multi-period clustering with ExtremeConfig** - Fixed `ValueError: cannot reshape array` when clustering multi-period or multi-scenario systems with `ExtremeConfig`. The fix uses pandas `.unstack()` instead of manual reshape for robustness.
+- **Consistent cluster count validation** - Added validation to detect inconsistent cluster counts across periods/scenarios, providing clear error messages.
+
+### 💥 Breaking Changes
+
+- **ExtremeConfig method restriction for multi-period systems** - When using `ExtremeConfig` with multi-period or multi-scenario systems, only `method='replace'` is now allowed. Using `method='new_cluster'` or `method='append'` will raise a `ValueError`. This works around a tsam bug where these methods can produce inconsistent cluster counts across slices.
+
+### 📦 Dependencies
+
+- Excluded tsam 3.1.0 from compatible versions due to clustering bug.
+
+---
+
+## [6.0.0] - 2026-02-03

 **Summary**: Major release featuring tsam v3 migration, complete rewrite of the clustering/aggregation system, 2-3x faster I/O for large systems, new `plotly` plotting accessor, FlowSystem comparison tools, and removal of deprecated v5.0 classes.

@@ -226,12 +289,12 @@ comp = fx.Comparison([fs_base, fs_modified])
 comp = fx.Comparison([fs1, fs2, fs3], names=['baseline', 'low_cost', 'high_eff'])

 # Side-by-side plots (auto-facets by 'case' dimension)
-comp.statistics.plot.balance('Heat')
-comp.statistics.flow_rates.plotly.line()
+comp.stats.plot.balance('Heat')
+comp.stats.flow_rates.plotly.line()

 # Access combined data with 'case' dimension
 comp.solution  # xr.Dataset
-comp.statistics.flow_rates  # xr.Dataset
+comp.stats.flow_rates  # xr.Dataset

 # Compute differences relative to a reference case
 comp.diff()  # vs first case

@@ -262,6 +325,58 @@ flow_system.topology.set_component_colors('turbo', overwrite=False)  # Only unse

 `Component.inputs`, `Component.outputs`, and `Component.flows` now use `FlowContainer` (dict-like) with dual access by index or label: `inputs[0]` or `inputs['Q_th']`.

+#### `before_solve` Callback
+
+New callback parameter for `optimize()` and `rolling_horizon()` allows adding custom constraints before solving:
+
+```python
+def add_constraints(fs):
+    model = fs.model
+    boiler = model.variables['Boiler(Q_th)|flow_rate']
+    model.add_constraints(boiler >= 10, name='min_boiler')
+
+flow_system.optimize(solver, before_solve=add_constraints)
+
+# Works with rolling_horizon too
+flow_system.optimize.rolling_horizon(
+    solver,
+    horizon=168,
+    before_solve=add_constraints
+)
+```
+
+#### `cluster_mode` for StatusParameters
+
+New parameter to control status behavior at cluster boundaries:
+
+```python
+fx.StatusParameters(
+    ...,
+    cluster_mode='relaxed',  # Default: no constraint at boundaries, prevents phantom startups
+    # cluster_mode='cyclic',  # Each cluster's final status equals its initial status
+)
+```
+
+#### Comparison Class Enhancements
+
+- **`Comparison.inputs`**: Compare inputs across FlowSystems for easy side-by-side input parameter comparison
+- **`data_only` parameter**: Get data without generating plots in Comparison methods
+- **`threshold` parameter**: Filter small values when comparing
+
+#### Plotting Enhancements
+
+- **`threshold` parameter**: Added to all plotting methods to filter values below a threshold (default: `1e-5`)
+- **`round_decimals` parameter**: Control decimal precision in `balance()`, `carrier_balance()`, and `storage()` plots
+- **`flow_colors` property**: Map flows to their component's colors for consistent visualization
+
+#### `FlowSystem.from_old_dataset()`
+
+New method for loading datasets saved with older flixopt versions:
+
+```python
+fs = fx.FlowSystem.from_old_dataset(old_dataset)
+```
+
 ### 💥 Breaking Changes

 #### tsam v3 Migration

@@ -306,17 +421,22 @@ fs.transform.cluster(

 - `FlowSystem.weights` returns `dict[str, xr.DataArray]` (unit weights instead of `1.0` float fallback)
 - `FlowSystemDimensions` type now includes `'cluster'`
-- `statistics.plot.balance()`, `carrier_balance()`, and `storage()` now use `xarray_plotly.fast_bar()` internally (styled stacked areas for better performance)
+- `stats.plot.balance()`, `carrier_balance()`, and `storage()` now use `xarray_plotly.fast_bar()` internally (styled stacked areas for better performance)
+- `stats.plot.carrier_balance()` now combines inputs and outputs to show net flow per component, and aggregates per component by default

 ### 🗑️ Deprecated

 The following items are deprecated and will be removed in **v7.0.0**:

+**Accessor renamed:**
+
+- `flow_system.statistics` → Use `flow_system.stats` (shorter, more convenient)
+
 **Classes** (use FlowSystem methods instead):

 - `Optimization` class → Use `flow_system.optimize(solver)`
 - `SegmentedOptimization` class → Use `flow_system.optimize.rolling_horizon()`
-- `Results` class → Use `flow_system.solution` and `flow_system.statistics`
+- `Results` class → Use `flow_system.solution` and `flow_system.stats`
 - `SegmentedResults` class → Use segment FlowSystems directly

 **FlowSystem methods** (use `transform` or `topology` accessor instead):
````

CITATION.cff

Lines changed: 2 additions & 2 deletions
```diff
@@ -2,8 +2,8 @@ cff-version: 1.2.0
 message: "If you use this software, please cite it as below and consider citing the related publication."
 type: software
 title: "flixopt"
-version: 6.0.0rc17
-date-released: 2026-02-02
+version: 6.0.2
+date-released: 2026-02-05
 url: "https://github.com/flixOpt/flixopt"
 repository-code: "https://github.com/flixOpt/flixopt"
 license: MIT
```

README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -54,7 +54,7 @@ flow_system.optimize(fx.solvers.HighsSolver())

 # 3. Analyze results
 flow_system.solution  # Raw xarray Dataset
-flow_system.statistics  # Convenient analysis accessor
+flow_system.stats  # Convenient analysis accessor
 ```

 **Get started with real examples:**
````

docs/notebooks/08c-clustering.ipynb

Lines changed: 1 addition & 1 deletion
```diff
@@ -585,7 +585,7 @@
    "id": "37",
    "metadata": {},
    "source": [
-    "## API Reference\n",
+    "<cell_type>markdown</cell_type>## API Reference\n",
     "\n",
     "### `transform.cluster()` Parameters\n",
     "\n",
```

docs/notebooks/08d-clustering-multiperiod.ipynb

Lines changed: 1 addition & 0 deletions
```diff
@@ -556,6 +556,7 @@
     "fs = fs.transform.isel(time=slice(0, 168)) # First 168 timesteps\n",
     "\n",
     "# Cluster (applies per period/scenario)\n",
+    "# Note: For multi-period systems, only method='replace' is supported\n",
     "fs_clustered = fs.transform.cluster(\n",
     "    n_clusters=10,\n",
     "    cluster_duration='1D',\n",
```

flixopt/comparison.py

Lines changed: 77 additions & 14 deletions
```diff
@@ -29,6 +29,69 @@
 _CASE_SLOTS = frozenset(slot for slots in SLOT_ORDERS.values() for slot in slots)


+def _extract_nonindex_coords(datasets: list[xr.Dataset]) -> tuple[list[xr.Dataset], dict[str, tuple[str, dict]]]:
+    """Extract and merge non-index coords, returning cleaned datasets and merged mappings.
+
+    Non-index coords (like `component` on `contributor` dim) cause concat conflicts.
+    This extracts them, merges the mappings, and returns datasets without them.
+    """
+    if not datasets:
+        return datasets, {}
+
+    # Find non-index coords and collect mappings
+    merged: dict[str, tuple[str, dict]] = {}
+    coords_to_drop: set[str] = set()
+
+    for ds in datasets:
+        for name, coord in ds.coords.items():
+            if len(coord.dims) != 1:
+                continue
+            dim = coord.dims[0]
+            if dim == name or dim not in ds.coords:
+                continue
+
+            coords_to_drop.add(name)
+            if name not in merged:
+                merged[name] = (dim, {})
+            elif merged[name][0] != dim:
+                warnings.warn(
+                    f"Coordinate '{name}' appears on different dims: "
+                    f"'{merged[name][0]}' vs '{dim}'. Dropping this coordinate.",
+                    stacklevel=4,
+                )
+                continue
+
+            for dv, cv in zip(ds.coords[dim].values, coord.values, strict=False):
+                if dv not in merged[name][1]:
+                    merged[name][1][dv] = cv
+                elif merged[name][1][dv] != cv:
+                    warnings.warn(
+                        f"Coordinate '{name}' has conflicting values for dim value '{dv}': "
+                        f"'{merged[name][1][dv]}' vs '{cv}'. Keeping first value.",
+                        stacklevel=4,
+                    )
+
+    # Drop these coords from datasets
+    if coords_to_drop:
+        datasets = [ds.drop_vars(coords_to_drop, errors='ignore') for ds in datasets]
+
+    return datasets, merged
+
+
+def _apply_merged_coords(ds: xr.Dataset, merged: dict[str, tuple[str, dict]]) -> xr.Dataset:
+    """Apply merged coord mappings to concatenated dataset."""
+    if not merged:
+        return ds
+
+    new_coords = {}
+    for name, (dim, mapping) in merged.items():
+        if dim not in ds.dims:
+            continue
+        new_coords[name] = (dim, [mapping.get(dv, dv) for dv in ds.coords[dim].values])
+
+    return ds.assign_coords(new_coords)
+
+
 def _apply_slot_defaults(plotly_kwargs: dict, defaults: dict[str, str | None]) -> None:
     """Apply default slot assignments to plotly kwargs.

@@ -256,12 +319,10 @@ def solution(self) -> xr.Dataset:
         self._require_solutions()
         datasets = [fs.solution for fs in self._systems]
         self._warn_mismatched_dimensions(datasets)
-        self._solution = xr.concat(
-            [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)],
-            dim='case',
-            join='outer',
-            fill_value=float('nan'),
-        )
+        expanded = [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)]
+        expanded, merged_coords = _extract_nonindex_coords(expanded)
+        result = xr.concat(expanded, dim='case', join='outer', coords='minimal', fill_value=float('nan'))
+        self._solution = _apply_merged_coords(result, merged_coords)
         return self._solution

     @property

@@ -324,12 +385,10 @@ def inputs(self) -> xr.Dataset:
         if self._inputs is None:
             datasets = [fs.to_dataset(include_solution=False) for fs in self._systems]
             self._warn_mismatched_dimensions(datasets)
-            self._inputs = xr.concat(
-                [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)],
-                dim='case',
-                join='outer',
-                fill_value=float('nan'),
-            )
+            expanded = [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)]
+            expanded, merged_coords = _extract_nonindex_coords(expanded)
+            result = xr.concat(expanded, dim='case', join='outer', coords='minimal', fill_value=float('nan'))
+            self._inputs = _apply_merged_coords(result, merged_coords)
         return self._inputs


@@ -374,7 +433,9 @@ def _concat_property(self, prop_name: str) -> xr.Dataset:
                 continue
         if not datasets:
             return xr.Dataset()
-        return xr.concat(datasets, dim='case', join='outer', fill_value=float('nan'))
+        datasets, merged_coords = _extract_nonindex_coords(datasets)
+        result = xr.concat(datasets, dim='case', join='outer', coords='minimal', fill_value=float('nan'))
+        return _apply_merged_coords(result, merged_coords)

     def _merge_dict_property(self, prop_name: str) -> dict[str, str]:
         """Merge a dict property from all cases (later cases override)."""

@@ -528,7 +589,9 @@ def _combine_data(self, method_name: str, *args, **kwargs) -> tuple[xr.Dataset,
         if not datasets:
             return xr.Dataset(), ''

-        return xr.concat(datasets, dim='case', join='outer', fill_value=float('nan')), title
+        datasets, merged_coords = _extract_nonindex_coords(datasets)
+        combined = xr.concat(datasets, dim='case', join='outer', coords='minimal', fill_value=float('nan'))
+        return _apply_merged_coords(combined, merged_coords), title

     def _finalize(self, ds: xr.Dataset, fig, show: bool | None) -> PlotResult:
         """Handle show and return PlotResult."""
```
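The extract/merge/apply pattern in this diff can be sketched without xarray (a simplified, dict-based illustration of the merge and fallback logic; the contributor/component names are made up):

```python
import warnings


def merge_coord_mappings(per_dataset_mappings):
    """Merge {dim_value: coord_value} mappings from several datasets.

    First value wins on conflict, mirroring the warn-and-keep-first
    behavior of _extract_nonindex_coords (simplified sketch).
    """
    merged = {}
    for mapping in per_dataset_mappings:
        for dv, cv in mapping.items():
            if dv not in merged:
                merged[dv] = cv
            elif merged[dv] != cv:
                warnings.warn(f"conflicting values for '{dv}': keeping '{merged[dv]}'")
    return merged


def apply_mapping(merged, dim_values):
    """Rebuild the coordinate; unmapped values fall back to themselves."""
    return [merged.get(dv, dv) for dv in dim_values]


# Two datasets with partially overlapping contributors (illustrative names)
m1 = {'Boiler(Q_th)': 'Boiler'}
m2 = {'Boiler(Q_th)': 'Boiler', 'CHP(Q_th)': 'CHP'}
merged = merge_coord_mappings([m1, m2])

# After an outer join, a dim value may appear that neither dataset mapped
print(apply_mapping(merged, ['Boiler(Q_th)', 'CHP(Q_th)', 'Extra(Q)']))
# ['Boiler', 'CHP', 'Extra(Q)']
```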

flixopt/components.py

Lines changed: 14 additions & 4 deletions
```diff
@@ -1144,8 +1144,13 @@ def _relative_charge_state_bounds(self) -> tuple[xr.DataArray, xr.DataArray]:
             min_final_da = min_final_da.assign_coords(time=[timesteps_extra[-1]])
             min_bounds = xr.concat([rel_min, min_final_da], dim='time')
         else:
-            # Original is scalar - broadcast to full time range (constant value)
-            min_bounds = rel_min.expand_dims(time=timesteps_extra)
+            # Original is scalar - expand to regular timesteps, then concat with final value
+            regular_min = rel_min.expand_dims(time=timesteps_extra[:-1])
+            min_final_da = (
+                min_final_value.expand_dims('time') if 'time' not in min_final_value.dims else min_final_value
+            )
+            min_final_da = min_final_da.assign_coords(time=[timesteps_extra[-1]])
+            min_bounds = xr.concat([regular_min, min_final_da], dim='time')

         if 'time' in rel_max.dims:
             # Original has time dim - concat with final value

@@ -1155,8 +1160,13 @@ def _relative_charge_state_bounds(self) -> tuple[xr.DataArray, xr.DataArray]:
             max_final_da = max_final_da.assign_coords(time=[timesteps_extra[-1]])
             max_bounds = xr.concat([rel_max, max_final_da], dim='time')
         else:
-            # Original is scalar - broadcast to full time range (constant value)
-            max_bounds = rel_max.expand_dims(time=timesteps_extra)
+            # Original is scalar - expand to regular timesteps, then concat with final value
+            regular_max = rel_max.expand_dims(time=timesteps_extra[:-1])
+            max_final_da = (
+                max_final_value.expand_dims('time') if 'time' not in max_final_value.dims else max_final_value
+            )
+            max_final_da = max_final_da.assign_coords(time=[timesteps_extra[-1]])
+            max_bounds = xr.concat([regular_max, max_final_da], dim='time')

         # Ensure both bounds have matching dimensions (broadcast once here,
         # so downstream code doesn't need to handle dimension mismatches)
```
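The scalar-bounds fix can be illustrated with plain lists (a simplified stand-in for the xarray expand/concat; the real code operates on `xr.DataArray` objects, and the bound values here are made up):

```python
def charge_state_bounds(scalar_bound, final_bound, n_timesteps_extra):
    """Expand a scalar bound over regular timesteps, overriding the final one.

    List-based sketch of the fixed behavior in _relative_charge_state_bounds:
    the scalar covers timesteps_extra[:-1], then the final-timestep override
    is concatenated at the end.
    """
    regular = [scalar_bound] * (n_timesteps_extra - 1)  # timesteps_extra[:-1]
    return regular + [final_bound]                      # concat final override


# Buggy behavior broadcast the scalar everywhere, ignoring the final value:
buggy = [0.0] * 5

# Fixed behavior keeps the override at the last timestep:
fixed = charge_state_bounds(0.0, 0.5, 5)
print(fixed)  # [0.0, 0.0, 0.0, 0.0, 0.5]
```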
