diff --git a/CHANGELOG.md b/CHANGELOG.md index d104dc819..4f2eb344c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -167,6 +167,24 @@ clustered_fs = flow_system.transform.cluster(params) clustered_fs.optimize(solver) ``` +**Rolling Horizon Optimization**: Decompose large operational problems into sequential segments: + +```python +# Solve with rolling horizon (replaces SegmentedOptimization) +segments = flow_system.optimize.rolling_horizon( + solver, + horizon=192, # Timesteps per segment + overlap=48, # Lookahead for storage optimization +) + +# Combined solution available on original FlowSystem +total_cost = flow_system.solution['costs'].item() + +# Individual segments also available +for seg in segments: + print(seg.solution['costs'].item()) +``` + **Solution Persistence**: FlowSystem now stores and persists solutions: ```python @@ -299,7 +317,7 @@ fs_subset = flow_system.transform.sel(time=slice('2023-01-01', '2023-06-30')) **Classes removed:** - `Calculation`, `FullCalculation` → Use `flow_system.optimize()` - `AggregatedCalculation` → Use `flow_system.transform.cluster()` + `optimize()` -- `SegmentedCalculation` → Use `SegmentedOptimization` +- `SegmentedCalculation`, `SegmentedOptimization` → Use `flow_system.optimize.rolling_horizon()` - `Aggregation` → Use `Clustering` - `AggregationParameters` → Use `ClusteringParameters` - `AggregationModel` → Use `ClusteringModel` @@ -380,26 +398,12 @@ fs_subset = flow_system.transform.sel(time=slice('2023-01-01', '2023-06-30')) - Separate docs build and deploy workflow - Improved test organization with deprecated tests in separate folder -### 🚧 Not Yet Migrated to New API - -**Segmented/Rolling Optimization** - Still uses `SegmentedOptimization` class: -```python -# Not yet migrated - use the class-based API -from flixopt import SegmentedOptimization - -calc = SegmentedOptimization('model', flow_system, timesteps_per_segment=96) -calc.do_modeling_and_solve(solver) -results = calc.results # Returns SegmentedResults -``` 
- -**Planned for future release:** -- `flow_system.optimize.rolling(solver, ...)` - Rolling horizon optimization to replace `SegmentedOptimization` - ### Migration Checklist | Task | Action | |------|--------| | Replace `Optimization` class | Use `flow_system.optimize(solver)` | +| Replace `SegmentedOptimization` class | Use `flow_system.optimize.rolling_horizon(solver, ...)` | | Replace `Results` access | Use `flow_system.solution['var_name']` | | Update `OnOffParameters` | Rename to `StatusParameters` with new parameter names | | Update `on_off_parameters` | Rename to `status_parameters` | diff --git a/README.md b/README.md index 339a40b41..713ec2a99 100644 --- a/README.md +++ b/README.md @@ -98,8 +98,8 @@ boiler = fx.Boiler("Boiler", eta=0.9, ...) **Multi-criteria optimization:** Model costs, emissions, resource use - any custom metric. Optimize single objectives or use weighted combinations and ε-constraints. → [Effects documentation](https://flixopt.github.io/flixopt/latest/user-guide/mathematical-notation/effects-and-dimensions/) -**Performance at any scale:** Choose optimization modes without changing your model - Optimization, SegmentedOptimization, or ClusteredOptimization (using [TSAM](https://github.com/FZJ-IEK3-VSA/tsam)). -→ [Optimization modes](https://flixopt.github.io/flixopt/latest/api-reference/optimization/) +**Performance at any scale:** Choose optimization modes without changing your model - full optimization, rolling horizon, or clustering (using [TSAM](https://github.com/FZJ-IEK3-VSA/tsam)). +→ [Scaling notebooks](https://flixopt.github.io/flixopt/latest/notebooks/08a-aggregation/) **Built for reproducibility:** Self-contained NetCDF result files with complete model information. Load results months later - everything is preserved. 
→ [Results documentation](https://flixopt.github.io/flixopt/latest/api-reference/results/) diff --git a/docs/notebooks/07-scenarios-and-periods.ipynb b/docs/notebooks/07-scenarios-and-periods.ipynb index 1b3b761d8..27dfdce70 100644 --- a/docs/notebooks/07-scenarios-and-periods.ipynb +++ b/docs/notebooks/07-scenarios-and-periods.ipynb @@ -466,64 +466,7 @@ "cell_type": "markdown", "id": "30", "metadata": {}, - "source": [ - "## Key Concepts\n", - "\n", - "### Multi-Dimensional FlowSystem\n", - "\n", - "```python\n", - "flow_system = fx.FlowSystem(\n", - " timesteps=timesteps, # Time dimension\n", - " periods=periods, # Planning periods (years)\n", - " scenarios=scenarios, # Uncertain futures\n", - " scenario_weights=weights, # Probabilities\n", - ")\n", - "```\n", - "\n", - "### Dimension-Varying Parameters\n", - "\n", - "| Data Shape | Meaning |\n", - "|------------|----------|\n", - "| Scalar | Same for all time/period/scenario |\n", - "| Array (n_periods,) | Varies by period |\n", - "| Array (n_scenarios,) | Varies by scenario |\n", - "| DataFrame with columns | Columns match scenario names |\n", - "| Full array (time, period, scenario) | Full specification |\n", - "\n", - "### Scenario Optimization\n", - "\n", - "The optimizer minimizes **expected cost**:\n", - "$$\\min \\sum_s w_s \\cdot \\text{Cost}_s$$\n", - "\n", - "where $w_s$ is the scenario weight (probability).\n", - "\n", - "### Selection Methods\n", - "\n", - "```python\n", - "# Select specific scenario\n", - "fs_mild = flow_system.transform.sel(scenario='Mild Winter')\n", - "\n", - "# Select specific period\n", - "fs_2025 = flow_system.transform.sel(period=2025)\n", - "\n", - "# Select time range\n", - "fs_day1 = flow_system.transform.sel(time=slice('2024-01-15', '2024-01-16'))\n", - "```\n", - "\n", - "## Summary\n", - "\n", - "You learned how to:\n", - "\n", - "- Define **multiple periods** for multi-year planning\n", - "- Create **scenarios** for uncertain futures\n", - "- Use **scenario weights** 
for probability-weighted optimization\n", - "- Pass **dimension-varying parameters** (arrays and DataFrames)\n", - "- **Select** specific scenarios or periods for analysis\n", - "\n", - "### Next Steps\n", - "\n", - "- **[08-large-scale-optimization](08-large-scale-optimization.ipynb)**: Speed up large problems with resampling and clustering" - ] + "source": "## Key Concepts\n\n### Multi-Dimensional FlowSystem\n\n```python\nflow_system = fx.FlowSystem(\n timesteps=timesteps, # Time dimension\n periods=periods, # Planning periods (years)\n scenarios=scenarios, # Uncertain futures\n scenario_weights=weights, # Probabilities\n)\n```\n\n### Dimension-Varying Parameters\n\n| Data Shape | Meaning |\n|------------|----------|\n| Scalar | Same for all time/period/scenario |\n| Array (n_periods,) | Varies by period |\n| Array (n_scenarios,) | Varies by scenario |\n| DataFrame with columns | Columns match scenario names |\n| Full array (time, period, scenario) | Full specification |\n\n### Scenario Optimization\n\nThe optimizer minimizes **expected cost**:\n$$\\min \\sum_s w_s \\cdot \\text{Cost}_s$$\n\nwhere $w_s$ is the scenario weight (probability).\n\n### Selection Methods\n\n```python\n# Select specific scenario\nfs_mild = flow_system.transform.sel(scenario='Mild Winter')\n\n# Select specific period\nfs_2025 = flow_system.transform.sel(period=2025)\n\n# Select time range\nfs_day1 = flow_system.transform.sel(time=slice('2024-01-15', '2024-01-16'))\n```\n\n## Summary\n\nYou learned how to:\n\n- Define **multiple periods** for multi-year planning\n- Create **scenarios** for uncertain futures\n- Use **scenario weights** for probability-weighted optimization\n- Pass **dimension-varying parameters** (arrays and DataFrames)\n- **Select** specific scenarios or periods for analysis\n\n### Next Steps\n\n- **[08a-Aggregation](08a-aggregation.ipynb)**: Speed up large problems with resampling and clustering\n- **[08b-Rolling Horizon](08b-rolling-horizon.ipynb)**: Decompose large 
problems into sequential time segments" } ], "metadata": { diff --git a/docs/notebooks/08-large-scale-optimization.ipynb b/docs/notebooks/08a-aggregation.ipynb similarity index 53% rename from docs/notebooks/08-large-scale-optimization.ipynb rename to docs/notebooks/08a-aggregation.ipynb index 61cae35c5..f3d5d1686 100644 --- a/docs/notebooks/08-large-scale-optimization.ipynb +++ b/docs/notebooks/08a-aggregation.ipynb @@ -4,7 +4,7 @@ "cell_type": "markdown", "id": "0", "metadata": {}, - "source": "# Large-Scale\n\nSpeed up large problems with resampling and two-stage optimization.\n\nThis notebook introduces:\n\n- **Resampling**: Reduce time resolution (e.g., hourly → 4-hourly)\n- **Clustering**: Identify typical periods (e.g., 8 representative days)\n- **Two-stage optimization**: Size with reduced data, dispatch at full resolution\n- **Speed vs. accuracy trade-offs**: When to use each technique" + "source": "# Aggregation\n\nSpeed up large problems with time series aggregation techniques.\n\nThis notebook introduces:\n\n- **Resampling**: Reduce time resolution (e.g., hourly → 4-hourly)\n- **Clustering**: Identify typical periods (e.g., 8 representative days)\n- **Two-stage optimization**: Size with reduced data, dispatch at full resolution\n- **Speed vs. 
accuracy trade-offs**: When to use each technique" }, { "cell_type": "markdown", @@ -23,7 +23,6 @@ "source": [ "import timeit\n", "\n", - "import numpy as np\n", "import pandas as pd\n", "import plotly.express as px\n", "import xarray as xr\n", @@ -37,11 +36,7 @@ "cell_type": "markdown", "id": "3", "metadata": {}, - "source": [ - "## Create a Realistic Annual Dataset\n", - "\n", - "We simulate one month of hourly data (720 timesteps) to demonstrate the techniques:" - ] + "source": "## Load Time Series Data\n\nWe use real-world district heating data at 15-minute resolution (one month):" }, { "cell_type": "code", @@ -50,49 +45,22 @@ "metadata": {}, "outputs": [], "source": [ - "# One month at hourly resolution\n", - "timesteps = pd.date_range('2024-01-01', periods=720, freq='h') # 30 days\n", - "hours = np.arange(len(timesteps))\n", - "hour_of_day = hours % 24\n", - "day_of_month = hours // 24\n", - "\n", - "print(f'Timesteps: {len(timesteps)} hours ({len(timesteps) / 24:.0f} days)')" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "5", - "metadata": {}, - "outputs": [], - "source": [ - "np.random.seed(42)\n", - "\n", - "# Heat demand: daily pattern with weekly variation\n", - "daily_pattern = np.select(\n", - " [\n", - " (hour_of_day >= 6) & (hour_of_day < 9),\n", - " (hour_of_day >= 9) & (hour_of_day < 17),\n", - " (hour_of_day >= 17) & (hour_of_day < 22),\n", - " ],\n", - " [200, 150, 180],\n", - " default=100,\n", - ").astype(float)\n", - "\n", - "# Add temperature effect (colder mid-month)\n", - "temp_effect = 1 + 0.3 * np.sin(day_of_month * np.pi / 30)\n", - "heat_demand = daily_pattern * temp_effect + np.random.normal(0, 10, len(timesteps))\n", - "heat_demand = np.clip(heat_demand, 80, 300)\n", - "\n", - "# Electricity price: time-of-use with volatility - high prices make CHP attractive\n", - "base_price = np.where((hour_of_day >= 7) & (hour_of_day <= 21), 0.32, 0.18)\n", - "elec_price = base_price * (1 + np.random.uniform(-0.15, 0.15, 
len(timesteps)))\n", - "\n", - "# Gas price: relatively stable and low\n", - "gas_price = 0.05 + np.random.uniform(-0.005, 0.005, len(timesteps))\n", - "\n", - "print(f'Heat demand: {heat_demand.min():.0f} - {heat_demand.max():.0f} kW')\n", - "print(f'Elec price: {elec_price.min():.3f} - {elec_price.max():.3f} €/kWh')" + "# Load time series data (15-min resolution)\n", + "data = pd.read_csv('../../examples/resources/Zeitreihen2020.csv', index_col=0, parse_dates=True).sort_index()\n", + "data = data['2020-01-01':'2020-01-31 23:45:00'] # One month\n", + "data.index.name = 'time' # Rename index for consistency\n", + "\n", + "timesteps = data.index\n", + "\n", + "# Extract profiles\n", + "electricity_demand = data['P_Netz/MW'].to_numpy()\n", + "heat_demand = data['Q_Netz/MW'].to_numpy()\n", + "electricity_price = data['Strompr.€/MWh'].to_numpy()\n", + "gas_price = data['Gaspr.€/MWh'].to_numpy()\n", + "\n", + "print(f'Timesteps: {len(timesteps)} ({len(timesteps) / 96:.0f} days at 15-min resolution)')\n", + "print(f'Heat demand: {heat_demand.min():.1f} - {heat_demand.max():.1f} MW')\n", + "print(f'Electricity price: {electricity_price.min():.1f} - {electricity_price.max():.1f} €/MWh')" ] }, { @@ -102,11 +70,13 @@ "metadata": {}, "outputs": [], "source": [ - "# Visualize first week with plotly - using xarray and faceting\n", + "# Visualize first week\n", "profiles = xr.Dataset(\n", " {\n", - " 'Heat Demand [kW]': xr.DataArray(heat_demand[:168], dims=['time'], coords={'time': timesteps[:168]}),\n", - " 'Electricity Price [€/kWh]': xr.DataArray(elec_price[:168], dims=['time'], coords={'time': timesteps[:168]}),\n", + " 'Heat Demand [MW]': xr.DataArray(heat_demand[:672], dims=['time'], coords={'time': timesteps[:672]}),\n", + " 'Electricity Price [€/MWh]': xr.DataArray(\n", + " electricity_price[:672], dims=['time'], coords={'time': timesteps[:672]}\n", + " ),\n", " }\n", ")\n", "\n", @@ -133,7 +103,107 @@ "id": "8", "metadata": {}, "outputs": [], - "source": "def 
build_system(timesteps, heat_demand, elec_price, gas_price):\n \"\"\"Build a FlowSystem with investment optimization.\"\"\"\n fs = fx.FlowSystem(timesteps)\n fs.add_carriers(\n fx.Carrier('gas', '#3498db', 'kW'),\n fx.Carrier('electricity', '#f1c40f', 'kW'),\n fx.Carrier('heat', '#e74c3c', 'kW'),\n )\n fs.add_elements(\n # Buses\n fx.Bus('Electricity', carrier='electricity'),\n fx.Bus(\n 'Heat', carrier='heat', imbalance_penalty_per_flow_hour=1e5\n ), # Allow for imbalance to prevent infeasibilities with fixed sizes\n fx.Bus('Gas', carrier='gas'),\n # Effects\n fx.Effect('costs', '€', 'Total Costs', is_standard=True, is_objective=True),\n # Gas Supply\n fx.Source(\n 'GasGrid',\n outputs=[fx.Flow('Gas', bus='Gas', size=1000, effects_per_flow_hour=gas_price)],\n ),\n # CHP with investment optimization\n fx.linear_converters.CHP(\n 'CHP',\n electrical_efficiency=0.38,\n thermal_efficiency=0.47,\n electrical_flow=fx.Flow(\n 'P_el',\n bus='Electricity',\n size=fx.InvestParameters(\n minimum_size=0,\n maximum_size=150,\n effects_of_investment_per_size={'costs': 25},\n ),\n relative_minimum=0.4,\n ),\n thermal_flow=fx.Flow('Q_th', bus='Heat'),\n fuel_flow=fx.Flow('Q_fuel', bus='Gas'),\n ),\n # Gas Boiler with investment optimization\n fx.linear_converters.Boiler(\n 'Boiler',\n thermal_efficiency=0.92,\n thermal_flow=fx.Flow(\n 'Q_th',\n bus='Heat',\n size=fx.InvestParameters(\n minimum_size=0,\n maximum_size=400,\n effects_of_investment_per_size={'costs': 8},\n ),\n ),\n fuel_flow=fx.Flow('Q_fuel', bus='Gas'),\n ),\n # Thermal Storage with investment optimization\n fx.Storage(\n 'Storage',\n capacity_in_flow_hours=fx.InvestParameters(\n minimum_size=0,\n maximum_size=1000,\n effects_of_investment_per_size={'costs': 0.5}, # Cheap storage\n ),\n initial_charge_state=0,\n eta_charge=0.98,\n eta_discharge=0.98,\n relative_loss_per_hour=0.005, # Low losses\n charging=fx.Flow('Charge', bus='Heat', size=150),\n discharging=fx.Flow('Discharge', bus='Heat', size=150),\n ),\n # 
Electricity Sales\n fx.Sink(\n 'ElecSales',\n inputs=[fx.Flow('P_el', bus='Electricity', size=200, effects_per_flow_hour=-elec_price)],\n ),\n # Heat Demand\n fx.Sink(\n 'HeatDemand',\n inputs=[fx.Flow('Q_th', bus='Heat', size=1, fixed_relative_profile=heat_demand)],\n ),\n )\n\n return fs\n\n\n# Build the base system\nflow_system = build_system(timesteps, heat_demand, elec_price, gas_price)\nprint(f'Base system: {len(timesteps)} timesteps')" + "source": [ + "def build_system(timesteps, heat_demand, electricity_demand, electricity_price, gas_price):\n", + " \"\"\"Build a district heating system with CHP, boiler, and storage (with investment options).\"\"\"\n", + " fs = fx.FlowSystem(timesteps)\n", + "\n", + " fs.add_elements(\n", + " # Buses\n", + " fx.Bus('Electricity'),\n", + " fx.Bus('Heat'),\n", + " fx.Bus('Gas'),\n", + " fx.Bus('Coal'),\n", + " # Effects\n", + " fx.Effect('costs', '€', 'Total Costs', is_standard=True, is_objective=True),\n", + " fx.Effect('CO2', 'kg', 'CO2 Emissions'),\n", + " # CHP with investment optimization\n", + " fx.linear_converters.CHP(\n", + " 'CHP',\n", + " thermal_efficiency=0.58,\n", + " electrical_efficiency=0.22,\n", + " electrical_flow=fx.Flow('P_el', bus='Electricity', size=200),\n", + " thermal_flow=fx.Flow(\n", + " 'Q_th',\n", + " bus='Heat',\n", + " size=fx.InvestParameters(\n", + " minimum_size=100,\n", + " maximum_size=300,\n", + " effects_of_investment_per_size={'costs': 10},\n", + " ),\n", + " relative_minimum=0.3,\n", + " ),\n", + " fuel_flow=fx.Flow('Q_fu', bus='Coal'),\n", + " ),\n", + " # Gas Boiler with investment optimization\n", + " fx.linear_converters.Boiler(\n", + " 'Boiler',\n", + " thermal_efficiency=0.85,\n", + " thermal_flow=fx.Flow(\n", + " 'Q_th',\n", + " bus='Heat',\n", + " size=fx.InvestParameters(\n", + " minimum_size=0,\n", + " maximum_size=150,\n", + " effects_of_investment_per_size={'costs': 5},\n", + " ),\n", + " relative_minimum=0.1,\n", + " ),\n", + " fuel_flow=fx.Flow('Q_fu', bus='Gas'),\n", + " 
),\n", + " # Thermal Storage with investment optimization\n", + " fx.Storage(\n", + " 'Storage',\n", + " capacity_in_flow_hours=fx.InvestParameters(\n", + " minimum_size=0,\n", + " maximum_size=1000,\n", + " effects_of_investment_per_size={'costs': 0.5},\n", + " ),\n", + " initial_charge_state=0,\n", + " eta_charge=1,\n", + " eta_discharge=1,\n", + " relative_loss_per_hour=0.001,\n", + " charging=fx.Flow('Charge', size=137, bus='Heat'),\n", + " discharging=fx.Flow('Discharge', size=158, bus='Heat'),\n", + " ),\n", + " # Fuel sources\n", + " fx.Source(\n", + " 'GasGrid',\n", + " outputs=[fx.Flow('Q_Gas', bus='Gas', size=1000, effects_per_flow_hour={'costs': gas_price, 'CO2': 0.3})],\n", + " ),\n", + " fx.Source(\n", + " 'CoalSupply',\n", + " outputs=[fx.Flow('Q_Coal', bus='Coal', size=1000, effects_per_flow_hour={'costs': 4.6, 'CO2': 0.3})],\n", + " ),\n", + " # Electricity grid connection\n", + " fx.Source(\n", + " 'GridBuy',\n", + " outputs=[\n", + " fx.Flow(\n", + " 'P_el',\n", + " bus='Electricity',\n", + " size=1000,\n", + " effects_per_flow_hour={'costs': electricity_price + 0.5, 'CO2': 0.3},\n", + " )\n", + " ],\n", + " ),\n", + " fx.Sink(\n", + " 'GridSell',\n", + " inputs=[fx.Flow('P_el', bus='Electricity', size=1000, effects_per_flow_hour=-(electricity_price - 0.5))],\n", + " ),\n", + " # Demands\n", + " fx.Sink('HeatDemand', inputs=[fx.Flow('Q_th', bus='Heat', size=1, fixed_relative_profile=heat_demand)]),\n", + " fx.Sink(\n", + " 'ElecDemand', inputs=[fx.Flow('P_el', bus='Electricity', size=1, fixed_relative_profile=electricity_demand)]\n", + " ),\n", + " )\n", + "\n", + " return fs\n", + "\n", + "\n", + "flow_system = build_system(timesteps, heat_demand, electricity_demand, electricity_price, gas_price)\n", + "print(f'System: {len(timesteps)} timesteps')" + ] }, { "cell_type": "markdown", @@ -275,23 +345,23 @@ " 'Full (baseline)': {\n", " 'Time [s]': time_full,\n", " 'Cost [€]': fs_full.solution['costs'].item(),\n", - " 'CHP Size [kW]': 
fs_full.statistics.sizes['CHP(P_el)'].item(),\n", - " 'Boiler Size [kW]': fs_full.statistics.sizes['Boiler(Q_th)'].item(),\n", - " 'Storage Size [kWh]': fs_full.statistics.sizes['Storage'].item(),\n", + " 'CHP Size [MW]': fs_full.statistics.sizes['CHP(Q_th)'].item(),\n", + " 'Boiler Size [MW]': fs_full.statistics.sizes['Boiler(Q_th)'].item(),\n", + " 'Storage Size [MWh]': fs_full.statistics.sizes['Storage'].item(),\n", " },\n", " 'Resampled (4h)': {\n", " 'Time [s]': time_resampled,\n", " 'Cost [€]': fs_resampled.solution['costs'].item(),\n", - " 'CHP Size [kW]': fs_resampled.statistics.sizes['CHP(P_el)'].item(),\n", - " 'Boiler Size [kW]': fs_resampled.statistics.sizes['Boiler(Q_th)'].item(),\n", - " 'Storage Size [kWh]': fs_resampled.statistics.sizes['Storage'].item(),\n", + " 'CHP Size [MW]': fs_resampled.statistics.sizes['CHP(Q_th)'].item(),\n", + " 'Boiler Size [MW]': fs_resampled.statistics.sizes['Boiler(Q_th)'].item(),\n", + " 'Storage Size [MWh]': fs_resampled.statistics.sizes['Storage'].item(),\n", " },\n", " 'Two-Stage': {\n", " 'Time [s]': time_stage1 + time_stage2,\n", " 'Cost [€]': fs_dispatch.solution['costs'].item(),\n", - " 'CHP Size [kW]': fs_dispatch.statistics.sizes['CHP(P_el)'].item(),\n", - " 'Boiler Size [kW]': fs_dispatch.statistics.sizes['Boiler(Q_th)'].item(),\n", - " 'Storage Size [kWh]': fs_dispatch.statistics.sizes['Storage'].item(),\n", + " 'CHP Size [MW]': fs_dispatch.statistics.sizes['CHP(Q_th)'].item(),\n", + " 'Boiler Size [MW]': fs_dispatch.statistics.sizes['Boiler(Q_th)'].item(),\n", + " 'Storage Size [MWh]': fs_dispatch.statistics.sizes['Storage'].item(),\n", " },\n", "}\n", "\n", @@ -303,7 +373,17 @@ "comparison['Cost Gap [%]'] = ((comparison['Cost [€]'] - baseline_cost) / baseline_cost * 100).round(2)\n", "comparison['Speedup'] = (baseline_time / comparison['Time [s]']).round(1)\n", "\n", - "comparison.round(2)" + "comparison.style.format(\n", + " {\n", + " 'Time [s]': '{:.2f}',\n", + " 'Cost [€]': '{:,.0f}',\n", + " 'CHP Size 
[MW]': '{:.1f}',\n", + " 'Boiler Size [MW]': '{:.1f}',\n", + " 'Storage Size [MWh]': '{:.0f}',\n", + " 'Cost Gap [%]': '{:.2f}',\n", + " 'Speedup': '{:.1f}x',\n", + " }\n", + ")" ] }, { @@ -400,28 +480,7 @@ "cell_type": "markdown", "id": "25", "metadata": {}, - "source": [ - "## Summary\n", - "\n", - "You learned how to:\n", - "\n", - "- Use **`transform.resample()`** to reduce time resolution\n", - "- Apply **two-stage optimization** for large investment problems\n", - "- Use **`transform.fix_sizes()`** to lock in investment decisions\n", - "- Compare **speed vs. accuracy** trade-offs\n", - "\n", - "### Key Takeaways\n", - "\n", - "1. **Start fast**: Use resampling for initial exploration\n", - "2. **Iterate**: Refine with two-stage optimization\n", - "3. **Validate**: Run full optimization for final results\n", - "4. **Monitor**: Check cost gaps to ensure acceptable accuracy\n", - "\n", - "### Further Reading\n", - "\n", - "- For clustering with typical periods, see `transform.cluster()` (requires `tsam` package)\n", - "- For time selection, see `transform.sel()` and `transform.isel()`" - ] + "source": "## Summary\n\nYou learned how to:\n\n- Use **`transform.resample()`** to reduce time resolution\n- Apply **two-stage optimization** for large investment problems\n- Use **`transform.fix_sizes()`** to lock in investment decisions\n- Compare **speed vs. accuracy** trade-offs\n\n### Key Takeaways\n\n1. **Start fast**: Use resampling for initial exploration\n2. **Iterate**: Refine with two-stage optimization\n3. **Validate**: Run full optimization for final results\n4. 
**Monitor**: Check cost gaps to ensure acceptable accuracy\n\n### Next Steps\n\n- **[08b-Rolling Horizon](08b-rolling-horizon.ipynb)**: For operational problems without investment decisions, decompose time into sequential segments\n\n### Further Reading\n\n- For clustering with typical periods, see `transform.cluster()` (requires `tsam` package)\n- For time selection, see `transform.sel()` and `transform.isel()`" } ], "metadata": { diff --git a/docs/notebooks/08b-rolling-horizon.ipynb b/docs/notebooks/08b-rolling-horizon.ipynb new file mode 100644 index 000000000..67edf9aa8 --- /dev/null +++ b/docs/notebooks/08b-rolling-horizon.ipynb @@ -0,0 +1,4876 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "0", + "metadata": {}, + "source": "# Rolling Horizon\n\nSolve large operational problems by decomposing the time horizon into sequential segments.\n\nThis notebook introduces:\n\n- **Rolling horizon optimization**: Divide time into overlapping segments\n- **State transfer**: Pass storage states and flow history between segments\n- **When to use**: Memory limits, operational planning with limited foresight\n\nWe use a realistic district heating system with CHP, boiler, and storage to demonstrate the approach." 
+ }, + { + "cell_type": "markdown", + "id": "1", + "metadata": {}, + "source": [ + "## Setup" + ] + }, + { + "cell_type": "code", + "id": "2", + "metadata": { + "ExecuteTime": { + "end_time": "2025-12-13T19:01:44.873555Z", + "start_time": "2025-12-13T19:01:40.936227Z" + } + }, + "source": [ + "import timeit\n", + "\n", + "import pandas as pd\n", + "import plotly.express as px\n", + "import plotly.graph_objects as go\n", + "import xarray as xr\n", + "from plotly.subplots import make_subplots\n", + "\n", + "import flixopt as fx\n", + "\n", + "fx.CONFIG.notebook()" + ], + "outputs": [ + { + "data": { + "text/plain": [ + "flixopt.config.CONFIG" + ] + }, + "execution_count": 1, + "metadata": {}, + "output_type": "execute_result" + } + ], + "execution_count": 1 + }, + { + "cell_type": "markdown", + "id": "3", + "metadata": {}, + "source": "## Load Time Series Data\n\nWe use real-world district heating data at 15-minute resolution (two weeks):" + }, + { + "cell_type": "code", + "id": "4", + "metadata": { + "ExecuteTime": { + "end_time": "2025-12-13T19:01:45.078418Z", + "start_time": "2025-12-13T19:01:44.973157Z" + } + }, + "source": "# Load time series data (15-min resolution)\ndata = pd.read_csv('../../examples/resources/Zeitreihen2020.csv', index_col=0, parse_dates=True).sort_index()\ndata = data['2020-01-01':'2020-01-14 23:45:00'] # Two weeks\ndata.index.name = 'time' # Rename index for consistency\n\ntimesteps = data.index\n\n# Extract profiles\nelectricity_demand = data['P_Netz/MW'].to_numpy()\nheat_demand = data['Q_Netz/MW'].to_numpy()\nelectricity_price = data['Strompr.€/MWh'].to_numpy()\ngas_price = data['Gaspr.€/MWh'].to_numpy()\n\nprint(f'Timesteps: {len(timesteps)} ({len(timesteps) / 96:.0f} days at 15-min resolution)')\nprint(f'Heat demand: {heat_demand.min():.1f} - {heat_demand.max():.1f} MW')\nprint(f'Electricity price: {electricity_price.min():.1f} - {electricity_price.max():.1f} €/MWh')", + "outputs": [], + "execution_count": null + }, + { + "cell_type": 
"code", + "id": "5", + "metadata": { + "ExecuteTime": { + "end_time": "2025-12-13T19:01:45.204918Z", + "start_time": "2025-12-13T19:01:45.183230Z" + } + }, + "source": [ + "def build_system(timesteps, heat_demand, electricity_demand, electricity_price, gas_price):\n", + " \"\"\"Build a district heating system with CHP, boiler, and storage.\"\"\"\n", + " fs = fx.FlowSystem(timesteps)\n", + "\n", + " # Effects\n", + "\n", + " # Buses\n", + " fs.add_elements(\n", + " fx.Bus('Electricity'),\n", + " fx.Bus('Heat'),\n", + " fx.Bus('Gas'),\n", + " fx.Bus('Coal'),\n", + " fx.Effect('costs', '€', 'Total Costs', is_standard=True, is_objective=True),\n", + " fx.Effect('CO2', 'kg', 'CO2 Emissions'),\n", + " fx.linear_converters.CHP(\n", + " 'CHP',\n", + " thermal_efficiency=0.58,\n", + " electrical_efficiency=0.22,\n", + " status_parameters=fx.StatusParameters(effects_per_startup=24000),\n", + " electrical_flow=fx.Flow('P_el', bus='Electricity', size=200),\n", + " thermal_flow=fx.Flow('Q_th', bus='Heat', size=200),\n", + " fuel_flow=fx.Flow('Q_fu', bus='Coal', size=288, relative_minimum=87 / 288, previous_flow_rate=100),\n", + " ),\n", + " fx.linear_converters.Boiler(\n", + " 'Boiler',\n", + " thermal_efficiency=0.85,\n", + " thermal_flow=fx.Flow('Q_th', bus='Heat'),\n", + " fuel_flow=fx.Flow(\n", + " 'Q_fu',\n", + " bus='Gas',\n", + " size=95,\n", + " relative_minimum=12 / 95,\n", + " previous_flow_rate=20,\n", + " status_parameters=fx.StatusParameters(effects_per_startup=1000),\n", + " ),\n", + " ),\n", + " fx.Storage(\n", + " 'Storage',\n", + " capacity_in_flow_hours=684,\n", + " initial_charge_state=137,\n", + " minimal_final_charge_state=137,\n", + " maximal_final_charge_state=158,\n", + " eta_charge=1,\n", + " eta_discharge=1,\n", + " relative_loss_per_hour=0.001,\n", + " prevent_simultaneous_charge_and_discharge=True,\n", + " charging=fx.Flow('Charge', size=137, bus='Heat'),\n", + " discharging=fx.Flow('Discharge', size=158, bus='Heat'),\n", + " ),\n", + " 
fx.Source(\n", + " 'GasGrid',\n", + " outputs=[fx.Flow('Q_Gas', bus='Gas', size=1000, effects_per_flow_hour={'costs': gas_price, 'CO2': 0.3})],\n", + " ),\n", + " fx.Source(\n", + " 'CoalSupply',\n", + " outputs=[fx.Flow('Q_Coal', bus='Coal', size=1000, effects_per_flow_hour={'costs': 4.6, 'CO2': 0.3})],\n", + " ),\n", + " fx.Source(\n", + " 'GridBuy',\n", + " outputs=[\n", + " fx.Flow(\n", + " 'P_el',\n", + " bus='Electricity',\n", + " size=1000,\n", + " effects_per_flow_hour={'costs': electricity_price + 0.5, 'CO2': 0.3},\n", + " )\n", + " ],\n", + " ),\n", + " fx.Sink(\n", + " 'GridSell',\n", + " inputs=[fx.Flow('P_el', bus='Electricity', size=1000, effects_per_flow_hour=-(electricity_price - 0.5))],\n", + " ),\n", + " fx.Sink('HeatDemand', inputs=[fx.Flow('Q_th', bus='Heat', size=1, fixed_relative_profile=heat_demand)]),\n", + " fx.Sink(\n", + " 'ElecDemand', inputs=[fx.Flow('P_el', bus='Electricity', size=1, fixed_relative_profile=electricity_demand)]\n", + " ),\n", + " )\n", + "\n", + " return fs\n", + "\n", + "\n", + "flow_system = build_system(timesteps, heat_demand, electricity_demand, electricity_price, gas_price)\n", + "print(f'System: {len(timesteps)} timesteps')" + ], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "System: 1344 timesteps\n" + ] + } + ], + "execution_count": 3 + }, + { + "cell_type": "markdown", + "id": "6", + "metadata": {}, + "source": [ + "## Full Optimization (Baseline)\n", + "\n", + "First, solve the full problem as a baseline:" + ] + }, + { + "cell_type": "code", + "id": "7", + "metadata": { + "ExecuteTime": { + "end_time": "2025-12-13T19:02:56.367270Z", + "start_time": "2025-12-13T19:01:45.486690Z" + } + }, + "source": [ + "solver = fx.solvers.HighsSolver()\n", + "\n", + "start = timeit.default_timer()\n", + "fs_full = flow_system.copy()\n", + "fs_full.optimize(solver)\n", + "time_full = timeit.default_timer() - start\n", + "\n", + "print(f'Full optimization: {time_full:.2f} seconds')\n", + 
"print(f'Cost: {fs_full.solution[\"costs\"].item():,.0f} €')" + ], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[2m2025-12-13 20:01:45.496\u001b[0m \u001b[33mWARNING \u001b[0m │ FlowSystem is not connected_and_transformed. Connecting and transforming data now.\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Writing constraints.: 100%|\u001b[38;2;128;191;255m██████████\u001b[0m| 74/74 [00:00<00:00, 152.15it/s]\n", + "Writing continuous variables.: 100%|\u001b[38;2;128;191;255m██████████\u001b[0m| 56/56 [00:00<00:00, 378.63it/s]\n", + "Writing binary variables.: 100%|\u001b[38;2;128;191;255m██████████\u001b[0m| 11/11 [00:00<00:00, 335.35it/s]\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Running HiGHS 1.12.0 (git hash: 755a8e0): Copyright (c) 2025 HiGHS under MIT licence terms\n", + "MIP linopy-problem-lqmu4n6b has 53792 rows; 51102 cols; 168036 nonzeros; 14784 integer variables (14784 binary)\n", + "Coefficient ranges:\n", + " Matrix [1e-05, 2e+04]\n", + " Cost [1e+00, 1e+00]\n", + " Bound [1e+00, 1e+03]\n", + " RHS [1e-05, 2e+02]\n", + "WARNING: Problem has some excessively small row bounds\n", + "Presolving model\n", + "29568 rows, 24168 cols, 76581 nonzeros 0s\n", + "25322 rows, 18981 cols, 72124 nonzeros 0s\n", + "24294 rows, 18173 cols, 69378 nonzeros 0s\n", + "Presolve reductions: rows 24294(-29498); columns 18173(-32929); nonzeros 69378(-98658) \n", + "\n", + "Solving MIP model with:\n", + " 24294 rows\n", + " 18173 cols (13978 binary, 0 integer, 0 implied int., 4195 continuous, 0 domain fixed)\n", + " 69378 nonzeros\n", + "\n", + "Src: B => Branching; C => Central rounding; F => Feasibility pump; H => Heuristic;\n", + " I => Shifting; J => Feasibility jump; L => Sub-MIP; P => Empty MIP; R => Randomized rounding;\n", + " S => Solve LP; T => Evaluate node; U => Unbounded; X => User solution; Y => HiGHS solution;\n", + " Z => ZI Round; l => Trivial lower; p 
=> Trivial point; u => Trivial upper; z => Trivial zero\n", + "\n", + " Nodes | B&B Tree | Objective Bounds | Dynamic Constraints | Work \n", + "Src Proc. InQueue | Leaves Expl. | BestBound BestSol Gap | Cuts InLp Confl. | LpIters Time\n", + "\n", + " 0 0 0 0.00% 1030491.76973 inf inf 0 0 0 0 0.5s\n", + " 0 0 0 0.00% 1540084.733469 inf inf 0 0 0 9609 1.0s\n", + " C 0 0 0 0.00% 1540120.790012 1662404.794591 7.36% 10533 2312 0 13317 3.5s\n", + " 0 0 0 0.00% 1540129.709896 1662404.794591 7.36% 10993 2339 0 14788 8.6s\n", + " 0 0 0 0.00% 1540135.248328 1662404.794591 7.35% 10379 2761 0 17251 13.8s\n", + " 0 0 0 0.00% 1540139.906068 1662404.794591 7.35% 10794 2560 0 18445 19.2s\n", + " 0 0 0 0.00% 1540142.257624 1662404.794591 7.35% 10289 2512 0 19682 24.4s\n", + " L 0 0 0 0.00% 1540142.371061 1591562.49514 3.23% 10213 2579 0 19922 31.8s\n", + " L 0 0 0 0.00% 1540142.371061 1540200.034522 0.00% 10213 2579 0 23652 68.0s\n", + " 1 0 1 100.00% 1540142.371061 1540200.034522 0.00% 9659 2579 0 31373 68.0s\n", + "\n", + "Solving report\n", + " Model linopy-problem-lqmu4n6b\n", + " Status Optimal\n", + " Primal bound 1540200.03452\n", + " Dual bound 1540142.37106\n", + " Gap 0.00374% (tolerance: 1%)\n", + " P-D integral 3.24885572414\n", + " Solution status feasible\n", + " 1540200.03452 (objective)\n", + " 0 (bound viol.)\n", + " 6.93576991062e-07 (int. 
viol.)\n", + " 0 (row viol.)\n", + " Timing 68.01\n", + " Max sub-MIP depth 2\n", + " Nodes 1\n", + " Repair LPs 0\n", + " LP iterations 31373\n", + " 0 (strong br.)\n", + " 10313 (separation)\n", + " 11450 (heuristics)\n", + "Full optimization: 70.87 seconds\n", + "Cost: 1,540,200 €\n" + ] + } + ], + "execution_count": 4 + }, + { + "cell_type": "markdown", + "id": "8", + "metadata": {}, + "source": "## Rolling Horizon Optimization\n\nThe `optimize.rolling_horizon()` method divides the time horizon into segments that are solved sequentially:\n\n```\nFull horizon: |---------- 1344 timesteps (14 days) ----------|\n \nSegment 1: |==== 192 (2 days) ====|-- overlap --|\nSegment 2: |==== 192 (2 days) ====|-- overlap --|\nSegment 3: |==== 192 (2 days) ====|-- overlap --|\n... \n```\n\nKey parameters:\n- **horizon**: Timesteps per segment (excluding overlap)\n- **overlap**: Additional lookahead timesteps (improves storage optimization)\n- **nr_of_previous_values**: Flow history transferred between segments" + }, + { + "cell_type": "code", + "id": "9", + "metadata": { + "ExecuteTime": { + "end_time": "2025-12-13T19:03:27.454194Z", + "start_time": "2025-12-13T19:02:56.525964Z" + } + }, + "source": [ + "start = timeit.default_timer()\n", + "fs_rolling = flow_system.copy()\n", + "segments = fs_rolling.optimize.rolling_horizon(\n", + " solver,\n", + " horizon=192, # 2-day segments (192 timesteps at 15-min resolution)\n", + " overlap=96, # 1-day lookahead\n", + ")\n", + "time_rolling = timeit.default_timer() - start\n", + "\n", + "print(f'Rolling horizon: {time_rolling:.2f} seconds ({len(segments)} segments)')\n", + "print(f'Cost: {fs_rolling.solution[\"costs\"].item():,.0f} €')" + ], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Segment 1/7 (timesteps 0-288): 0%| | 0/7 [00:00" + ], + "text/html": [ + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " 
\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
 Time [s]Cost [€]Cost Gap [%]
Method   
Full optimization70.871,540,2000.00
Rolling horizon30.921,540,6760.03
\n" + ] + }, + "execution_count": 6, + "metadata": {}, + "output_type": "execute_result" + } + ], + "execution_count": 6 + }, + { + "cell_type": "markdown", + "id": "12", + "metadata": {}, + "source": [ + "## Visualize: Heat Balance Comparison" + ] + }, + { + "cell_type": "code", + "id": "13", + "metadata": { + "ExecuteTime": { + "end_time": "2025-12-13T19:03:28.570509Z", + "start_time": "2025-12-13T19:03:27.661Z" + } + }, + "source": [ + "fs_full.statistics.plot.balance('Heat').figure.update_layout(title='Heat Balance (Full)')" + ], + "outputs": [ + { + "data": { + "text/html": [ + " \n", + " \n", + " " + ] + }, + "metadata": {}, + "output_type": "display_data", + "jetTransient": { + "display_id": null + } + }, + { + "data": { + "text/html": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data", + "jetTransient": { + "display_id": null + } + } + ], + "execution_count": 7 + }, + { + "cell_type": "code", + "id": "14", + "metadata": { + "ExecuteTime": { + "end_time": "2025-12-13T19:03:29.994610Z", + "start_time": "2025-12-13T19:03:29.772272Z" + } + }, + "source": [ + "fs_rolling.statistics.plot.balance('Heat').figure.update_layout(title='Heat Balance (Rolling)')" + ], + "outputs": [ + { + "data": { + "text/html": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data", + "jetTransient": { + "display_id": null + } + } + ], + "execution_count": 8 + }, + { + "cell_type": "markdown", + "id": "15", + "metadata": {}, + "source": [ + "## Storage State Continuity\n", + "\n", + "Rolling horizon transfers storage charge states between segments to ensure continuity:" + ] + }, + { + "cell_type": "code", + "id": "16", + "metadata": { + "ExecuteTime": { + "end_time": "2025-12-13T19:03:30.194568Z", + "start_time": "2025-12-13T19:03:30.141045Z" + } + }, + "source": [ + "fig = make_subplots(\n", + " rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.1, subplot_titles=['Full Optimization', 'Rolling Horizon']\n", + ")\n", + "\n", + "# Full optimization\n", + "charge_full = fs_full.solution['Storage|charge_state'].values[:-1] # Drop final value\n", + "fig.add_trace(go.Scatter(x=timesteps, y=charge_full, name='Full', line=dict(color='blue')), row=1, col=1)\n", + "\n", + "# Rolling horizon\n", + "charge_rolling = fs_rolling.solution['Storage|charge_state'].values[:-1]\n", + "fig.add_trace(go.Scatter(x=timesteps, y=charge_rolling, name='Rolling', line=dict(color='orange')), row=2, col=1)\n", + "\n", + "fig.update_yaxes(title_text='Charge State [MWh]', row=1, col=1)\n", + "fig.update_yaxes(title_text='Charge State [MWh]', row=2, col=1)\n", + "fig.update_layout(height=400, showlegend=False)\n", + "fig.show()" + ], + "outputs": [ + { + "data": { + "text/html": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data", + "jetTransient": { + "display_id": null + } + } + ], + "execution_count": 9 + }, + { + "cell_type": "markdown", + "id": "17", + "metadata": {}, + "source": [ + "## Inspect Individual Segments\n", + "\n", + "The method returns the individual segment FlowSystems, which can be inspected:" + ] + }, + { + "cell_type": "code", + "id": "18", + "metadata": { + "ExecuteTime": { + "end_time": "2025-12-13T19:03:30.246423Z", + "start_time": "2025-12-13T19:03:30.228470Z" + } + }, + "source": [ + "print(f'Number of segments: {len(segments)}')\n", + "print()\n", + "for i, seg in enumerate(segments):\n", + " start_time = seg.timesteps[0]\n", + " end_time = seg.timesteps[-1]\n", + " cost = seg.solution['costs'].item()\n", + " print(\n", + " f'Segment {i + 1}: {start_time.strftime(\"%Y-%m-%d %H:%M\")} → {end_time.strftime(\"%Y-%m-%d %H:%M\")} | Cost: {cost:,.0f} €'\n", + " )" + ], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Number of segments: 7\n", + "\n", + "Segment 1: 2020-01-01 00:00 → 2020-01-03 23:45 | Cost: 318,658 €\n", + "Segment 2: 2020-01-03 00:00 → 2020-01-05 23:45 | Cost: 275,399 €\n", + "Segment 3: 2020-01-05 00:00 → 2020-01-07 23:45 | Cost: 335,051 €\n", + "Segment 4: 2020-01-07 00:00 → 2020-01-09 23:45 | Cost: 406,345 €\n", + "Segment 5: 2020-01-09 00:00 → 2020-01-11 23:45 | Cost: 356,730 €\n", + "Segment 6: 2020-01-11 00:00 → 2020-01-13 23:45 | Cost: 275,606 €\n", + "Segment 7: 2020-01-13 00:00 → 2020-01-14 23:45 | Cost: 270,066 €\n" + ] + } + ], + "execution_count": 10 + }, + { + "cell_type": "markdown", + "id": "19", + "metadata": {}, + "source": "## Visualize Segment Overlaps\n\nUnderstanding how segments overlap is key to tuning rolling horizon. 
Let's visualize the flow rates from each segment including their overlap regions:" + }, + { + "cell_type": "code", + "id": "20", + "metadata": { + "ExecuteTime": { + "end_time": "2025-12-13T19:03:30.528986Z", + "start_time": "2025-12-13T19:03:30.272546Z" + } + }, + "source": [ + "# Concatenate all segment solutions into one dataset (including overlaps)\n", + "ds = xr.concat([seg.solution for seg in segments], dim=pd.RangeIndex(len(segments), name='segment'), join='outer')\n", + "\n", + "# Plot boiler thermal flow across all segments - each segment as a separate line\n", + "px.line(\n", + " ds['Boiler(Q_th)|flow_rate'].to_pandas().T,\n", + " labels={'value': 'Boiler Thermal Output [MW]', 'index': 'Timestep'},\n", + ")" + ], + "outputs": [ + { + "data": { + "text/html": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data", + "jetTransient": { + "display_id": null + } + } + ], + "execution_count": 11 + }, + { + "metadata": { + "ExecuteTime": { + "end_time": "2025-12-13T19:03:30.963250Z", + "start_time": "2025-12-13T19:03:30.651056Z" + } + }, + "cell_type": "code", + "source": [ + "px.line(\n", + " ds['Storage|charge_state'].to_pandas().T,\n", + " labels={'value': 'Storage Charge State [MW]', 'index': 'Timestep'},\n", + ")" + ], + "id": "d7c660381f2190e0", + "outputs": [ + { + "data": { + "text/html": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data", + "jetTransient": { + "display_id": null + } + } + ], + "execution_count": 12 + }, + { + "cell_type": "markdown", + "id": "21", + "metadata": {}, + "source": [ + "## When to Use Rolling Horizon\n", + "\n", + "| Use Case | Recommendation |\n", + "|----------|----------------|\n", + "| **Memory limits** | Large problems that exceed available memory |\n", + "| **Operational planning** | When limited foresight is realistic |\n", + "| **Quick approximate solutions** | Faster than full optimization |\n", + "| **Investment decisions** | Use full optimization instead |\n", + "\n", + "### Limitations\n", + "\n", + "- **No investments**: `InvestParameters` are not supported (raises error)\n", + "- **Suboptimal storage**: Limited foresight may miss long-term storage opportunities\n", + "- **Global constraints**: `flow_hours_max` etc. cannot be enforced globally" + ] + }, + { + "cell_type": "markdown", + "id": "22", + "metadata": {}, + "source": [ + "## API Reference\n", + "\n", + "```python\n", + "segments = flow_system.optimize.rolling_horizon(\n", + " solver, # Solver instance\n", + " horizon=192, # Timesteps per segment (e.g., 2 days at 15-min resolution)\n", + " overlap=48, # Additional lookahead timesteps (e.g., 12 hours)\n", + " nr_of_previous_values=1, # Flow history for uptime/downtime tracking\n", + ")\n", + "\n", + "# Combined solution on original FlowSystem\n", + "flow_system.solution['costs'].item()\n", + "\n", + "# Individual segment solutions\n", + "for seg in segments:\n", + " print(seg.solution['costs'].item())\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "23", + "metadata": {}, + "source": "## Summary\n\nYou learned how to:\n\n- Use **`optimize.rolling_horizon()`** to decompose large problems\n- Choose **horizon** and **overlap** parameters\n- Understand the **trade-offs** vs. full optimization\n\n### Key Takeaways\n\n1. 
**Rolling horizon** is useful for memory-limited or operational planning problems\n2. **Overlap** improves solution quality at the cost of computation time\n3. **Storage states** are automatically transferred between segments\n4. Use **full optimization** for investment decisions\n\n### Related Notebooks\n\n- **[08a-Aggregation](08a-aggregation.ipynb)**: For investment problems, use time series aggregation (resampling, clustering) instead" + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "name": "python", + "version": "3.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/docs/notebooks/10-transmission.ipynb b/docs/notebooks/10-transmission.ipynb index 898d092c0..8c74e4e8c 100644 --- a/docs/notebooks/10-transmission.ipynb +++ b/docs/notebooks/10-transmission.ipynb @@ -384,22 +384,7 @@ "cell_type": "markdown", "id": "29", "metadata": {}, - "source": [ - "## Summary\n", - "\n", - "You learned how to:\n", - "\n", - "- Create **unidirectional transmission** between two buses\n", - "- Model **bidirectional transmission** with flow direction constraints\n", - "- Apply **relative and absolute losses** to transmission\n", - "- Optimize **transmission capacity** using InvestParameters\n", - "- Analyze **multi-site energy systems** with interconnections\n", - "\n", - "### Next Steps\n", - "\n", - "- **[07-scenarios-and-periods](07-scenarios-and-periods.ipynb)**: Multi-year planning with uncertainty\n", - "- **[08-large-scale-optimization](08-large-scale-optimization.ipynb)**: Computational efficiency techniques" - ] + "source": "## Summary\n\nYou learned how to:\n\n- Create **unidirectional transmission** between two buses\n- Model **bidirectional transmission** with flow direction constraints\n- Apply **relative and absolute losses** to transmission\n- Optimize **transmission capacity** using InvestParameters\n- Analyze **multi-site energy systems** with 
interconnections\n\n### Next Steps\n\n- **[07-scenarios-and-periods](07-scenarios-and-periods.ipynb)**: Multi-year planning with uncertainty\n- **[08a-Aggregation](08a-aggregation.ipynb)**: Speed up large problems with time series aggregation\n- **[08b-Rolling Horizon](08b-rolling-horizon.ipynb)**: Decompose large problems into sequential segments" } ], "metadata": { diff --git a/docs/notebooks/index.md b/docs/notebooks/index.md index 067d5247b..233f6be1b 100644 --- a/docs/notebooks/index.md +++ b/docs/notebooks/index.md @@ -36,7 +36,8 @@ Learn flixopt through practical examples organized by topic. Each notebook inclu | Notebook | Description | |----------|-------------| | [07-Scenarios](07-scenarios-and-periods.ipynb) | Multi-year planning with uncertain demand scenarios | -| [08-Large-Scale](08-large-scale-optimization.ipynb) | Speed up large problems with resampling and two-stage optimization | +| [08a-Aggregation](08a-aggregation.ipynb) | Speed up large problems with resampling and two-stage optimization | +| [08b-Rolling Horizon](08b-rolling-horizon.ipynb) | Decompose large problems into sequential time segments | ## Results @@ -58,5 +59,6 @@ Learn flixopt through practical examples organized by topic. 
Each notebook inclu | `PiecewiseConversion`, part-load efficiency | Piecewise Conversion | | `PiecewiseEffects`, economies of scale | Piecewise Effects | | Periods, scenarios, weights | Scenarios | -| `transform.resample()`, `fix_sizes()` | Large-Scale | +| `transform.resample()`, `fix_sizes()` | Aggregation | +| `optimize.rolling_horizon()` | Rolling Horizon | | `statistics`, `topology`, plotting | Plotting | diff --git a/docs/user-guide/migration-guide-v5.md b/docs/user-guide/migration-guide-v5.md index 5c19761e0..0c43e18f0 100644 --- a/docs/user-guide/migration-guide-v5.md +++ b/docs/user-guide/migration-guide-v5.md @@ -325,17 +325,27 @@ Clustered optimization uses the new transform accessor: # Results in clustered_fs.solution ``` -### Segmented Optimization (Not Yet Migrated) +### Segmented / Rolling Horizon Optimization -Segmented optimization still uses the class-based API. A new `optimize.rolling()` method is planned for a future release. +=== "v4.x (Old)" + ```python + calc = fx.SegmentedOptimization('model', flow_system, + timesteps_per_segment=96) + calc.do_modeling_and_solve(solver) + results = calc.results # Returns SegmentedResults + ``` -```python -# Still use the class-based API (unchanged from v4.x) -calc = fx.SegmentedOptimization('model', flow_system, - timesteps_per_segment=96) -calc.do_modeling_and_solve(solver) -results = calc.results # Returns SegmentedResults -``` +=== "v5.0.0 (New)" + ```python + # Use optimize.rolling_horizon() method + segments = flow_system.optimize.rolling_horizon( + solver, + horizon=96, # Timesteps per segment + overlap=12, # Lookahead for storage optimization + ) + # Combined solution on original FlowSystem + flow_system.solution['costs'].item() + ``` --- diff --git a/docs/user-guide/support.md b/docs/user-guide/support.md index 517a353a1..eba27c616 100644 --- a/docs/user-guide/support.md +++ b/docs/user-guide/support.md @@ -16,7 +16,7 @@ When opening an issue, include: - [FAQ](faq.md) — Common questions - 
[Troubleshooting](troubleshooting.md) — Common issues - [Examples](../notebooks/index.md) — Working code -- [API Reference](../api-reference/index.md) — Technical docs +- [API Reference](../api-reference/) — Technical docs ## Contributing diff --git a/flixopt/optimize_accessor.py b/flixopt/optimize_accessor.py index 5428cd855..f88cdf982 100644 --- a/flixopt/optimize_accessor.py +++ b/flixopt/optimize_accessor.py @@ -7,12 +7,22 @@ from __future__ import annotations +import logging +import sys from typing import TYPE_CHECKING +import xarray as xr +from tqdm import tqdm + +from .config import CONFIG +from .io import suppress_output + if TYPE_CHECKING: from .flow_system import FlowSystem from .solvers import _Solver +logger = logging.getLogger('flixopt') + class OptimizeAccessor: """ @@ -28,10 +38,10 @@ class OptimizeAccessor: >>> flow_system.optimize(solver) >>> print(flow_system.solution) - Future specialized modes: + Rolling horizon optimization: - >>> flow_system.optimize.clustered(solver, aggregation=params) - >>> flow_system.optimize.mga(solver, alternatives=5) + >>> segments = flow_system.optimize.rolling_horizon(solver, horizon=168) + >>> print(flow_system.solution) # Combined result """ def __init__(self, flow_system: FlowSystem) -> None: @@ -79,13 +89,273 @@ def __call__(self, solver: _Solver, normalize_weights: bool = True) -> FlowSyste self._fs.solve(solver) return self._fs - # Future methods can be added here: - # - # def clustered(self, solver: _Solver, aggregation: AggregationParameters, - # normalize_weights: bool = True) -> FlowSystem: - # """Clustered optimization with time aggregation.""" - # ... - # - # def mga(self, solver: _Solver, alternatives: int = 5) -> FlowSystem: - # """Modeling to Generate Alternatives.""" - # ... + def rolling_horizon( + self, + solver: _Solver, + horizon: int = 100, + overlap: int = 0, + nr_of_previous_values: int = 1, + ) -> list[FlowSystem]: + """ + Solve the optimization using a rolling horizon approach. 
+ + Divides the time horizon into overlapping segments that are solved sequentially. + Each segment uses final values from the previous segment as initial conditions, + ensuring dynamic continuity across the solution. The combined solution is stored + on the original FlowSystem. + + This approach is useful for: + - Large-scale problems that exceed memory limits + - Annual planning with seasonal variations + - Operational planning with limited foresight + + Args: + solver: The solver to use (e.g., HighsSolver, GurobiSolver). + horizon: Number of timesteps in each segment (excluding overlap). + Must be > 2. Larger values provide better optimization at the cost + of memory and computation time. Default: 100. + overlap: Number of additional timesteps added to each segment for lookahead. + Improves storage optimization by providing foresight. Higher values + improve solution quality but increase computational cost. Default: 0. + nr_of_previous_values: Number of previous timestep values to transfer between + segments for initialization (e.g., for uptime/downtime tracking). Default: 1. + + Returns: + List of segment FlowSystems, each with their individual solution. + The combined solution (with overlaps trimmed) is stored on the original FlowSystem. + + Raises: + ValueError: If horizon <= 2 or overlap < 0. + ValueError: If horizon + overlap > total timesteps. + ValueError: If InvestParameters are used (not supported in rolling horizon). + + Examples: + Basic rolling horizon optimization: + + >>> segments = flow_system.optimize.rolling_horizon( + ... solver, + ... horizon=168, # Weekly segments + ... overlap=24, # 1-day lookahead + ... ) + >>> print(flow_system.solution) # Combined result + + Inspect individual segments: + + >>> for i, seg in enumerate(segments): + ... print(f'Segment {i}: {seg.solution["costs"].item():.2f}') + + Note: + - InvestParameters are not supported as investment decisions require + full-horizon optimization. 
+ - Global constraints (flow_hours_max, etc.) may produce suboptimal results + as they cannot be enforced globally across segments. + - Storage optimization may be suboptimal compared to full-horizon solutions + due to limited foresight in each segment. + """ + + # Validation + if horizon <= 2: + raise ValueError('horizon must be greater than 2 to avoid internal side effects.') + if overlap < 0: + raise ValueError('overlap must be non-negative.') + if nr_of_previous_values < 0: + raise ValueError('nr_of_previous_values must be non-negative.') + if nr_of_previous_values > horizon: + raise ValueError('nr_of_previous_values cannot exceed horizon.') + + total_timesteps = len(self._fs.timesteps) + horizon_with_overlap = horizon + overlap + + if horizon_with_overlap > total_timesteps: + raise ValueError( + f'horizon + overlap ({horizon_with_overlap}) cannot exceed total timesteps ({total_timesteps}).' + ) + + # Ensure flow system is connected + if not self._fs.connected_and_transformed: + self._fs.connect_and_transform() + + # Calculate segment indices + segment_indices = self._calculate_segment_indices(total_timesteps, horizon, overlap) + n_segments = len(segment_indices) + logger.info( + f'Starting Rolling Horizon Optimization - Segments: {n_segments}, Horizon: {horizon}, Overlap: {overlap}' + ) + + # Create and solve segments + segment_flow_systems: list[FlowSystem] = [] + + progress_bar = tqdm( + enumerate(segment_indices), + total=n_segments, + desc='Solving segments', + unit='segment', + file=sys.stdout, + disable=not CONFIG.Solving.log_to_console, + ) + + try: + for i, (start_idx, end_idx) in progress_bar: + progress_bar.set_description(f'Segment {i + 1}/{n_segments} (timesteps {start_idx}-{end_idx})') + + # Suppress output when progress bar is shown (including logger and solver) + if CONFIG.Solving.log_to_console: + # Temporarily raise logger level to suppress INFO messages + original_level = logger.level + logger.setLevel(logging.WARNING) + try: + with 
suppress_output(): + segment_fs = self._fs.transform.isel(time=slice(start_idx, end_idx)) + if i > 0 and nr_of_previous_values > 0: + self._transfer_state( + source_fs=segment_flow_systems[i - 1], + target_fs=segment_fs, + horizon=horizon, + nr_of_previous_values=nr_of_previous_values, + ) + segment_fs.build_model() + if i == 0: + self._check_no_investments(segment_fs) + segment_fs.solve(solver) + finally: + logger.setLevel(original_level) + else: + segment_fs = self._fs.transform.isel(time=slice(start_idx, end_idx)) + if i > 0 and nr_of_previous_values > 0: + self._transfer_state( + source_fs=segment_flow_systems[i - 1], + target_fs=segment_fs, + horizon=horizon, + nr_of_previous_values=nr_of_previous_values, + ) + segment_fs.build_model() + if i == 0: + self._check_no_investments(segment_fs) + segment_fs.solve(solver) + + segment_flow_systems.append(segment_fs) + + finally: + progress_bar.close() + + # Combine segment solutions + logger.info('Combining segment solutions...') + self._finalize_solution(segment_flow_systems, horizon) + + logger.info(f'Rolling horizon optimization completed: {n_segments} segments solved.') + + return segment_flow_systems + + def _calculate_segment_indices(self, total_timesteps: int, horizon: int, overlap: int) -> list[tuple[int, int]]: + """Calculate start and end indices for each segment.""" + segments = [] + start = 0 + while start < total_timesteps: + end = min(start + horizon + overlap, total_timesteps) + segments.append((start, end)) + start += horizon # Move by horizon (not horizon + overlap) + if end == total_timesteps: + break + return segments + + def _transfer_state( + self, + source_fs: FlowSystem, + target_fs: FlowSystem, + horizon: int, + nr_of_previous_values: int, + ) -> None: + """Transfer final state from source segment to target segment. 
+ + Transfers: + - Flow previous_flow_rate: Last nr_of_previous_values from non-overlap portion + - Storage initial_charge_state: Charge state at end of non-overlap portion + """ + from .components import Storage + + solution = source_fs.solution + time_slice = slice(horizon - nr_of_previous_values, horizon) + + # Transfer flow rates (for uptime/downtime tracking) + for label, target_flow in target_fs.flows.items(): + var_name = f'{label}|flow_rate' + if var_name in solution: + values = solution[var_name].isel(time=time_slice).values + target_flow.previous_flow_rate = values.item() if values.size == 1 else values + + # Transfer storage charge states + for label, target_comp in target_fs.components.items(): + if isinstance(target_comp, Storage): + var_name = f'{label}|charge_state' + if var_name in solution: + target_comp.initial_charge_state = solution[var_name].isel(time=horizon).item() + + def _check_no_investments(self, segment_fs: FlowSystem) -> None: + """Check that no InvestParameters are used (not supported in rolling horizon).""" + from .features import InvestmentModel + + invest_elements = [] + for component in segment_fs.components.values(): + for model in component.submodel.all_submodels: + if isinstance(model, InvestmentModel): + invest_elements.append(model.label_full) + + if invest_elements: + raise ValueError( + f'InvestParameters are not supported in rolling horizon optimization. ' + f'Found InvestmentModels: {invest_elements}. ' + f'Use standard optimize() for problems with investments.' 
+ ) + + def _finalize_solution( + self, + segment_flow_systems: list[FlowSystem], + horizon: int, + ) -> None: + """Combine segment solutions and compute derived values directly (no re-solve).""" + # Combine all solution variables from segments + combined_solution = self._combine_solutions(segment_flow_systems, horizon) + + # Assign combined solution to the original FlowSystem + self._fs._solution = combined_solution + + def _combine_solutions( + self, + segment_flow_systems: list[FlowSystem], + horizon: int, + ) -> xr.Dataset: + """Combine segment solutions into a single Dataset. + + - Time-dependent variables: concatenated with overlap trimming + - Effect temporal/total: recomputed from per-timestep values + - Other scalars (including periodic): NaN (not meaningful for rolling horizon) + """ + if not segment_flow_systems: + raise ValueError('No segments to combine.') + + effect_labels = set(self._fs.effects.keys()) + combined_vars: dict[str, xr.DataArray] = {} + first_solution = segment_flow_systems[0].solution + + # Step 1: Time-dependent → concatenate; Scalars → NaN + for var_name, first_var in first_solution.data_vars.items(): + if 'time' in first_var.dims: + arrays = [ + seg.solution[var_name].isel( + time=slice(None, horizon if i < len(segment_flow_systems) - 1 else None) + ) + for i, seg in enumerate(segment_flow_systems) + ] + combined_vars[var_name] = xr.concat(arrays, dim='time') + else: + combined_vars[var_name] = xr.DataArray(float('nan')) + + # Step 2: Recompute effect totals from per-timestep values + for effect in effect_labels: + per_ts = f'{effect}(temporal)|per_timestep' + if per_ts in combined_vars: + temporal_sum = combined_vars[per_ts].sum(dim='time', skipna=True) + combined_vars[f'{effect}(temporal)'] = temporal_sum + combined_vars[effect] = temporal_sum # Total = temporal (periodic is NaN/unsupported) + + return xr.Dataset(combined_vars) diff --git a/mkdocs.yml b/mkdocs.yml index 186e109fd..551fac523 100644 --- a/mkdocs.yml +++ b/mkdocs.yml 
@@ -69,7 +69,8 @@ nav: - Piecewise Effects: notebooks/06c-piecewise-effects.ipynb - Scaling: - Scenarios: notebooks/07-scenarios-and-periods.ipynb - - Large-Scale: notebooks/08-large-scale-optimization.ipynb + - Aggregation: notebooks/08a-aggregation.ipynb + - Rolling Horizon: notebooks/08b-rolling-horizon.ipynb - Results: - Plotting: notebooks/09-plotting-and-data-access.ipynb
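The segment windowing added in `optimize_accessor.py` (`_calculate_segment_indices`) is the core of the patch: each window spans `horizon + overlap` timesteps, but the start index only advances by `horizon`, so consecutive segments share the overlap region. A minimal standalone sketch of that arithmetic (the function name here is illustrative; only the windowing logic mirrors the patched method), using the notebook's run of 1344 timesteps with `horizon=192` and `overlap=96`:

```python
def segment_indices(total_timesteps: int, horizon: int, overlap: int) -> list[tuple[int, int]]:
    """Sketch of the rolling-horizon windowing from _calculate_segment_indices."""
    segments = []
    start = 0
    while start < total_timesteps:
        # Each window includes the overlap, but is clipped at the end of the horizon
        end = min(start + horizon + overlap, total_timesteps)
        segments.append((start, end))
        start += horizon  # advance by horizon only, so windows overlap by `overlap`
        if end == total_timesteps:
            break
    return segments

# 14 days at 15-min resolution, 2-day segments, 1-day lookahead (as in the notebook)
windows = segment_indices(1344, 192, 96)
print(len(windows))   # 7 segments, matching "Segment 1/7" in the notebook output
print(windows[0])     # (0, 288) -> "timesteps 0-288"
print(windows[-1])    # (1152, 1344) -> final window is clipped, no overlap past the end
```

This also shows why `_combine_solutions` trims each non-final segment back to `horizon` timesteps before concatenating: the overlap region is re-solved by the next segment, so only the first `horizon` steps of each window enter the combined solution.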