{ "cells": [ { "cell_type": "markdown", "id": "71bc8fd4", "metadata": {}, "source": [
 "# Losses Tutorial\n",
 "\n",
 "This tutorial demonstrates how to configure and run a **multi-link losses analysis** in Python using RA2CE.\n",
 "While the Damages analysis estimates direct physical repair costs, the Losses analysis quantifies the **economic (indirect) impacts** of service disruption on users of the network.\n",
 "\n",
 "The losses analysis builds on the **criticality of the road network** and translates disruptions into **Vehicle Loss Hours (VLH)**, which are then converted into **monetary values (€)** using traffic data and values of time."
] }, { "cell_type": "markdown", "id": "aa3634c8", "metadata": {}, "source": [
 "## Required Inputs\n",
 "\n",
 "The following inputs are required:\n",
 "\n",
 "- **Criticality** of each road link (alternative route length/time).\n",
 "- **Resilience curve**: how quickly traffic recovers on a disrupted link.\n",
 "- **Traffic intensities**: how many vehicles use each link, per time period and trip purpose.\n",
 "- **Values of time**: the economic value (€/hour) of travel time for different trip purposes.\n",
 "\n",
 "All of these inputs are necessary to perform a losses analysis. The criticality analysis is automatically performed as part of the losses workflow and does not require further attention in this tutorial. Step 3 explains how to prepare the other required input data.\n",
 "\n",
 "The output is expressed as:\n",
 "\n",
 "- **Vehicle Loss Hours (VLH)** → total hours of lost accessibility due to disruptions.\n",
 "- Results broken down per **trip purpose** (commute, freight, business, etc.) and **time of day** (daily, rush hour, etc.), as well as aggregated totals.\n",
 "\n",
 "This makes the Losses module essential for answering questions like:\n",
 "\n",
 "- *\"What is the expected economic loss due to the unavailability of parts of the road network during a natural hazard (e.g. a flood event)?\"*\n",
 "- *\"What is the Expected Annual Loss (EAL) from repeated hazard events with different return periods?\"*"
] }, { "cell_type": "markdown", "id": "0f0ab13e", "metadata": {}, "source": [
 "## Step 1: Project Setup\n",
 "\n",
 "We start by defining the root project directory and paths for input/output data:"
] }, { "cell_type": "code", "execution_count": 1, "id": "5ead0365", "metadata": {}, "outputs": [], "source": [
 "from pathlib import Path\n",
 "\n",
 "root_dir = Path(\"data\", \"losses\")\n",
 "\n",
 "static_path = root_dir.joinpath(\"static\")\n",
 "hazard_path = static_path.joinpath(\"hazard\")\n",
 "output_path = root_dir.joinpath(\"output\")\n",
 "input_analysis_path = root_dir.joinpath(\"input_analysis_data\")  # Folder where the input data for the losses analysis is stored"
] }, { "cell_type": "markdown", "id": "ed2dccd5", "metadata": {}, "source": [
 "## Step 2: Define the Network\n",
 "\n",
 "We configure the **network section** by providing the shapefile with road links.\n",
 "\n",
 "The losses analysis requires specific additional data (e.g., traffic intensity for every link).\n",
 "Therefore, it is **not recommended** to use OSM data for this analysis, as it would be difficult to collect all required input data for every OSM link.\n",
 "We strongly recommend using a pre-defined shapefile as the source for the road network.\n",
 "\n",
 "It is essential to specify a unique `file_id` (e.g., `ID`), which corresponds to a unique identifier in the network shapefile.\n",
 "This will later be used in Step 3 to match traffic intensities with the corresponding road links."
] }
, { "cell_type": "code", "execution_count": null, "id": "a9a442e7", "metadata": {}, "outputs": [], "source": [
 "from ra2ce.network.network_config_data.enums.source_enum import SourceEnum\n",
 "from ra2ce.network.network_config_data.network_config_data import NetworkSection\n",
 "\n",
 "network_section = NetworkSection(\n",
 "    directed=False,\n",
 "    source=SourceEnum.SHAPEFILE,\n",
 "    primary_file=static_path.joinpath(\"network\", \"network.shp\"),\n",
 "    file_id=\"ID\",  # Needed for matching with the traffic intensities\n",
 "    save_gpkg=True,\n",
 ")\n"
] }, { "cell_type": "markdown", "id": "eef7e388", "metadata": {}, "source": [
 "We now specify the hazard data. The hazard maps (e.g., flood depth rasters) are read as GeoTIFFs.\n",
 "\n",
 "The losses analysis produces results only for road links (node to node), not for segmented networks (e.g., segments of 100 m). Therefore, we set `overlay_segmented_network=False` to reduce computation time."
] }, { "cell_type": "code", "execution_count": 4, "id": "16845e71", "metadata": {}, "outputs": [], "source": [
 "from ra2ce.network.network_config_data.enums.aggregate_wl_enum import AggregateWlEnum\n",
 "from ra2ce.network.network_config_data.network_config_data import HazardSection\n",
 "\n",
 "hazard_section = HazardSection(\n",
 "    hazard_map=[Path(file) for file in hazard_path.glob(\"*.tif\")],\n",
 "    aggregate_wl=AggregateWlEnum.MAX,  # use the maximum water level per link for losses\n",
 "    hazard_crs=\"EPSG:32736\",  # CRS of the flood maps\n",
 "    overlay_segmented_network=False,\n",
 ")"
] }, { "cell_type": "markdown", "id": "5523e093", "metadata": {}, "source": [
 "Both the network and hazard sections are passed to a single configuration object:"
] }, { "cell_type": "code", "execution_count": null, "id": "426cdb11", "metadata": {}, "outputs": [], "source": [
 "from ra2ce.network.network_config_data.network_config_data import NetworkConfigData\n",
 "\n",
 "network_config_data = NetworkConfigData(\n",
 "    root_path=root_dir,\n",
 "    network=network_section,\n",
 "    hazard=hazard_section,\n",
 "    static_path=root_dir.joinpath(\"static\"),\n",
 ")\n"
] }, { "cell_type": "markdown", "id": "ba30a5f1", "metadata": {}, "source": [
 "## Step 3: Prepare required input data\n",
 "\n",
 "Three CSV input files are required for the losses workflow:\n",
 "\n",
 "| **File** | **Content** |\n",
 "|----------|-------------|\n",
 "| `resilience_curve.csv` | Functionality loss and recovery of road links over time after a disruption (ratios on a 0–1 scale). |\n",
 "| `traffic_intensities.csv` | Traffic volumes per link, per trip purpose, per period. |\n",
 "| `values_of_time.csv` | Economic value of time per trip purpose (€/hour). |\n",
 "\n",
 "### Traffic Intensity\n",
 "\n",
 "The **traffic intensity** CSV file contains the average number of vehicles for every link in the network.\n",
 "A distinction can be made between different trip purposes (e.g., commute, freight, business, other)\n",
 "and time periods (e.g., day, evening), since traffic patterns can vary significantly throughout the day.\n",
 "\n",
 "The traffic intensity must be expressed in **number of vehicles per day**.\n",
 "\n",
 "Traffic intensities can be differentiated between trip classes. You have to choose among the\n",
 "pre-defined trip purposes from [TripPurposeEnum](../api/ra2ce.analysis.analysis_config_data.enums.html#ra2ce.analysis.analysis_config_data.enums.trip_purpose_enum.TripPurposeEnum){.api-ref} (`BUSINESS`, `COMMUTE`, `FREIGHT`, `OTHER`) and use the following structure for the column names:\n",
 "`day_typeoftrip` or `evening_typeoftrip`.\n",
 "\n",
 "For example (a sketch for creating this file follows the table):\n",
 "\n",
 "| **ID** | **evening_commute** | **evening_freight** | **evening_total** | **day_commute** | **day_freight** | **day_total** |\n",
 "|--------|---------------------|---------------------|-------------------|-----------------|-----------------|---------------|\n",
 "| 1 | 0 | 0 | 0 | 100 | 200 | 300 |\n",
 "| 2 | 0 | 0 | 0 | 50 | 20 | 70 |\n",
 "| 3 | 10 | 2 | 12 | 30 | 32 | 62 |"
] }
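, { "cell_type": "markdown", "id": "c3f1a2b0", "metadata": {}, "source": [
 "The sketch below shows one possible way to assemble such a file with `pandas`, reusing the example values from the table above. The link IDs are assumed to match the `file_id` column (`ID`) of the network shapefile; replace the numbers with your own traffic counts."
] }, { "cell_type": "code", "execution_count": null, "id": "d4e2b3c1", "metadata": {}, "outputs": [], "source": [
 "import pandas as pd\n",
 "\n",
 "# Illustrative traffic intensities (vehicles per day); the IDs must match\n",
 "# the unique identifiers in the network shapefile (file_id=\"ID\").\n",
 "traffic_intensities = pd.DataFrame(\n",
 "    {\n",
 "        \"ID\": [1, 2, 3],\n",
 "        \"evening_commute\": [0, 0, 10],\n",
 "        \"evening_freight\": [0, 0, 2],\n",
 "        \"evening_total\": [0, 0, 12],\n",
 "        \"day_commute\": [100, 50, 30],\n",
 "        \"day_freight\": [200, 20, 32],\n",
 "        \"day_total\": [300, 70, 62],\n",
 "    }\n",
 ")\n",
 "traffic_intensities.to_csv(input_analysis_path.joinpath(\"traffic_intensities.csv\"), index=False)"
] }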
, { "cell_type": "markdown", "id": "655a6ec7", "metadata": {}, "source": [
 "### Values of time\n",
 "\n",
 "As a user, you also need to specify the **value of time** for every trip class defined in the traffic intensities.\n",
 "It represents the amount of money lost per unit of time (or distance) due to the unavailability of a disrupted link.\n",
 "\n",
 "The value of time can be expressed either:\n",
 "- per **hour of delay** (€/hour), or\n",
 "- per **kilometer of detour** (€/km).\n",
 "\n",
 "The currency is implied by the values entered in this table and must be consistent across all rows.\n",
 "\n",
 "The **average number of occupants per vehicle** is also required for each trip purpose. This is especially important to compute the losses for links without alternative routes.\n",
 "\n",
 "Example (`values_of_time.csv`):\n",
 "\n",
 "| **trip_types** | **value_of_time** | **occupants** |\n",
 "|----------------|-------------------|---------------|\n",
 "| commute | 10 | 1 |\n",
 "| freight | 20 | 2 |\n",
 "\n",
 "In this example, vehicles in the *commute* class experience an economic loss of **10 €/hour of disruption**, while *freight* vehicles lose **20 €/hour**, with two occupants per vehicle."
] }
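, { "cell_type": "markdown", "id": "e5f3c4d2", "metadata": {}, "source": [
 "As before, a minimal `pandas` sketch for writing `values_of_time.csv`. The figures for *business* and *other* below are placeholders added for illustration; replace all values with country- or study-specific ones."
] }, { "cell_type": "code", "execution_count": null, "id": "f6a4d5e3", "metadata": {}, "outputs": [], "source": [
 "import pandas as pd\n",
 "\n",
 "# Illustrative values of time (EUR/hour) and average vehicle occupancy\n",
 "# for every trip class used in the analysis.\n",
 "values_of_time = pd.DataFrame(\n",
 "    {\n",
 "        \"trip_types\": [\"business\", \"commute\", \"freight\", \"other\"],\n",
 "        \"value_of_time\": [25, 10, 20, 8],  # business/other values are assumptions\n",
 "        \"occupants\": [1, 1, 2, 1],\n",
 "    }\n",
 ")\n",
 "values_of_time.to_csv(input_analysis_path.joinpath(\"values_of_time.csv\"), index=False)"
] }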
, { "cell_type": "markdown", "id": "53cd7e92", "metadata": {}, "source": [
 "### Resilience Curves\n",
 "\n",
 "The last required input file defines the **resilience curves** for the different road types in the network.\n",
 "Each road type (e.g., highway, residential) is affected differently by a given hazard level and recovers at its own pace.\n",
 "\n",
 "The column `link_type_hazard_intensity` controls which resilience curve is selected for each road type and hazard level.\n",
 "The hazard intensity uses the same unit as the hazard map.\n",
 "For example, for `highway_0-0.5`, the corresponding resilience curve applies to all links of type *highway* with a hazard intensity between 0 and 0.5.\n",
 "\n",
 "The table must cover **all road types** and **all expected hazard intensities**.\n",
 "\n",
 "The resilience curves are defined with:\n",
 "- **duration steps** (in hours)\n",
 "- **functionality loss ratio** (fraction of functionality lost during each step)\n",
 "\n",
 "**Example interpretation** (first row of the table below):\n",
 "- The disruption starts at t = 0; before that, the link is 100% functional\n",
 "- Between t = 0 and t = 10 h, the link is 50% functional (functionality loss of 0.5)\n",
 "- Between t = 10 h and t = 40 h (10+30), the link is 70% functional\n",
 "- Between t = 40 h and t = 90 h (10+30+50), the link is 90% functional\n",
 "- After t = 90 h, the link is fully functional again\n",
 "\n",
 "| **link_type_hazard_intensity** | **duration_steps** | **functionality_loss_ratio** |\n",
 "|--------------------------------|---------------------|------------------------------|\n",
 "| highway_0-0.5 | [10.0, 30.0, 50.0] | [0.5, 0.3, 0.1] |\n",
 "| highway_0.5-2 | [10.0, 40.0, 100.0] | [0.75, 0.5, 0.25] |\n",
 "| residential_0-2 | [5.0, 10.0, 15.0] | [0.75, 0.5, 0.25] |"
] }
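, { "cell_type": "markdown", "id": "a1c5d6e7", "metadata": {}, "source": [
 "To build intuition for how the three inputs combine into Vehicle Loss Hours, here is a deliberately simplified back-of-the-envelope sketch for a single disrupted *highway* link in the 0–0.5 hazard range. This is a conceptual illustration only, **not** the exact RA2CE algorithm; the `detour_time` is a made-up stand-in for the criticality result."
] }, { "cell_type": "code", "execution_count": null, "id": "b2d6e7f8", "metadata": {}, "outputs": [], "source": [
 "# Conceptual sketch only -- NOT the exact RA2CE computation.\n",
 "# Inputs for one link, taken from the example tables above:\n",
 "duration_steps = [10.0, 30.0, 50.0]  # hours per recovery step (resilience_curve.csv)\n",
 "loss_ratios = [0.5, 0.3, 0.1]        # functionality loss during each step\n",
 "day_commute = 100                    # commute vehicles per day on this link\n",
 "value_of_time = 10                   # EUR per hour of delay (values_of_time.csv)\n",
 "detour_time = 0.5                    # assumed extra travel time (h) via the alternative route\n",
 "\n",
 "vehicles_per_hour = day_commute / 24\n",
 "# Affected vehicles per step = traffic * step duration * share of lost functionality;\n",
 "# each affected vehicle loses `detour_time` hours.\n",
 "vlh = sum(\n",
 "    vehicles_per_hour * duration * loss * detour_time\n",
 "    for duration, loss in zip(duration_steps, loss_ratios)\n",
 ")\n",
 "print(f\"VLH ~ {vlh:.1f} vehicle-hours, monetary loss ~ {vlh * value_of_time:.0f} EUR\")"
] }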
, { "cell_type": "markdown", "id": "482da241", "metadata": {}, "source": [
 "## Step 4: Define the Losses Analysis\n",
 "\n",
 "Now we configure the **losses analysis** using [AnalysisSectionLosses](../api/ra2ce.analysis.analysis_config_data.html#ra2ce.analysis.analysis_config_data.analysis_config_data.AnalysisSectionLosses){.api-ref} with the analysis type set to [MULTI_LINK_LOSSES](../api/ra2ce.analysis.analysis_config_data.enums.html#ra2ce.analysis.analysis_config_data.enums.analysis_losses_enum.AnalysisLossesEnum.MULTI_LINK_LOSSES){.api-ref}.\n",
 "This analysis requires the three CSV input files prepared in Step 3, as well as the **production loss per capita per hour** (in €/hour), the traffic period ([TrafficPeriodEnum](../api/ra2ce.analysis.analysis_config_data.enums.html#module-ra2ce.analysis.analysis_config_data.enums.traffic_period_enum){.api-ref}) and the trip purposes ([TripPurposeEnum](../api/ra2ce.analysis.analysis_config_data.enums.html#ra2ce.analysis.analysis_config_data.enums.trip_purpose_enum.TripPurposeEnum){.api-ref}) to consider."
] }, { "cell_type": "code", "execution_count": null, "id": "955f462d", "metadata": {}, "outputs": [], "source": [
 "from ra2ce.analysis.analysis_config_data.analysis_config_data import (AnalysisConfigData, AnalysisSectionLosses)\n",
 "from ra2ce.analysis.analysis_config_data.enums.analysis_losses_enum import AnalysisLossesEnum\n",
 "from ra2ce.analysis.analysis_config_data.enums.trip_purpose_enum import TripPurposeEnum\n",
 "from ra2ce.analysis.analysis_config_data.enums.weighing_enum import WeighingEnum\n",
 "from ra2ce.analysis.analysis_config_data.enums.event_type_enum import EventTypeEnum\n",
 "from ra2ce.analysis.analysis_config_data.enums.traffic_period_enum import TrafficPeriodEnum\n",
 "\n",
 "losses_analysis = [AnalysisSectionLosses(\n",
 "    name=\"losses\",\n",
 "    analysis=AnalysisLossesEnum.MULTI_LINK_LOSSES,  # MULTI_LINK_LOSSES or SINGLE_LINK_LOSSES\n",
 "    event_type=EventTypeEnum.EVENT,  # event-based analysis (single hazard event)\n",
 "    weighing=WeighingEnum.TIME,  # losses weighted by time (instead of length)\n",
 "    threshold=0.5,  # water depth threshold above which a link is disrupted\n",
 "    production_loss_per_capita_per_hour=42,  # in EUR/hour\n",
 "    traffic_period=TrafficPeriodEnum.DAY,\n",
 "    trip_purposes=[\n",
 "        TripPurposeEnum.BUSINESS,\n",
 "        TripPurposeEnum.COMMUTE,\n",
 "        TripPurposeEnum.FREIGHT,\n",
 "        TripPurposeEnum.OTHER,\n",
 "    ],\n",
 "\n",
 "    # CSV input files prepared in Step 3\n",
 "    resilience_curves_file=input_analysis_path.joinpath(\"resilience_curve.csv\"),\n",
 "    traffic_intensities_file=input_analysis_path.joinpath(\"traffic_intensities.csv\"),\n",
 "    values_of_time_file=input_analysis_path.joinpath(\"values_of_time.csv\"),\n",
 "\n",
 "    save_csv=True,\n",
 "    save_gpkg=True,\n",
 ")]\n",
 "\n",
 "analysis_config_data = AnalysisConfigData(\n",
 "    analyses=losses_analysis,\n",
 "    output_path=output_path,\n",
 ")"
] }, { "cell_type": "markdown", "id": "e0c2f7a5", "metadata": {}, "source": [
 "## Step 5: Run the Analysis\n",
 "\n",
 "Finally, we create a RA2CE handler, configure the analysis, and run it:"
] }, { "cell_type": "code", "execution_count": 7, "id": "f15245a5", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [
 "c:\\Users\\hauth\\miniforge3\\envs\\ra2ce_env\\Lib\\site-packages\\tqdm\\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
 "  from .autonotebook import tqdm as notebook_tqdm\n",
 "100%|██████████| 460/460 [00:00<00:00, 153613.04it/s]\n",
 "2025-09-28 05:31:50 PM - [avg_speed_calculator.py:176] - root - WARNING - No valid file found with average speeds data\\losses\\static\\output_graph\\avg_speed.csv, calculating and saving them instead.\n",
 "2025-09-28 05:31:50 PM - [avg_speed_calculator.py:151] - root - WARNING - Default speed have been assigned to road type []. Please check the average speed CSV, enter the right average speed for this road type and run RA2CE again.\n",
 "2025-09-28 05:31:50 PM - [avg_speed_calculator.py:151] - root - WARNING - Default speed have been assigned to road type []. Please check the average speed CSV, enter the right average speed for this road type and run RA2CE again.\n",
 "2025-09-28 05:31:50 PM - [avg_speed_calculator.py:151] - root - WARNING - Default speed have been assigned to road type []. 
Please check the average speed CSV, enter the right average speed for this road type and run RA2CE again.\n", "2025-09-28 05:31:51 PM - [hazard_overlay.py:381] - root - WARNING - Hazard crs EPSG:32736 and graph crs EPSG:4326 are inconsistent, we try to reproject the graph crs\n", "Graph hazard overlay with RP_10: 100%|██████████| 230/230 [00:02<00:00, 100.15it/s]\n", "Graph fraction with hazard overlay with RP_10: 100%|██████████| 230/230 [00:20<00:00, 11.24it/s]\n", "c:\\Users\\hauth\\miniforge3\\envs\\ra2ce_env\\Lib\\site-packages\\geopandas\\geodataframe.py:1528: SettingWithCopyWarning: \n", "A value is trying to be set on a copy of a slice from a DataFrame.\n", "Try using .loc[row_indexer,col_indexer] = value instead\n", "\n", "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n", " super().__setitem__(key, value)\n" ] }, { "data": { "text/plain": [ "[AnalysisResultWrapper(results_collection=[AnalysisResult(analysis_result= ID u v RP10_fr RP10_ma alt_nodes alt_time avgspeed bridge \\\n", " 0 86 2 42 0.014318 1.369213 nan NaN 58.0 yes \n", " 1 100 30 82 0.100808 0.654458 nan NaN 30.0 yes \n", " 2 264 46 51 0.002634 0.766717 nan NaN 50.0 nan \n", " 3 58 47 49 0.932580 7.217377 nan NaN 58.0 nan \n", " 4 51 47 50 0.615422 1.324971 nan NaN 58.0 nan \n", " .. ... ... ... ... ... ... ... ... ... \n", " 223 253 153 156 0.000000 0.000000 nan NaN 50.0 nan \n", " 224 246 154 155 0.000000 0.000000 nan NaN 50.0 nan \n", " 225 249 154 156 0.000000 0.000000 nan NaN 50.0 nan \n", " 226 248 156 157 0.000000 0.000000 nan NaN 50.0 nan \n", " 227 257 157 158 0.000000 0.000000 nan NaN 50.0 nan \n", " \n", " connected ... node_A node_B rfid rfid_c time vlh_business_RP10_ma \\\n", " 0 nan ... 2 42 125 6 0.045 0.0000 \n", " 1 nan ... 30 82 233 61 0.008 0.0000 \n", " 2 0 ... 46 51 151 89 0.003 1060.9375 \n", " 3 0 ... 47 49 145 91 0.000 14229.6000 \n", " 4 0 ... 47 50 148 92 0.001 1968.7500 \n", " .. ... ... ... ... ... ... ... ... \n", " 223 nan ... 153 156 448 231 0.001 0.0000 \n", " 224 nan ... 154 155 447 232 0.001 0.0000 \n", " 225 nan ... 154 156 449 233 0.000 0.0000 \n", " 226 nan ... 156 157 452 234 0.004 0.0000 \n", " 227 nan ... 157 158 454 235 0.030 0.0000 \n", " \n", " vlh_commute_RP10_ma vlh_freight_RP10_ma vlh_other_RP10_ma \\\n", " 0 0.00 0.000 0.00 \n", " 1 0.00 0.000 0.00 \n", " 2 1181.25 240.625 4156.25 \n", " 3 20859.30 2182.950 97020.00 \n", " 4 5643.75 1859.375 10500.00 \n", " .. ... ... ... \n", " 223 0.00 0.000 0.00 \n", " 224 0.00 0.000 0.00 \n", " 225 0.00 0.000 0.00 \n", " 226 0.00 0.000 0.00 \n", " 227 0.00 0.000 0.00 \n", " \n", " vlh_RP10_ma_total \n", " 0 0.0000 \n", " 1 0.0000 \n", " 2 6639.0625 \n", " 3 134291.8500 \n", " 4 19971.8750 \n", " .. ... 
\n", " 223 0.0000 \n", " 224 0.0000 \n", " 225 0.0000 \n", " 226 0.0000 \n", " 227 0.0000 \n", " \n", " [228 rows x 29 columns], analysis_config=AnalysisSectionLosses(name='losses', save_gpkg=True, save_csv=True, analysis=AnalysisLossesEnum.MULTI_LINK_LOSSES, weighing=WeighingEnum.TIME, production_loss_per_capita_per_hour=42, traffic_period=TrafficPeriodEnum.DAY, hours_per_traffic_period=0, trip_purposes=[TripPurposeEnum.BUSINESS, TripPurposeEnum.COMMUTE, TripPurposeEnum.FREIGHT, TripPurposeEnum.OTHER], resilience_curves_file=WindowsPath('data/losses/input_analysis_data/resilience_curve.csv'), traffic_intensities_file=WindowsPath('data/losses/input_analysis_data/traffic_intensities.csv'), values_of_time_file=WindowsPath('data/losses/input_analysis_data/values_of_time.csv'), threshold=0.5, threshold_destinations=nan, equity_weight='', calculate_route_without_disruption=False, buffer_meters=nan, category_field_name='', save_traffic=False, event_type=EventTypeEnum.EVENT, risk_calculation_mode=, risk_calculation_year=0), output_path=WindowsPath('data/losses/output'), _custom_name='')])]"
] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [
 "from ra2ce.ra2ce_handler import Ra2ceHandler\n",
 "\n",
 "handler = Ra2ceHandler.from_config(network_config_data, analysis_config_data)\n",
 "\n",
 "handler.configure()\n",
 "handler.run_analysis()"
] }, { "cell_type": "markdown", "id": "e5c4e555", "metadata": {}, "source": [
 "## Output\n",
 "\n",
 "The results are saved in the `output` folder as:\n",
 "\n",
 "- **losses.gpkg** → GeoPackage with the results per link (a CSV with the same table is also written, since `save_csv=True`)\n",
 "\n",
 "The attributes of interest include (`RP10` refers to the hazard map, `ma` to the maximum aggregated water level):\n",
 "\n",
 "- `vlh_RP10_ma_total` → Loss (in €) for hazard event RP10, all trip purposes aggregated\n",
 "- `vlh_commute_RP10_ma` → Loss (in €) for hazard event RP10, commute trips only\n",
 "- `vlh_freight_RP10_ma` → Loss (in €) for hazard event RP10, freight trips only\n",
 "- `vlh_business_RP10_ma` → Loss (in €) for hazard event RP10, business trips only\n",
 "- `vlh_other_RP10_ma` → Loss (in €) for hazard event RP10, other trips only"
] }
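, { "cell_type": "markdown", "id": "a7b8c9d0", "metadata": {}, "source": [
 "As a quick check, the results can be loaded back with `geopandas`. The exact location of the GeoPackage inside the `output` folder depends on the analysis, so this sketch simply searches for the first `.gpkg` file; adjust the path and column names to your own run."
] }, { "cell_type": "code", "execution_count": null, "id": "b8c9d0e1", "metadata": {}, "outputs": [], "source": [
 "import geopandas as gpd\n",
 "\n",
 "# Find the losses GeoPackage somewhere under the output folder and load it.\n",
 "gpkg_file = next(output_path.rglob(\"*.gpkg\"))\n",
 "losses_gdf = gpd.read_file(gpkg_file)\n",
 "\n",
 "# Show the five links with the highest aggregated loss for event RP10.\n",
 "print(losses_gdf.nlargest(5, \"vlh_RP10_ma_total\")[[\"ID\", \"vlh_RP10_ma_total\"]])"
] } ], "metadata": { "kernelspec": { "display_name": "ra2ce_env", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.13" } }, "nbformat": 4, "nbformat_minor": 5 }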