{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Case 1: Retaining wall\n", "\n", "## Introduction\n", "\n", "This notebook demonstrates the use of reliability methods from the Deltares Probabilistic Library on a realistic geotechnical benchmark: the stability of a gravity retaining wall. It is adapted from the guidelines for the application of the 2nd generation of Eurocode 7: *\"Reliability-based verification of limit states for geotechnical structures\"* (2024). Because the limit state function can be expressed in closed form, this example provides a transparent and verifiable starting point for exploring the library.\n", "\n", "The retaining wall is assessed against three failure mechanisms:\n", "\n", "- **Sliding** along the base \n", "- **Bearing capacity** failure of the foundation soil \n", "- **Overturning** about the toe \n", "\n", "System failure occurs when **any one** mechanism is critical. The system limit state function $g_\\text{sys}$ is therefore taken as the minimum over the three individual values.\n", "\n", "The model involves four stochastic variables:\n", "\n", "| Variable | Description | Unit | Distribution |\n", "|----------|-------------|------|-------------|\n", "| $\\gamma_1$ | Unit weight of the backfill soil | kN/m³ | Normal |\n", "| $\\gamma_2$ | Unit weight of the foundation soil | kN/m³ | Normal |\n", "| $\\phi_1$ | Friction angle of the backfill soil | ° | Normal (truncated) |\n", "| $\\phi_2$ | Friction angle of the foundation soil | ° | Normal |\n", "\n", "Six reliability methods are applied and compared: **FORM**, **Crude Monte Carlo**, **Directional Sampling**, **Importance Sampling**, **Adaptive Importance Sampling**, and **Subset Simulation**." 
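] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a minimal illustration of the system convention (the margin values below are purely illustrative, not model output): if any single mechanism has a non-positive margin, the minimum, and hence the system, fails." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Hypothetical margins for sliding, bearing capacity and overturning (illustrative only)\n", "g = np.array([1.2, -0.3, 0.8])\n", "g_sys = np.min(g)  # -0.3 <= 0: the system fails, here via bearing capacity"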
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Retaining wall geometry and failure mechanisms (figure not included in this export)*\n", "\n", "*Source: Ching & Hsieh (2011)*" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from probabilistic_library import ReliabilityProject, DistributionType, ReliabilityMethod, StandardNormal, SampleMethod\n", "import numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model definition\n", "\n", "The system limit state function is defined as:\n", "\n", "$$g_\\text{sys}(\\mathbf{X}) = \\min\\bigl(g_1(\\mathbf{X}),\\; g_2(\\mathbf{X}),\\; g_3(\\mathbf{X})\\bigr)$$\n", "\n", "where:\n", "\n", "| Sub-function | Failure mode | Definition |\n", "|---|---|---|\n", "| $g_1$ | **Sliding** | Base friction resistance − horizontal component of the active force |\n", "| $g_2$ | **Bearing capacity** | Ultimate soil load capacity − applied vertical load |\n", "| $g_3$ | **Overturning** | Resisting moment − overturning moment |\n", "\n", "The sign convention is $g > 0$ (safe) and $g \\leq 0$ (failure). System failure is triggered as soon as any one mechanism becomes critical.\n", "\n", "The three sub-functions and their assembly into `LSF_retaining_wall` are defined below. The function arguments (`gamma_1`, `gamma_2`, `phi_1`, `phi_2`) must exactly match the variable names used in the project."
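] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For reference, the active force used in the sub-functions follows from Coulomb's active earth pressure coefficient, in the form implemented in `LSF_retaining_wall` below:\n", "\n", "$$K_a = \\frac{\\cos^2(\\phi_1-\\theta)}{\\cos^2\\theta\\,\\cos(\\delta_1+\\theta)\\left[1+\\sqrt{\\dfrac{\\sin(\\delta_1+\\phi_1)\\,\\sin(\\phi_1-\\alpha)}{\\cos(\\delta_1+\\theta)\\,\\cos(\\theta-\\alpha)}}\\right]^2}, \\qquad P_a = \\tfrac{1}{2} K_a \\gamma_1 H^2$$\n", "\n", "with $\\delta_i = \\tfrac{2}{3}\\phi_i$ the wall friction angles, $\\theta$ the inclination of the back face of the wall and $\\alpha$ the backfill slope."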
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "def g_1_sliding(Pa, delta_1, delta_2, theta, Ww, Wr):\n", "    \"\"\"Limit state function for sliding along the base.\"\"\"\n", "    return (Pa*np.sin(delta_1+theta) + Ww + Wr)*np.tan(delta_2) - Pa*np.cos(delta_1+theta)\n", "\n", "def g_2_bearing_capacity(qu, Lb, Ww, Wr, Pa, delta_1, theta):\n", "    \"\"\"\n", "    Limit state function for bearing capacity.\n", "\n", "    Parameters\n", "    ----------\n", "    qu : float\n", "        Ultimate bearing capacity.\n", "    Lb : float\n", "        Effective width of the foundation base.\n", "    Ww : float\n", "        Weight of the rectangular part of the wall.\n", "    Wr : float\n", "        Weight of the triangular part of the wall.\n", "    Pa : float\n", "        Active force.\n", "    delta_1 : float\n", "        Friction angle between the backfill and the back of the wall.\n", "    theta : float\n", "        Back angle of the retaining wall.\n", "    \"\"\"\n", "    return qu*Lb - (Ww + Wr + Pa*np.sin(delta_1+theta))\n", "\n", "def g3_overturning(Mr, M0):\n", "    \"\"\"\n", "    Limit state function for overturning.\n", "\n", "    Parameters\n", "    ----------\n", "    Mr : float\n", "        Resisting moment.\n", "    M0 : float\n", "        Overturning moment.\n", "    \"\"\"\n", "    return Mr - M0\n" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "def LSF_retaining_wall(gamma_1, gamma_2, phi_1, phi_2):\n", "    \"\"\"\n", "    System limit state function of the retaining wall: minimum of the\n", "    sliding, bearing capacity and overturning margins.\n", "\n", "    :param gamma_1: Unit weight of the backfill soil\n", "    :param gamma_2: Unit weight of the foundation soil\n", "    :param phi_1: Friction angle of the backfill soil\n", "    :param phi_2: Friction angle of the foundation soil\n", "    \"\"\"\n", "\n", "    H = 4            # Height of the wall in m\n", "    B = 2.5          # Top width of the wall in m\n", "    L = 3.5          # Bottom width of the wall in m\n", "    alpha_m = 5.0    # Slope of the backfill soil in deg\n", "    gamma_Ww = 23.58 # Unit weight of the retaining wall in kN/m3\n", "\n", "    alpha = np.radians(alpha_m)  # in rad\n", "\n", "    t = np.arctan((L-B)/H)       # back angle of the wall\n", "    phi_1 = np.radians(phi_1)\n", "    phi_2 = np.radians(phi_2)\n", "    delta_1 = 2/3*phi_1          # wall friction angle, backfill side\n", "    delta_2 = 2/3*phi_2          # wall friction angle, foundation side\n", "\n", "    # Coulomb active earth pressure coefficient and resulting active force\n", "    Ka = np.cos(phi_1-t)**2/(np.cos(t)**2*np.cos(delta_1+t)*(1+np.sqrt(np.sin(delta_1+phi_1)*np.sin(phi_1-alpha)/np.cos(delta_1+t)/np.cos(t-alpha)))**2)\n", "    Pa = 0.5*Ka*gamma_1*H**2\n", "\n", "    # Wall weights: rectangular part (Ww) and triangular part (Wr)\n", "    Ww = gamma_Ww*B*H\n", "    Wr = gamma_Ww*(L-B)*H/2\n", "\n", "    # Inclination of the resultant load on the base\n", "    beta = np.arctan(Pa*np.cos(delta_1+t)/(Pa*np.sin(delta_1+t) + Ww + Wr))\n", "\n", "    # Load inclination factor and bearing capacity factors\n", "    Fg = (1 - beta/phi_2)**2  # ratio of angles, identical in radians or degrees\n", "    Nq = np.exp(np.pi*np.tan(phi_2))*np.tan(np.pi/4+phi_2/2)**2\n", "    Ng = 2*(Nq+1)*np.tan(phi_2)\n", "\n", "    # Resisting and overturning moments about the toe\n", "    Mr = Ww*B/2 + Wr*(B+1/3*(L-B)) + Pa*np.sin(delta_1+t)*(B+2/3*(L-B))\n", "    Mo = Pa*np.cos(delta_1+t)*H/3\n", "\n", "    # Effective foundation width and ultimate bearing capacity\n", "    xb = (Mr-Mo)/(Ww + Wr + Pa*np.sin(delta_1+t))\n", "    eL = np.abs(xb-L/2)\n", "    Lb = L-2*eL\n", "    qu = 0.5*gamma_2*Lb*Ng*Fg\n", "\n", "    g1 = g_1_sliding(Pa, delta_1, delta_2, t, Ww, Wr)\n", "    g2 = g_2_bearing_capacity(qu, Lb, Ww, Wr, Pa, delta_1, t)\n", "    g3 = g3_overturning(Mr, Mo)\n", "    g_syst = np.min(np.array([g1, g2, g3]), axis=0)\n", "    return g_syst\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setting up the project\n", "\n", "A `ReliabilityProject` is the central object that ties together the model, the stochastic variables, the numerical settings, and the results. Here we create a new project and assign `LSF_retaining_wall` as the model to evaluate." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "project = ReliabilityProject()\n", "project.model = LSF_retaining_wall" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Stochastic variables\n", "\n", "Each argument of `LSF_retaining_wall` is registered as a stochastic variable. For each variable we specify:\n", "\n", "- `distribution` — the marginal distribution type \n", "- `mean` — the mean value \n", "- `variation` — the **coefficient of variation** (CoV = σ/μ), which the library uses to derive the standard deviation \n", "\n", "All four variables follow a **Normal** distribution. 
The friction angle $\\phi_1$ is additionally **truncated** to the interval [5°, 45°] to prevent physically meaningless values during sampling." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "project.variables[\"phi_1\"].distribution = DistributionType.normal\n", "project.variables[\"phi_1\"].mean = 35\n", "project.variables[\"phi_1\"].variation = 0.1\n", "project.variables[\"phi_1\"].truncated = True\n", "project.variables[\"phi_1\"].minimum = 5\n", "project.variables[\"phi_1\"].maximum = 45 \n", "\n", "\n", "project.variables[\"phi_2\"].distribution = DistributionType.normal\n", "project.variables[\"phi_2\"].mean = 35\n", "project.variables[\"phi_2\"].variation = 0.1\n", " \n", "project.variables[\"gamma_1\"].distribution = DistributionType.normal\n", "project.variables[\"gamma_1\"].mean = 19\n", "project.variables[\"gamma_1\"].variation = 0.1\n", "\n", "project.variables[\"gamma_2\"].distribution = DistributionType.normal\n", "project.variables[\"gamma_2\"].mean = 17\n", "project.variables[\"gamma_2\"].variation = 0.1\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reliability analysis\n", "\n", "We now run six reliability methods. Each method estimates the **reliability index** $\\beta$ and the corresponding **probability of failure** $P_f = \\Phi(-\\beta)$, where $\\Phi$ is the standard normal CDF.\n", "\n", "### 1 · FORM — First Order Reliability Method\n", "\n", "FORM approximates the limit state surface by a hyperplane (first-order Taylor expansion) at the **design point** — the most probable failure point in standard normal space. It is computationally efficient and, as a by-product, provides **α-factors** that quantify each variable's relative contribution to the probability of failure.\n", "\n", "Three settings control the iteration:\n", "\n", "- **`relaxation_factor`** — a damping coefficient (between 0 and 1) that scales the step size in the iHLRF algorithm. 
Smaller values improve stability for strongly non-linear problems at the cost of more iterations. \n", "- **`maximum_iterations`** — hard upper limit on the number of iterations. \n", "- **`variation_coefficient`** — convergence threshold: iteration stops when the normalised step size falls below this value." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "project.settings.reliability_method = ReliabilityMethod.form\n", "\n", "project.settings.relaxation_factor = 0.15\n", "project.settings.maximum_iterations = 100\n", "project.settings.variation_coefficient = 0.02\n", "\n", "project.run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The helper function `read_results` below extracts the key outputs from the design point and returns them in a dictionary for later comparison. It is defined once here and reused after every subsequent method run." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Beta = 2.9263733572015203\n", "Probability of failure = 0.0017146954960667192\n", "gamma_1: alpha = -0.1549273133163195, x = 19.86141280778421\n", "gamma_2: alpha = 0.14105825853061282, x = 16.29825847971871\n", "phi_1: alpha = 0.24652517187546255, x = 32.46928593576917\n", "phi_2: alpha = 0.9462163784889117, x = 25.308561599448186\n", "Converged (convergence = 0.008532246341729577 < 0.02)\n", "Model runs = 95\n" ] }, { "data": { "text/plain": [ "{'dp': ,\n", " 'beta': 2.9263733572015203,\n", " 'pf': 0.0017146954960667192,\n", " 'alphas': ,\n", " 'converged': True,\n", " 'convergence': 0.008532246341729577,\n", " 'model_runs': 95}" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def read_results(dp):\n", "\n", " beta = dp.reliability_index\n", "\n", " print(f\"Beta = {beta}\")\n", "\n", " pf = StandardNormal.get_q_from_u(beta)\n", " print(f\"Probability of failure = {pf}\")\n", "\n", " 
for alpha in dp.alphas:\n", " print(f\"{alpha.variable.name}: alpha = {alpha.alpha}, x = {alpha.x}\")\n", "\n", " if dp.is_converged:\n", " print(f\"Converged (convergence = {dp.convergence} < {project.settings.variation_coefficient})\")\n", " else:\n", " print(f\"Not converged (convergence = {dp.convergence} > {project.settings.variation_coefficient})\")\n", " \n", " print(f\"Model runs = {dp.total_model_runs}\")\n", " return dict(dp=dp, beta=beta, pf=pf, alphas=dp.alphas, converged=dp.is_converged, convergence=dp.convergence, model_runs=dp.total_model_runs)\n", "\n", "read_results(project.design_point)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Beta = 2.9263733572015203\n", "Probability of failure = 0.0017146954960667192\n", "gamma_1: alpha = -0.1549273133163195, x = 19.86141280778421\n", "gamma_2: alpha = 0.14105825853061282, x = 16.29825847971871\n", "phi_1: alpha = 0.24652517187546255, x = 32.46928593576917\n", "phi_2: alpha = 0.9462163784889117, x = 25.308561599448186\n", "Converged (convergence = 0.008532246341729577 < 0.02)\n", "Model runs = 95\n" ] } ], "source": [ "res_form = read_results(project.design_point)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2 · Crude Monte Carlo Simulation\n", "\n", "Crude Monte Carlo (CMC) estimates $P_f$ by drawing independent samples from the joint distribution and counting the fraction that fall in the failure domain ($g \\leq 0$). It is the most straightforward reference method, but requires large sample sizes for small failure probabilities.\n", "\n", "The adaptive stopping rule runs until the CoV of the $P_f$ estimate drops below `variation_coefficient`, or the sample count reaches `maximum_samples`." 
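] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The required sample size can be anticipated beforehand: the CoV of the CMC estimator is $\\sqrt{(1-P_f)/(N\\,P_f)}$. A quick sketch, taking the order of magnitude of $P_f$ from the FORM result above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pf_est = 1.8e-3   # order of magnitude of P_f, taken from the FORM result above\n", "cov_target = 0.02\n", "\n", "# CoV of the CMC estimator is sqrt((1 - pf) / (N * pf)); solve for N\n", "n_required = (1 - pf_est) / (pf_est * cov_target**2)\n", "print(f\"{n_required:.0f} samples needed\")  # roughly 1.4 million, above maximum_samples"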
] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "project.settings.reliability_method = ReliabilityMethod.crude_monte_carlo\n", "\n", "project.settings.minimum_samples = 1000\n", "project.settings.maximum_samples = 500000\n", "project.settings.variation_coefficient = 0.02\n", "\n", "project.run() # takes 17 seconds" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Beta = 2.906754233612549\n", "Probability of failure = 0.0018260000000000207\n", "gamma_1: alpha = -0.16177958086934452, x = 19.89348161504766\n", "gamma_2: alpha = 0.160157304050106, x = 16.20858553329425\n", "phi_1: alpha = 0.26018241482970117, x = 32.34738501789724\n", "phi_2: alpha = 0.9383400855694731, x = 25.453665942959002\n", "Not converged (convergence = 0.03306493992005365 > 0.02)\n", "Model runs = 500001\n" ] } ], "source": [ "res_MC = read_results(project.design_point)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3 · Directional Sampling\n", "\n", "Directional Sampling transforms the problem to hyperspherical coordinates and integrates analytically along random directions in standard normal space. For each direction, the probability contribution is computed exactly using the standard normal CDF, making it more efficient than crude Monte Carlo for smooth limit state functions. The settings `minimum_directions` and `maximum_directions` bound the number of random directions explored." 
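] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The per-direction contribution can be sketched explicitly: if the limit state is reached at radius $r$ along a given direction in $n$-dimensional standard normal space, its exact probability content is $P(\\|U\\| > r) = 1 - F_{\\chi^2_n}(r^2)$. A minimal sketch (scipy is assumed available; the radius is an illustrative value, not library output):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.stats import chi2\n", "\n", "n_vars = 4  # number of stochastic variables\n", "r = 2.9     # illustrative radius to the limit state along one direction\n", "p_dir = chi2.sf(r**2, df=n_vars)  # exact tail probability beyond r in n dimensions"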
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "project.settings.reliability_method = ReliabilityMethod.directional_sampling\n", "project.settings.minimum_directions = 10000\n", "project.settings.maximum_directions = 20000\n", "project.settings.variation_coefficient = 0.02\n", "\n", "project.run()\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "res_DS = read_results(project.design_point)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 4 · Importance Sampling\n", "\n", "Importance Sampling (IS) concentrates sampling around the design point found by FORM by shifting the sampling density to that region. Because failure samples are generated much more frequently, it achieves Monte Carlo-level accuracy with far fewer model evaluations — typically one to two orders of magnitude fewer than crude Monte Carlo for moderate failure probabilities." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "project.settings.reliability_method = ReliabilityMethod.importance_sampling\n", "project.settings.minimum_samples = 1000\n", "project.settings.maximum_samples = 100000\n", "project.settings.variation_coefficient = 0.02\n", "\n", "project.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "res_IS = read_results(project.design_point)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5 · Adaptive Importance Sampling\n", "\n", "Adaptive Importance Sampling (AIS) iteratively refines the sampling distribution based on accumulated results, without requiring a prior FORM run. The `minimum_variance_loops` / `maximum_variance_loops` settings control the adaptive phase, while `fraction_failed` sets the target fraction of failed samples per loop used to update the sampling density." 
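] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The likelihood-ratio weighting that both importance sampling variants rely on can be illustrated with a one-dimensional toy problem (a sketch only, not the library's implementation; scipy is assumed available):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from scipy.stats import norm\n", "\n", "rng = np.random.default_rng(42)\n", "beta_t = 2.9  # toy threshold: estimate P(U > beta_t) for standard normal U\n", "\n", "# Sample from a density h shifted into the failure region, then re-weight by phi(u)/h(u)\n", "u = rng.normal(loc=beta_t, size=10_000)\n", "w = norm.pdf(u) / norm.pdf(u, loc=beta_t)\n", "pf_is = np.mean((u > beta_t) * w)  # close to the exact value norm.sf(beta_t)"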
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "project.settings.reliability_method = ReliabilityMethod.adaptive_importance_sampling\n", "project.settings.minimum_samples = 10000\n", "project.settings.maximum_samples = 100000\n", "project.settings.minimum_variance_loops = 5\n", "project.settings.maximum_variance_loops = 10\n", "project.settings.fraction_failed = 0.5\n", "project.settings.variation_coefficient = 0.02\n", "\n", "project.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "res_AIS = read_results(project.design_point)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 6 · Subset Simulation\n", "\n", "Subset Simulation decomposes the failure probability into a product of conditional probabilities over a sequence of intermediate failure events of increasing severity. It is particularly well-suited for rare events ($P_f \\lesssim 10^{-4}$) and does not require a prior FORM run. The `sample_method = SampleMethod.adaptive_conditional` selects the adaptive MCMC kernel used to generate samples conditional on each intermediate event." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "project.settings.reliability_method = ReliabilityMethod.subset_simulation\n", "project.settings.minimum_samples = 1000\n", "project.settings.maximum_samples = 50000\n", "project.settings.variation_coefficient = 0.02\n", "project.settings.sample_method = SampleMethod.adaptive_conditional\n", "\n", "project.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "res_SS = read_results(project.design_point)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Comparison of results\n", "\n", "The table below summarises $\\beta$ and the number of model evaluations for each method. 
All methods agree closely on $\\beta \\approx 2.9$, corresponding to $P_f \\approx 1.8 \\times 10^{-3}$.\n", "\n", "| Method | β | Model evaluations |\n", "|--------|---|-------------------|\n", "| FORM | 2.93 | 95 |\n", "| Crude Monte Carlo | 2.90 | 500,000 |\n", "| Directional Sampling | 2.90 | 100,000 |\n", "| Importance Sampling | 2.90 | 100,000 |\n", "| Adaptive Importance Sampling | 2.92 | 41,136 |\n", "| Subset Simulation | 2.91 | 45,000 |\n", "\n", "**Key observations:**\n", "\n", "- **FORM** is by far the most efficient method (< 100 evaluations). The close agreement with all simulation methods confirms that the limit state surface is well approximated as a hyperplane at the design point.\n", "- **Importance Sampling** and **Adaptive Importance Sampling** reach Monte Carlo-level accuracy with roughly an order of magnitude fewer evaluations than crude Monte Carlo.\n", "- **Subset Simulation** achieves similar efficiency to AIS without needing a prior FORM run, making it a robust choice when the shape of the limit state surface is unknown.\n", "- According to Eurocode 7, $\\beta \\approx 2.9$ is associated with **Reliability Class RC1/RC2**, consistent with the design intent for this type of structure." ] } ], "metadata": { "kernelspec": { "display_name": ".venv (3.12.13)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.13" } }, "nbformat": 4, "nbformat_minor": 2 }