主题018:随机有限元与不确定性量化

1. 引言

在实际工程问题中,材料性能、几何尺寸、边界条件等往往存在不确定性。这些不确定性可能来源于材料制造过程的波动、测量误差、环境变化等因素。传统的确定性分析方法无法考虑这些不确定性,可能导致过于乐观或保守的设计。随机有限元方法(Stochastic Finite Element Method, SFEM)将概率统计理论与有限元方法相结合,为不确定性量化提供了强大的分析工具。

不确定性量化(Uncertainty Quantification, UQ)是评估输入不确定性对输出响应影响的过程。通过UQ分析,工程师可以:

  • 评估结构性能的可靠性和安全性
  • 识别对输出影响最大的输入参数
  • 优化设计以提高鲁棒性
  • 为决策提供概率依据

本主题将系统介绍随机有限元方法的基本理论、常用技术和工程应用,通过多个案例展示如何在Python中实现不确定性量化分析。


2. 不确定性来源与分类

2.1 不确定性的类型

工程中的不确定性主要分为两大类:

1. 偶然不确定性(Aleatory Uncertainty)

也称为固有不确定性或随机不确定性,是由系统本身的随机性引起的,无法通过增加数据来消除。

  • 材料性能的固有波动
  • 制造公差
  • 载荷的随机变化
  • 环境条件的自然变化

2. 认知不确定性(Epistemic Uncertainty)

也称为系统不确定性,是由于知识不足或信息不完全引起的,可以通过获取更多数据来减少。

  • 模型简化假设
  • 参数估计误差
  • 边界条件的不确定性
  • 模型形式的不确定性

2.2 不确定性来源

材料参数不确定性

  • 弹性模量的分散性(通常5-15%变异系数)
  • 屈服强度的波动(通常10-20%变异系数)
  • 泊松比的变化
  • 密度的不确定性

几何不确定性

  • 截面尺寸的制造公差
  • 表面粗糙度
  • 初始缺陷(裂纹、孔洞)
  • 装配误差

载荷不确定性

  • 风载的随机性
  • 地震动的随机性
  • 交通载荷的统计分布
  • 使用载荷的变化

边界条件不确定性

  • 支撑条件的模糊性
  • 接触状态的不确定性
  • 约束刚度的变化

3. 概率论基础

3.1 随机变量与概率分布

随机变量:取值具有随机性的变量,用大写字母表示(如$X$),其具体实现用小写字母表示(如$x$)。

概率密度函数(PDF):描述连续随机变量的概率分布。

$$ f_X(x) \geq 0, \quad \int_{-\infty}^{+\infty} f_X(x)\, dx = 1 $$

累积分布函数(CDF)

$$ F_X(x) = P(X \leq x) = \int_{-\infty}^{x} f_X(t)\, dt $$

常用概率分布

正态分布(高斯分布)

$$ f_X(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) $$

其中$\mu$为均值,$\sigma$为标准差。正态分布适用于描述许多自然现象的随机波动。

对数正态分布

若$\ln X$服从正态分布,则$X$服从对数正态分布:

$$ f_X(x) = \frac{1}{x\sigma_{\ln X}\sqrt{2\pi}} \exp\left(-\frac{(\ln x - \mu_{\ln X})^2}{2\sigma_{\ln X}^2}\right) $$

对数正态分布适用于描述必须为正的材料参数(如弹性模量、强度)。

均匀分布

$$ f_X(x) = \begin{cases} \frac{1}{b-a}, & a \leq x \leq b \\ 0, & \text{其他} \end{cases} $$

当只知道参数的范围而不知道具体分布时,常假设为均匀分布。

威布尔分布

$$ f_X(x) = \frac{k}{\lambda} \left(\frac{x}{\lambda}\right)^{k-1} \exp\left(-\left(\frac{x}{\lambda}\right)^k\right) $$

威布尔分布常用于描述材料的疲劳寿命和强度分布。
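上述四种分布都可以用 scipy.stats 直接构造、抽样和查询统计量。下面是一个最小示意(所有参数值仅作假设),其中对数正态分布的参数由给定的均值和变异系数换算得到:

```python
import numpy as np
from scipy import stats

# 以弹性模量为例(均值 70 GPa、变异系数 10%,数值仅作示意)
mu, cov = 70e9, 0.10
sigma = mu * cov

# 正态分布
norm_dist = stats.norm(loc=mu, scale=sigma)

# 对数正态分布:由均值和变异系数换算对数空间参数
sigma_ln = np.sqrt(np.log(1 + cov**2))
mu_ln = np.log(mu) - 0.5 * sigma_ln**2
lognorm_dist = stats.lognorm(s=sigma_ln, scale=np.exp(mu_ln))

# 均匀分布(区间 [a, b])
a, b = 60e9, 80e9
uniform_dist = stats.uniform(loc=a, scale=b - a)

# 威布尔分布(形状参数 k、尺度参数 λ,数值为假设值)
weibull_dist = stats.weibull_min(c=10.0, scale=72e9)

for name, d in [('normal', norm_dist), ('lognormal', lognorm_dist),
                ('uniform', uniform_dist), ('weibull', weibull_dist)]:
    m, s = d.mean(), d.std()
    print(f"{name}: mean={m/1e9:.2f} GPa, CoV={s/m:.3f}")
```

注意 scipy 的 `lognorm` 以形状参数 `s`(对数空间标准差)和 `scale=exp(μ_ln)` 参数化,换算后其均值和变异系数与给定值一致。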

3.2 随机变量的数字特征

期望值(均值)

$$ \mu_X = E[X] = \int_{-\infty}^{+\infty} x f_X(x)\, dx $$

方差

$$ \sigma_X^2 = Var[X] = E[(X - \mu_X)^2] = \int_{-\infty}^{+\infty} (x - \mu_X)^2 f_X(x)\, dx $$

标准差:$\sigma_X = \sqrt{Var[X]}$

变异系数(Coefficient of Variation, CoV)

$$ \delta_X = \frac{\sigma_X}{\mu_X} $$

变异系数是无量纲的离散程度度量,便于不同量纲参数的比较。

偏度(Skewness)

$$ \gamma_1 = E\left[\left(\frac{X-\mu_X}{\sigma_X}\right)^3\right] $$

偏度描述分布的不对称性。正态分布的偏度为0。

峰度(Kurtosis)

$$ \gamma_2 = E\left[\left(\frac{X-\mu_X}{\sigma_X}\right)^4\right] - 3 $$

峰度描述分布的尾部厚度。正态分布的峰度为0(超额峰度)。
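以上数字特征都可以由样本直接估计。下面用一组示意性的正态样本演示(注意 `stats.kurtosis` 默认返回超额峰度,与正文定义一致):

```python
import numpy as np
from scipy import stats

# 用一组示意性的正态样本估计各数字特征
rng = np.random.default_rng(0)
x = rng.normal(loc=200.0, scale=20.0, size=200_000)

mean = np.mean(x)
std = np.std(x, ddof=1)
delta = std / mean            # 变异系数
g1 = stats.skew(x)            # 偏度
g2 = stats.kurtosis(x)        # 超额峰度(scipy 默认 Fisher 定义,正态约为 0)

print(f"mean={mean:.2f}, std={std:.2f}, CoV={delta:.3f}")
print(f"skewness={g1:.3f}, excess kurtosis={g2:.3f}")
```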

3.3 多维随机变量

联合概率密度函数

对于两个随机变量$X$和$Y$:

$$ f_{XY}(x, y) \geq 0, \quad \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f_{XY}(x, y)\, dx\, dy = 1 $$

边缘概率密度函数

$$ f_X(x) = \int_{-\infty}^{+\infty} f_{XY}(x, y)\, dy $$

条件概率密度函数

$$ f_{X|Y}(x|y) = \frac{f_{XY}(x, y)}{f_Y(y)} $$

协方差

$$ Cov[X, Y] = E[(X - \mu_X)(Y - \mu_Y)] $$

相关系数

$$ \rho_{XY} = \frac{Cov[X, Y]}{\sigma_X \sigma_Y}, \quad -1 \leq \rho_{XY} \leq 1 $$

协方差矩阵

对于$n$维随机向量$\mathbf{X} = [X_1, X_2, \ldots, X_n]^T$:

$$ \mathbf{C}_{\mathbf{X}} = \begin{bmatrix} \sigma_{X_1}^2 & Cov[X_1, X_2] & \cdots & Cov[X_1, X_n] \\ Cov[X_2, X_1] & \sigma_{X_2}^2 & \cdots & Cov[X_2, X_n] \\ \vdots & \vdots & \ddots & \vdots \\ Cov[X_n, X_1] & Cov[X_n, X_2] & \cdots & \sigma_{X_n}^2 \end{bmatrix} $$
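实际建模中,协方差矩阵常由标准差向量和相关系数矩阵组装:$\mathbf{C} = \mathbf{D}\mathbf{R}\mathbf{D}$,其中$\mathbf{D}$为标准差的对角阵。下面的最小示意(数值为假设的截面宽、高统计量)组装协方差矩阵并用多元正态抽样验证:

```python
import numpy as np

# 由标准差向量和相关系数矩阵组装协方差矩阵:C = D R D,D = diag(σ)
# (数值为假设的截面宽、高统计量,仅作示意)
sigma = np.array([0.005, 0.010])          # b、h 的标准差 (m)
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])                # 相关系数矩阵
D = np.diag(sigma)
C = D @ R @ D

mean = np.array([0.1, 0.2])               # b、h 的均值 (m)
rng = np.random.default_rng(42)
samples = rng.multivariate_normal(mean, C, size=100_000)

# 用样本经验相关系数验证组装是否正确(应接近 0.5)
rho_hat = np.corrcoef(samples.T)[0, 1]
print(f"empirical rho = {rho_hat:.3f}")
```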


4. 随机有限元方法

4.1 基本框架

确定性有限元的控制方程为:

$$ \mathbf{K} \mathbf{u} = \mathbf{f} $$

当材料参数、几何参数或载荷具有随机性时,刚度矩阵$\mathbf{K}$、位移向量$\mathbf{u}$和载荷向量$\mathbf{f}$都成为随机量。

随机有限元的目标是求解响应量的统计特性(均值、方差、概率分布等)。

4.2 蒙特卡洛模拟(MCS)

蒙特卡洛模拟是最通用的不确定性量化方法,通过大量随机抽样来估计响应的统计特性。

基本步骤

  1. 定义输入随机变量:确定各随机变量的概率分布和统计参数
  2. 生成随机样本:使用随机数生成器产生大量样本
  3. 执行确定性分析:对每个样本执行有限元计算
  4. 统计分析:基于所有样本的响应进行统计分析

样本量估计

对于95%置信水平,估计均值的相对误差小于$\epsilon$所需样本量:

$$ N \geq \left(\frac{1.96 \cdot CoV}{\epsilon}\right)^2 $$

其中$CoV$为响应的变异系数。
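上式可以直接写成一个小函数(示意实现):

```python
import numpy as np

def mcs_sample_size(cov, eps, z=1.96):
    """95% 置信水平下,均值相对误差不超过 eps 所需的最小样本量"""
    return int(np.ceil((z * cov / eps) ** 2))

# 示意:响应变异系数 10%,要求均值相对误差 < 1%
N = mcs_sample_size(cov=0.10, eps=0.01)
print(f"required N = {N}")
```

例如,变异系数 10%、相对误差 1% 时约需 385 个样本;误差要求每收紧一半,样本量约增至 4 倍。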

优点

  • 概念简单,易于实现
  • 适用于任意复杂的非线性问题
  • 精度可控,收敛速度与维度无关

缺点

  • 计算成本高(通常需要$10^3$~$10^6$次计算)
  • 收敛速度慢($O(1/\sqrt{N})$)
  • 难以捕捉小概率事件

4.3 摄动法(Perturbation Method)

摄动法通过泰勒展开将随机问题转化为确定性问题序列。

一阶摄动法

设随机变量$X = \mu_X + \delta X$,其中$\delta X$为小扰动。

将响应$Y$在均值处展开:

$$ Y \approx Y(\mu_X) + \left.\frac{\partial Y}{\partial X}\right|_{\mu_X} \delta X $$

响应的均值和方差:

$$ E[Y] \approx Y(\mu_X) $$

$$ Var[Y] \approx \left(\left.\frac{\partial Y}{\partial X}\right|_{\mu_X}\right)^2 Var[X] $$

二阶摄动法

$$ Y \approx Y(\mu_X) + \left.\frac{\partial Y}{\partial X}\right|_{\mu_X} \delta X + \frac{1}{2} \left.\frac{\partial^2 Y}{\partial X^2}\right|_{\mu_X} (\delta X)^2 $$

$$ E[Y] \approx Y(\mu_X) + \frac{1}{2} \left.\frac{\partial^2 Y}{\partial X^2}\right|_{\mu_X} Var[X] $$
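以悬臂梁挠度 $\delta = PL^3/(3EI)$ 为例,可以用一阶摄动公式估计均值和标准差,并与蒙特卡洛结果对照(参数值仅作示意,导数取解析式):

```python
import numpy as np

# 对悬臂梁挠度 δ = PL³/(3EI) 做一阶摄动,E 为随机变量(参数值仅作示意)
P, L, b, h = 1000.0, 1.0, 0.05, 0.1
I = b * h**3 / 12
E_mean, E_std = 70e9, 7e9                  # CoV = 10%

delta = lambda E: P * L**3 / (3 * E * I)

# 解析导数 ∂δ/∂E = -PL³/(3E²I)
dd_dE = -P * L**3 / (3 * E_mean**2 * I)

mean_pert = delta(E_mean)                  # E[δ] ≈ δ(μ_E)
std_pert = abs(dd_dE) * E_std              # Var[δ] ≈ (∂δ/∂E)² Var[E]

# 蒙特卡洛对照
rng = np.random.default_rng(0)
d_s = delta(rng.normal(E_mean, E_std, 200_000))
print(f"perturbation: mean={mean_pert*1e3:.4f} mm, std={std_pert*1e3:.4f} mm")
print(f"Monte Carlo : mean={np.mean(d_s)*1e3:.4f} mm, std={np.std(d_s)*1e3:.4f} mm")
```

由于 $\delta \propto 1/E$ 是非线性的,一阶摄动的均值会略低于蒙特卡洛(约差 $CoV^2 \approx 1\%$),这正是正文"仅适用于小变异系数"的体现。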

优点

  • 计算效率高(只需少量确定性分析)
  • 适用于小不确定性问题

缺点

  • 仅适用于小变异系数(通常$\delta < 0.1$)
  • 高阶导数计算复杂
  • 不适用于强非线性问题

4.4 谱随机有限元方法(SSFEM)

谱随机有限元方法(Spectral Stochastic Finite Element Method)将随机响应展开为随机变量的多项式级数。

多项式混沌展开(PCE)

将随机响应$Y(\xi)$展开为:

$$ Y(\xi) = \sum_{i=0}^{P} y_i \Psi_i(\xi) $$

其中$\xi$为标准随机变量,$\Psi_i$为正交多项式,$y_i$为确定性系数。

正交多项式选择

  • 高斯随机变量 → Hermite多项式
  • 均匀随机变量 → Legendre多项式
  • 指数随机变量 → Laguerre多项式
  • Beta随机变量 → Jacobi多项式

Wiener-Hermite混沌展开

对于高斯随机变量,使用Hermite多项式:

$$ Y(\xi) = y_0 H_0(\xi) + y_1 H_1(\xi) + y_2 H_2(\xi) + \cdots $$

其中Hermite多项式:

$$ H_n(\xi) = (-1)^n e^{\xi^2/2} \frac{d^n}{d\xi^n} e^{-\xi^2/2} $$

前几项:

  • $H_0(\xi) = 1$
  • $H_1(\xi) = \xi$
  • $H_2(\xi) = \xi^2 - 1$
  • $H_3(\xi) = \xi^3 - 3\xi$

统计特性

$$ E[Y] = y_0 $$

$$ Var[Y] = \sum_{i=1}^{P} y_i^2 E[\Psi_i^2] $$
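上式中的范数 $E[\Psi_i^2]$ 依赖于多项式的正交性。下面用概率型 Gauss-Hermite 积分点数值验证前四阶 Hermite 多项式满足 $E[H_m H_n] = n!\,\delta_{mn}$(一个最小示意):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss

# 正文中的前四阶(概率型)Hermite 多项式
def He(n, x):
    table = [lambda t: np.ones_like(t),
             lambda t: t,
             lambda t: t**2 - 1,
             lambda t: t**3 - 3*t]
    return table[n](x)

# hermegauss 的权函数为 exp(-x²/2),除以 √(2π) 即得标准正态下的期望权重
x, w = hermegauss(20)
w = w / np.sqrt(2 * np.pi)

# 验证正交性:E[He_m He_n] = n!·δ_mn
for m in range(4):
    for n in range(4):
        val = np.sum(w * He(m, x) * He(n, x))
        expected = float(factorial(n)) if m == n else 0.0
        assert abs(val - expected) < 1e-8, (m, n, val)
print("orthogonality verified: E[He_m He_n] = n! * delta_mn")
```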

优点

  • 计算效率高
  • 可直接获得响应的完整概率分布
  • 适用于中等程度的不确定性

缺点

  • 对非高斯输入需要变量变换
  • 高维问题存在"维数灾难"
  • 强非线性问题的收敛性可能较差

4.5 随机配点法(Stochastic Collocation)

随机配点法在随机空间的特定点上求解确定性问题,然后通过插值获得响应的统计特性。

基本步骤

  1. 在随机空间选择一组配点$\{\xi_i\}_{i=1}^{N}$
  2. 在每个配点求解确定性有限元问题
  3. 使用插值或积分方法估计统计特性

配点选择

  • 全张量积网格(Tensor Product Grid)
  • 稀疏网格(Sparse Grid)
  • 高斯积分点

稀疏网格

对于$d$维问题,稀疏网格的配点数为:

$$ N = O(k^d / d!) $$

相比全张量积的$N = k^d$大幅减少。

优点

  • 可以使用现有的确定性有限元代码
  • 收敛速度快
  • 适用于高维问题(使用稀疏网格)

缺点

  • 配点选择影响精度
  • 非线性问题可能需要自适应配点

5. 可靠性分析

5.1 失效概率

结构的失效概率定义为:

$$ P_f = P(g(\mathbf{X}) \leq 0) = \int_{g(\mathbf{X}) \leq 0} f_{\mathbf{X}}(\mathbf{x})\, d\mathbf{x} $$

其中$g(\mathbf{X})$为极限状态函数(Limit State Function):

  • $g(\mathbf{X}) > 0$:安全状态
  • $g(\mathbf{X}) = 0$:极限状态
  • $g(\mathbf{X}) < 0$:失效状态

5.2 一次可靠度方法(FORM)

FORM通过线性近似计算失效概率。

标准正态变换

将相关非正态随机变量$\mathbf{X}$变换为标准正态变量$\mathbf{U}$:

$$ \mathbf{U} = \mathbf{T}(\mathbf{X}) $$

可靠度指标

在标准正态空间中,可靠度指标定义为:

$$ \beta = \min_{g(\mathbf{u})=0} \|\mathbf{u}\| $$

即原点到极限状态曲面的最短距离。

失效概率估计

$$ P_f \approx \Phi(-\beta) $$

其中$\Phi$为标准正态CDF。

设计点

最可能失效点(MPP):

$$ \mathbf{u}^* = -\beta \frac{\nabla g(\mathbf{u}^*)}{\|\nabla g(\mathbf{u}^*)\|} $$

迭代算法

  1. 初始化$\mathbf{u}_0$
  2. 计算$g(\mathbf{u}_k)$和$\nabla g(\mathbf{u}_k)$
  3. 更新:$\mathbf{u}_{k+1} = \frac{\nabla g(\mathbf{u}_k)^T \mathbf{u}_k - g(\mathbf{u}_k)}{\|\nabla g(\mathbf{u}_k)\|^2} \nabla g(\mathbf{u}_k)$
  4. 重复直到收敛
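对线性极限状态 $g = R - S$($R$、$S$ 独立正态),可靠度指标有闭式解 $\beta = (\mu_R - \mu_S)/\sqrt{\sigma_R^2 + \sigma_S^2}$,可用来校验上述迭代格式。下面的示意代码中数值均为假设值:

```python
import numpy as np
from scipy import stats

# 线性极限状态 g = R - S(R: 抗力,S: 荷载效应,独立正态;数值为假设值)
mu_R, s_R = 250.0, 25.0
mu_S, s_S = 150.0, 30.0

# 闭式解:β = (μ_R - μ_S) / sqrt(σ_R² + σ_S²)
beta_exact = (mu_R - mu_S) / np.hypot(s_R, s_S)
Pf_exact = stats.norm.cdf(-beta_exact)

# 按上面第 3 步的更新公式在标准正态空间迭代(HL-RF 格式)
mu = np.array([mu_R, mu_S])
sig = np.array([s_R, s_S])
g = lambda x: x[0] - x[1]

u = np.zeros(2)
for _ in range(50):
    x = u * sig + mu
    grad_u = np.array([1.0, -1.0]) * sig          # ∂g/∂u = (∂g/∂x)·σ
    u_new = (grad_u @ u - g(x)) / (grad_u @ grad_u) * grad_u
    if np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new

beta_iter = np.linalg.norm(u)
print(f"beta: exact={beta_exact:.4f}, iterative={beta_iter:.4f}, Pf={Pf_exact:.3e}")
```

线性极限状态下 HL-RF 迭代一步即精确收敛;非线性时才需要多次迭代。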

5.3 二次可靠度方法(SORM)

SORM使用二次近似提高精度。

失效概率估计

$$ P_f \approx \Phi(-\beta) \prod_{i=1}^{n-1} (1 + \beta \kappa_i)^{-1/2} $$

其中$\kappa_i$为极限状态曲面在设计点的主曲率。

5.4 重要性抽样

重要性抽样通过修改抽样分布来提高小概率事件的估计效率。

$$ P_f = \int I(g(\mathbf{x}) \leq 0) \frac{f_{\mathbf{X}}(\mathbf{x})}{h_{\mathbf{X}}(\mathbf{x})} h_{\mathbf{X}}(\mathbf{x})\, d\mathbf{x} $$

其中$h_{\mathbf{X}}(\mathbf{x})$为重要性抽样密度,通常以设计点为中心。
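延续线性极限状态 $g = R - S$ 的算例,下面示意如何以设计点为中心做重要性抽样并与闭式解对照(数值均为假设值):

```python
import numpy as np
from scipy import stats

# 以设计点为中心的重要性抽样,估计线性极限状态 g = R - S 的失效概率
# (R: 抗力,S: 荷载效应,独立正态;数值为假设值)
rng = np.random.default_rng(0)
mu_R, s_R = 250.0, 25.0
mu_S, s_S = 150.0, 30.0

beta = (mu_R - mu_S) / np.hypot(s_R, s_S)
Pf_exact = stats.norm.cdf(-beta)

# 标准正态空间中的设计点 u* = -β ∇g/||∇g||,∇g = [s_R, -s_S]
grad = np.array([s_R, -s_S])
u_star = -beta * grad / np.linalg.norm(grad)

# 重要性抽样密度 h:以 u* 为中心的标准正态
n = 50_000
u = rng.standard_normal((n, 2)) + u_star
g_u = (u[:, 0] * s_R + mu_R) - (u[:, 1] * s_S + mu_S)

# 权重 w = f(u)/h(u),在对数域计算避免下溢
log_w = stats.norm.logpdf(u).sum(axis=1) - stats.norm.logpdf(u - u_star).sum(axis=1)
Pf_is = np.mean((g_u <= 0) * np.exp(log_w))
print(f"Pf exact = {Pf_exact:.3e}, importance sampling = {Pf_is:.3e}")
```

与直接蒙特卡洛相比,以设计点为中心抽样使约一半样本落入失效域,同样样本量下小概率估计的方差大幅降低。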


6. 敏感性分析

6.1 局部敏感性分析

敏感性系数

$$ S_i = \frac{\partial E[Y]}{\partial \mu_{X_i}} $$

或无量纲形式:

$$ \tilde{S}_i = \frac{\sigma_{X_i}}{\sigma_Y} \frac{\partial E[Y]}{\partial \mu_{X_i}} $$

有限差分法

$$ \frac{\partial Y}{\partial X_i} \approx \frac{Y(X_i + \Delta X_i) - Y(X_i)}{\Delta X_i} $$
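对 $\delta = PL^3/(3EI)$ 这类幂函数响应,归一化敏感性 $(X_i/Y)\,\partial Y/\partial X_i$ 正好等于各参数的幂指数,可用差分数值验证。下面的示意采用精度更高的中心差分变体(参数值仅作假设):

```python
import numpy as np

# 对 δ = PL³/(3EI) 用中心差分近似各参数的局部敏感性(参数值仅作示意)
def disp(P, L, b, h, E):
    I = b * h**3 / 12
    return P * L**3 / (3 * E * I)

base = dict(P=1000.0, L=1.0, b=0.05, h=0.1, E=70e9)
d0 = disp(**base)

def central_diff(name, rel=1e-6):
    """中心差分 ∂δ/∂x ≈ [δ(x+Δx) - δ(x-Δx)] / (2Δx)"""
    dx = base[name] * rel
    lo, hi = dict(base), dict(base)
    lo[name] -= dx
    hi[name] += dx
    return (disp(**hi) - disp(**lo)) / (2 * dx)

# 幂函数响应的归一化敏感性 (x/δ)·∂δ/∂x 等于该参数的幂指数
for name in base:
    s = central_diff(name) * base[name] / d0
    print(f"{name}: normalized sensitivity = {s:+.3f}")
```

结果应接近 $P:+1$、$L:+3$、$b:-1$、$h:-3$、$E:-1$,即长度与高度是挠度最敏感的参数。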

6.2 全局敏感性分析

Sobol指数

基于方差分解的全局敏感性指标。

总方差分解:

$$ Var[Y] = \sum_i V_i + \sum_{i<j} V_{ij} + \cdots + V_{12\cdots n} $$

一阶Sobol指数:

$$ S_i = \frac{V_i}{Var[Y]} = \frac{Var_{X_i}(E_{\mathbf{X}_{\sim i}}[Y|X_i])}{Var[Y]} $$

总效应指数:

$$ S_{Ti} = \frac{E_{\mathbf{X}_{\sim i}}[Var_{X_i}(Y|\mathbf{X}_{\sim i})]}{Var[Y]} $$

Morris方法

基于"基效应"(Elementary Effects)的筛选方法,适用于高维问题。
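Sobol 指数可用 pick-freeze 抽样估计:对两组独立样本矩阵 $A$、$B$,令 $A_B^{(i)}$ 为 $A$ 换入 $B$ 第 $i$ 列,则 $S_i \approx \overline{y_B(y_{A_B^{(i)}} - y_A)}/V$、$S_{Ti} \approx \overline{(y_A - y_{A_B^{(i)}})^2}/(2V)$(Saltelli 型估计式)。下面以悬臂梁挠度 $\delta = PL^3/(3EI)$ 为模型做一个示意,分布参数均为假设值;模型近似线性时一阶与总效应指数应接近:

```python
import numpy as np

# Saltelli 型 pick-freeze 估计一阶与总效应 Sobol 指数
# 模型:悬臂梁挠度 δ = PL³/(3EI);分布参数为假设值,仅作示意
rng = np.random.default_rng(1)

names = ['P', 'L', 'b', 'h', 'E']
mus = np.array([1000.0, 1.0, 0.05, 0.1, 70e9])
sigmas = mus * np.array([0.10, 0.05, 0.05, 0.05, 0.10])

def model(X):
    P, L, b, h, E = X.T
    return P * L**3 / (3 * E * (b * h**3 / 12))

N = 50_000
A = rng.normal(mus, sigmas, size=(N, 5))
B = rng.normal(mus, sigmas, size=(N, 5))
yA, yB = model(A), model(B)
V = np.var(np.concatenate([yA, yB]))

S1, ST = {}, {}
for i, name in enumerate(names):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # A 中第 i 列换成 B 的第 i 列
    yABi = model(ABi)
    S1[name] = np.mean(yB * (yABi - yA)) / V        # 一阶指数
    ST[name] = np.mean((yA - yABi) ** 2) / (2 * V)  # 总效应指数
    print(f"{name}: S1={S1[name]:.3f}, ST={ST[name]:.3f}")
```

该算例中 $L$ 和 $h$ 以三次幂进入模型,方差贡献最大;各一阶指数之和接近 1,说明交互效应很弱。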


7. Python实现案例

案例一:材料参数不确定性的蒙特卡洛分析

问题描述:悬臂梁的弹性模量服从正态分布,分析端部位移的不确定性。

Python实现

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# 问题参数
L = 1.0  # 梁长度 (m)
b = 0.05  # 截面宽度 (m)
h = 0.1   # 截面高度 (m)
P = 1000  # 端部载荷 (N)

I = b * h**3 / 12  # 截面惯性矩

# 弹性模量的统计特性
E_mean = 70e9  # 均值 (Pa)
E_cov = 0.1    # 变异系数 10%
E_std = E_mean * E_cov  # 标准差

print("=" * 60)
print("案例一:材料参数不确定性的蒙特卡洛分析")
print("=" * 60)
print(f"梁参数: L={L}m, b={b}m, h={h}m")
print(f"载荷: P={P}N")
print(f"弹性模量: μ={E_mean/1e9:.2f} GPa, CoV={E_cov*100:.1f}%")

# 蒙特卡洛模拟
n_samples = 10000
np.random.seed(42)
E_samples = np.random.normal(E_mean, E_std, n_samples)

# 端部位移计算 (悬臂梁端部挠度: δ = PL³/(3EI))
delta_samples = P * L**3 / (3 * E_samples * I)

# 统计分析
delta_mean = np.mean(delta_samples)
delta_std = np.std(delta_samples)
delta_cov = delta_std / delta_mean

print(f"\n蒙特卡洛模拟结果 (N={n_samples}):")
print(f"位移均值: {delta_mean*1000:.4f} mm")
print(f"位移标准差: {delta_std*1000:.4f} mm")
print(f"位移变异系数: {delta_cov*100:.2f}%")

# 确定性分析结果
delta_deterministic = P * L**3 / (3 * E_mean * I)
print(f"\n确定性分析结果:")
print(f"位移: {delta_deterministic*1000:.4f} mm")

# 95%置信区间
ci_lower = np.percentile(delta_samples, 2.5)
ci_upper = np.percentile(delta_samples, 97.5)
print(f"\n95%置信区间: [{ci_lower*1000:.4f}, {ci_upper*1000:.4f}] mm")

# 可视化
fig, axes = plt.subplots(2, 2, figsize=(14, 10))

# 1. 弹性模量分布
ax1 = axes[0, 0]
count, bins, patches = ax1.hist(E_samples/1e9, bins=50, density=True, alpha=0.7, color='steelblue', edgecolor='black')
x_pdf = np.linspace(E_samples.min(), E_samples.max(), 100)
pdf = stats.norm.pdf(x_pdf, E_mean, E_std)
ax1.plot(x_pdf/1e9, pdf*1e9, 'r-', linewidth=2, label='理论PDF')
ax1.axvline(E_mean/1e9, color='red', linestyle='--', linewidth=2, label=f'Mean={E_mean/1e9:.1f} GPa')
ax1.set_xlabel('Elastic Modulus E (GPa)', fontsize=11)
ax1.set_ylabel('Probability Density', fontsize=11)
ax1.set_title('Distribution of Elastic Modulus', fontsize=12, fontweight='bold')
ax1.legend()
ax1.grid(True, alpha=0.3)

# 2. 端部位移分布
ax2 = axes[0, 1]
count, bins, patches = ax2.hist(delta_samples*1000, bins=50, density=True, alpha=0.7, color='orange', edgecolor='black')
ax2.axvline(delta_mean*1000, color='red', linestyle='--', linewidth=2, label=f'Mean={delta_mean*1000:.3f} mm')
ax2.axvline(ci_lower*1000, color='green', linestyle=':', linewidth=2, label=f'95% CI')
ax2.axvline(ci_upper*1000, color='green', linestyle=':', linewidth=2)
ax2.set_xlabel('Tip Displacement δ (mm)', fontsize=11)
ax2.set_ylabel('Probability Density', fontsize=11)
ax2.set_title('Distribution of Tip Displacement', fontsize=12, fontweight='bold')
ax2.legend()
ax2.grid(True, alpha=0.3)

# 3. E vs δ 散点图
ax3 = axes[1, 0]
ax3.scatter(E_samples[::50]/1e9, delta_samples[::50]*1000, alpha=0.5, s=10, c='blue')
E_sorted = np.sort(E_samples)  # 先排序,避免折线在散点间来回穿插
ax3.plot(E_sorted/1e9, P*L**3/(3*E_sorted*I)*1000, 'r-', linewidth=2, label='Theoretical')
ax3.set_xlabel('Elastic Modulus E (GPa)', fontsize=11)
ax3.set_ylabel('Tip Displacement δ (mm)', fontsize=11)
ax3.set_title('E-δ Relationship', fontsize=12, fontweight='bold')
ax3.legend()
ax3.grid(True, alpha=0.3)

# 4. 收敛性分析
ax4 = axes[1, 1]
sample_sizes = np.logspace(2, 4, 20).astype(int)
means = []
stds = []
for n in sample_sizes:
    means.append(np.mean(delta_samples[:n])*1000)
    stds.append(np.std(delta_samples[:n])*1000)
ax4.semilogx(sample_sizes, means, 'b-', linewidth=2, label='Mean')
ax4.axhline(delta_mean*1000, color='red', linestyle='--', linewidth=2)
ax4.fill_between(sample_sizes, np.array(means)-np.array(stds), np.array(means)+np.array(stds), alpha=0.3, color='blue')
ax4.set_xlabel('Number of Samples', fontsize=11)
ax4.set_ylabel('Tip Displacement (mm)', fontsize=11)
ax4.set_title('Monte Carlo Convergence', fontsize=12, fontweight='bold')
ax4.legend()
ax4.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('case1_monte_carlo_material.png', dpi=150, bbox_inches='tight')
plt.close()
print("\n✓ 案例一图像已保存")

案例说明

  • 本案例展示了蒙特卡洛模拟的基本流程
  • 通过大量随机抽样,获得位移的完整概率分布
  • 收敛性分析验证了样本量的充分性
  • 结果可用于可靠性评估和保守设计

案例二:多变量相关性的不确定性传播

问题描述:考虑弹性模量和截面尺寸的相关性,分析对结构响应的影响。

Python实现

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

print("\n" + "=" * 60)
print("案例二:多变量相关性的不确定性传播")
print("=" * 60)

# 问题参数
L = 2.0  # 梁长度 (m)
P = 5000  # 集中载荷 (N)

# 随机变量统计特性
E_mean, E_cov = 200e9, 0.08  # 弹性模量
b_mean, b_cov = 0.1, 0.05     # 宽度
h_mean, h_cov = 0.2, 0.05     # 高度

E_std = E_mean * E_cov
b_std = b_mean * b_cov
h_std = h_mean * h_cov

print(f"随机变量统计特性:")
print(f"  E: μ={E_mean/1e9:.0f} GPa, CoV={E_cov*100:.0f}%")
print(f"  b: μ={b_mean*1000:.0f} mm, CoV={b_cov*100:.0f}%")
print(f"  h: μ={h_mean*1000:.0f} mm, CoV={h_cov*100:.0f}%")

# 相关系数矩阵
correlation_cases = [
    ('Independent', 0.0),
    ('Weak Correlation', 0.3),
    ('Strong Correlation', 0.7)
]

results = {}

for case_name, rho in correlation_cases:
    print(f"\n{case_name} (ρ={rho}):")
    
    # 协方差矩阵 (仅考虑b和h的相关性)
    mean = [b_mean, h_mean]
    cov = [[b_std**2, rho*b_std*h_std],
           [rho*b_std*h_std, h_std**2]]
    
    # 蒙特卡洛模拟
    n_samples = 5000
    np.random.seed(42)
    
    E_samples = np.random.normal(E_mean, E_std, n_samples)
    bh_samples = np.random.multivariate_normal(mean, cov, n_samples)
    b_samples = bh_samples[:, 0]
    h_samples = bh_samples[:, 1]
    
    # 确保几何参数为正
    b_samples = np.maximum(b_samples, 0.01)
    h_samples = np.maximum(h_samples, 0.01)
    
    # 截面惯性矩和位移
    I_samples = b_samples * h_samples**3 / 12
    delta_samples = P * L**3 / (3 * E_samples * I_samples)
    
    # 统计特性
    delta_mean = np.mean(delta_samples)
    delta_std = np.std(delta_samples)
    delta_cov = delta_std / delta_mean
    
    print(f"  位移均值: {delta_mean*1000:.4f} mm")
    print(f"  位移标准差: {delta_std*1000:.4f} mm")
    print(f"  位移变异系数: {delta_cov*100:.2f}%")
    
    results[case_name] = {
        'mean': delta_mean,
        'std': delta_std,
        'cov': delta_cov,
        'samples': delta_samples
    }

# 可视化
fig, axes = plt.subplots(2, 2, figsize=(14, 10))

# 1. 不同相关性下的位移分布
ax1 = axes[0, 0]
colors = ['blue', 'green', 'red']
for (case_name, _), color in zip(correlation_cases, colors):
    data = results[case_name]['samples']
    ax1.hist(data*1000, bins=40, density=True, alpha=0.4, label=case_name, color=color)
    ax1.axvline(results[case_name]['mean']*1000, color=color, linestyle='--', linewidth=2)
ax1.set_xlabel('Tip Displacement δ (mm)', fontsize=11)
ax1.set_ylabel('Probability Density', fontsize=11)
ax1.set_title('Displacement Distribution for Different Correlations', fontsize=12, fontweight='bold')
ax1.legend()
ax1.grid(True, alpha=0.3)

# 2. 相关系数对变异系数的影响
ax2 = axes[0, 1]
rhos = np.linspace(-0.9, 0.9, 19)
covs = []
for rho in rhos:
    mean = [b_mean, h_mean]
    cov = [[b_std**2, rho*b_std*h_std],
           [rho*b_std*h_std, h_std**2]]
    np.random.seed(42)
    E_samples = np.random.normal(E_mean, E_std, 2000)
    bh_samples = np.random.multivariate_normal(mean, cov, 2000)
    b_samples = np.maximum(bh_samples[:, 0], 0.01)
    h_samples = np.maximum(bh_samples[:, 1], 0.01)
    I_samples = b_samples * h_samples**3 / 12
    delta_samples = P * L**3 / (3 * E_samples * I_samples)
    covs.append(np.std(delta_samples) / np.mean(delta_samples) * 100)

ax2.plot(rhos, covs, 'b-', linewidth=2, marker='o')
ax2.axvline(0, color='gray', linestyle='--', linewidth=1)
ax2.set_xlabel('Correlation Coefficient ρ(b,h)', fontsize=11)
ax2.set_ylabel('CoV of Displacement (%)', fontsize=11)
ax2.set_title('Effect of Correlation on Uncertainty', fontsize=12, fontweight='bold')
ax2.grid(True, alpha=0.3)

# 3. 散点图矩阵(以强相关为例)
ax3 = axes[1, 0]
np.random.seed(42)
mean = [b_mean, h_mean]
cov = [[b_std**2, 0.7*b_std*h_std],
       [0.7*b_std*h_std, h_std**2]]
bh_samples = np.random.multivariate_normal(mean, cov, 1000)
ax3.scatter(bh_samples[:, 0]*1000, bh_samples[:, 1]*1000, alpha=0.5, s=20, c='purple')
ax3.set_xlabel('Width b (mm)', fontsize=11)
ax3.set_ylabel('Height h (mm)', fontsize=11)
ax3.set_title('Scatter Plot: b vs h (ρ=0.7)', fontsize=12, fontweight='bold')
ax3.grid(True, alpha=0.3)

# 4. 敏感性比较
ax4 = axes[1, 1]
variables = ['E', 'b', 'h']
# 计算敏感性系数(使用相关系数0.3的情况)
np.random.seed(42)
mean = [b_mean, h_mean]
cov = [[b_std**2, 0.3*b_std*h_std],
       [0.3*b_std*h_std, h_std**2]]

# 基准情况
E_base = np.random.normal(E_mean, E_std, 3000)
bh_base = np.random.multivariate_normal(mean, cov, 3000)
b_base = np.maximum(bh_base[:, 0], 0.01)
h_base = np.maximum(bh_base[:, 1], 0.01)
I_base = b_base * h_base**3 / 12
delta_base = P * L**3 / (3 * E_base * I_base)

# 单独扰动
perturbation = 0.01
sensitivities = []

# E的敏感性
E_pert = E_base * (1 + perturbation)
delta_E = P * L**3 / (3 * E_pert * I_base)
sens_E = np.mean((delta_E - delta_base) / delta_base) / perturbation
sensitivities.append(abs(sens_E))

# b的敏感性
b_pert = b_base * (1 + perturbation)
I_pert = b_pert * h_base**3 / 12
delta_b = P * L**3 / (3 * E_base * I_pert)
sens_b = np.mean((delta_b - delta_base) / delta_base) / perturbation
sensitivities.append(abs(sens_b))

# h的敏感性
h_pert = h_base * (1 + perturbation)
I_pert = b_base * h_pert**3 / 12
delta_h = P * L**3 / (3 * E_base * I_pert)
sens_h = np.mean((delta_h - delta_base) / delta_base) / perturbation
sensitivities.append(abs(sens_h))

bars = ax4.bar(variables, sensitivities, color=['red', 'green', 'blue'], edgecolor='black', linewidth=1.5)
ax4.set_ylabel('Sensitivity Coefficient |∂δ/∂X|', fontsize=11)
ax4.set_title('Parameter Sensitivity Analysis', fontsize=12, fontweight='bold')
ax4.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, sensitivities):
    ax4.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.05, f'{val:.2f}', ha='center', fontsize=10)

plt.tight_layout()
plt.savefig('case2_multivariate_correlation.png', dpi=150, bbox_inches='tight')
plt.close()
print("\n✓ 案例二图像已保存")

案例说明

  • 展示了多随机变量的不确定性传播
  • 分析了相关性对响应不确定性的影响
  • 通过敏感性分析识别关键参数
  • 结果说明高度相关的几何参数会增大响应的不确定性

案例三:一阶可靠度方法(FORM)实现

问题描述:使用FORM计算简支梁在随机载荷下的失效概率。

Python实现

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.optimize import minimize

print("\n" + "=" * 60)
print("案例三:一阶可靠度方法(FORM)实现")
print("=" * 60)

# 问题参数
L = 4.0  # 跨度 (m)
b = 0.2  # 宽度 (m)
h = 0.4  # 高度 (m)
I = b * h**3 / 12  # 惯性矩
E = 30e9  # 弹性模量 (Pa)

# 随机变量
# 分布载荷 q ~ N(μ=5000, σ=1000) N/m
# 屈服强度 σ_y ~ N(μ=20e6, σ=3e6) Pa

mu_q, sigma_q = 5000, 1000
mu_sigma_y, sigma_sigma_y = 20e6, 3e6

print(f"随机变量:")
print(f"  分布载荷 q: μ={mu_q} N/m, σ={sigma_q} N/m")
print(f"  屈服强度 σ_y: μ={mu_sigma_y/1e6:.1f} MPa, σ={sigma_sigma_y/1e6:.1f} MPa")

# 极限状态函数
# 最大弯矩在跨中: M_max = qL²/8
# 最大应力: σ_max = M_max * y / I = (qL²/8) * (h/2) / I
# 极限状态: g = σ_y - σ_max = 0

def g(x):
    """极限状态函数 x = [q, sigma_y]"""
    q, sigma_y = x
    M_max = q * L**2 / 8
    sigma_max = M_max * (h/2) / I
    return sigma_y - sigma_max

# 标准正态变换
def x_to_u(x, mu, sigma):
    """从物理空间到标准正态空间"""
    return (x - mu) / sigma

def u_to_x(u, mu, sigma):
    """从标准正态空间到物理空间"""
    return u * sigma + mu

# 在标准正态空间中的极限状态函数
def g_u(u, mu, sigma):
    """u = [u_q, u_sigma_y]"""
    x = u_to_x(u, mu, sigma)
    return g(x)

# FORM迭代求解
mu = np.array([mu_q, mu_sigma_y])
sigma = np.array([sigma_q, sigma_sigma_y])

# 初始化设计点(均值点)
u = np.array([0.0, 0.0])
max_iter = 100
tol = 1e-6

print("\nFORM迭代过程:")
for i in range(max_iter):
    # 当前物理空间点
    x = u_to_x(u, mu, sigma)
    
    # 计算极限状态函数值
    g_val = g(x)
    
    # 计算梯度(数值微分)
    eps = 1e-6
    grad = np.zeros(2)
    for j in range(2):
        x_pert = x.copy()
        x_pert[j] += eps
        grad[j] = (g(x_pert) - g_val) / eps
    
    # 转换梯度到标准正态空间
    grad_u = grad * sigma
    
    # 计算梯度模长
    grad_norm = np.linalg.norm(grad_u)
    
    # 更新设计点
    alpha = -grad_u / grad_norm  # 单位方向向量
    beta = (g_val - np.dot(grad_u, u)) / grad_norm  # 可靠度指标
    u_new = beta * alpha
    
    print(f"  迭代 {i+1}: β={beta:.4f}, g={g_val:.4e}, u=[{u_new[0]:.4f}, {u_new[1]:.4f}]")
    
    # 检查收敛
    if np.linalg.norm(u_new - u) < tol:
        u = u_new
        break
    
    u = u_new

# 最终结果
beta_form = beta
x_design = u_to_x(u, mu, sigma)
P_f_form = stats.norm.cdf(-beta_form)

print(f"\nFORM结果:")
print(f"  可靠度指标 β = {beta_form:.4f}")
print(f"  失效概率 Pf = {P_f_form:.4e}")
print(f"  设计点: q* = {x_design[0]:.2f} N/m, σ_y* = {x_design[1]/1e6:.2f} MPa")

# 蒙特卡洛验证
n_mc = 100000
np.random.seed(42)
q_mc = np.random.normal(mu_q, sigma_q, n_mc)
sigma_y_mc = np.random.normal(mu_sigma_y, sigma_sigma_y, n_mc)

failures = 0
for i in range(n_mc):
    x_mc = [q_mc[i], sigma_y_mc[i]]
    if g(x_mc) <= 0:
        failures += 1

P_f_mc = failures / n_mc
beta_mc = -stats.norm.ppf(P_f_mc)

print(f"\n蒙特卡洛验证 (N={n_mc}):")
print(f"  失效次数: {failures}")
print(f"  失效概率 Pf = {P_f_mc:.4e}")
print(f"  可靠度指标 β = {beta_mc:.4f}")

# 可视化
fig, axes = plt.subplots(2, 2, figsize=(14, 10))

# 1. 标准正态空间中的极限状态曲面
ax1 = axes[0, 0]
u1_range = np.linspace(-4, 4, 100)
u2_range = np.linspace(-4, 4, 100)
U1, U2 = np.meshgrid(u1_range, u2_range)
G = np.zeros_like(U1)
for i in range(len(u1_range)):
    for j in range(len(u2_range)):
        G[j, i] = g_u([U1[j, i], U2[j, i]], mu, sigma)

contour = ax1.contour(U1, U2, G, levels=[0], colors='red', linewidths=2)
ax1.clabel(contour, inline=True, fontsize=10)
ax1.contourf(U1, U2, G, levels=[-1e10, 0], colors=['red'], alpha=0.2)
ax1.plot(u[0], u[1], 'g*', markersize=20, label='Design Point')
ax1.plot([0, u[0]], [0, u[1]], 'g--', linewidth=2, label=f'β={beta_form:.2f}')
ax1.set_xlabel('u_q (standard normal)', fontsize=11)
ax1.set_ylabel('u_σy (standard normal)', fontsize=11)
ax1.set_title('Limit State in Standard Normal Space', fontsize=12, fontweight='bold')
ax1.legend()
ax1.grid(True, alpha=0.3)
ax1.set_aspect('equal')

# 2. 物理空间中的极限状态曲面
ax2 = axes[0, 1]
q_range = np.linspace(2000, 8000, 100)
sigma_y_range = np.linspace(10e6, 30e6, 100)
Q, SIGMA_Y = np.meshgrid(q_range, sigma_y_range)
G_phys = np.zeros_like(Q)
for i in range(len(q_range)):
    for j in range(len(sigma_y_range)):
        G_phys[j, i] = g([Q[j, i], SIGMA_Y[j, i]])

contour = ax2.contour(Q, SIGMA_Y/1e6, G_phys, levels=[0], colors='red', linewidths=2)
ax2.clabel(contour, inline=True, fontsize=10)
ax2.contourf(Q, SIGMA_Y/1e6, G_phys, levels=[-1e10, 0], colors=['red'], alpha=0.2)
ax2.plot(x_design[0], x_design[1]/1e6, 'g*', markersize=20, label='Design Point')
ax2.set_xlabel('q (N/m)', fontsize=11)
ax2.set_ylabel('σ_y (MPa)', fontsize=11)
ax2.set_title('Limit State in Physical Space', fontsize=12, fontweight='bold')
ax2.legend()
ax2.grid(True, alpha=0.3)

# 3. 可靠度指标对比
ax3 = axes[1, 0]
methods = ['FORM', 'Monte Carlo']
betas = [beta_form, beta_mc]
colors = ['blue', 'orange']
bars = ax3.bar(methods, betas, color=colors, edgecolor='black', linewidth=1.5)
ax3.set_ylabel('Reliability Index β', fontsize=11)
ax3.set_title('Reliability Index Comparison', fontsize=12, fontweight='bold')
ax3.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, betas):
    ax3.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.05, f'{val:.3f}', ha='center', fontsize=11)

# 4. 失效概率对比
ax4 = axes[1, 1]
P_f_values = [P_f_form, P_f_mc]
bars = ax4.bar(methods, P_f_values, color=colors, edgecolor='black', linewidth=1.5)
ax4.set_ylabel('Failure Probability Pf', fontsize=11)
ax4.set_title('Failure Probability Comparison', fontsize=12, fontweight='bold')
ax4.set_yscale('log')
ax4.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, P_f_values):
    ax4.text(bar.get_x() + bar.get_width()/2, bar.get_height() * 1.5, f'{val:.2e}', ha='center', fontsize=10)

plt.tight_layout()
plt.savefig('case3_form_reliability.png', dpi=150, bbox_inches='tight')
plt.close()
print("\n✓ 案例三图像已保存")

案例说明

  • 实现了FORM的基本算法流程
  • 展示了从物理空间到标准正态空间的变换
  • 通过迭代求解找到最可能失效点
  • 与蒙特卡洛方法对比验证精度

案例四:多项式混沌展开(PCE)方法

问题描述:使用PCE方法高效计算随机响应的统计特性。

Python实现

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from itertools import combinations_with_replacement

print("\n" + "=" * 60)
print("案例四:多项式混沌展开(PCE)方法")
print("=" * 60)

# 问题参数
L = 1.0  # 梁长度 (m)
b = 0.05  # 宽度 (m)
h = 0.1   # 高度 (m)
I = b * h**3 / 12
P = 1000  # 载荷 (N)

# 随机弹性模量 E ~ Uniform(60e9, 80e9) Pa
E_min, E_max = 60e9, 80e9
E_mean = (E_min + E_max) / 2

print(f"随机变量: E ~ Uniform({E_min/1e9:.0f}, {E_max/1e9:.0f}) GPa")

# 将均匀分布映射到[-1, 1](Legendre多项式定义域)
def E_to_xi(E):
    """从E映射到标准变量ξ ∈ [-1, 1]"""
    return 2 * (E - E_min) / (E_max - E_min) - 1

def xi_to_E(xi):
    """从ξ映射到E"""
    return E_min + (xi + 1) * (E_max - E_min) / 2

# Legendre多项式(在[-1, 1]上正交)
def legendre_poly(xi, n):
    """计算n阶Legendre多项式"""
    if n == 0:
        return np.ones_like(xi)
    elif n == 1:
        return xi
    elif n == 2:
        return 0.5 * (3*xi**2 - 1)
    elif n == 3:
        return 0.5 * (5*xi**3 - 3*xi)
    elif n == 4:
        return 0.125 * (35*xi**4 - 30*xi**2 + 3)
    else:
        # 使用递推公式
        P = [np.ones_like(xi), xi]
        for i in range(2, n+1):
            P_next = ((2*i-1)*xi*P[i-1] - (i-1)*P[i-2]) / i
            P.append(P_next)
        return P[n]

# 多项式混沌展开阶数
max_order = 4
n_terms = max_order + 1

print(f"\nPCE展开阶数: {max_order}")
print(f"多项式项数: {n_terms}")

# 使用高斯-勒让德积分点计算PCE系数
# 在[-1, 1]上的高斯积分点
from numpy.polynomial.legendre import leggauss
n_quad = 10  # 积分点数
xi_quad, w_quad = leggauss(n_quad)

# 计算PCE系数
# δ(E) = Σ c_i * P_i(ξ(E))
# c_i = ∫ δ(E(ξ)) * P_i(ξ) * w(ξ) dξ / ∫ P_i²(ξ) * w(ξ) dξ
# ξ 在[-1,1]上均匀,密度 w(ξ)=0.5 在分子分母中约去,故下面直接用权函数为 1 的 Gauss-Legendre 积分

coefficients = []
for i in range(n_terms):
    # 计算分子: ∫ δ * P_i * w dξ
    numerator = 0
    for j in range(n_quad):
        xi = xi_quad[j]
        w = w_quad[j]
        E = xi_to_E(xi)
        delta = P * L**3 / (3 * E * I)
        P_i = legendre_poly(xi, i)
        numerator += delta * P_i * w
    
    # 计算分母: ∫ P_i² * w dξ
    # Legendre多项式的正交性: ∫_{-1}^{1} P_m(x) P_n(x) dx = 2/(2n+1) δ_{mn}
    denominator = 2 / (2*i + 1)
    
    c_i = numerator / denominator
    coefficients.append(c_i)
    print(f"  c_{i} = {c_i*1000:.6f} mm")

# 使用PCE计算统计特性
# 均值 = c_0
mean_pce = coefficients[0]

# 方差 = Σ c_i² * E[P_i²] (i > 0),其中 E[P_i²] = ∫ P_i² · 0.5 dξ = 1/(2i+1)
variance_pce = 0
for i in range(1, n_terms):
    norm_sq = 1 / (2*i + 1)
    variance_pce += coefficients[i]**2 * norm_sq

std_pce = np.sqrt(variance_pce)

print(f"\nPCE统计特性:")
print(f"  均值: {mean_pce*1000:.4f} mm")
print(f"  标准差: {std_pce*1000:.4f} mm")
print(f"  变异系数: {std_pce/mean_pce*100:.2f}%")

# 蒙特卡洛验证
n_mc = 100000
np.random.seed(42)
E_mc = np.random.uniform(E_min, E_max, n_mc)
delta_mc = P * L**3 / (3 * E_mc * I)

mean_mc = np.mean(delta_mc)
std_mc = np.std(delta_mc)

print(f"\n蒙特卡洛验证 (N={n_mc}):")
print(f"  均值: {mean_mc*1000:.4f} mm")
print(f"  标准差: {std_mc*1000:.4f} mm")
print(f"  变异系数: {std_mc/mean_mc*100:.2f}%")

# PCE响应面
def pce_response(xi, coeffs):
    """计算PCE响应"""
    result = 0
    for i, c in enumerate(coeffs):
        result += c * legendre_poly(xi, i)
    return result

# 可视化
fig, axes = plt.subplots(2, 2, figsize=(14, 10))

# 1. PCE响应面 vs 真实响应
ax1 = axes[0, 0]
xi_plot = np.linspace(-1, 1, 100)
E_plot = xi_to_E(xi_plot)
delta_true = P * L**3 / (3 * E_plot * I)
delta_pce = pce_response(xi_plot, coefficients)

ax1.plot(E_plot/1e9, delta_true*1000, 'b-', linewidth=2, label='True Response')
ax1.plot(E_plot/1e9, delta_pce*1000, 'r--', linewidth=2, label=f'PCE (order {max_order})')
ax1.set_xlabel('Elastic Modulus E (GPa)', fontsize=11)
ax1.set_ylabel('Tip Displacement δ (mm)', fontsize=11)
ax1.set_title('PCE Approximation vs True Response', fontsize=12, fontweight='bold')
ax1.legend()
ax1.grid(True, alpha=0.3)

# 2. PCE系数
ax2 = axes[0, 1]
orders = list(range(n_terms))
ax2.bar(orders, np.array(coefficients)*1000, color='steelblue', edgecolor='black', linewidth=1.5)
ax2.set_xlabel('Polynomial Order', fontsize=11)
ax2.set_ylabel('PCE Coefficient (mm)', fontsize=11)
ax2.set_title('PCE Coefficients', fontsize=12, fontweight='bold')
ax2.grid(True, alpha=0.3, axis='y')

# 3. 收敛性分析
ax3 = axes[1, 0]
orders_test = range(1, 8)
mean_errors = []
std_errors = []
for order in orders_test:
    n_terms_test = order + 1
    coeffs_test = []
    for i in range(n_terms_test):
        numerator = 0
        for j in range(n_quad):
            xi = xi_quad[j]
            w = w_quad[j]
            E = xi_to_E(xi)
            delta = P * L**3 / (3 * E * I)
            P_i = legendre_poly(xi, i)
            numerator += delta * P_i * w
        denominator = 2 / (2*i + 1)
        c_i = numerator / denominator
        coeffs_test.append(c_i)
    
    mean_test = coeffs_test[0]
    variance_test = sum(coeffs_test[i]**2 * 1/(2*i+1) for i in range(1, n_terms_test))  # E[P_i²] = 1/(2i+1)
    std_test = np.sqrt(variance_test)
    
    mean_errors.append(abs(mean_test - mean_mc) / mean_mc * 100)
    std_errors.append(abs(std_test - std_mc) / std_mc * 100)

ax3.semilogy(orders_test, mean_errors, 'b-o', linewidth=2, markersize=8, label='Mean Error (%)')
ax3.semilogy(orders_test, std_errors, 'r-s', linewidth=2, markersize=8, label='Std Error (%)')
ax3.set_xlabel('PCE Order', fontsize=11)
ax3.set_ylabel('Relative Error (%)', fontsize=11)
ax3.set_title('PCE Convergence', fontsize=12, fontweight='bold')
ax3.legend()
ax3.grid(True, alpha=0.3, which='both')

# 4. 概率密度函数对比
ax4 = axes[1, 1]
# 使用PCE生成样本
n_pce_samples = 10000
xi_samples = np.random.uniform(-1, 1, n_pce_samples)
delta_pce_samples = pce_response(xi_samples, coefficients)

ax4.hist(delta_mc*1000, bins=50, density=True, alpha=0.5, label='Monte Carlo', color='blue')
ax4.hist(delta_pce_samples*1000, bins=50, density=True, alpha=0.5, label='PCE', color='red')
ax4.axvline(mean_mc*1000, color='blue', linestyle='--', linewidth=2, label=f'MC Mean={mean_mc*1000:.3f}')
ax4.axvline(mean_pce*1000, color='red', linestyle='--', linewidth=2, label=f'PCE Mean={mean_pce*1000:.3f}')
ax4.set_xlabel('Tip Displacement δ (mm)', fontsize=11)
ax4.set_ylabel('Probability Density', fontsize=11)
ax4.set_title('PDF Comparison: MC vs PCE', fontsize=12, fontweight='bold')
ax4.legend()
ax4.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('case4_pce_method.png', dpi=150, bbox_inches='tight')
plt.close()
print("\n✓ 案例四图像已保存")

案例说明

  • 展示了PCE方法的基本实现流程
  • 使用Legendre多项式处理均匀分布随机变量
  • 通过高斯积分计算展开系数
  • 收敛性分析验证了PCE的高精度

案例五:敏感性分析与参数重要性排序

问题描述:使用Sobol指数和Morris方法进行全局敏感性分析。

Python实现

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

print("\n" + "=" * 60)
print("案例五:敏感性分析与参数重要性排序")
print("=" * 60)

# 问题参数: 悬臂梁端部位移
# δ = PL³/(3EI), I = bh³/12
# 随机变量: P, L, b, h, E

def cantilever_disp(params):
    """计算悬臂梁端部位移"""
    P, L, b, h, E = params
    I = b * h**3 / 12
    return P * L**3 / (3 * E * I)

# 随机变量统计特性
# [均值, 标准差]
params_info = {
    'P': [1000, 100],      # 载荷 (N)
    'L': [1.0, 0.05],      # 长度 (m)
    'b': [0.05, 0.0025],   # 宽度 (m)
    'h': [0.1, 0.005],     # 高度 (m)
    'E': [70e9, 7e9]       # 弹性模量 (Pa)
}

param_names = list(params_info.keys())
n_params = len(param_names)

print("随机变量统计特性:")
for name, (mu, sigma) in params_info.items():
    print(f"  {name}: μ={mu:.4f}, σ={sigma:.4f}, CoV={sigma/mu*100:.1f}%")

# Morris方法(筛选方法)
def morris_method(n_trajectories=50, delta=0.1):
    """
    Morris方法计算基本效应
    n_trajectories: 轨迹数
    delta: 扰动大小
    """
    mu_star = np.zeros(n_params)   # 基本效应绝对值的均值
    sigma_ee = np.zeros(n_params)  # 基本效应的标准差(不命名为sigma,避免与参数σ发生变量遮蔽)
    
    elementary_effects = [[] for _ in range(n_params)]
    
    for _ in range(n_trajectories):
        # 随机起点(在[0,1]区间)
        x_base = np.random.rand(n_params)
        
        # 依次扰动每个参数
        for i in range(n_params):
            x = x_base.copy()
            x_perturbed = x.copy()
            # 确保扰动后仍在[0,1]内
            if x[i] + delta <= 1:
                x_perturbed[i] += delta
            else:
                x_perturbed[i] -= delta
            
            # 映射到物理空间(假设正态分布,使用逆变换)
            params_base = []
            params_perturbed = []
            for j, name in enumerate(param_names):
                mu_j, sigma_j = params_info[name]
                params_base.append(stats.norm.ppf(x[j], mu_j, sigma_j))
                params_perturbed.append(stats.norm.ppf(x_perturbed[j], mu_j, sigma_j))
            
            # 计算基本效应
            y_base = cantilever_disp(params_base)
            y_pert = cantilever_disp(params_perturbed)
            
            # 以变异系数归一化基本效应,便于不同量纲参数间比较
            mu_i, sigma_i = params_info[param_names[i]]
            ee = (y_pert - y_base) / delta * sigma_i / mu_i
            elementary_effects[i].append(ee)
    
    for i in range(n_params):
        mu_star[i] = np.mean(np.abs(elementary_effects[i]))
        sigma_ee[i] = np.std(elementary_effects[i])
    
    return mu_star, sigma_ee

print("\nMorris方法分析:")
mu_star, sigma_ee = morris_method(n_trajectories=100)
for i, name in enumerate(param_names):
    print(f"  {name}: μ*={mu_star[i]:.4f}, σ={sigma_ee[i]:.4f}")

# 简单Sobol指数估计(基于蒙特卡洛)
def estimate_sobol_indices(n_samples=5000):
    """
    估计一阶Sobol指数
    使用Saltelli采样方案
    """
    np.random.seed(42)
    
    # 生成两个独立样本矩阵
    A = np.random.rand(n_samples, n_params)
    B = np.random.rand(n_samples, n_params)
    
    # 计算响应
    Y_A = np.zeros(n_samples)
    Y_B = np.zeros(n_samples)
    
    for k in range(n_samples):
        params_A = [stats.norm.ppf(A[k, j], *params_info[name]) for j, name in enumerate(param_names)]
        params_B = [stats.norm.ppf(B[k, j], *params_info[name]) for j, name in enumerate(param_names)]
        Y_A[k] = cantilever_disp(params_A)
        Y_B[k] = cantilever_disp(params_B)
    
    # 总方差
    total_var = np.var(np.concatenate([Y_A, Y_B]))
    
    # 一阶Sobol指数
    S1 = np.zeros(n_params)
    
    for i in range(n_params):
        # 生成A_Bi矩阵(A的第i列被B的第i列替换)
        A_Bi = A.copy()
        A_Bi[:, i] = B[:, i]
        
        Y_ABi = np.zeros(n_samples)
        for k in range(n_samples):
            params_ABi = [stats.norm.ppf(A_Bi[k, j], *params_info[name]) for j, name in enumerate(param_names)]
            Y_ABi[k] = cantilever_disp(params_ABi)
        
        # 一阶效应
        S1[i] = np.mean(Y_B * (Y_ABi - Y_A)) / total_var
    
    return S1, total_var

print("\nSobol指数分析:")
S1, total_variance = estimate_sobol_indices(n_samples=3000)
for i, name in enumerate(param_names):
    print(f"  {name}: S1={S1[i]:.4f}")
print(f"  总方差: {total_variance:.6e}")

# 可视化
fig, axes = plt.subplots(2, 2, figsize=(14, 10))

# 1. Morris方法结果(μ* vs σ)
ax1 = axes[0, 0]
colors = ['red', 'blue', 'green', 'orange', 'purple']
for i, name in enumerate(param_names):
    ax1.scatter(sigma_ee[i], mu_star[i], s=200, c=colors[i], label=name, edgecolors='black', linewidths=1.5)
ax1.set_xlabel('σ (Standard Deviation of EE)', fontsize=11)
ax1.set_ylabel('μ* (Mean of |EE|)', fontsize=11)
ax1.set_title('Morris Method: Elementary Effects', fontsize=12, fontweight='bold')
ax1.legend()
ax1.grid(True, alpha=0.3)

# 2. Sobol指数排序
ax2 = axes[0, 1]
sorted_indices = np.argsort(S1)[::-1]
sorted_names = [param_names[i] for i in sorted_indices]
sorted_S1 = S1[sorted_indices]
bars = ax2.barh(range(n_params), sorted_S1, color=[colors[i] for i in sorted_indices], edgecolor='black', linewidth=1.5)
ax2.set_yticks(range(n_params))
ax2.set_yticklabels(sorted_names)
ax2.set_xlabel('First-order Sobol Index S1', fontsize=11)
ax2.set_title('Parameter Importance (Sobol Index)', fontsize=12, fontweight='bold')
ax2.grid(True, alpha=0.3, axis='x')
for i, (bar, val) in enumerate(zip(bars, sorted_S1)):
    ax2.text(val + 0.01, bar.get_y() + bar.get_height()/2, f'{val:.3f}', va='center', fontsize=10)

# 3. 参数敏感性对比
ax3 = axes[1, 0]
x_pos = np.arange(n_params)
width = 0.35
bars1 = ax3.bar(x_pos - width/2, mu_star/np.sum(mu_star), width, label='Morris (normalized μ*)', color='steelblue', edgecolor='black')
bars2 = ax3.bar(x_pos + width/2, S1, width, label='Sobol S1', color='coral', edgecolor='black')
ax3.set_xticks(x_pos)
ax3.set_xticklabels(param_names)
ax3.set_ylabel('Sensitivity Index', fontsize=11)
ax3.set_title('Sensitivity Comparison: Morris vs Sobol', fontsize=12, fontweight='bold')
ax3.legend()
ax3.grid(True, alpha=0.3, axis='y')

# 4. Tornado图(局部敏感性)
ax4 = axes[1, 1]
# 计算局部敏感性系数(在均值处)
base_params = [params_info[name][0] for name in param_names]
base_response = cantilever_disp(base_params)

local_sens = []
for i, name in enumerate(param_names):
    mu, sigma = params_info[name]
    perturbed_params = base_params.copy()
    perturbed_params[i] = mu + sigma  # +1σ扰动
    pert_response = cantilever_disp(perturbed_params)
    sens = (pert_response - base_response) / base_response * 100  # 百分比变化
    local_sens.append(sens)

# 绘制tornado图
y_pos = np.arange(n_params)
colors_tornado = ['red' if s > 0 else 'blue' for s in local_sens]
bars = ax4.barh(y_pos, local_sens, color=colors_tornado, edgecolor='black', linewidth=1.5)
ax4.set_yticks(y_pos)
ax4.set_yticklabels(param_names)
ax4.set_xlabel('Response Change (%)', fontsize=11)
ax4.set_title('Tornado Diagram (Local Sensitivity)', fontsize=12, fontweight='bold')
ax4.axvline(x=0, color='black', linewidth=1)
ax4.grid(True, alpha=0.3, axis='x')
for i, (bar, val) in enumerate(zip(bars, local_sens)):
    ax4.text(val + (0.5 if val > 0 else -0.5), bar.get_y() + bar.get_height()/2, f'{val:.1f}%', 
             va='center', ha='left' if val > 0 else 'right', fontsize=9)

plt.tight_layout()
plt.savefig('case5_sensitivity_analysis.png', dpi=150, bbox_inches='tight')
plt.close()
print("\n✓ 案例五图像已保存")

案例说明

  • 实现了Morris方法进行参数筛选
  • 使用Sobol指数进行全局敏感性分析
  • 通过tornado图展示局部敏感性
  • 结果可用于识别关键设计参数和优化方向

8. 结果分析与讨论

8.1 案例结果总结

通过本主题的五个案例,我们系统展示了随机有限元方法的核心内容:

案例一(蒙特卡洛分析)

  • 实现了蒙特卡洛模拟的基本流程
  • 获得了位移响应的完整概率分布
  • 验证了样本量对收敛性的影响
  • 95%置信区间为设计提供了概率边界

案例二(多变量相关性)

  • 展示了多随机变量的联合不确定性传播
  • 分析了相关性对响应变异性的影响
  • 正相关会增大响应的不确定性
  • 敏感性分析识别了高度h为最关键参数

案例三(FORM可靠性分析)

  • 实现了FORM算法的完整流程
  • 找到了最可能失效点(设计点)
  • 计算了可靠度指标β和失效概率Pf
  • 与蒙特卡洛方法对比验证了FORM的精度

案例四(PCE方法)

  • 使用Legendre多项式构建PCE模型
  • 通过少量配点计算获得了高精度统计特性
  • 展示了PCE的谱收敛特性
  • PCE方法比蒙特卡洛高效2-3个数量级

案例五(敏感性分析)

  • 应用Morris方法进行参数筛选
  • 使用Sobol指数量化各参数的重要性
  • 识别了高度h和弹性模量E为最关键参数
  • 为设计优化提供了明确方向

8.2 方法对比与选择

| 方法 | 精度 | 计算成本 | 适用场景 |
| --- | --- | --- | --- |
| 蒙特卡洛 | 极高 | 高 | 通用方法、验证基准 |
| 摄动法 | 低 | 低 | 小不确定性、快速估算 |
| PCE | 高 | 中 | 中等维度、光滑响应 |
| FORM | 中 | 低 | 可靠性分析、小失效概率 |
| 随机配点 | 高 | 中 | 高维问题、稀疏网格 |

8.3 工程应用建议

初步设计阶段

  • 使用摄动法或FORM进行快速不确定性评估
  • 识别关键不确定性来源
  • 进行初步可靠性筛选

详细设计阶段

  • 使用蒙特卡洛进行精确验证
  • 建立PCE代理模型用于优化
  • 进行全面的敏感性分析

优化设计阶段

  • 结合PCE代理模型进行鲁棒优化
  • 考虑可靠性约束
  • 使用重要性抽样评估小失效概率
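
重要性抽样评估小失效概率的思想可用标准正态空间中的一个线性极限状态做最小演示(β值与提议分布均为示例假设):将抽样密度中心移到设计点附近,再用似然比加权校正。

```python
import numpy as np
from scipy import stats

# 重要性抽样(IS)估计小失效概率的最小示意
# 假设极限状态 g(u) = beta - u,失效域 u > beta,精确解 Pf = Φ(-beta)
beta = 4.0
rng = np.random.default_rng(0)
n = 5000

# 提议分布:中心移到设计点 u* = beta 的标准正态
u = rng.normal(beta, 1.0, n)
# 似然比权重 w = φ(u) / φ(u - beta)
w = stats.norm.pdf(u) / stats.norm.pdf(u, loc=beta)
Pf_is = np.mean((u > beta) * w)

Pf_exact = stats.norm.cdf(-beta)
print(f"IS估计 Pf = {Pf_is:.3e}, 精确值 Pf = {Pf_exact:.3e}")
```

同样5000个样本的直接蒙特卡洛在Pf≈3×10⁻⁵时通常一次失效也观测不到,而重要性抽样已能给出百分之几量级的相对误差。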

9. 高级主题与前沿研究

9.1 高维不确定性量化

维数灾难问题

传统方法在高维问题(>10维)中面临挑战,需要采用:

  • 稀疏网格技术
  • 降维方法(PCA、K-L展开)
  • 机器学习代理模型
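
其中K-L展开可以用协方差矩阵的特征分解做一个数值示意(一维指数协方差随机场,场参数为示例假设):保留少数主导模态即可表示大部分方差。

```python
import numpy as np

# 一维随机场的离散Karhunen-Loeve展开(最小示意)
# 假设指数协方差 C(x1,x2) = sigma_f^2 * exp(-|x1-x2|/lc)
n = 100
x = np.linspace(0, 1, n)
sigma_f, lc = 1.0, 0.2          # 场标准差与相关长度(示例假设值)

C = sigma_f**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / lc)

# 对称协方差矩阵用eigh做特征分解,并按特征值降序排列
eigvals, eigvecs = np.linalg.eigh(C)
idx = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]

# 保留解释99%方差的前m个模态
energy = np.cumsum(eigvals) / np.sum(eigvals)
m = int(np.searchsorted(energy, 0.99) + 1)
print(f"解释99%方差所需模态数: {m} / {n}")

# 用m个独立标准正态变量生成一个场样本
rng = np.random.default_rng(0)
xi = rng.standard_normal(m)
field = eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)
print(f"场样本标准差(应接近σ_f): {field.std():.3f}")
```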

稀疏多项式混沌

通过压缩感知技术,从大量基函数中选择重要项:

y(\xi) = \sum_{i \in \mathcal{J}} c_i \Psi_i(\xi)

其中 \mathcal{J} 为稀疏索引集。
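
下面给出识别稀疏索引集的一个简化示意:用最小二乘回归加系数阈值筛选代替严格的L1压缩感知求解器(响应的稀疏真值系数为假设,仅作演示):

```python
import numpy as np
from numpy.polynomial.legendre import legvander, legval

rng = np.random.default_rng(42)

# 假设的稀疏真值:响应只含P0、P2、P5三个Legendre模态
c_true = np.array([1.0, 0.0, 0.5, 0.0, 0.0, 0.1])

n_samp, max_deg = 200, 10
xi = rng.uniform(-1, 1, n_samp)
y = legval(xi, c_true)

Phi = legvander(xi, max_deg)                  # 设计矩阵,列为P_0..P_10
c_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# 阈值筛选:保留幅值超过最大系数1%的项,作为稀疏索引集J
J = np.where(np.abs(c_hat) > 0.01 * np.abs(c_hat).max())[0]
print("识别出的稀疏索引集 J =", J.tolist())
```

对无噪声数据,回归精确恢复三个非零模态,阈值筛选即得到正确的索引集。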

9.2 深度学习在UQ中的应用

神经网络代理模型

使用深度神经网络替代传统代理模型:

y = f_{NN}(\mathbf{x}; \theta)

物理信息神经网络(PINN)

将物理方程嵌入神经网络损失函数:

\mathcal{L} = \mathcal{L}_{data} + \lambda \mathcal{L}_{physics}
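
损失组合的思想不依赖具体网络结构:下面用二次试探函数代替神经网络,在参数网格上最小化"数据损失+物理残差损失"(纯示意,目标方程、数据与试探函数均为假设,并非真实PINN实现):

```python
import numpy as np

# PINN式组合损失的最小示意
# 目标方程 u''(x) = 2,观测数据来自真实解 u(x) = x^2
# 试探函数 u(x) = a*x^2 + b*x 代替神经网络
x_data = np.array([0.2, 0.5, 0.8])
u_data = x_data**2
x_col = np.linspace(0, 1, 11)          # 物理残差配点

def total_loss(a, b, lam=1.0):
    u_pred = a * x_data**2 + b * x_data
    loss_data = np.mean((u_pred - u_data)**2)
    # 二次试探函数的残差 u''-2 = 2a-2 为常数,配点平均退化为常数平方
    loss_phys = np.mean((2*a - 2.0)**2 * np.ones_like(x_col))
    return loss_data + lam * loss_phys

# 在参数网格上最小化总损失(以网格搜索代替梯度下降)
aa, bb = np.meshgrid(np.linspace(0, 2, 41), np.linspace(-1, 1, 41))
losses = np.vectorize(total_loss)(aa, bb)
i_min = np.unravel_index(np.argmin(losses), losses.shape)
a_best, b_best = aa[i_min], bb[i_min]
print(f"识别参数: a={a_best:.2f}, b={b_best:.2f}(真值 a=1, b=0)")
```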

生成模型

使用VAE、GAN等生成模型学习输入-输出的联合分布。

9.3 贝叶斯不确定性量化

贝叶斯模型更新

结合先验知识和观测数据更新模型参数:

p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)}
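
对正态先验与已知方差的正态似然,后验有解析的共轭形式,可按上式直接计算(先验、噪声水平与观测值均为假设的示例数据):

```python
import numpy as np

# 贝叶斯更新的共轭示例:正态先验 + 已知方差的正态似然
mu0, sigma0 = 70.0, 5.0                        # 先验: E ~ N(70, 5^2) GPa
sigma_obs = 2.0                                # 假设的观测噪声标准差 (GPa)
data = np.array([72.1, 71.4, 73.0, 72.5])      # 假想的弹性模量测量值 (GPa)

n = len(data)
prec_post = 1/sigma0**2 + n/sigma_obs**2       # 后验精度 = 先验精度 + 数据精度
sigma_post = np.sqrt(1/prec_post)
mu_post = (mu0/sigma0**2 + data.sum()/sigma_obs**2) / prec_post

print(f"后验: μ={mu_post:.3f} GPa, σ={sigma_post:.3f} GPa(先验σ={sigma0} GPa)")
```

可见数据将后验均值拉向观测均值,并显著缩小不确定性。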

马尔可夫链蒙特卡洛(MCMC)

用于从复杂后验分布中抽样:

  • Metropolis-Hastings算法
  • Gibbs抽样
  • Hamiltonian Monte Carlo
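
随机游走Metropolis-Hastings的一个最小NumPy实现如下(目标分布取标准正态,仅作算法示意):

```python
import numpy as np

# 随机游走Metropolis-Hastings的最小实现
def log_target(theta):
    return -0.5 * theta**2                 # 未归一化对数密度,∝ N(0,1)

rng = np.random.default_rng(0)
n_iter, step = 20000, 1.0
samples = np.empty(n_iter)
theta = 0.0
n_accept = 0
for k in range(n_iter):
    prop = theta + step * rng.standard_normal()        # 对称随机游走提议
    # 对称提议下接受概率为 min(1, p(prop)/p(theta))
    if np.log(rng.random()) < log_target(prop) - log_target(theta):
        theta = prop
        n_accept += 1
    samples[k] = theta

burn = samples[5000:]                      # 丢弃预烧期
print(f"接受率: {n_accept/n_iter:.2f}, 样本均值: {burn.mean():.3f}, 样本标准差: {burn.std():.3f}")
```

实际贝叶斯标定中只需把 log_target 换成"对数先验 + 对数似然"即可。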

9.4 多保真度方法

多保真度建模

结合高精度(高成本)和低精度(低成本)模型:

y_{HF}(\mathbf{x}) = y_{LF}(\mathbf{x}) + \delta(\mathbf{x})

协克里金

利用高斯过程建立多保真度代理模型。
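
加性修正 δ(x) 的构造可用少量高保真样本演示(高/低保真函数为假设的解析示例,并以多项式拟合代替高斯过程):

```python
import numpy as np

# 加性修正多保真度建模的最小示意
def y_hf(x):                       # 高保真模型(假设昂贵,只能少量调用)
    return np.sin(2*np.pi*x) + 0.2*x**2

def y_lf(x):                       # 低保真模型(便宜,但带系统偏差)
    return np.sin(2*np.pi*x)

# 仅用5个高保真样本拟合差值 δ(x) = y_HF - y_LF
x_hf = np.linspace(0, 1, 5)
delta_coef = np.polyfit(x_hf, y_hf(x_hf) - y_lf(x_hf), deg=2)

def y_mf(x):                       # 多保真预测 = 低保真 + 修正项
    return y_lf(x) + np.polyval(delta_coef, x)

x_test = np.linspace(0, 1, 101)
err_lf = np.max(np.abs(y_lf(x_test) - y_hf(x_test)))
err_mf = np.max(np.abs(y_mf(x_test) - y_hf(x_test)))
print(f"低保真最大误差: {err_lf:.4f}, 多保真最大误差: {err_mf:.2e}")
```

只用5次高保真调用,修正后的代理就几乎消除了低保真模型的系统偏差。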


10. 结论

随机有限元与不确定性量化为工程结构的可靠性分析和鲁棒设计提供了强大的理论框架和计算工具。通过本主题的学习,我们掌握了:

  1. 不确定性分类与建模:理解偶然不确定性和认知不确定性的区别,能够选择合适的概率分布描述工程不确定性。

  2. 蒙特卡洛方法:掌握大样本随机抽样的基本流程,能够评估响应的统计特性和收敛性。

  3. 谱方法(PCE):理解多项式混沌展开的原理,能够构建高效的代理模型进行不确定性传播。

  4. 可靠性分析:掌握FORM/SORM方法,能够计算结构的失效概率和可靠度指标。

  5. 敏感性分析:能够使用Morris方法和Sobol指数识别关键参数,为设计优化提供指导。

  6. 工程应用:能够将不确定性量化方法应用于实际工程问题,进行可靠性评估和鲁棒设计。

随着计算能力的提升和机器学习技术的发展,不确定性量化方法将在更多领域发挥关键作用,特别是在数字孪生、智能设计和风险决策等领域。


参考文献

  1. Ghanem, R. G., & Spanos, P. D. (1991). Stochastic Finite Elements: A Spectral Approach. Springer.

  2. Sudret, B. (2008). Global sensitivity analysis using polynomial chaos expansions. Reliability Engineering & System Safety, 93(7), 964-979.

  3. Xiu, D., & Karniadakis, G. E. (2002). The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM Journal on Scientific Computing, 24(2), 619-644.

  4. Saltelli, A., et al. (2008). Global Sensitivity Analysis: The Primer. Wiley.

  5. Melchers, R. E., & Beck, A. T. (2018). Structural Reliability Analysis and Prediction. John Wiley & Sons.

  6. Smith, R. C. (2013). Uncertainty Quantification: Theory, Implementation, and Applications. SIAM.

  7. Sullivan, T. J. (2015). Introduction to Uncertainty Quantification. Springer.

  8. Nannapaneni, S., & Mahadevan, S. (2020). Reliability Analysis and Prediction with Reliability Digital Twins. Wiley.


附录:关键公式汇总

A.1 蒙特卡洛估计

\mu_Y \approx \frac{1}{N} \sum_{i=1}^{N} y_i, \qquad \sigma_Y^2 \approx \frac{1}{N-1} \sum_{i=1}^{N} (y_i - \mu_Y)^2

A.2 多项式混沌展开

Y(\xi) = \sum_{i=0}^{P} y_i \Psi_i(\xi), \qquad E[Y] = y_0, \qquad \mathrm{Var}[Y] = \sum_{i=1}^{P} y_i^2\, E[\Psi_i^2]

A.3 FORM可靠度指标

\beta = \min_{g(\mathbf{u})=0} \|\mathbf{u}\|, \qquad P_f \approx \Phi(-\beta)

A.4 Sobol指数

S_i = \frac{\mathrm{Var}_{X_i}\!\left(E_{\mathbf{X}_{\sim i}}[Y \mid X_i]\right)}{\mathrm{Var}[Y]}, \qquad \sum_{i} S_i \leq 1

A.5 变异系数

\mathrm{CoV} = \frac{\sigma}{\mu}, \qquad N_{MC} \geq \left(\frac{z_{\alpha/2} \cdot \mathrm{CoV}}{\epsilon}\right)^2
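
按A.5的样本量公式可直接估算达到给定均值精度所需的蒙特卡洛样本数(CoV、容差与置信水平均为示例假设):

```python
import numpy as np
from scipy import stats

# 由 N >= (z_{alpha/2} * CoV / eps)^2 估算蒙特卡洛样本量
cov = 0.10            # 假设响应变异系数为10%
eps = 0.01            # 均值的相对误差容限(1%)
alpha = 0.05          # 置信水平95%
z = stats.norm.ppf(1 - alpha/2)        # 双侧分位数,约1.96

N_req = int(np.ceil((z * cov / eps)**2))
print(f"所需样本量 N >= {N_req}")
```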

# -*- coding: utf-8 -*-
"""
主题018:随机有限元与不确定性量化
运行所有案例生成仿真图像
"""
import matplotlib
matplotlib.use('Agg')
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.optimize import minimize
from numpy.polynomial.legendre import leggauss
import warnings
warnings.filterwarnings('ignore')
import os

output_dir = r'd:\文档\强度仿真\主题018'
os.makedirs(output_dir, exist_ok=True)

plt.rcParams['font.sans-serif'] = ['SimHei', 'DejaVu Sans']
plt.rcParams['axes.unicode_minus'] = False

print("=" * 60)
print("主题018:随机有限元与不确定性量化")
print("=" * 60)

# ==================== 案例一:材料参数不确定性的蒙特卡洛分析 ====================

print("\n【案例一:材料参数不确定性的蒙特卡洛分析】")
print("-" * 50)

L = 1.0
b = 0.05
h = 0.1
P = 1000
I = b * h**3 / 12

E_mean = 70e9
E_cov = 0.1
E_std = E_mean * E_cov

print(f"梁参数: L={L}m, b={b}m, h={h}m")
print(f"载荷: P={P}N")
print(f"弹性模量: μ={E_mean/1e9:.2f} GPa, CoV={E_cov*100:.1f}%")

n_samples = 10000
np.random.seed(42)
E_samples = np.random.normal(E_mean, E_std, n_samples)
delta_samples = P * L**3 / (3 * E_samples * I)

delta_mean = np.mean(delta_samples)
delta_std = np.std(delta_samples)
delta_cov = delta_std / delta_mean

print(f"\n蒙特卡洛模拟结果 (N={n_samples}):")
print(f"位移均值: {delta_mean*1000:.4f} mm")
print(f"位移标准差: {delta_std*1000:.4f} mm")
print(f"位移变异系数: {delta_cov*100:.2f}%")

delta_deterministic = P * L**3 / (3 * E_mean * I)
print(f"\n确定性分析结果:")
print(f"位移: {delta_deterministic*1000:.4f} mm")

ci_lower = np.percentile(delta_samples, 2.5)
ci_upper = np.percentile(delta_samples, 97.5)
print(f"\n95%置信区间: [{ci_lower*1000:.4f}, {ci_upper*1000:.4f}] mm")

fig, axes = plt.subplots(2, 2, figsize=(14, 10))

ax1 = axes[0, 0]
count, bins, patches = ax1.hist(E_samples/1e9, bins=50, density=True, alpha=0.7, color='steelblue', edgecolor='black')
x_pdf = np.linspace(E_samples.min(), E_samples.max(), 100)
pdf = stats.norm.pdf(x_pdf, E_mean, E_std)
ax1.plot(x_pdf/1e9, pdf*1e9, 'r-', linewidth=2, label='PDF')
ax1.axvline(E_mean/1e9, color='red', linestyle='--', linewidth=2, label=f'Mean={E_mean/1e9:.1f} GPa')
ax1.set_xlabel('E (GPa)', fontsize=11)
ax1.set_ylabel('PDF', fontsize=11)
ax1.set_title('Distribution of Elastic Modulus', fontsize=12, fontweight='bold')
ax1.legend()
ax1.grid(True, alpha=0.3)

ax2 = axes[0, 1]
count, bins, patches = ax2.hist(delta_samples*1000, bins=50, density=True, alpha=0.7, color='orange', edgecolor='black')
ax2.axvline(delta_mean*1000, color='red', linestyle='--', linewidth=2, label=f'Mean={delta_mean*1000:.3f} mm')
ax2.axvline(ci_lower*1000, color='green', linestyle=':', linewidth=2, label=f'95% CI')
ax2.axvline(ci_upper*1000, color='green', linestyle=':', linewidth=2)
ax2.set_xlabel('δ (mm)', fontsize=11)
ax2.set_ylabel('PDF', fontsize=11)
ax2.set_title('Distribution of Tip Displacement', fontsize=12, fontweight='bold')
ax2.legend()
ax2.grid(True, alpha=0.3)

ax3 = axes[1, 0]
ax3.scatter(E_samples[::50]/1e9, delta_samples[::50]*1000, alpha=0.5, s=10, c='blue')
ax3.plot(E_samples/1e9, P*L**3/(3*E_samples*I)*1000, 'r-', linewidth=2, label='Theoretical')
ax3.set_xlabel('E (GPa)', fontsize=11)
ax3.set_ylabel('δ (mm)', fontsize=11)
ax3.set_title('E-δ Relationship', fontsize=12, fontweight='bold')
ax3.legend()
ax3.grid(True, alpha=0.3)

ax4 = axes[1, 1]
sample_sizes = np.logspace(2, 4, 20).astype(int)
means = []
stds = []
for n in sample_sizes:
    means.append(np.mean(delta_samples[:n])*1000)
    stds.append(np.std(delta_samples[:n])*1000)
ax4.semilogx(sample_sizes, means, 'b-', linewidth=2, label='Mean')
ax4.axhline(delta_mean*1000, color='red', linestyle='--', linewidth=2)
ax4.fill_between(sample_sizes, np.array(means)-np.array(stds), np.array(means)+np.array(stds), alpha=0.3, color='blue')
ax4.set_xlabel('Number of Samples', fontsize=11)
ax4.set_ylabel('δ (mm)', fontsize=11)
ax4.set_title('Monte Carlo Convergence', fontsize=12, fontweight='bold')
ax4.legend()
ax4.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig(f'{output_dir}/case1_monte_carlo_material.png', dpi=150, bbox_inches='tight')
plt.close()
print("✓ 案例一图像已保存")


# ==================== 案例二:多变量相关性的不确定性传播 ====================

print("\n" + "=" * 60)
print("案例二:多变量相关性的不确定性传播")
print("=" * 60)

L = 2.0
P = 5000

E_mean, E_cov = 200e9, 0.08
b_mean, b_cov = 0.1, 0.05
h_mean, h_cov = 0.2, 0.05

E_std = E_mean * E_cov
b_std = b_mean * b_cov
h_std = h_mean * h_cov

print(f"随机变量统计特性:")
print(f"  E: μ={E_mean/1e9:.0f} GPa, CoV={E_cov*100:.0f}%")
print(f"  b: μ={b_mean*1000:.0f} mm, CoV={b_cov*100:.0f}%")
print(f"  h: μ={h_mean*1000:.0f} mm, CoV={h_cov*100:.0f}%")

correlation_cases = [
    ('Independent', 0.0),
    ('Weak Correlation', 0.3),
    ('Strong Correlation', 0.7)
]

results = {}

for case_name, rho in correlation_cases:
    print(f"\n{case_name} (ρ={rho}):")
    mean = [b_mean, h_mean]
    cov = [[b_std**2, rho*b_std*h_std],
           [rho*b_std*h_std, h_std**2]]
    
    n_samples = 5000
    np.random.seed(42)
    
    E_samples = np.random.normal(E_mean, E_std, n_samples)
    bh_samples = np.random.multivariate_normal(mean, cov, n_samples)
    b_samples = np.maximum(bh_samples[:, 0], 0.01)
    h_samples = np.maximum(bh_samples[:, 1], 0.01)
    
    I_samples = b_samples * h_samples**3 / 12
    delta_samples = P * L**3 / (3 * E_samples * I_samples)
    
    delta_mean = np.mean(delta_samples)
    delta_std = np.std(delta_samples)
    delta_cov = delta_std / delta_mean
    
    print(f"  位移均值: {delta_mean*1000:.4f} mm")
    print(f"  位移标准差: {delta_std*1000:.4f} mm")
    print(f"  位移变异系数: {delta_cov*100:.2f}%")
    
    results[case_name] = {
        'mean': delta_mean,
        'std': delta_std,
        'cov': delta_cov,
        'samples': delta_samples
    }

fig, axes = plt.subplots(2, 2, figsize=(14, 10))

ax1 = axes[0, 0]
colors = ['blue', 'green', 'red']
for (case_name, _), color in zip(correlation_cases, colors):
    data = results[case_name]['samples']
    ax1.hist(data*1000, bins=40, density=True, alpha=0.4, label=case_name, color=color)
    ax1.axvline(results[case_name]['mean']*1000, color=color, linestyle='--', linewidth=2)
ax1.set_xlabel('δ (mm)', fontsize=11)
ax1.set_ylabel('PDF', fontsize=11)
ax1.set_title('Displacement Distribution', fontsize=12, fontweight='bold')
ax1.legend()
ax1.grid(True, alpha=0.3)

ax2 = axes[0, 1]
rhos = np.linspace(-0.9, 0.9, 19)
covs = []
for rho in rhos:
    mean = [b_mean, h_mean]
    cov = [[b_std**2, rho*b_std*h_std],
           [rho*b_std*h_std, h_std**2]]
    np.random.seed(42)
    E_samples = np.random.normal(E_mean, E_std, 2000)
    bh_samples = np.random.multivariate_normal(mean, cov, 2000)
    b_samples = np.maximum(bh_samples[:, 0], 0.01)
    h_samples = np.maximum(bh_samples[:, 1], 0.01)
    I_samples = b_samples * h_samples**3 / 12
    delta_samples = P * L**3 / (3 * E_samples * I_samples)
    covs.append(np.std(delta_samples) / np.mean(delta_samples) * 100)

ax2.plot(rhos, covs, 'b-', linewidth=2, marker='o')
ax2.axvline(0, color='gray', linestyle='--', linewidth=1)
ax2.set_xlabel('ρ(b,h)', fontsize=11)
ax2.set_ylabel('CoV of δ (%)', fontsize=11)
ax2.set_title('Effect of Correlation', fontsize=12, fontweight='bold')
ax2.grid(True, alpha=0.3)

ax3 = axes[1, 0]
np.random.seed(42)
mean = [b_mean, h_mean]
cov = [[b_std**2, 0.7*b_std*h_std],
       [0.7*b_std*h_std, h_std**2]]
bh_samples = np.random.multivariate_normal(mean, cov, 1000)
ax3.scatter(bh_samples[:, 0]*1000, bh_samples[:, 1]*1000, alpha=0.5, s=20, c='purple')
ax3.set_xlabel('b (mm)', fontsize=11)
ax3.set_ylabel('h (mm)', fontsize=11)
ax3.set_title('Scatter Plot: b vs h (ρ=0.7)', fontsize=12, fontweight='bold')
ax3.grid(True, alpha=0.3)

ax4 = axes[1, 1]
variables = ['E', 'b', 'h']
np.random.seed(42)
mean = [b_mean, h_mean]
cov = [[b_std**2, 0.3*b_std*h_std],
       [0.3*b_std*h_std, h_std**2]]

E_base = np.random.normal(E_mean, E_std, 3000)
bh_base = np.random.multivariate_normal(mean, cov, 3000)
b_base = np.maximum(bh_base[:, 0], 0.01)
h_base = np.maximum(bh_base[:, 1], 0.01)
I_base = b_base * h_base**3 / 12
delta_base = P * L**3 / (3 * E_base * I_base)

perturbation = 0.01
sensitivities = []

E_pert = E_base * (1 + perturbation)
delta_E = P * L**3 / (3 * E_pert * I_base)
sens_E = np.mean((delta_E - delta_base) / delta_base) / perturbation
sensitivities.append(abs(sens_E))

b_pert = b_base * (1 + perturbation)
I_pert = b_pert * h_base**3 / 12
delta_b = P * L**3 / (3 * E_base * I_pert)
sens_b = np.mean((delta_b - delta_base) / delta_base) / perturbation
sensitivities.append(abs(sens_b))

h_pert = h_base * (1 + perturbation)
I_pert = b_base * h_pert**3 / 12
delta_h = P * L**3 / (3 * E_base * I_pert)
sens_h = np.mean((delta_h - delta_base) / delta_base) / perturbation
sensitivities.append(abs(sens_h))

bars = ax4.bar(variables, sensitivities, color=['red', 'green', 'blue'], edgecolor='black', linewidth=1.5)
ax4.set_ylabel('|∂δ/∂X|', fontsize=11)
ax4.set_title('Parameter Sensitivity', fontsize=12, fontweight='bold')
ax4.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, sensitivities):
    ax4.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.05, f'{val:.2f}', ha='center', fontsize=10)

plt.tight_layout()
plt.savefig(f'{output_dir}/case2_multivariate_correlation.png', dpi=150, bbox_inches='tight')
plt.close()
print("✓ 案例二图像已保存")


# ==================== 案例三:一阶可靠度方法(FORM)实现 ====================

print("\n" + "=" * 60)
print("案例三:一阶可靠度方法(FORM)实现")
print("=" * 60)

L = 4.0
b = 0.2
h = 0.4
I = b * h**3 / 12
E = 30e9

mu_q, sigma_q = 5000, 1000
mu_sigma_y, sigma_sigma_y = 20e6, 3e6

print(f"随机变量:")
print(f"  q: μ={mu_q} N/m, σ={sigma_q} N/m")
print(f"  σ_y: μ={mu_sigma_y/1e6:.1f} MPa, σ={sigma_sigma_y/1e6:.1f} MPa")

def g(x):
    q, sigma_y = x
    M_max = q * L**2 / 8
    sigma_max = M_max * (h/2) / I
    return sigma_y - sigma_max

def x_to_u(x, mu, sigma):
    return (x - mu) / sigma

def u_to_x(u, mu, sigma):
    return u * sigma + mu

def g_u(u, mu, sigma):
    x = u_to_x(u, mu, sigma)
    return g(x)

mu = np.array([mu_q, mu_sigma_y])
sigma = np.array([sigma_q, sigma_sigma_y])

u = np.array([0.0, 0.0])
max_iter = 100
tol = 1e-6

print("\nFORM迭代过程:")
for i in range(max_iter):
    x = u_to_x(u, mu, sigma)
    g_val = g(x)
    
    eps = 1e-6
    grad = np.zeros(2)
    for j in range(2):
        x_pert = x.copy()
        x_pert[j] += eps
        grad[j] = (g(x_pert) - g_val) / eps
    
    grad_u = grad * sigma
    grad_norm = np.linalg.norm(grad_u)
    
    alpha = -grad_u / grad_norm
    beta = (g_val - np.dot(grad_u, u)) / grad_norm
    u_new = beta * alpha
    
    print(f"  迭代 {i+1}: β={beta:.4f}, g={g_val:.4e}")
    
    if np.linalg.norm(u_new - u) < tol:
        u = u_new
        break
    u = u_new

beta_form = beta
x_design = u_to_x(u, mu, sigma)
P_f_form = stats.norm.cdf(-beta_form)

print(f"\nFORM结果:")
print(f"  β = {beta_form:.4f}")
print(f"  Pf = {P_f_form:.4e}")
print(f"  设计点: q*={x_design[0]:.2f}, σ_y*={x_design[1]/1e6:.2f} MPa")

n_mc = 100000
np.random.seed(42)
q_mc = np.random.normal(mu_q, sigma_q, n_mc)
sigma_y_mc = np.random.normal(mu_sigma_y, sigma_sigma_y, n_mc)

g_vals = sigma_y_mc - (q_mc * L**2 / 8) * (h/2) / I   # 向量化计算极限状态函数
failures = int(np.sum(g_vals <= 0))
P_f_mc = failures / n_mc
# 小失效概率下直接MC可能观测不到失效,避免 ppf(0) 返回 -inf
beta_mc = -stats.norm.ppf(P_f_mc) if failures > 0 else float('nan')

print(f"\n蒙特卡洛验证 (N={n_mc}):")
print(f"  失效次数: {failures}")
print(f"  Pf = {P_f_mc:.4e}")
print(f"  β = {beta_mc:.4f}")

fig, axes = plt.subplots(2, 2, figsize=(14, 10))

ax1 = axes[0, 0]
u1_range = np.linspace(-4, 4, 100)
u2_range = np.linspace(-4, 4, 100)
U1, U2 = np.meshgrid(u1_range, u2_range)
G = np.zeros_like(U1)
for i in range(len(u1_range)):
    for j in range(len(u2_range)):
        G[j, i] = g_u([U1[j, i], U2[j, i]], mu, sigma)

contour = ax1.contour(U1, U2, G, levels=[0], colors='red', linewidths=2)
ax1.contourf(U1, U2, G, levels=[-1e10, 0], colors=['red'], alpha=0.2)
ax1.plot(u[0], u[1], 'g*', markersize=20, label='Design Point')
ax1.plot([0, u[0]], [0, u[1]], 'g--', linewidth=2, label=f'β={beta_form:.2f}')
ax1.set_xlabel('u_q', fontsize=11)
ax1.set_ylabel('u_σy', fontsize=11)
ax1.set_title('Limit State in Standard Normal Space', fontsize=12, fontweight='bold')
ax1.legend()
ax1.grid(True, alpha=0.3)
ax1.set_aspect('equal')

ax2 = axes[0, 1]
q_range = np.linspace(2000, 8000, 100)
sigma_y_range = np.linspace(10e6, 30e6, 100)
Q, SIGMA_Y = np.meshgrid(q_range, sigma_y_range)
G_phys = np.zeros_like(Q)
for i in range(len(q_range)):
    for j in range(len(sigma_y_range)):
        G_phys[j, i] = g([Q[j, i], SIGMA_Y[j, i]])

contour = ax2.contour(Q, SIGMA_Y/1e6, G_phys, levels=[0], colors='red', linewidths=2)
ax2.contourf(Q, SIGMA_Y/1e6, G_phys, levels=[-1e10, 0], colors=['red'], alpha=0.2)
ax2.plot(x_design[0], x_design[1]/1e6, 'g*', markersize=20, label='Design Point')
ax2.set_xlabel('q (N/m)', fontsize=11)
ax2.set_ylabel('σ_y (MPa)', fontsize=11)
ax2.set_title('Limit State in Physical Space', fontsize=12, fontweight='bold')
ax2.legend()
ax2.grid(True, alpha=0.3)

ax3 = axes[1, 0]
methods = ['FORM', 'Monte Carlo']
betas = [beta_form, beta_mc]
colors = ['blue', 'orange']
bars = ax3.bar(methods, betas, color=colors, edgecolor='black', linewidth=1.5)
ax3.set_ylabel('β', fontsize=11)
ax3.set_title('Reliability Index Comparison', fontsize=12, fontweight='bold')
ax3.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, betas):
    ax3.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.05, f'{val:.3f}', ha='center', fontsize=11)

ax4 = axes[1, 1]
P_f_values = [P_f_form, P_f_mc]
bars = ax4.bar(methods, P_f_values, color=colors, edgecolor='black', linewidth=1.5)
ax4.set_ylabel('Pf', fontsize=11)
ax4.set_title('Failure Probability Comparison', fontsize=12, fontweight='bold')
ax4.set_yscale('log')
ax4.grid(True, alpha=0.3, axis='y')
for bar, val in zip(bars, P_f_values):
    ax4.text(bar.get_x() + bar.get_width()/2, bar.get_height() * 1.5, f'{val:.2e}', ha='center', fontsize=10)

plt.tight_layout()
plt.savefig(f'{output_dir}/case3_form_reliability.png', dpi=150, bbox_inches='tight')
plt.close()
print("✓ 案例三图像已保存")


# ==================== 案例四:多项式混沌展开(PCE)方法 ====================

print("\n" + "=" * 60)
print("案例四:多项式混沌展开(PCE)方法")
print("=" * 60)

L = 1.0
b = 0.05
h = 0.1
I = b * h**3 / 12
P = 1000

E_min, E_max = 60e9, 80e9
E_mean = (E_min + E_max) / 2

print(f"随机变量: E ~ Uniform({E_min/1e9:.0f}, {E_max/1e9:.0f}) GPa")

def E_to_xi(E):
    return 2 * (E - E_min) / (E_max - E_min) - 1

def xi_to_E(xi):
    return E_min + (xi + 1) * (E_max - E_min) / 2

def legendre_poly(xi, n):
    if n == 0:
        return np.ones_like(xi)
    elif n == 1:
        return xi
    elif n == 2:
        return 0.5 * (3*xi**2 - 1)
    elif n == 3:
        return 0.5 * (5*xi**3 - 3*xi)
    elif n == 4:
        return 0.125 * (35*xi**4 - 30*xi**2 + 3)
    else:
        P = [np.ones_like(xi), xi]
        for i in range(2, n+1):
            P_next = ((2*i-1)*xi*P[i-1] - (i-1)*P[i-2]) / i
            P.append(P_next)
        return P[n]

max_order = 4
n_terms = max_order + 1

print(f"\nPCE展开阶数: {max_order}")
print(f"多项式项数: {n_terms}")

n_quad = 10
xi_quad, w_quad = leggauss(n_quad)

coefficients = []
for i in range(n_terms):
    numerator = 0
    for j in range(n_quad):
        xi = xi_quad[j]
        w = w_quad[j]
        E = xi_to_E(xi)
        delta = P * L**3 / (3 * E * I)
        P_i = legendre_poly(xi, i)
        numerator += delta * P_i * w
    
    denominator = 2 / (2*i + 1)
    c_i = numerator / denominator
    coefficients.append(c_i)
    print(f"  c_{i} = {c_i*1000:.6f} mm")

mean_pce = coefficients[0]
variance_pce = 0
for i in range(1, n_terms):
    norm_sq = 1 / (2*i + 1)   # E[P_i^2] = 1/(2i+1)(均匀密度为1/2)
    variance_pce += coefficients[i]**2 * norm_sq

std_pce = np.sqrt(variance_pce)

print(f"\nPCE统计特性:")
print(f"  均值: {mean_pce*1000:.4f} mm")
print(f"  标准差: {std_pce*1000:.4f} mm")
print(f"  变异系数: {std_pce/mean_pce*100:.2f}%")

n_mc = 100000
np.random.seed(42)
E_mc = np.random.uniform(E_min, E_max, n_mc)
delta_mc = P * L**3 / (3 * E_mc * I)

mean_mc = np.mean(delta_mc)
std_mc = np.std(delta_mc)

print(f"\n蒙特卡洛验证 (N={n_mc}):")
print(f"  均值: {mean_mc*1000:.4f} mm")
print(f"  标准差: {std_mc*1000:.4f} mm")

def pce_response(xi, coeffs):
    result = 0
    for i, c in enumerate(coeffs):
        result += c * legendre_poly(xi, i)
    return result

fig, axes = plt.subplots(2, 2, figsize=(14, 10))

ax1 = axes[0, 0]
xi_plot = np.linspace(-1, 1, 100)
E_plot = xi_to_E(xi_plot)
delta_true = P * L**3 / (3 * E_plot * I)
delta_pce = pce_response(xi_plot, coefficients)

ax1.plot(E_plot/1e9, delta_true*1000, 'b-', linewidth=2, label='True')
ax1.plot(E_plot/1e9, delta_pce*1000, 'r--', linewidth=2, label=f'PCE (order {max_order})')
ax1.set_xlabel('E (GPa)', fontsize=11)
ax1.set_ylabel('δ (mm)', fontsize=11)
ax1.set_title('PCE vs True Response', fontsize=12, fontweight='bold')
ax1.legend()
ax1.grid(True, alpha=0.3)

ax2 = axes[0, 1]
orders = list(range(n_terms))
ax2.bar(orders, np.array(coefficients)*1000, color='steelblue', edgecolor='black', linewidth=1.5)
ax2.set_xlabel('Order', fontsize=11)
ax2.set_ylabel('Coefficient (mm)', fontsize=11)
ax2.set_title('PCE Coefficients', fontsize=12, fontweight='bold')
ax2.grid(True, alpha=0.3, axis='y')

ax3 = axes[1, 0]
orders_test = range(1, 8)
mean_errors = []
std_errors = []
for order in orders_test:
    n_terms_test = order + 1
    coeffs_test = []
    for i in range(n_terms_test):
        numerator = 0
        for j in range(n_quad):
            xi = xi_quad[j]
            w = w_quad[j]
            E = xi_to_E(xi)
            delta = P * L**3 / (3 * E * I)
            P_i = legendre_poly(xi, i)
            numerator += delta * P_i * w
        denominator = 2 / (2*i + 1)
        c_i = numerator / denominator
        coeffs_test.append(c_i)
    
    mean_test = coeffs_test[0]
    variance_test = sum(coeffs_test[i]**2 / (2*i+1) for i in range(1, n_terms_test))  # E[P_i^2] = 1/(2i+1)
    std_test = np.sqrt(variance_test)
    
    mean_errors.append(abs(mean_test - mean_mc) / mean_mc * 100)
    std_errors.append(abs(std_test - std_mc) / std_mc * 100)

ax3.semilogy(orders_test, mean_errors, 'b-o', linewidth=2, markersize=8, label='Mean Error')
ax3.semilogy(orders_test, std_errors, 'r-s', linewidth=2, markersize=8, label='Std Error')
ax3.set_xlabel('PCE Order', fontsize=11)
ax3.set_ylabel('Error (%)', fontsize=11)
ax3.set_title('PCE Convergence', fontsize=12, fontweight='bold')
ax3.legend()
ax3.grid(True, alpha=0.3, which='both')

ax4 = axes[1, 1]
n_pce_samples = 10000
xi_samples = np.random.uniform(-1, 1, n_pce_samples)
delta_pce_samples = pce_response(xi_samples, coefficients)

ax4.hist(delta_mc*1000, bins=50, density=True, alpha=0.5, label='MC', color='blue')
ax4.hist(delta_pce_samples*1000, bins=50, density=True, alpha=0.5, label='PCE', color='red')
ax4.axvline(mean_mc*1000, color='blue', linestyle='--', linewidth=2, label=f'MC Mean={mean_mc*1000:.3f}')
ax4.axvline(mean_pce*1000, color='red', linestyle='--', linewidth=2, label=f'PCE Mean={mean_pce*1000:.3f}')
ax4.set_xlabel('δ (mm)', fontsize=11)
ax4.set_ylabel('PDF', fontsize=11)
ax4.set_title('PDF: MC vs PCE', fontsize=12, fontweight='bold')
ax4.legend()
ax4.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig(f'{output_dir}/case4_pce_method.png', dpi=150, bbox_inches='tight')
plt.close()
print("✓ 案例四图像已保存")


# ==================== 案例五:敏感性分析与参数重要性排序 ====================

print("\n" + "=" * 60)
print("案例五:敏感性分析与参数重要性排序")
print("=" * 60)

def cantilever_disp(params):
    P, L, b, h, E = params
    I = b * h**3 / 12
    return P * L**3 / (3 * E * I)

params_info = {
    'P': [1000, 100],
    'L': [1.0, 0.05],
    'b': [0.05, 0.0025],
    'h': [0.1, 0.005],
    'E': [70e9, 7e9]
}

param_names = list(params_info.keys())
n_params = len(param_names)

print("随机变量统计特性:")
for name, (mu, sigma) in params_info.items():
    print(f"  {name}: μ={mu:.4f}, σ={sigma:.4f}, CoV={sigma/mu*100:.1f}%")

def morris_method(n_trajectories=50, delta=0.1):
    """简化的Morris筛选法(一次一变量的径向设计)

    μ* = mean(|EE|) 衡量参数的总体影响;σ = std(EE) 衡量非线性与交互作用。
    """
    mu_star = np.zeros(n_params)
    sigma_ee = np.zeros(n_params)
    elementary_effects = [[] for _ in range(n_params)]
    
    for _ in range(n_trajectories):
        # 基点限制在[delta, 1-delta]内,避免逆CDF在0/1处发散
        x_base = delta + (1 - 2 * delta) * np.random.rand(n_params)
        
        for i in range(n_params):
            x = x_base.copy()
            # 带符号的扰动步长:越界时向下扰动
            step = delta if x[i] + delta <= 1 else -delta
            x_perturbed = x.copy()
            x_perturbed[i] += step
            
            # 经正态逆CDF将单位超立方体坐标映射为物理参数
            params_base = []
            params_perturbed = []
            for j, name in enumerate(param_names):
                mu_param, sigma_param = params_info[name]
                params_base.append(stats.norm.ppf(x[j], mu_param, sigma_param))
                params_perturbed.append(stats.norm.ppf(x_perturbed[j], mu_param, sigma_param))
            
            y_base = cantilever_disp(params_base)
            y_pert = cantilever_disp(params_perturbed)
            
            # 基本效应按带符号步长计算,并用变异系数 σ/μ 做无量纲化
            mu_param, sigma_param = params_info[param_names[i]]
            ee = (y_pert - y_base) / step * sigma_param / mu_param
            elementary_effects[i].append(ee)
    
    for i in range(n_params):
        mu_star[i] = np.mean(np.abs(elementary_effects[i]))
        sigma_ee[i] = np.std(elementary_effects[i])
    
    return mu_star, sigma_ee

print("\nMorris方法分析:")
mu_star, sigma_ee = morris_method(n_trajectories=100)
for i, name in enumerate(param_names):
    print(f"  {name}: μ*={mu_star[i]:.4f}, σ={sigma_ee[i]:.4f}")

def estimate_sobol_indices(n_samples=5000):
    """Saltelli抽样估计一阶Sobol指数:S1_i = E[Y_B·(Y_ABi - Y_A)] / Var(Y)"""
    np.random.seed(42)
    
    # 两组独立的均匀样本矩阵,后续经正态逆CDF变换为物理参数样本
    A = np.random.rand(n_samples, n_params)
    B = np.random.rand(n_samples, n_params)
    
    Y_A = np.zeros(n_samples)
    Y_B = np.zeros(n_samples)
    
    for k in range(n_samples):
        params_A = [stats.norm.ppf(A[k, j], *params_info[name]) for j, name in enumerate(param_names)]
        params_B = [stats.norm.ppf(B[k, j], *params_info[name]) for j, name in enumerate(param_names)]
        Y_A[k] = cantilever_disp(params_A)
        Y_B[k] = cantilever_disp(params_B)
    
    total_var = np.var(np.concatenate([Y_A, Y_B]))
    
    S1 = np.zeros(n_params)
    
    for i in range(n_params):
        # A_Bi:将矩阵A的第i列替换为B的对应列
        A_Bi = A.copy()
        A_Bi[:, i] = B[:, i]
        
        Y_ABi = np.zeros(n_samples)
        for k in range(n_samples):
            params_ABi = [stats.norm.ppf(A_Bi[k, j], *params_info[name]) for j, name in enumerate(param_names)]
            Y_ABi[k] = cantilever_disp(params_ABi)
        
        # Saltelli (2010) 一阶指数估计器
        S1[i] = np.mean(Y_B * (Y_ABi - Y_A)) / total_var
    
    return S1, total_var

print("\nSobol指数分析:")
S1, total_variance = estimate_sobol_indices(n_samples=3000)
for i, name in enumerate(param_names):
    print(f"  {name}: S1={S1[i]:.4f}")
print(f"  总方差: {total_variance:.6e}")
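顺带一提,同一套 A/B/A_Bi 样本矩阵还能以几乎零额外代价估计总效应指数 S_T,它把参数 i 的交互作用也计入在内。下面是一个自包含的向量化示意(模型与统计参数与上文一致,这里采用 Jansen 估计器 S_T = E[(Y_A - Y_ABi)²]/(2·Var(Y)),属于常见替代写法而非正文代码本身):

```python
import numpy as np
from scipy import stats

# 悬臂梁挠度模型(与正文相同),参数按行向量化传入
def cantilever_disp(params):
    P, L, b, h, E = params
    return P * L**3 / (3 * E * (b * h**3 / 12))

params_info = {'P': [1000, 100], 'L': [1.0, 0.05], 'b': [0.05, 0.0025],
               'h': [0.1, 0.005], 'E': [70e9, 7e9]}
names = list(params_info)
mu = np.array([v[0] for v in params_info.values()])
sig = np.array([v[1] for v in params_info.values()])

rng = np.random.default_rng(0)
n = 4000
# 均匀样本经正态逆CDF变换为物理参数样本(整矩阵向量化,避免逐样本循环)
A = stats.norm.ppf(rng.random((n, 5)), mu, sig)
B = stats.norm.ppf(rng.random((n, 5)), mu, sig)
Y_A = cantilever_disp(A.T)
Y_B = cantilever_disp(B.T)
var_Y = np.var(np.concatenate([Y_A, Y_B]))

ST_vals = []
for i, name in enumerate(names):
    A_Bi = A.copy()
    A_Bi[:, i] = B[:, i]          # 第i列换成B的对应列
    Y_ABi = cantilever_disp(A_Bi.T)
    # Jansen总效应估计器: S_T = E[(Y_A - Y_ABi)^2] / (2 Var(Y))
    ST = np.mean((Y_A - Y_ABi) ** 2) / (2 * var_Y)
    ST_vals.append(ST)
    print(f"{name}: S_T ≈ {ST:.3f}")
```

若某参数的 S_T 明显大于其 S1,说明该参数主要通过与其他参数的交互作用影响响应;对本例的幂函数模型,两者应相差不大。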

fig, axes = plt.subplots(2, 2, figsize=(14, 10))

ax1 = axes[0, 0]
colors = ['red', 'blue', 'green', 'orange', 'purple']
for i, name in enumerate(param_names):
    ax1.scatter(sigma_ee[i], mu_star[i], s=200, c=colors[i], label=name, edgecolors='black', linewidths=1.5)
ax1.set_xlabel('σ (EE)', fontsize=11)
ax1.set_ylabel('μ* (|EE|)', fontsize=11)
ax1.set_title('Morris Method', fontsize=12, fontweight='bold')
ax1.legend()
ax1.grid(True, alpha=0.3)

ax2 = axes[0, 1]
sorted_indices = np.argsort(S1)[::-1]
sorted_names = [param_names[i] for i in sorted_indices]
sorted_S1 = S1[sorted_indices]
bars = ax2.barh(range(n_params), sorted_S1, color=[colors[i] for i in sorted_indices], edgecolor='black', linewidth=1.5)
ax2.set_yticks(range(n_params))
ax2.set_yticklabels(sorted_names)
ax2.set_xlabel('S1', fontsize=11)
ax2.set_title('Parameter Importance (Sobol)', fontsize=12, fontweight='bold')
ax2.grid(True, alpha=0.3, axis='x')
for i, (bar, val) in enumerate(zip(bars, sorted_S1)):
    ax2.text(val + 0.01, bar.get_y() + bar.get_height()/2, f'{val:.3f}', va='center', fontsize=10)

ax3 = axes[1, 0]
x_pos = np.arange(n_params)
width = 0.35
bars1 = ax3.bar(x_pos - width/2, mu_star/np.sum(mu_star), width, label='Morris μ* (normalized)', color='steelblue', edgecolor='black')
bars2 = ax3.bar(x_pos + width/2, S1, width, label='Sobol S1', color='coral', edgecolor='black')
ax3.set_xticks(x_pos)
ax3.set_xticklabels(param_names)
ax3.set_ylabel('Index', fontsize=11)
ax3.set_title('Sensitivity Comparison', fontsize=12, fontweight='bold')
ax3.legend()
ax3.grid(True, alpha=0.3, axis='y')

ax4 = axes[1, 1]
base_params = [params_info[name][0] for name in param_names]
base_response = cantilever_disp(base_params)

local_sens = []
for i, name in enumerate(param_names):
    mu, sigma = params_info[name]
    perturbed_params = base_params.copy()
    perturbed_params[i] = mu + sigma
    pert_response = cantilever_disp(perturbed_params)
    sens = (pert_response - base_response) / base_response * 100
    local_sens.append(sens)

y_pos = np.arange(n_params)
colors_tornado = ['red' if s > 0 else 'blue' for s in local_sens]
bars = ax4.barh(y_pos, local_sens, color=colors_tornado, edgecolor='black', linewidth=1.5)
ax4.set_yticks(y_pos)
ax4.set_yticklabels(param_names)
ax4.set_xlabel('Change (%)', fontsize=11)
ax4.set_title('Tornado Diagram', fontsize=12, fontweight='bold')
ax4.axvline(x=0, color='black', linewidth=1)
ax4.grid(True, alpha=0.3, axis='x')
for i, (bar, val) in enumerate(zip(bars, local_sens)):
    ax4.text(val + (0.5 if val > 0 else -0.5), bar.get_y() + bar.get_height()/2, f'{val:.1f}%', 
             va='center', ha='left' if val > 0 else 'right', fontsize=9)

plt.tight_layout()
plt.savefig(f'{output_dir}/case5_sensitivity_analysis.png', dpi=150, bbox_inches='tight')
plt.close()
print("✓ 案例五图像已保存")

print("\n" + "=" * 60)
print("所有案例计算完成!")
print("=" * 60)
print(f"\n输出文件位置: {output_dir}")
print("  - case1_monte_carlo_material.png")
print("  - case2_multivariate_correlation.png")
print("  - case3_form_reliability.png")
print("  - case4_pce_method.png")
print("  - case5_sensitivity_analysis.png")
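附:由于挠度 δ = 4PL³/(Ebh³) 是各参数的幂函数,龙卷风图中各参数 +1σ 扰动引起的相对变化可以解析验证——相对变化只由变异系数 CoV 与该参数的幂指数 k 决定:Δδ/δ = (1 + CoV)^k − 1。以下为自包含的示意(变异系数取自上文 params_info):

```python
# δ ∝ P^1 · L^3 · b^(-1) · h^(-3) · E^(-1)
# 将某参数从 μ 增至 μ+σ 时:Δδ/δ = (1 + CoV)^k - 1,k 为该参数的幂指数
cov = {'P': 0.10, 'L': 0.05, 'b': 0.05, 'h': 0.05, 'E': 0.10}  # 与正文一致
expo = {'P': 1, 'L': 3, 'b': -1, 'h': -3, 'E': -1}

for name in cov:
    change = ((1 + cov[name]) ** expo[name] - 1) * 100
    print(f"{name}: +1σ 扰动 → Δδ/δ ≈ {change:+.2f}%")
```

这些解析值应与龙卷风图中数值扰动得到的百分比一致(如 L:+15.76%,h:−13.62%),可作为局部敏感性计算的快速校验。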
