Sensitivity Analysis

Quantifying the sensitivity of a model output f to the model parameters z can be an important component of any modeling exercise. This section demonstrates how to use popular local and global sensitivity analysis methods.

Sobol Indices

Any function $f$ with finite variance parameterized by a set of independent variables $z$ with density $\rho(z)=\prod_{j=1}^d \rho_j(z_j)$ and support $\Gamma=\bigotimes_{j=1}^d \Gamma_j$ can be decomposed into a finite sum, referred to as the ANOVA decomposition,

$$f(z)=\hat{f}_0+\sum_{i=1}^d \hat{f}_i(z_i)+\sum_{i,j=1}^d \hat{f}_{i,j}(z_i,z_j)+\cdots+\hat{f}_{1,\ldots,d}(z_1,\ldots,z_d)$$

or more compactly

$$f(z)=\sum_{u\subseteq\mathcal{D}} \hat{f}_u(z_u)$$

where $\hat{f}_u$ quantifies the dependence of the function $f$ on the variable dimensions $i\in u$ and $u=(u_1,\ldots,u_s)\subseteq\mathcal{D}=\{1,\ldots,d\}$.
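The compact form sums over all $2^d$ subsets $u$ of $\mathcal{D}$. As a quick illustration (using 0-based dimension indices; this helper is not part of the original tutorial), the index sets of the ANOVA terms can be enumerated with itertools:

```python
from itertools import chain, combinations

def anova_subsets(d):
    """Return all subsets u of D = {0, ..., d-1} that index ANOVA terms."""
    dims = range(d)
    return list(chain.from_iterable(
        combinations(dims, k) for k in range(d + 1)))

# For d = 3 there are 2^3 = 8 terms, from the constant term () up to (0, 1, 2)
print(anova_subsets(3))
```

The number of terms grows exponentially with $d$, which is why sensitivity analyses usually focus on the low-order and aggregated (total-effect) contributions.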

The functions f^u can be obtained by integration, specifically

$$\hat{f}_u(z_u)=\int_{\Gamma_{\mathcal{D}\setminus u}} f(z)\, d\rho_{\mathcal{D}\setminus u}(z)-\sum_{v\subset u}\hat{f}_v(z_v),$$

where $d\rho_{\mathcal{D}\setminus u}(z)=\prod_{j\notin u} d\rho_j(z)$ and $\Gamma_{\mathcal{D}\setminus u}=\bigotimes_{j\notin u}\Gamma_j$.

The first-order terms $\hat{f}_u(z_i)$, $\|u\|_0=1$, represent the effect of a single variable acting independently of all others. Similarly, the second-order terms $\|u\|_0=2$ represent the contributions of two variables acting together, and so on.

The terms of the ANOVA expansion are orthogonal, i.e. the weighted $L^2$ inner product $(\hat{f}_u,\hat{f}_v)_{L^2_\rho}=0$ for $u\neq v$. This orthogonality facilitates the following decomposition of the variance of the function $f$:

$$\mathbb{V}[f]=\sum_{u\subseteq\mathcal{D}}\mathbb{V}[\hat{f}_u],\qquad \mathbb{V}[\hat{f}_u]=\int_{\Gamma_u}\hat{f}_u^2\,d\rho_u,$$

where $d\rho_u(z)=\prod_{j\in u} d\rho_j(z)$.
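To make the decomposition concrete, consider a small sketch (not from the original tutorial) for the toy function $f(z_1,z_2)=z_1+2z_2+3z_1z_2$ with independent uniform inputs on $[0,1]^2$. The ANOVA terms can be derived by hand from the integral formulas above, and their variances sum to the total variance, which we can check against a Monte Carlo estimate:

```python
import numpy as np

# Toy function on [0, 1]^2 with independent uniform inputs
def f(z1, z2):
    return z1 + 2*z2 + 3*z1*z2

# ANOVA terms obtained by hand from the integral formulas:
#   f0 = 2.25
#   f1(z1) = 2.5*z1 - 1.25
#   f2(z2) = 3.5*z2 - 1.75
#   f12(z1, z2) = 3*(z1 - 0.5)*(z2 - 0.5)
# Component variances (V[a*z] = a^2/12 for z uniform on [0, 1]):
V1 = 2.5**2 / 12
V2 = 3.5**2 / 12
V12 = 9 * (1/12)**2
V_sum = V1 + V2 + V12

# Monte Carlo estimate of the total variance for comparison
rng = np.random.default_rng(0)
z = rng.uniform(size=(2, 100000))
V_mc = f(z[0], z[1]).var()
print(V_sum, V_mc)  # both approximately 1.604
```

The agreement illustrates that the orthogonal ANOVA terms partition the variance exactly; the small discrepancy is Monte Carlo sampling error.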

The quantities $\mathbb{V}[\hat{f}_u]/\mathbb{V}[f]$ are referred to as Sobol indices [SMCS2001] and are frequently used to estimate the sensitivity of $f$ to individual input parameters or to combinations of them. Note that this is a global sensitivity, reflecting a variance attribution over the whole range of the input parameters, as opposed to the local sensitivity reflected by a derivative. Two popular measures of sensitivity are the main effect and total effect indices, given respectively by

$$S_i=\frac{\mathbb{V}[\hat{f}_{e_i}]}{\mathbb{V}[f]},\qquad S_i^T=\frac{\sum_{u\in\mathcal{J}}\mathbb{V}[\hat{f}_u]}{\mathbb{V}[f]}$$

where $e_i$ is the unit vector, with only one non-zero entry located at the $i$-th element, and $\mathcal{J}=\{u : i\in u\}$.
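For the Ishigami benchmark used later in this section, $f(z)=\sin(z_1)+a\sin^2(z_2)+b z_3^4\sin(z_1)$ with each $z_i$ uniform on $[-\pi,\pi]$, the partial variances are known in closed form, so exact reference values for both indices can be computed directly (a sketch using the standard closed-form expressions):

```python
import numpy as np

a, b = 7.0, 0.1

# Closed-form partial variances of the Ishigami function
V1 = 0.5 * (1 + b * np.pi**4 / 5)**2  # main effect of z1
V2 = a**2 / 8                         # main effect of z2
V13 = 8 * b**2 * np.pi**8 / 225       # z1-z3 interaction (z3 alone contributes nothing)
V = V1 + V2 + V13

S = np.array([V1, V2, 0.0]) / V         # main effects
ST = np.array([V1 + V13, V2, V13]) / V  # total effects
print(S)   # approximately [0.3139, 0.4424, 0.0]
print(ST)  # approximately [0.5576, 0.4424, 0.2437]
```

Note that $S_3 = 0$ even though $S_3^T > 0$: $z_3$ influences the output only through its interaction with $z_1$.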

Sobol indices can be computed in a number of different ways. In the following we will use polynomial chaos expansions, as in [SRESS2008].

import matplotlib.pyplot as plt
from pyapprox.benchmarks import setup_benchmark
from pyapprox.surrogates import approximate
from pyapprox import analysis

# Set up the Ishigami benchmark
benchmark = setup_benchmark("ishigami", a=7, b=0.1)

# Draw training data from the benchmark's input distribution
num_samples = 1000
train_samples = benchmark.variable.rvs(num_samples)
train_vals = benchmark.fun(train_samples)

# Build a polynomial chaos expansion (PCE) surrogate of the model
approx_res = approximate(
    train_samples, train_vals, 'polynomial_chaos',
    {'basis_type': 'hyperbolic_cross', 'variable': benchmark.variable,
     'options': {'max_degree': 8}})
pce = approx_res.approx

# Compute Sobol indices from the PCE coefficients
res = analysis.gpc_sobol_sensitivities(pce, benchmark.variable)

Now let's compare the estimated values with the exact values

print(res.main_effects[:, 0])
print(benchmark.main_effects[:, 0])
[3.13751427e-01 4.42645977e-01 6.47617333e-08]
[0.31390519 0.44241114 0.        ]
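Polynomial chaos is not the only option: the same indices can be estimated by sampling alone, e.g. with a pick-freeze (Saltelli/Jansen) scheme. Below is a minimal NumPy sketch for the Ishigami function, independent of PyApprox; the sample size and seed are illustrative choices:

```python
import numpy as np

def ishigami(z, a=7.0, b=0.1):
    return (np.sin(z[:, 0]) + a * np.sin(z[:, 1])**2
            + b * z[:, 2]**4 * np.sin(z[:, 0]))

rng = np.random.default_rng(42)
n, d = 2**16, 3
# Two independent sample matrices on the input domain [-pi, pi]^3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

main, total = np.empty(d), np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]  # freeze all columns of A except the i-th
    fABi = ishigami(ABi)
    # First-order (Saltelli) and total-effect (Jansen) estimators
    main[i] = np.mean(fB * (fABi - fA)) / var
    total[i] = 0.5 * np.mean((fA - fABi)**2) / var

print(np.round(main, 3))   # should be close to [0.314, 0.442, 0.0]
print(np.round(total, 3))  # should be close to [0.558, 0.442, 0.244]
```

Pick-freeze estimators need many model evaluations ($n(d+2)$ here), which is why surrogate-based approaches such as the PCE above are attractive for expensive models.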

We can visualize the sensitivity indices using the following

fig, axs = plt.subplots(1, 3, figsize=(3*8, 6))
analysis.plot_main_effects(benchmark.main_effects, axs[0])
analysis.plot_total_effects(benchmark.total_effects, axs[1])
analysis.plot_interaction_values(benchmark.sobol_indices,
                                 benchmark.sobol_interaction_indices, axs[2])
axs[0].set_title(r'$\mathrm{Main\;Effects}$')
axs[1].set_title(r'$\mathrm{Total\;Effects}$')
axs[2].set_title(r'$\mathrm{Sobol\;Indices}$')
plt.show()
(Figure: three panels showing the main effects, total effects, and Sobol indices for the Ishigami function.)

References

.. [SMCS2001] `I.M. Sobol. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates, Mathematics and Computers in Simulation, 55(1-3), 271-280, 2001 <https://doi.org/10.1016/S0378-4754(00)00270-6>`_

.. [SRESS2008] `B. Sudret. Global sensitivity analysis using polynomial chaos expansions, Reliability Engineering & System Safety, 93(7), 964-979, 2008 <https://doi.org/10.1016/j.ress.2007.04.002>`_
