Benchmarks
The pyapprox.benchmarks module provides a number of benchmarks commonly used to evaluate the performance of quadrature, sensitivity analysis, inference and design algorithms. The following shows how to use the common interface, provided by pyapprox.benchmarks.Benchmark, to access the data necessary to run a benchmark.
To demonstrate the benchmark class, consider the problem of estimating the sensitivity indices of the Ishigami function
\(f(z)=\sin(z_1)+a\sin^2(z_2)+bz_3^4\sin(z_1)\),
where each \(z_i\) is independently and uniformly distributed on \([-\pi,\pi]\).
The mean, variance, main effect and total effect sensitivity indices are known analytically for this problem.
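As a quick sanity check, these closed-form statistics can be reproduced by Monte Carlo. The sketch below implements the Ishigami function directly, independently of pyapprox, and compares sampled estimates of the mean and variance against the analytic values \(\mu=a/2\) and \(\sigma^2=a^2/8+b\pi^4/5+b^2\pi^8/18+1/2\); the function name and sample count here are illustrative, not part of the library.

```python
import numpy as np

def ishigami(z, a=7, b=0.1):
    # Ishigami function: f(z) = sin(z1) + a*sin(z2)**2 + b*z3**4*sin(z1)
    return np.sin(z[0]) + a*np.sin(z[1])**2 + b*z[2]**4*np.sin(z[0])

a, b = 7, 0.1
# Closed-form mean and variance for z_i ~ U(-pi, pi)
exact_mean = a/2
exact_var = a**2/8 + b*np.pi**4/5 + b**2*np.pi**8/18 + 1/2

# Monte Carlo estimates from 200,000 uniform samples on [-pi, pi]^3
rng = np.random.default_rng(0)
samples = rng.uniform(-np.pi, np.pi, (3, 200000))
vals = ishigami(samples, a, b)
print(exact_mean, vals.mean())  # both close to 3.5
print(exact_var, vals.var())    # both close to 13.84
```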
The following sets up a pyapprox.benchmarks.Benchmark object which returns the Ishigami function, its Jacobian and Hessian, the joint density of the input variables \(z\), and the various sensitivity indices. The attributes of the benchmark can be accessed using the member function keys()
>>> from pyapprox.benchmarks.benchmarks import setup_benchmark
>>> benchmark = setup_benchmark("ishigami",a=7,b=0.1)
>>> print(benchmark.keys())
dict_keys(['fun', 'jac', 'hess', 'variable', 'mean', 'variance', 'main_effects', 'total_effects', 'sobol_indices'])
>>> import numpy as np
>>> print(benchmark.hess(np.zeros(3)))
[[-0. 0. 0.]
[ 0. 14. 0.]
[ 0. 0. 0.]]
The various attributes of the benchmark can be accessed easily. For example, above we evaluated the Hessian at the point \(z=(0,0,0)^T\).
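The 14 in the Hessian above can be verified by hand: differentiating the Ishigami function twice with respect to \(z_2\) gives \(2a\cos(2z_2)\), which equals \(2a=14\) at \(z=0\), while all other second derivatives vanish there. The sketch below checks this with a central finite difference; it is a standalone illustration, not pyapprox code.

```python
import numpy as np

def ishigami(z, a=7, b=0.1):
    # Ishigami function: f(z) = sin(z1) + a*sin(z2)**2 + b*z3**4*sin(z1)
    return np.sin(z[0]) + a*np.sin(z[1])**2 + b*z[2]**4*np.sin(z[0])

def fd_hessian(f, z, h=1e-5):
    # Central finite-difference approximation of the Hessian of f at z
    n = len(z)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            zpp = z.copy(); zpp[i] += h; zpp[j] += h
            zpm = z.copy(); zpm[i] += h; zpm[j] -= h
            zmp = z.copy(); zmp[i] -= h; zmp[j] += h
            zmm = z.copy(); zmm[i] -= h; zmm[j] -= h
            H[i, j] = (f(zpp) - f(zpm) - f(zmp) + f(zmm))/(4*h*h)
    return H

H = fd_hessian(ishigami, np.zeros(3))
print(np.round(H, 3))  # only H[1, 1] is nonzero, approximately 14
```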
The following tabulates the benchmarks provided in pyapprox.benchmarks. Each benchmark can be instantiated using setup_benchmark(name, **kwargs). Follow the links to find details on the options available for each benchmark, which are specified via kwargs.
Sensitivity Analysis
- setup_ishigami_function()
- setup_sobol_g_function()
- setup_oakley_function()

Quadrature

Inference
- setup_rosenbrock_function()
- setup_advection_diffusion_kle_inversion_benchmark()

Multi-fidelity Modeling
- setup_polynomial_ensemble()
- setup_multi_index_advection_diffusion_benchmark()