Developer Reference Guide

pyapprox.approximate Module

Functions

adaptive_approximate(fun, variable, method)

Adaptive approximation of a scalar or vector-valued function of one or more variables.
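
A minimal usage sketch follows. The method string "sparse_grid", the column-wise sample layout, and the result's approx attribute are assumptions inferred from the entries in this module, not verified signatures:

    import numpy as np
    from scipy import stats
    from pyapprox.approximate import (
        adaptive_approximate, IndependentMultivariateRandomVariable)

    def fun(samples):
        # samples: (nvars, nsamples) array; return one value per column
        return np.cos(samples.sum(axis=0))[:, np.newaxis]

    # two independent uniform variables on [-1, 1]
    variable = IndependentMultivariateRandomVariable(
        [stats.uniform(-1, 2)] * 2)
    result = adaptive_approximate(fun, variable, method="sparse_grid")
    approx = result.approx  # callable surrogate (assumed attribute name)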

adaptive_approximate_polynomial_chaos(fun, …)

Compute an adaptive Polynomial Chaos Expansion of a function.

adaptive_approximate_sparse_grid(fun, …[, …])

Compute a sparse grid approximation of a function.

approximate(train_samples, train_vals, method)

Approximate a scalar or vector-valued function of one or more variables from a set of points provided by the user.
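
A hedged sketch of fitting from a fixed data set; the method string "gaussian_process" and the data layout are assumptions based on the entries in this module:

    import numpy as np
    from pyapprox.approximate import approximate

    # hypothetical training data: 2 variables (rows) x 100 samples (columns)
    train_samples = np.random.uniform(-1, 1, (2, 100))
    train_vals = np.cos(train_samples.sum(axis=0))[:, np.newaxis]
    result = approximate(train_samples, train_vals, "gaussian_process")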

approximate_gaussian_process(train_samples, …)

Compute a Gaussian process approximation of a function from a fixed data set using the Matern kernel.

approximate_polynomial_chaos(train_samples, …)

Compute a Polynomial Chaos Expansion of a function from a fixed data set.

clenshaw_curtis_rule_growth(level)

The number of samples in the 1D Clenshaw-Curtis quadrature rule of a given level.

compute_hyperbolic_indices(num_vars, level)

compute_l2_error(f, g, variable, nsamples[, rel])

Compute the \(\ell^2\) error between the outputs of two functions f and g.

cross_validate_pce_degree(pce, …[, …])

Use cross validation to find the polynomial degree which best fits the data.

expand_basis(indices)

expanding_basis_omp_pce(pce, train_samples, …)

Iteratively expand and restrict the polynomial basis and use cross validation to find the best basis [JESJCP2015].

fit_linear_model(basis_matrix, train_vals, …)

generate_independent_random_samples(…[, …])

Generate samples from a tensor-product probability measure.

get_backward_neighbor(subspace_index, var_num)

get_forward_neighbor(subspace_index, var_num)

get_sparse_grid_univariate_leja_quadrature_rules_economical(…)

Return a list of unique quadrature rules.

hash_array(array[, decimals])

Hash an array for dictionary- or set-based lookup.

is_bounded_continuous_variable(rv)

max_level_admissibility_function(max_level, …)

restrict_basis(indices, coefficients, tol)

variance_pce_refinement_indicator(…[, …])

Set the PCE coefficients of the new subspace polynomial indices to zero to compute the previous mean, then restore their non-zero values.

variance_refinement_indicator(…[, …])

When a config index is increased but the other indices are 0, the subspace will only have one random sample.

Classes

AdaptiveLejaPCE(num_vars, candidate_samples)

AffineRandomVariableTransformation(variable)

ApproximateResult

CombinationSparseGrid(num_vars)

IndependentMultivariateRandomVariable(…[, …])

Class representing independent random variables.

LarsCV(**kwargs)

Cross-validated Least Angle Regression model.

LassoCV(**kwargs)

Lasso linear model with iterative fitting along a regularization path.

LassoLarsCV(**kwargs)

Cross-validated Lasso, using the LARS algorithm.

OptimizeResult

Represents the optimization result.

OrthogonalMatchingPursuitCV(**kwargs)

Cross-validated Orthogonal Matching Pursuit model (OMP).
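
LarsCV, LassoCV, LassoLarsCV and OrthogonalMatchingPursuitCV appear to follow the scikit-learn API, in which samples are stored as rows (the transpose of the column-wise convention used elsewhere in pyapprox). For example, with scikit-learn's LassoCV:

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 10))               # 50 samples, 10 features
    y = X[:, 0] - 2.0 * X[:, 3] + 0.01 * rng.normal(size=50)
    model = LassoCV(cv=5).fit(X, y)
    print(model.coef_)                          # sparse coefficient vector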

partial

partial(func, *args, **keywords) - new function with partial application of the given arguments and keywords.
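
This is the standard functools object; for example:

    from functools import partial

    def power(base, exponent):
        return base ** exponent

    square = partial(power, exponent=2)
    print(square(7))  # 49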

pyapprox.sensitivity_analysis Module

Functions

analyze_sensitivity_morris(fun, …[, nlevels])

Compute sensitivity indices using the method of Morris.

analyze_sensitivity_polynomial_chaos(pce[, …])

Compute variance-based sensitivity metrics from a polynomial chaos expansion.

analyze_sensitivity_sparse_grid(sparse_grid)

Compute sensitivity indices from a sparse grid by converting it to a polynomial chaos expansion.

cdist(XA, XB[, metric])

Compute distance between each pair of the two collections of inputs.
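
This matches scipy.spatial.distance.cdist; for example:

    import numpy as np
    from scipy.spatial.distance import cdist

    XA = np.array([[0.0, 0.0], [1.0, 0.0]])
    XB = np.array([[0.0, 1.0]])
    print(cdist(XA, XB, metric="euclidean"))  # [[1.0], [1.41421356]]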

compute_hyperbolic_indices(num_vars, level)

downselect_morris_trajectories(samples, …)

get_main_and_total_effect_indices_from_pce(…)

Assumes the basis is orthonormal and that the first coefficient is the coefficient of the constant basis function.

get_morris_elementary_effects(samples, values)

Get the Morris elementary effects from a set of trajectories.

get_morris_samples(nvars, nlevels, ntrajectories)

Compute a set of Morris trajectories used to compute elementary effects.

get_morris_sensitivity_indices(elem_effects)

Compute the Morris sensitivity indices mu and sigma from the elementary effects computed for a set of trajectories.
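
A minimal sketch of the underlying computation, assuming elem_effects stores one elementary effect per variable (rows) and per trajectory (columns); whether pyapprox returns the signed mean or the absolute-value variant mu* is an assumption here:

    import numpy as np

    def morris_mu_sigma(elem_effects):
        # mu: mean absolute elementary effect per variable (screening measure)
        mu = np.absolute(elem_effects).mean(axis=1)
        # sigma: spread of the effects, indicating nonlinearity/interactions
        sigma = elem_effects.std(axis=1, ddof=1)
        return mu, sigma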

get_morris_trajectory(nvars, nlevels[, eps])

Compute a Morris trajectory used to compute elementary effects.

get_sobol_indices(coefficients, indices[, …])

hash_array(array[, decimals])

Hash an array for dictionary or set based lookup

nchoosek(nn, kk)

plot_interaction_values(interaction_values, …)

Plot Sobol indices in a pie chart showing relative size.

plot_main_effects(main_effects, ax[, …])

Plot the main effects in a pie chart showing relative size.

plot_total_effects(total_effects, ax[, …])

Plot the total effects in a pie chart showing relative size.

print_morris_sensitivity_indices(mu, sigma)

Classes

OptimizeResult

Represents the optimization result.

SensivitityResult

combinations

combinations(iterable, r) --> combinations object
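
This is the standard itertools object; for example:

    from itertools import combinations

    print(list(combinations("ABCD", 2)))
    # [('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D')]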

pyapprox.quadrature Module

Functions

compute_mean_and_variance_sparse_grid(…[, …])

Compute the mean and variance of a sparse grid by converting it to a polynomial chaos expansion.

Classes

OptimizeResult

Represents the optimization result.

QuadratureResult

pyapprox.benchmarks.benchmarks Module

Functions

beam_constraint_I(samples)

The desired behavior occurs when the constraint is less than 0.

beam_constraint_II(samples)

The desired behavior occurs when the constraint is less than 0.

beam_constraint_II_design_jac(samples)

Jacobian with respect to the design variables. The desired behavior occurs when the constraint is less than 0.

beam_constraint_I_design_jac(samples)

Jacobian with respect to the design variables. The desired behavior occurs when the constraint is less than 0.

cantilever_beam_constraints(samples)

cantilever_beam_constraints_jacobian(samples)

cantilever_beam_objective(samples)

cantilever_beam_objective_grad(samples)

define_beam_random_variables()

evaluate_quadratic_form(matrix, samples)

Evaluate x.T.dot(A).dot(x) for several vectors x.
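
A one-line re-implementation sketch (not the library's code), with the sample vectors stored as columns:

    import numpy as np

    def evaluate_quadratic_form_sketch(matrix, samples):
        # x.T.dot(A).dot(x) for every column x of samples
        return np.sum(samples * matrix.dot(samples), axis=0)

    A = np.array([[2.0, 0.0], [0.0, 3.0]])
    X = np.array([[1.0, 2.0], [1.0, 0.0]])       # two column vectors
    print(evaluate_quadratic_form_sketch(A, X))  # [5. 8.]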

get_ishigami_funciton_statistics([a, b])

Each input is uniformly distributed: \(X_i \sim U[-\pi,\pi]\).

get_oakley_function_data()

Get the data \(a_1,a_2,a_3\) and \(M\) of the Oakley function

get_sobol_g_function_statistics(a[, …])

See article: Variance based sensitivity analysis of model output.

ishigami_function(samples[, a, b])

ishigami_function_hessian(samples[, a, b])

ishigami_function_jacobian(samples[, a, b])

morris_function(samples)

oakley_function(samples)

oakley_function_statistics()

rosen(x)

The Rosenbrock function.

rosen_der(x)

The derivative (i.e. gradient) of the Rosenbrock function.

rosen_hess_prod(x, p)

Product of the Hessian matrix of the Rosenbrock function with a vector.
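
rosen, rosen_der and rosen_hess_prod match the scipy.optimize functions of the same names:

    import numpy as np
    from scipy.optimize import rosen, rosen_der, rosen_hess_prod

    x = np.array([1.0, 1.0])    # the global minimizer
    print(rosen(x))             # 0.0
    print(rosen_der(x))         # [0. 0.]
    print(rosen_hess_prod(x, np.array([1.0, 0.0])))  # Hessian-vector product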

rosenbrock_function(samples)

rosenbrock_function_hessian_prod(samples, vec)

rosenbrock_function_jacobian(samples)

rosenbrock_function_mean(nvars)

Mean of the Rosenbrock function with uniform variables on \([-2,2]^d\).

setup_advection_diffusion_benchmark(nvars, …)

Compute functionals of the following model of transient advection-diffusion (with 3 configuration variables which control the two spatial mesh resolutions and the timestep).

setup_advection_diffusion_source_inversion_benchmark([…])

Compute functionals of the following model of transient diffusion of a contaminant.

setup_benchmark(name, **kwargs)

setup_cantilever_beam_benchmark()

setup_genz_function(nvars, test_name[, …])

Set up the Genz benchmarks.

setup_ishigami_function(a, b)

Set up the Ishigami function benchmark.

setup_mfnets_helmholtz_benchmark([…])

Set up the multi-fidelity benchmark used to combine Helmholtz data.

setup_multi_level_advection_diffusion_benchmark(…)

Compute functionals of the transient advection-diffusion model (with 1 configuration variable which controls the two spatial mesh resolutions and the timestep).

setup_oakley_function()

Set up the Oakley function benchmark.

setup_rosenbrock_function(nvars)

Set up the Rosenbrock function benchmark.

setup_sobol_g_function(nvars)

Set up the Sobol-G function benchmark.

sobol_g_function(coefficients, samples)

The coefficients control the sensitivity of each variable.

variance_linear_combination_of_indendent_variables(…)

Classes

Benchmark

Contains functions and results needed to implement known benchmarks.

DesignVariable(bounds)

GenzFunction(func_type, num_vars[, c, w, name])

IndependentMultivariateRandomVariable(…[, …])

Class representing independent random variables.

OptimizeResult

Represents the optimization result.

partial

partial(func, *args, **keywords) - new function with partial application of the given arguments and keywords.

pyapprox.benchmarks.sensitivity_benchmarks Module

Functions

evaluate_quadratic_form(matrix, samples)

Evaluate x.T.dot(A).dot(x) for several vectors x.

get_ishigami_funciton_statistics([a, b])

Each input is uniformly distributed: \(X_i \sim U[-\pi,\pi]\).

get_oakley_function_data()

Get the data \(a_1,a_2,a_3\) and \(M\) of the Oakley function

get_sobol_g_function_statistics(a[, …])

See article: Variance based sensitivity analysis of model output.

ishigami_function(samples[, a, b])

ishigami_function_hessian(samples[, a, b])

ishigami_function_jacobian(samples[, a, b])

morris_function(samples)

oakley_function(samples)

oakley_function_statistics()

sobol_g_function(coefficients, samples)

The coefficients control the sensitivity of each variable.

variance_linear_combination_of_indendent_variables(…)

pyapprox.fenics_models.advection_diffusion_wrappers Module

Functions

collect_dirichlet_boundaries(function_space, …)

compute_convergence_rates(run_model, u_e[, …])

Compute convergence rates for various error norms. Adapted from https://fenicsproject.org/pub/tutorial/html/._ftut1020.html

compute_errors(u_e, u)

Compute various measures of the error u - u_e, where u is a finite element Function and u_e is an Expression.

constrained_newton_energy_solve(F, uh[, …])

See https://uvilla.github.io/inverse15/UnconstrainedMinimization.html

convergence_order(errors[, base])

copy_expression(expr)

generate_polygonal_mesh(resolution, …[, …])

Sometimes a segfault is thrown when mshr.generate_mesh() is called.

get_1d_dirichlet_boundary_conditions_from_expression(…)

get_2d_bndry_segment(x1, y1, x2, y2)

Define a boundary segment along the line between (x1,y1) and (x2,y2). Assumes (x1,y1) and (x2,y2) come in clockwise order.

get_2d_rectangular_mesh_boundaries(xl, xr, …)

get_2d_rectangular_mesh_boundary_segment(…)

get_2d_unit_square_mesh_boundaries()

get_all_boundaries()

get_boundary_indices(function_space)

get_default_velocity(degree, vel_vec)

get_dirichlet_boundary_conditions_from_expression(…)

get_gaussian_source_forcing(degree, …)

get_misc_forcing(degree)

Use the forcing from [JEGGIJNME2020].

get_nobile_diffusivity(corr_len, degree, …)

get_num_subdomain_dofs(Vh, subdomain)

Get the number of degrees of freedom (dofs) on a subdomain.

get_polygon_boundary_segments(ampothem, nedges)

get_robin_boundary_conditions_from_expression(…)

get_surface_of_3d_function(Vh_2d, z, function)

get_vertices_of_polygon(ampothem, nedges)

homogenize_boundaries(bcs)

info_blue(msg)

info_green(msg)

info_red(msg)

load_fenics_function(function_space, filename)

mark_boundaries(mesh, boundary_conditions)

on_any_boundary(x, on_boundary)

plot_functions(functions[, nrows])

qoi_functional_misc(u)

Use the QoI from [JEGGIJNME2020]

qoi_functional_source_inversion(sols)

See Jinglai Li and Youssef M. Marzouk.

run_model(function_space, kappa, forcing, …)

Use implicit Euler to solve the transient advection-diffusion equation.

run_steady_state_model(function_space, …)

Solve the steady-state diffusion equation.

save_fenics_function(function, filename)

setup_advection_diffusion_benchmark(nvars, …)

Compute functionals of the following model of transient advection-diffusion (with 3 configure variables which control the two spatial mesh resolutions and the timestep)

setup_advection_diffusion_source_inversion_benchmark([…])

Compute functionals of the following model of transient diffusion of a contaminant

setup_dirichlet_and_periodic_boundary_conditions_and_function_space(…)

setup_multi_level_advection_diffusion_benchmark(…)

Compute functionals of the transient advection-diffusion (with 1 configure variables which controls the two spatial mesh resolutions and the timestep).

setup_zero_flux_neumann_boundary_conditions(…)

split_function_recursively(function)

unconstrained_newton_solve(F, J, uh[, …])

F: dl.Expression

Classes

AdvectionDiffusionModel(final_time, degree, …)

AdvectionDiffusionSourceInversionModel(…)

Diffusivity(*args, **kwargs)

RectangularMeshPeriodicBoundary(Ly, **kwargs)

Domain \([0,L_x]\times[0,L_y]\); the y-boundary is periodic, i.e. the top and bottom boundaries.

pyapprox.fenics_models.helmholtz_benchmarks Module

Functions

generate_helmholtz_bases(samples[, …])

generate_polygonal_mesh(resolution, …[, …])

Sometimes a segfault is thrown when mshr.generate_mesh() is called.

get_helmholtz_speaker_boundary_conditions(…)

get_polygon_boundary_segments(ampothem, nedges)

get_speaker_boundary_segment_indices(nedges, …)

helmholtz_basis(sols, active_speakers, samples)

plot_mfnets_helmholtz_benchmark_data(benchmark)

run_model(kappa, forcing, function_space[, …])

Solve the complex-valued Helmholtz equation via a coupled system: one equation for the real part of the solution and one for the imaginary part.

setup_helmholtz_model(mesh_resolution[, …])

Uses 6320 m/s for the speed of sound in aluminium and 343 m/s for the speed of sound in air at 20 degrees Celsius.

setup_mfnets_helmholtz_benchmark([…])

Set up the multi-fidelity benchmark used to combine Helmholtz data.

Classes

Benchmark

Contains functions and results needed to implement known benchmarks.

HelmholtzBasis(sols, active_speakers)

pyapprox.multivariate_polynomials Module

Functions

add_polynomials(indices_list, coeffs_list)

Add many polynomials together.

cartesian_product(input_sets[, elem_size])

Compute the Cartesian product of an arbitrary number of sets.
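
A sketch of the technique using numpy; the column-wise output convention is an assumption:

    import numpy as np

    def cartesian_product_sketch(input_sets):
        # columns of the result enumerate every combination of the inputs
        grids = np.meshgrid(*input_sets, indexing="ij")
        return np.vstack([g.ravel() for g in grids])

    print(cartesian_product_sketch([np.array([0, 1]), np.array([10, 20, 30])]))
    # 2 x 6 array whose 6 columns are the pairs (0,10), (0,20), ..., (1,30)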

compute_hyperbolic_indices(num_vars, level)

compute_multivariate_orthonormal_basis_product(…)

Compute the product of two multivariate orthonormal bases and re-express it as an expansion using the orthonormal basis.

compute_product_coeffs_1d_for_each_variable(…)

compute_univariate_orthonormal_basis_products(…)

Compute all the products of univariate orthonormal bases and re-express them as expansions using the orthonormal basis.

conditional_moments_of_polynomial_chaos_expansion(…)

Return mean and variance of polynomial chaos expansion with some variables fixed at specified values.

define_poly_options_from_variable(variable)

define_poly_options_from_variable_transformation(…)

discrete_chebyshev_recurrence(N, Ntrials)

Compute the recursion coefficients of the discrete Chebyshev polynomials, which are orthonormal with respect to a discrete uniform probability measure.

evaluate_multivariate_orthonormal_polynomial(…)

Evaluate a multivariate orthonormal polynomial and its s-derivatives (s=1,…,num_derivs) using its three-term recurrence coefficients.

evaluate_multivariate_orthonormal_polynomial_derivs(…)

evaluate_multivariate_orthonormal_polynomial_derivs_deprecated(…)

evaluate_multivariate_orthonormal_polynomial_values(…)

evaluate_multivariate_orthonormal_polynomial_values_deprecated(…)

evaluate_orthonormal_polynomial_1d(x, nmax, ab)

Evaluate univariate orthonormal polynomials using their three-term recurrence coefficients.

evaluate_orthonormal_polynomial_deriv_1d(x, …)

Evaluate the univariate orthonormal polynomials and their s-derivatives (s=1,…,num_derivs) using their three-term recurrence coefficients.

flattened_rectangular_lower_triangular_matrix_index(ii, …)

Get the flattened index kk from the row and column indices (ii,jj) of the lower-triangular part of an MxN matrix.

gauss_quadrature(recursion_coeffs, N)

Compute a Gauss quadrature rule from recurrence coefficients.
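
The standard technique is the Golub-Welsch algorithm: the eigenvalues of the symmetric tridiagonal Jacobi matrix give the nodes, and the squared first components of its eigenvectors give the weights. A sketch follows; the (a_k, b_k) storage layout of the recurrence coefficients is an assumption and may differ from pyapprox's:

    import numpy as np

    def golub_welsch_sketch(ab, N):
        # ab[k] = (a_k, b_k) recurrence coefficients; ab[0, 1] is the
        # total mass of the measure (1 for a probability measure)
        a = ab[:N, 0]
        sqrt_b = np.sqrt(ab[1:N, 1])
        J = np.diag(a) + np.diag(sqrt_b, 1) + np.diag(sqrt_b, -1)
        nodes, vecs = np.linalg.eigh(J)
        weights = ab[0, 1] * vecs[0, :] ** 2
        return nodes, weights

    # probabilists' Hermite: a_k = 0, b_k = k (with b_0 = 1)
    ab = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 2.0]])
    print(golub_welsch_sketch(ab, 2))  # nodes [-1, 1], weights [0.5, 0.5]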

generate_independent_random_samples(…[, …])

Generate samples from a tensor-product probability measure.

get_distribution_info(rv)

Shapes and scales can appear in either args or kwargs depending on how the user initializes the frozen object.

get_polynomial_from_variable(variable)

get_recursion_coefficients(opts, num_coefs)

get_tensor_product_quadrature_rule(degrees, …)

If an error about the outer product failing is raised, it may be because the univariate quadrature rule is returning a weights array for every level, i.e. l=0,…,level.

get_tensor_product_quadrature_rule_from_pce(…)

get_univariate_quadrature_rules_from_pce(…)

hahn_recurrence(Nterms, N, alphaPoly, betaPoly)

Compute the recursion coefficients of the polynomials which are orthonormal with respect to the hypergeometric probability mass function.

hermite_recurrence(Nterms[, rho, probability])

Compute the recursion coefficients for the Hermite polynomial family.

jacobi_recurrence(N[, alpha, beta, probability])

Compute the recursion coefficients of Jacobi polynomials which are orthonormal with respect to Beta random variables.

krawtchouk_recurrence(Nterms, Ntrials, p)

Compute the recursion coefficients of the polynomials which are orthonormal with respect to the binomial probability mass function.

lanczos(nodes, weights, N)

modified_chebyshev_orthonormal(nterms, …)

Use the modified Chebyshev algorithm to compute the recursion coefficients of the polynomials p_i(x) that are orthonormal with respect to a target measure w(x), given the modified moments.

monomial_basis_matrix(indices, samples[, …])

Evaluate a multivariate monomial basis at a set of samples.

multiply_multivariate_orthonormal_polynomial_expansions(…)

outer_product(input_sets)

Construct the outer product of an arbitrary number of sets.

precompute_multivariate_orthonormal_polynomial_univariate_values(…)

precompute_multivariate_orthonormal_polynomial_univariate_values_deprecated(…)

remove_variables_from_polynomial_chaos_expansion(…)

This function is not optimal.

Classes

AffineRandomVariableTransformation(variable)

PolynomialChaosExpansion()

partial

partial(func, *args, **keywords) - new function with partial application of the given arguments and keywords.

pyapprox.bayesian_inference.markov_chain_monte_carlo Module

Functions

approx_fprime(xk, f, epsilon, *args)

Finite-difference approximation of the gradient of a scalar function.

extract_map_sample_from_pymc3_dict(…)

extract_mcmc_chain_from_pymc3_trace(trace, …)

get_distribution_info(rv)

Shapes and scales can appear in either args or kwargs depending on how the user initializes the frozen object.

get_pymc_variable(rv, pymc_var_name)

get_pymc_variables(variables[, pymc_var_names])

run_bayesian_inference_gaussian_error_model(…)

Draw samples from the posterior distribution using Markov chain Monte Carlo, for data described by a Gaussian error model.

Classes

GaussianLogLike(model, data, noise_covar)

A Gaussian log-likelihood function for a model with parameters given in sample.

LogLike(loglike)

Specify what type of object will be passed and returned to the Op when it is called.

LogLikeGrad(loglike[, loglike_grad])

This Op will be called with a vector of values and also return a vector of values - the gradients in each dimension.

LogLikeWithGrad(loglike[, loglike_grad])

PYMC3LogLikeWrapper(loglike[, loglike_grad])

Turn a pyapprox model into one which can be used by PyMC3.

pyapprox.optimal_experimental_design Module

Functions

aoptimality_criterion(homog_outer_prods, …)

Evaluate the A-optimality criterion for a given design probability measure for the linear model.

compute_homoscedastic_outer_products(factors)

Compute the outer products of the design factors for a homoscedastic noise model.

compute_prediction_variance(…[, …])

coptimality_criterion(homog_outer_prods, …)

Evaluate the C-optimality criterion for a given design probability measure for the linear model.

doptimality_criterion(homog_outer_prods, …)

Evaluate the D-optimality criterion for a given design probability measure for the linear model.

extract_minimax_design_from_optimize_result(res)

extract_r_oed_design_from_optimize_result(…)

get_M0_and_M1_matrices(homog_outer_prods, …)

Compute the matrices \(M_0\) and \(M_1\) used to compute the asymptotic covariance matrix \(C(\mu) = M_1^{-1} M_0 M_1^{-1}\) of the linear model.

get_minimax_bounds(num_design_pts)

get_minimax_default_initial_guess(num_design_pts)

get_minimax_linear_constraints(num_design_pts)

get_r_oed_bounds(num_pred_pts, num_design_pts)

get_r_oed_default_initial_guess(…)

get_r_oed_jacobian_structure(num_pred_pts, …)

get_r_oed_linear_constraints(num_pred_pts, …)

goptimality_criterion(homog_outer_prods, …)

Evaluate the G-optimality criterion for a given design probability measure for the linear model.

ioptimality_criterion(homog_outer_prods, …)

Evaluate the I-optimality criterion for a given design probability measure for the linear model.

minimax_oed_constraint_jacobian(local_oed_jac, x)

minimax_oed_constraint_objective(…)

minimax_oed_objective(x)

minimax_oed_objective_jacobian(x)

minimize(fun, x0[, args, method, jac, hess, …])

Minimization of scalar function of one or more variables.
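
This matches scipy.optimize.minimize; for example:

    from scipy.optimize import minimize, rosen, rosen_der

    res = minimize(rosen, x0=[1.3, 0.7], method="BFGS", jac=rosen_der)
    print(res.x)  # close to [1., 1.], the minimizer of the Rosenbrock function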

optimal_experimental_design(design_pts, fun, …)

Compute optimal experimental designs for linear models.

r_oed_constraint_jacobian(num_design_pts, …)

r_oed_constraint_objective(num_design_pts, …)

r_oed_objective(beta, pred_weights, x)

r_oed_objective_jacobian(beta, pred_weights, x)

r_oed_sparse_constraint_jacobian(…)

roptimality_criterion(beta, …[, …])

Evaluate the R-optimality criterion for a given design probability measure for the linear model.

solve_triangular(a, b[, trans, lower, …])

Solve the equation a x = b for x, assuming a is a triangular matrix.
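
This matches scipy.linalg.solve_triangular; for example:

    import numpy as np
    from scipy.linalg import solve_triangular

    a = np.array([[3.0, 0.0], [2.0, 1.0]])
    b = np.array([6.0, 5.0])
    print(solve_triangular(a, b, lower=True))  # [2. 1.]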

Classes

AlphabetOptimalDesign(criteria, design_factors)

Bounds(lb, ub[, keep_feasible])

Bounds constraint on the variables.

LinearConstraint(A, lb, ub[, keep_feasible])

Linear constraint on the variables.

NonlinearConstraint(fun, lb, ub[, jac, …])

Nonlinear constraint on the variables.

partial

partial(func, *args, **keywords) - new function with partial application of the given arguments and keywords.

pyapprox.models.wrappers Module

Functions

Pool([processes, initializer, initargs, …])

Returns a process pool object
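
This is the standard multiprocessing pool factory; for example:

    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            print(pool.map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]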

combine_saved_model_data(saved_data_basename)

default_map_to_multidimensional_index(…)

eval(function, samples)

evaluate_1darray_function_on_2d_array(…[, …])

Evaluate a function at a set of samples using a function that only takes one sample at a time.
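
A minimal re-implementation sketch (not the library's code), assuming samples are stored column-wise:

    import numpy as np

    def evaluate_1darray_function_on_2d_array_sketch(function, samples):
        # call the single-sample function once per column and stack the rows
        return np.vstack([np.atleast_1d(function(s)) for s in samples.T])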

get_all_sample_combinations(samples1, samples2)

For two sample sets of different random variables, loop over all combinations.

get_num_args(function)

Return the number of arguments of a function.

hash_array(array[, decimals])

Hash an array for dictionary- or set-based lookup.

run_model_samples_in_parallel(model, …[, …])

run_shell_command(shell_command[, opts])

Execute a shell command.

time_function_evaluations(function, samples)

Classes

ActiveSetVariableModel(function, num_vars, …)

DataFunctionModel(function[, data, …])

MultiLevelWrapper(model, …[, …])

Specify a one-dimensional model hierarchy from a multi-dimensional hierarchy. For example, if a model has configure variables which refine the x and y physical directions, then one can specify a multilevel hierarchy by creating new indices with the mapping k=(i,i).

PoolModel(function, max_eval_concurrency[, …])

PyFunction(function)

SingleFidelityWrapper(model, config_values)

TimerModelWrapper(function[, base_model])

WorkTracker()

Store the cost needed to evaluate a function under different configurations.

WorkTrackingModel(function[, base_model, …])

partial

partial(func, *args, **keywords) - new function with partial application of the given arguments and keywords.

pyapprox.models.async_model Module

Classes

AsynchronousEvaluationModel(shell_command[, …])

Evaluate a model in parallel when model instances are invoked by a shell script.

pyapprox.control_variate_monte_carlo Module

Functions for estimating expectations using frequentist control-variate Monte Carlo methods, such as multilevel Monte Carlo (MLMC), control-variate Monte Carlo, and approximate control-variate Monte Carlo (ACV).

Functions

acv_sample_allocation_cost_constraint(…)

acv_sample_allocation_cost_constraint_all(…)

acv_sample_allocation_cost_constraint_jacobian_all(…)

acv_sample_allocation_jacobian_all_lagrange_torch(…)

acv_sample_allocation_jacobian_all_torch(…)

acv_sample_allocation_jacobian_torch(…)

acv_sample_allocation_objective(estimator, …)

acv_sample_allocation_objective_all(estimator, x)

acv_sample_allocation_objective_all_lagrange(…)

acv_sample_allocation_sample_ratio_constraint(…)

allocate_samples_acv(cov, costs, …[, …])

Determine the samples to be allocated to each model.

allocate_samples_acv_best_kl(cov, costs, …)

allocate_samples_mfmc(cov, costs, target_cost)

Determine the samples to be allocated to each model when using MFMC.

allocate_samples_mlmc(cov, costs, target_cost)

Determine the samples to be allocated to each model when using MLMC.

bootstrap_mfmc_estimator(values, weights[, …])

Bootstrap the approximate MFMC estimate of the mean of high-fidelity data with low-fidelity models with unknown means.

bootstrap_monte_carlo_estimator(values[, …])

Approximate the variance of the Monte Carlo estimate of the mean using bootstrapping.

compare_estimator_variances(target_costs, …)

compute_approximate_control_variate_mean_estimate(…)

Use approximate control variate Monte Carlo to estimate the mean of high-fidelity data with low-fidelity models with unknown means.

compute_control_variate_mean_estimate(…)

Use control variate Monte Carlo to estimate the mean of high-fidelity data with low-fidelity models with known means.

compute_correlations_from_covariance(cov)

Compute the correlation matrix of a covariance matrix.

compute_covariance_from_control_variate_samples(values)

Compute the covariance between information sources from a set of evaluations of each information source.

compute_single_fidelity_and_approximate_control_variate_mean_estimates(…)

Compute the approximate control variate estimate of the mean of a high-fidelity model using it and a set of lower-fidelity models.

estimate_model_ensemble_covariance(…)

Estimate the covariance of a model ensemble from a set of pilot samples.

estimate_variance_reduction(model_ensemble, …)

Numerically estimate the variance of an approximate control variate estimator and compare its value to the estimator using only the high-fidelity data.

generate_independent_random_samples(…[, …])

Generate samples from a tensor-product probability measure.

generate_samples_and_values_acv_IS(…)

generate_samples_and_values_acv_KL(…)

K : integer (K<=nmodels-1)

generate_samples_and_values_mfmc(…[, …])

generate_samples_and_values_mlmc(…)

get_all_sample_combinations(samples1, samples2)

For two sample sets of different random variables, loop over all combinations.

get_allocate_samples_acv_trust_region_constraints(…)

get_approximate_control_variate_weights(cov, …)

Get the weights used by the approximate control variate estimator.

get_control_variate_rsquared(cov)

Compute the \(r^2\) value that determines the variance reduction of control variate Monte Carlo.
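
For context, the classical control-variate identity (a textbook result, not quoted from the pyapprox docs) relates \(r^2\) to the estimator variance:

    \mathrm{Var}\left[\hat{Q}^{\mathrm{CV}}\right]
        = (1 - r^2)\,\mathrm{Var}\left[\hat{Q}^{\mathrm{MC}}\right],
    \qquad 0 \le r^2 \le 1 .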

get_control_variate_weights(cov)

Get the weights used by the control variate estimator with known low-fidelity means.

get_correlation_from_covariance(cov)

Compute the correlation matrix from a covariance matrix.

get_discrepancy_covariances_IS(cov, …[, pkg])

Get the covariances of the discrepancies \(\delta\) between each low-fidelity model and its estimated mean when the same \(N\) samples are used to compute the covariance between the models, \(N-r_\alpha\) samples are allocated to estimate the low-fidelity means, and each of these sets is drawn independently from one another.

get_discrepancy_covariances_KL(cov, …[, pkg])

Get the covariances of the discrepancies \(\delta\) between each low-fidelity model and its estimated mean using the MFMC sampling strategy and the ACV KL estimator.

get_discrepancy_covariances_MF(cov, …[, pkg])

Get the covariances of the discrepancies \(\delta\) between each low-fidelity model and its estimated mean using the MFMC sampling strategy.

get_initial_guess(initial_guess, cov, costs, …)

get_lagrange_multiplier_mlmc(cov, costs, …)

Given an optimal sample allocation, recover the optimal value of the Lagrange multiplier.

get_mfmc_control_variate_weights(cov)

get_mfmc_control_variate_weights_pool_wrapper(…)

Create an interface that adheres to the API assumed by the variance reduction check. This cannot be defined as a lambda locally in a test when using a multiprocessing pool, because Python cannot pickle such lambda functions.

get_mlmc_control_variate_weights(nmodels)

Get the weights used by the MLMC control variate estimator.

get_mlmc_control_variate_weights_pool_wrapper(…)

Create an interface that adheres to the API assumed by the variance reduction check. This cannot be defined as a lambda locally in a test when using a multiprocessing pool, because Python cannot pickle such lambda functions.

get_pilot_covariance(nmodels, variable, …)

get_rsquared_acv(cov, nsample_ratios, …)

Compute the \(r^2\) value used to compute the variance reduction of approximate control variate (ACV) algorithms.

get_rsquared_acv_KL_best(cov, nsample_ratios)

get_rsquared_mfmc(cov, nsample_ratios)

Compute the \(r^2\) value used to compute the variance reduction of multifidelity Monte Carlo (MFMC).

get_rsquared_mlmc(cov, nsample_ratios[, pkg])

Compute the \(r^2\) value used to compute the variance reduction of multilevel Monte Carlo (MLMC).

get_variance_reduction(get_rsquared, cov, …)

Compute the variance reduction.

minimize(fun, x0[, args, method, jac, hess, …])

Minimization of scalar function of one or more variables.

plot_acv_sample_allocation(nsamples_history, …)

plot_estimator_variances(nsamples_history, …)

solve_allocate_samples_acv_slsqp_optimization(…)

solve_allocate_samples_acv_trust_region_optimization(…)

standardize_sample_ratios(nhf_samples, …)

Ensure the number of high-fidelity samples is positive (>0) and then recompute the sample ratios.

validate_nsample_ratios(nhf_samples, …)

Check that nsample_ratios * nhf_samples are all integers and that all entries of nsample_ratios are larger than 1.

Classes

ACVMF(cov, costs)

ACVMFKL(cov, costs, target_cost, K, L)

ACVMFKLBest(cov, costs)

MC(cov, costs)

MFMC(cov, costs)

MLMC(cov, costs)

ModelEnsemble(functions[, names])

Wrapper class to allow easy one-dimensional indexing of models in an ensemble.

partial

partial(func, *args, **keywords) - new function with partial application of the given arguments and keywords.