Model Definition¶
This tutorial describes how to set up a function with random inputs. It also provides examples of how to use model wrappers to time function calls and to evaluate a function at multiple samples in parallel.
We start by defining a function of two random variables. We will use the Rosenbrock benchmark. See pyapprox.benchmarks.benchmarks.setup_rosenbrock_function()
import pyapprox as pya
from pyapprox.benchmarks.benchmarks import setup_benchmark
benchmark = setup_benchmark('rosenbrock',nvars=2)
Print the attributes of the benchmark with
print(benchmark.keys())
Out:
dict_keys(['fun', 'jac', 'hessp', 'variable', 'mean', 'loglike', 'loglike_grad'])
Any of these attributes, e.g. the Rosenbrock function (the attribute fun), can be accessed using benchmark.fun.
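For example, we can evaluate benchmark.fun directly. A minimal sketch, assuming the (nvars,nsamples) sample convention used throughout PyApprox (samples are stored as columns):
import numpy as np
# evaluate the Rosenbrock function at its global minimum (1, 1)
sample = np.array([[1.], [1.]])
print(benchmark.fun(sample))  # [[0.]]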
Now let's define the inputs to the function of interest. For independent random variables we use SciPy random variables to represent each one-dimensional variable. For documentation refer to the scipy.stats module.
We define multivariate random variables by specifying each 1D variable in a list. Here we will set up a 2D variable which is the tensor product of two independent and identically distributed uniform random variables
from scipy import stats
univariate_variables = [stats.uniform(-2,4),stats.uniform(-2,4)]
variable = pya.IndependentMultivariateRandomVariable(univariate_variables)
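Note that SciPy's uniform(loc,scale) is supported on the interval [loc,loc+scale], so each variable here is uniform on [-2,2]. A quick check using SciPy's standard interval method:
print(univariate_variables[0].interval(1))  # (-2.0, 2.0)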
This variable is also defined in the benchmark.variable attribute. To print a summary of the random variable
print(variable)
Out:
I.I.D. Variable
Number of variables: 2
Unique variables and global id:
uniform(loc=-2,scale=4): z0, z1
We can draw random samples from variable and evaluate the function using
nsamples = 10
samples = pya.generate_independent_random_samples(variable,nsamples)
values = benchmark.fun(samples)
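Each column of samples is a single realization of the two random variables and each row of values holds the corresponding function output. Following the shape conventions described in the next section, we can confirm
assert samples.shape == (2, nsamples)
assert values.shape == (nsamples, 1)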
User defined functions¶
PyApprox can be used with almost any function provided an appropriate interface is defined. Here we show how to set up a simple function.
PyApprox requires all functions to take a 2D np.ndarray with shape (nvars,nsamples) and to return a 2D np.ndarray with shape (nsamples,nqoi). nqoi==1 for scalar-valued functions and nqoi>1 for vector-valued functions.
Let's define a function which does not match these criteria and use wrappers provided by PyApprox to convert it to the correct format. Specifically, we will define a function that only takes a 1D np.ndarray and returns a scalar
import numpy as np
def fun(sample):
    assert sample.ndim == 1
    return np.sum(sample**2)
from pyapprox.models.wrappers import evaluate_1darray_function_on_2d_array
def pyapprox_fun(samples):
    values = evaluate_1darray_function_on_2d_array(fun, samples)
    return values
values = pyapprox_fun(samples)
The function pyapprox.models.wrappers.evaluate_1darray_function_on_2d_array()
avoids the need to write a for loop and also does some checking to make sure values has the correct shape. We can, however, write the loop ourselves and verify the two approaches agree
values_loop = np.array([np.atleast_1d(fun(s)) for s in samples.T])
assert np.allclose(values,values_loop)
Timing function evaluations¶
It is often useful to track the time needed to evaluate a function. We can do this using the pyapprox.models.wrappers.TimerModelWrapper
and pyapprox.models.wrappers.WorkTrackingModel
objects, which are designed to work together. The former times each evaluation of a function that returns output of shape (nsamples,nqoi) and appends the time to the quantities of interest returned by the function, i.e. it returns a 2D np.ndarray with shape (nsamples,nqoi+1). The latter extracts the time, removes it from the quantities of interest, and returns output with the original shape (nsamples,nqoi) of the user function.
Let's use these classes with a function that takes a random amount of time. We will use the previous function but add a random pause between 0 and 0.05 seconds
import time
def fun_pause(sample):
    assert sample.ndim == 1
    time.sleep(np.random.uniform(0, .05))
    return np.sum(sample**2)
def pyapprox_fun(samples):
    return evaluate_1darray_function_on_2d_array(fun_pause, samples)
from pyapprox.models.wrappers import TimerModelWrapper, WorkTrackingModel
timer_fun = TimerModelWrapper(pyapprox_fun)
worktracking_fun = WorkTrackingModel(timer_fun)
values = worktracking_fun(samples)
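As described above, the work-tracking model strips the timing column, so values has the same shape as the output of the unwrapped user function:
assert values.shape == (nsamples, 1)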
The pyapprox.models.wrappers.WorkTrackingModel
has an attribute pyapprox.models.wrappers.WorkTracker
which stores the execution time of each function evaluation in a dictionary. Each key is a model id. For this example the id will always be the same, but the id can vary, which is useful when evaluating multiple models, e.g. when using multi-fidelity methods. To print the dictionary use
costs = worktracking_fun.work_tracker.costs
print(costs)
Out:
{(0,): [0.0445559024810791, 0.0033240318298339844, 0.03546309471130371, 0.05207014083862305, 0.037081241607666016, 0.01415395736694336, 0.030753135681152344, 0.001789093017578125, 0.017220020294189453, 0.007019758224487305]}
We can also call the work tracker to query the median cost for a model with a given id. The default id is 0.
fun_id = np.atleast_2d([0])
print(worktracking_fun.work_tracker(fun_id))
Out:
[0.02398658]
Evaluating multiple models¶
Now let us apply this to an ensemble of models to explore the use of model ids. First create a second function.
def fun_pause_2(sample):
    time.sleep(np.random.uniform(.05, .1))
    return np.sum(sample**2)
def pyapprox_fun_2(samples):
    return evaluate_1darray_function_on_2d_array(fun_pause_2, samples)
Now, using pyapprox.control_variate_monte_carlo.ModelEnsemble,
we can create a function which takes the random samples plus an additional configure variable which defines which model to evaluate. Let's use half the samples to evaluate the first model and evaluate the second model at the remaining samples
from pyapprox.control_variate_monte_carlo import ModelEnsemble
model_ensemble = ModelEnsemble([pyapprox_fun,pyapprox_fun_2])
timer_fun_ensemble = TimerModelWrapper(model_ensemble)
worktracking_fun_ensemble = WorkTrackingModel(
    timer_fun_ensemble, num_config_vars=1)
fun_ids = np.ones(nsamples)
fun_ids[:nsamples//2] = 0
ensemble_samples = np.vstack([samples,fun_ids])
values = worktracking_fun_ensemble(ensemble_samples)
Here we had to pass the number of configure variables (1) to the WorkTrackingModel. PyApprox assumes that the configure variables occupy the last rows of the samples 2D array.
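For example, with one configure variable the last row of ensemble_samples holds the model ids and the remaining rows hold the random samples:
assert np.allclose(ensemble_samples[:-1, :], samples)  # the random samples
assert np.allclose(ensemble_samples[-1, :], fun_ids)   # the model ids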
Now check that the new values are the same as when using the individual functions directly
assert np.allclose(values[:nsamples//2],pyapprox_fun(samples[:,:nsamples//2]))
assert np.allclose(values[nsamples//2:],pyapprox_fun_2(samples[:,nsamples//2:]))
Again we can query the execution times of each model
costs = worktracking_fun_ensemble.work_tracker.costs
print(costs)
query_fun_ids = np.atleast_2d([0,1])
print(worktracking_fun_ensemble.work_tracker(query_fun_ids))
Out:
{(0,): [0.013247966766357422, 0.013377904891967773, 0.020057201385498047, 0.05009603500366211, 0.05231881141662598], (1,): [0.09871411323547363, 0.08461213111877441, 0.06339883804321289, 0.09815406799316406, 0.07094907760620117]}
[0.0200572 0.08461213]
As expected there are 5 samples tracked for each model and, consistent with the longer pauses in fun_pause_2, the median evaluation time of the second function is significantly larger than that of the first.
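The medians can also be computed directly from the raw costs dictionary:
for model_id, model_costs in costs.items():
    print(model_id, np.median(model_costs))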
Evaluating functions at multiple samples in parallel¶
For expensive models it is often useful to evaluate each model concurrently. This can be achieved using pyapprox.models.wrappers.PoolModel.
Note this class is not intended for use with distributed memory systems; rather it is intended to use all the threads of a personal computer or compute node. See pyapprox.models.async_model.AsynchronousEvaluationModel
if you are interested in running multiple simulations in parallel on a distributed memory system.
PoolModel cannot be used to wrap a WorkTrackingModel; however, a WorkTrackingModel can wrap a PoolModel, as in the sequence of wrappers below.
from pyapprox.models.wrappers import PoolModel
max_eval_concurrency = 1  # set higher to evaluate samples concurrently
pool_model = PoolModel(
    timer_fun_ensemble, max_eval_concurrency, assert_omp=False)
# clear the WorkTracker counters and wrap the pool model
worktracking_fun_ensemble.work_tracker.costs = dict()
worktracking_fun_ensemble = WorkTrackingModel(
    pool_model, num_config_vars=1)
# create more samples to notice improvement in wall time
nsamples = 10
samples = pya.generate_independent_random_samples(variable, nsamples)
fun_ids = np.ones(nsamples)
fun_ids[:nsamples//2] = 0
ensemble_samples = np.vstack([samples, fun_ids])
t0 = time.time()
values = worktracking_fun_ensemble(ensemble_samples)
t1 = time.time()
print(f'With {max_eval_concurrency} threads that took {t1-t0} seconds')
import os
if ('OMP_NUM_THREADS' not in os.environ or
        int(os.environ['OMP_NUM_THREADS']) != 1):
    # make sure to set OMP_NUM_THREADS=1 to maximize benefit of pool model
    print('Warning set OMP_NUM_THREADS=1 for best performance')
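# Note: OMP_NUM_THREADS is typically read when numpy and its BLAS backend are
# first loaded, so set it in the shell (e.g. export OMP_NUM_THREADS=1) before
# launching Python rather than from inside a running script.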
max_eval_concurrency = 4
pool_model.set_max_eval_concurrency(max_eval_concurrency)
t0 = time.time()
values = worktracking_fun_ensemble(ensemble_samples)
t1 = time.time()
print(f'With {max_eval_concurrency} threads that took {t1-t0} seconds')
Out:
With 1 threads that took 0.5142409801483154 seconds
Warning set OMP_NUM_THREADS=1 for best performance
With 4 threads that took 0.2192528247833252 seconds
Let's print a summary of the costs to make sure individual function evaluation costs are still being recorded correctly
print(worktracking_fun_ensemble.work_tracker)
Out:
WorkTracker Cost Summary
Funtion ID Median Cost
(0,) 0.025630712509155273
(1,) 0.09358561038970947
Note¶
PoolModel cannot be used with lambda functions. You will get an error similar to pickle.PicklingError: Can't pickle <function <lambda> at 0x12b4e6440>: attribute lookup <lambda> on __main__ failed
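A simple workaround is to define the function at module level, or to bind extra arguments with functools.partial, since both can be pickled. A minimal sketch using a hypothetical scaled_sum_of_squares function:
from functools import partial

def scaled_sum_of_squares(sample, scale):
    # a module-level function can be pickled, unlike a lambda
    return scale*np.sum(sample**2)

# partials of module-level functions can also be pickled
picklable_fun = partial(scaled_sum_of_squares, scale=2.0)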
# sphinx_gallery_thumbnail_path = './figures/cantilever-beam.png'
Total running time of the script: ( 0 minutes 2.199 seconds)