check_gradients

pyapprox.util.check_gradients(fun, jac, zz, plot=False, disp=True, rel=True, direction=None, fd_eps=None)

Compare a user-specified Jacobian with the Jacobian computed by finite differences at multiple step sizes.

Parameters:
fun : callable

A function with one of the following signatures

fun(z) -> (vals)

or

fun(z, jac) -> (vals, grad)

or

fun(z, direction) -> (vals, directional_grad)

where z is a 2D np.ndarray with shape (nvars, 1), the first output is a 2D np.ndarray with shape (nqoi, 1), and the second output is the gradient with shape (nqoi, nvars). jac is a flag that specifies whether the function returns only the function value (False) or the function value and the gradient (True)
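For illustration, a simple quadratic objective can be written to match the first two signatures above. The quadratic itself is purely an example, not part of pyapprox:

```python
import numpy as np


def fun(z, jac=False):
    # z has shape (nvars, 1); the value has shape (nqoi, 1) with nqoi = 1
    vals = z.T @ z  # f(z) = z^T z
    if not jac:
        return vals
    grad = 2 * z.T  # gradient of f, shape (nqoi, nvars)
    return vals, grad
```

The same callable serves both the `fun(z) -> (vals)` and `fun(z, jac) -> (vals, grad)` signatures, depending on the jac flag.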

jac : callable or string

If jac="jacp" then provide the Jacobian of fun with the signature

jac(z) -> np.ndarray

where z is a 2D np.ndarray with shape (nvars, 1) and the output is a 2D np.ndarray with shape (nqoi, nvars). This assumes that fun returns only a value (not a gradient) and has the signature

fun(z) -> np.ndarray
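A minimal illustrative pair matching these two signatures, again using an assumed quadratic rather than any pyapprox function:

```python
import numpy as np


def fun(z):
    # value only: z has shape (nvars, 1), output has shape (nqoi, 1)
    return z.T @ z


def jac(z):
    # Jacobian of fun, shape (nqoi, nvars)
    return 2 * z.T
```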

zz : np.ndarray (nvars, 1)

A sample of z at which to compute the gradient

plot : boolean

Plot the errors as a function of the finite difference step size

disp : boolean

True - print the errors. False - do not print.

rel : boolean

True - compute the relative error in the directional derivative, i.e. the absolute error divided by the directional derivative computed using jac. False - compute the absolute error in the directional derivative.

direction : np.ndarray (nvars, 1)

Direction to which the Jacobian is applied. Default is None, in which case a random direction is chosen.

fd_eps : np.ndarray (nstep_sizes)

The finite difference step sizes used to compute the gradient. If None then fd_eps=np.logspace(-13, 0, 14)[::-1] is used.
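The default sweep is 14 logarithmically spaced step sizes from 1 down to 1e-13, largest first:

```python
import numpy as np

# default step sizes: [1e0, 1e-1, ..., 1e-13]
fd_eps = np.logspace(-13, 0, 14)[::-1]
```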

Returns:
errors : np.ndarray (14, nqoi)

The errors in the directional derivative of fun at each of the 14 (default) finite difference step sizes, for each quantity of interest
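The comparison the function performs can be sketched as follows. This is an illustration of the idea (forward differences of the directional derivative at each step size, with rel=True), not the library's actual implementation; fun, jac, and the chosen direction are hypothetical:

```python
import numpy as np


def fun(z):
    return z.T @ z  # f(z) = z^T z, shape (nqoi, 1) with nqoi = 1


def jac(z):
    return 2 * z.T  # shape (nqoi, nvars)


nvars = 3
zz = np.full((nvars, 1), 0.5)                      # sample at which to check
direction = np.ones((nvars, 1)) / np.sqrt(nvars)   # unit direction

# directional derivative from the user-supplied Jacobian
exact = jac(zz) @ direction                        # shape (nqoi, 1)

fd_eps = np.logspace(-13, 0, 14)[::-1]             # default step sizes
errors = np.empty((fd_eps.shape[0], 1))
for ii, eps in enumerate(fd_eps):
    # forward finite difference approximation of the directional derivative
    fd = (fun(zz + eps * direction) - fun(zz)) / eps
    # relative error (rel=True): absolute error divided by the exact value
    errors[ii] = (np.absolute(fd - exact) / np.absolute(exact)).ravel()
```

For a correct Jacobian the error decreases roughly linearly with the step size until floating-point round-off dominates, so the smallest errors appear at intermediate step sizes; an error that never decreases suggests a bug in the Jacobian.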