solve_FSD_constrained_least_squares_smooth

pyapprox.optimization.solve_FSD_constrained_least_squares_smooth(samples, values, eval_basis_matrix, eta_indices=None, probabilities=None, eps=0.1, optim_options={}, return_full=False, method='trust-constr', smoother_type='log', scale_data=False)

Solve a first-order stochastic dominance (FSD) constrained least-squares regression problem
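Roughly speaking, the solver seeks basis coefficients that minimize the least-squares residual while enforcing that the surrogate first-order stochastically dominates the training data at a set of points $\eta_j$. The following is a sketch of one common formulation, stated here as an assumption for orientation rather than the exact internal constraint:

$$\min_{c}\ \|\Phi c - v\|_2^2 \quad \text{s.t.}\quad \sum_{i=1}^{n} p_i\, h_\epsilon\big(\eta_j - (\Phi c)_i\big) \;\le\; \sum_{i=1}^{n} p_i\, h_\epsilon\big(\eta_j - v_i\big) \quad \forall j,$$

where $\Phi$ is the basis matrix, $v$ the training values, $p_i$ the sample probabilities, and $h_\epsilon$ a smooth approximation of the Heaviside step, so each sum approximates an empirical CDF evaluated at $\eta_j$.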

Parameters:
samples : np.ndarray (nvars, nsamples)

The training samples

values : np.ndarray (nsamples, 1)

The function values at the training samples

eval_basis_matrix : callable

A function that evaluates the basis at a set of samples, with the signature eval_basis_matrix(samples) -> np.ndarray (nsamples, nbasis)

eta_indices : np.ndarray (nconstraint_samples)

Indices of the training data at which the constraints are enforced. The number of constraint indices neta must satisfy neta <= nsamples

probabilities : np.ndarray (nvalues)

The probability weight assigned to each training sample. When sampling randomly from a probability measure the probabilities are all 1/nsamples

eps : float

A parameter controlling how much the Heaviside function is smoothed. As eps decreases the smooth approximation converges to the Heaviside function, but the derivatives of the approximation become more difficult to compute (see the sketch after this parameter list)

optim_options : dict

The keyword arguments passed to the non-linear optimizer used to solve the regression problem

smoother_type : string

The name of the function used to smooth the Heaviside function. Supported types are [quartic, quintic, log]

return_full : boolean

If False, return only the regression solution. If True, return both the regression solution and the regression optimizer object

method : string

The name of the non-linear solver used to solve the regression problem

scale_data : boolean

If False, use the raw training values. If True, scale the training values to have unit standard deviation
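The role of eps can be seen with a minimal smoother sketch. The logistic sigmoid below is a common "log"-type smoothing of the Heaviside step; it is an illustrative assumption, not necessarily the exact kernel pyapprox uses for its quartic, quintic, or log smoothers:

    import numpy as np

    def smoothed_heaviside(x, eps):
        # Logistic-sigmoid approximation of the Heaviside step
        # (illustrative assumption; clip the argument to avoid overflow).
        return 1.0 / (1.0 + np.exp(np.clip(-x / eps, -60.0, 60.0)))

    x = np.linspace(-1.0, 1.0, 5)
    for eps in (0.5, 0.1, 0.01):
        # As eps shrinks the values approach the 0/1 step, but the slope
        # at x = 0 grows like 1/(4*eps), so derivatives become harder
        # for the optimizer to handle.
        print(eps, np.round(smoothed_heaviside(x, eps), 3))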

Returns:
coef : np.ndarray (nbasis, 1)

The solution to the regression problem

opt_problem : FSDOptProblem

The object used to solve the regression problem. Only returned when return_full is True
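Examples:

A minimal usage sketch, assuming a monomial basis and synthetic 1D training data. Only the import path and call signature come from the documentation above; the model, basis, and eps value are illustrative:

    import numpy as np
    from pyapprox.optimization import solve_FSD_constrained_least_squares_smooth

    nsamples, degree = 20, 3
    samples = np.random.uniform(-1.0, 1.0, (1, nsamples))  # (nvars, nsamples)
    values = np.cos(np.pi * samples.T)                      # (nsamples, 1)

    def eval_basis_matrix(samples):
        # Monomial basis 1, x, ..., x^degree -> (nsamples, nbasis)
        return samples.T ** np.arange(degree + 1)

    coef = solve_FSD_constrained_least_squares_smooth(
        samples, values, eval_basis_matrix, eps=0.1)
    approx_values = eval_basis_matrix(samples).dot(coef)    # FSD-constrained fit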