nnopinf.operators.StandardOperator#

class nnopinf.operators.StandardOperator(n_outputs, depends_on, n_hidden_layers, n_neurons_per_layer, activation=torch.tanh, name='StandardOperator')[source]#

Bases: Module

\(f: v \mapsto f(v)\)

Constructs an operator \(f: v \mapsto f(v)\) as a dense neural network, where \(v \in \mathbb{R}^K\) and \(f(v) \in \mathbb{R}^{M}\).

Note

  • The output dimension does not need to match the input dimension.

  • There is no “acts on” input as we are not inferring a matrix operator.

Parameters:
  • n_outputs (int) – Output dimension of the operator, i.e., \(M\) in the description above

  • depends_on (tuple of nnopinf.Variable) – The variables the operator depends on, i.e., the v in f(v)

  • n_hidden_layers (int) – Number of hidden layers in the network

  • n_neurons_per_layer (int) – Number of neurons in each hidden layer

  • activation (PyTorch activation function (e.g., torch.nn.functional.relu)) – Activation function used at each hidden layer.

  • name (str) – Operator name; used when saving to file.

Examples

>>> import nnopinf
>>> import nnopinf.operators
>>> x_input = nnopinf.Variable(size=3,name="x")
>>> mu_input = nnopinf.Variable(size=2,name="mu")
>>> StandardMlp = nnopinf.operators.StandardOperator(n_outputs=5,depends_on=(x_input,mu_input,),n_hidden_layers=2,n_neurons_per_layer=2)
forward(inputs, return_jacobian=False)[source]#

Forward pass of operator

Parameters:
  • inputs (dict of str to np.ndarray) – Dictionary of input arrays keyed by variable name, e.g., inputs['x'] = np.ones(3)

  • return_jacobian (bool, optional) – If True, return the (approximate) Jacobian in addition to the output.

Examples

>>> import nnopinf
>>> import nnopinf.operators
>>> import numpy as np
>>> x_input = nnopinf.Variable(size=3,name="x")
>>> mu_input = nnopinf.Variable(size=2,name="mu")
>>> StandardMlp = nnopinf.operators.StandardOperator(n_outputs=5,depends_on=(x_input,mu_input,),n_hidden_layers=2,n_neurons_per_layer=2)
>>> inputs = {}
>>> inputs['x'] = np.random.normal(size=3)
>>> inputs['mu'] = np.random.normal(size=2)
>>> f = StandardMlp.forward(inputs)
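
With return_jacobian=True, the forward pass additionally returns the (approximate) Jacobian of the output with respect to the inputs. A sketch continuing the example above, assuming the Jacobian comes back as a second return value (the exact return structure may differ):

>>> f, jac = StandardMlp.forward(inputs, return_jacobian=True)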
set_scalings(input_scalings_dict, output_scaling)[source]#

Apply input and output scaling factors directly to the network weights.

Parameters:
  • input_scalings_dict (dict) – Mapping from variable name to the corresponding feature-wise input scaling vector.

  • output_scaling (tensor-like) – Feature-wise scaling vector for the operator output.
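
Examples

A hedged sketch of applying scalings, assuming each scaling vector is array-like with one entry per feature of the corresponding variable or output (the variable names and scaling values here are illustrative):

>>> import nnopinf
>>> import nnopinf.operators
>>> import numpy as np
>>> x_input = nnopinf.Variable(size=3,name="x")
>>> mu_input = nnopinf.Variable(size=2,name="mu")
>>> StandardMlp = nnopinf.operators.StandardOperator(n_outputs=5,depends_on=(x_input,mu_input,),n_hidden_layers=2,n_neurons_per_layer=2)
>>> input_scalings = {'x': np.full(3, 0.5), 'mu': np.full(2, 2.0)}
>>> StandardMlp.set_scalings(input_scalings, np.ones(5))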