Lava Backend
- fugu.backends.lava_backend.warnIfValueExceedsPrecision(value, precision, value_name)
- fugu.backends.lava_backend.warnIfFeatureNotAvailable(feature)
- exception fugu.backends.lava_backend.FeatureNotAvailableException
Bases: Exception
- exception fugu.backends.lava_backend.NonIntegerDelayValueException
Bases: Exception
- fugu.backends.lava_backend.reduce_factor_by_bits(value, num_bits)
- fugu.backends.lava_backend.calculate_loihi_scale_factor(max_threshold=0, max_weight=0, max_bias=0, starting_scale=1048576, min_scale=64, threshold_bit_limit=16)
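This routine is only listed here by signature, but its defaults (a starting scale of 2^20, a floor of 64, and a 16-bit threshold limit) suggest a power-of-two search for a factor that maps floating-point network parameters into Loihi's fixed-point ranges. The following is a minimal sketch of that idea under those assumptions; it is not the actual fugu implementation, and all names and logic inside it are illustrative.

```python
# Hedged sketch: shrink a power-of-two scale factor until the scaled
# parameters fit within an assumed fixed-point bit budget.
def sketch_scale_factor(max_threshold=0.0, max_weight=0.0, max_bias=0.0,
                        starting_scale=1 << 20, min_scale=64,
                        threshold_bit_limit=16):
    scale = starting_scale
    limit = (1 << threshold_bit_limit) - 1  # largest assumed representable magnitude
    while scale > min_scale:
        largest = max(abs(max_threshold), abs(max_weight), abs(max_bias)) * scale
        if largest <= limit:
            break           # everything fits at this scale
        scale //= 2         # otherwise halve the scale and try again
    return scale
```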
- class fugu.backends.lava_backend.InputIterator
Bases: object
Supporting class for lava_Backend. Feeds inputs to the Dataloader process. Note that the output of this iterator is a tuple (input values, ground truth). Ground truth can be a scalar or a vector; since we don't care about it here, we output the scalar 0 (see the sketch below).
- shape()
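A minimal sketch of the iterator contract described above, assuming per-timestep input vectors are supplied up front; the internals of the real InputIterator may differ.

```python
import numpy as np

class SketchInputIterator:
    """Illustrative stand-in for the InputIterator contract, not the fugu class."""

    def __init__(self, spike_frames):
        # spike_frames: list of per-timestep input vectors (assumed layout)
        self.frames = [np.asarray(f) for f in spike_frames]
        self.index = 0

    def shape(self):
        # Shape of a single input frame, as the Dataloader process expects
        return self.frames[0].shape

    def __iter__(self):
        return self

    def __next__(self):
        if self.index >= len(self.frames):
            raise StopIteration
        frame = self.frames[self.index]
        self.index += 1
        # Ground truth is unused, so a scalar 0 is returned alongside the inputs
        return frame, 0
```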
- class fugu.backends.lava_backend.lava_Backend
Bases: Backend
- _allocate(v, dv, vth, b, count=1)
Lava's LIF process has a single fixed {du, dv, vth} for the entire population. We don't do anything with du, but every (dv, vth) combination requires a separate population (or "process" in Lava terminology). We also need to assemble a list of initial voltages for each population.
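A rough illustration of that bookkeeping, using an assumed per-neuron record layout rather than fugu's actual internal structures.

```python
from collections import defaultdict

def group_by_dynamics(neurons):
    """Group neurons by (dv, vth) so each group can become one Lava LIF process.

    neurons: iterable of dicts with 'v', 'dv', 'vth', 'bias' keys (assumed layout).
    """
    populations = defaultdict(lambda: {"v_init": [], "bias": [], "count": 0})
    for n in neurons:
        key = (n["dv"], n["vth"])                 # one process per (dv, vth) pair
        populations[key]["v_init"].append(n["v"]) # initial voltages for this group
        populations[key]["bias"].append(n["bias"])
        populations[key]["count"] += 1
    return populations
```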
- compile(scaffold, compile_args={})
Creates neuron populations and synapses
- run(n_steps=10, return_potentials=False)
Runs the circuit for n_steps, then returns the data
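A hedged end-to-end usage sketch of compile() and run(). The Scaffold import path and the lay_bricks() call follow the usual fugu pattern but are assumptions here, as are any bricks you would add to the scaffold.

```python
from fugu.scaffold import Scaffold                    # assumed import path
from fugu.backends.lava_backend import lava_Backend   # module path from this reference

scaffold = Scaffold()
# ... add bricks to the scaffold here ...
scaffold.lay_bricks()              # build the neural graph from the bricks

backend = lava_Backend()
backend.compile(scaffold)          # creates neuron populations and synapses
data = backend.run(n_steps=10)     # run the circuit and collect the output data
backend.cleanup()                  # free neurons and synapses
```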
- cleanup()
Deletes/frees neurons and synapses
- reset()
Resets time-step to 0 and resets neuron/synapse properties
- set_properties(properties={})
Set properties for specific neurons and synapses.
:param properties: dictionary of parameters for bricks

Example

    for brick in properties:
        neuron_props, synapse_props = self.circuit[brick].get_changes(properties[brick])
        for neuron in neuron_props:
            set neuron properties
        for synapse in synapse_props:
            set synapse properties

@NOTE: Currently, this function behaves differently for Input Bricks. Instead of returning the changes, they change internally and reset the iterator. This is because of how initial spike times are calculated using these bricks; I have not yet found a way to incorporate the method sketched above into those bricks.
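A brief usage sketch of set_properties(). The brick name and property key below are hypothetical; valid keys depend on what each brick's get_changes accepts.

```python
# Hypothetical brick name and property key, for illustration only.
new_props = {
    "lif_brick": {"threshold": 0.75},
}
backend.set_properties(new_props)
# Per the note above, Input Bricks apply their changes internally and reset
# the input iterator rather than returning the changes.
```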
- set_input_spikes()