Constructing likelihood models

The like_models.py module contains our likelihood function models. It is very similar to the data_gen.py module, except that instead of generating synthetic data given an event, it computes the log likelihood of synthetic data given an event. The core function is compute_loglikes, which takes the following inputs:

  • theta: Corresponds to the event hypothesis whose likelihood we want to assess.

  • sensors: Corresponds to the sensor network configuration.

  • data: The synthetic data whose likelihood we want to compute (given the event hypothesis theta) at each sensor.

In this code, data is the full dataset for all experiments: for each experiment we need to compute the likelihood of each event hypothesis, so it is most efficient to do this in vectorized form. Thus data has dimensions

(nlpts_data * ndata, number of sensors * length of sensor output vector)

and corresponds to the variable dataz in the eig_calc.py code. The compute_loglikes function returns one output variable:

  • loglikes: The log likelihood of the data given the event hypothesis theta. Has dimensions [nlpts_data * ndata].
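The shape convention can be sketched with a small NumPy example. The sizes below (nlpts_data, ndata, number of sensors, output length) are placeholder values chosen for illustration, not taken from the actual code:

```python
import numpy as np

# Placeholder sizes, for illustration only
nlpts_data = 5   # number of event hypotheses used to generate data
ndata = 3        # number of synthetic datasets per hypothesis
nsensors = 4     # number of sensors in the network
nout = 2         # length of each sensor's output vector

# Full dataset for all experiments: one row per experiment,
# with every sensor's output vector concatenated along the columns
data = np.zeros((nlpts_data * ndata, nsensors * nout))

# compute_loglikes returns one log likelihood per row of data
loglikes = np.zeros(data.shape[0])

print(data.shape)      # (15, 8)
print(loglikes.shape)  # (15,)
```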

Within the compute_loglikes function, any sensor type model can be implemented as long as it agrees with the models used in the data_gen.py module. As currently written, the compute_loglikes function looks like

def compute_loglikes(theta, sensors, data):
    # Log likelihood of each sensor's detection/non-detection outcome
    dloglikes = detection_likelihood(theta, sensors, data)
    # Log likelihood of the observed arrival times under a Gaussian model
    aloglikes = arrival_likelihood_gaussian(theta, sensors, data)
    # The two components are independent, so their log likelihoods add
    loglikes = dloglikes + aloglikes

    return loglikes

The likelihood of the event hypothesis theta is computed from the probability of detecting an arrival at each station; if an arrival is detected, the probability of observing that particular arrival time is also computed. Other likelihood models can easily be added to the module and incorporated into this function.
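The arrival-time component might, for example, model each detecting sensor's arrival time as an independent Gaussian around a predicted time. The sketch below is a minimal stand-alone illustration of that idea, not the actual arrival_likelihood_gaussian implementation; the function name, the noise level sigma, and the input layout are all assumptions:

```python
import numpy as np

def gaussian_arrival_loglike(predicted, observed, detected, sigma=0.1):
    """Sum of independent Gaussian log densities over detecting sensors.

    predicted, observed, detected have shape (nexperiments, nsensors).
    Sensors with no detection contribute nothing to the arrival-time term.
    Returns one log likelihood per experiment (shape (nexperiments,)).
    """
    resid = (observed - predicted) / sigma
    per_sensor = -0.5 * resid**2 - np.log(sigma * np.sqrt(2.0 * np.pi))
    # Zero out non-detecting sensors, then sum over sensors per experiment
    return np.sum(np.where(detected, per_sensor, 0.0), axis=1)

# Tiny example: 1 experiment, 2 sensors, only the first detects
pred = np.array([[1.0, 2.0]])
obs = np.array([[1.05, 0.0]])
det = np.array([[True, False]])
out = gaussian_arrival_loglike(pred, obs, det)
print(out)
```

A full likelihood of this form would then be added to the detection log likelihood, mirroring the sum in compute_loglikes above.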