bilby.core.prior.joint.MultivariateNormalDist
- class bilby.core.prior.joint.MultivariateNormalDist(names, nmodes=1, mus=None, sigmas=None, corrcoefs=None, covs=None, weights=None, bounds=None)[source]
Bases: MultivariateGaussianDist
A synonym for the MultivariateGaussianDist distribution.
- __init__(names, nmodes=1, mus=None, sigmas=None, corrcoefs=None, covs=None, weights=None, bounds=None)[source]
A class defining a multi-variate Gaussian, allowing multiple modes for a Gaussian mixture model.
Note: using a multivariate Gaussian prior with bounds can bias the marginal likelihood estimate and the posterior estimate for nested sampler routines that rely on sampling from a unit hypercube and applying a prior transform, e.g., nestle, dynesty and MultiNest.
- Parameters:
- names: list
A list of the parameter names in the multivariate Gaussian. The parameters must be listed in the same order as they appear in the lists of means and standard deviations and in the correlation coefficient or covariance matrices.
- nmodes: int
The number of modes for the mixture model. This defaults to 1, which will be checked against the shape of the other inputs.
- mus: array_like
A list of lists of means for each mode of the multivariate Gaussian mixture model. A single list can be given for a single mode. If this is None, zero means are assumed.
- sigmas: array_like
A list of lists of the standard deviations of each mode of the multivariate Gaussian. These values must be given if supplying a correlation coefficient matrix rather than a covariance matrix. If this is None, unit variances are assumed.
- corrcoefs: array
A list of square matrices containing the correlation coefficients of the parameters for each mode. If this is None, the parameters are assumed to be uncorrelated.
- covs: array
A list of square matrices giving the covariance matrix of each mode of the multivariate Gaussian.
- weights: list
A list of weights (relative probabilities) for each mode of the multivariate Gaussian. This will default to equal weights for each mode.
- bounds: list
A list of bounds on each parameter. The defaults are bounds at +/- infinity (a brief usage sketch follows this parameter list).
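As a rough sketch of how these arguments fit together, a two-parameter, single-mode distribution with bounds might be constructed as below. The parameter names and numerical values are made up for illustration and are not from the bilby documentation.

```python
import numpy as np
from bilby.core.prior.joint import MultivariateNormalDist

# Illustrative two-parameter, single-mode Gaussian; names and values are made up.
mvn = MultivariateNormalDist(
    names=["m1", "m2"],                   # order fixes the ordering of mus/covs/bounds
    nmodes=1,
    mus=[[20.0, 30.0]],                   # one list of means per mode
    covs=[np.array([[4.0, 1.0],
                    [1.0, 9.0]])],        # one covariance matrix per mode
    weights=[1.0],                        # relative weight of the single mode
    bounds=[(10.0, 40.0), (10.0, 50.0)],  # per-parameter bounds (defaults: +/- infinity)
)
```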
- __call__(*args, **kwargs)
Call self as a function.
Methods
- __init__(names[, nmodes, mus, sigmas, ...]): A class defining a multi-variate Gaussian, allowing multiple modes for a Gaussian mixture model.
- add_mode([mus, sigmas, corrcoef, cov, weight]): Add a new mode.
- filled_request(): Check if all requested parameters have been filled.
- filled_rescale(): Check if all the rescaled parameters have been filled.
- from_repr(string): Generate the distribution from its __repr__
- get_instantiation_dict()
- ln_prob(value): Get the log-probability of a sample.
- prob(samp): Get the probability of a sample.
- rescale(value, **kwargs): Rescale from a unit hypercube to JointPriorDist.
- reset_request(): Reset the requested parameters to None.
- reset_rescale(): Reset the rescaled parameters to None.
- reset_sampled()
- sample([size]): Draw, and set, a sample from the distribution; the accompanying method _sample needs to be overwritten.
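As a minimal sketch of drawing samples, using the same made-up two-parameter distribution as in the constructor example above:

```python
import numpy as np
from bilby.core.prior.joint import MultivariateNormalDist

# Illustrative two-parameter distribution (same assumptions as the constructor sketch).
mvn = MultivariateNormalDist(names=["m1", "m2"], mus=[[20.0, 30.0]], covs=[np.eye(2)])

single = mvn.sample()          # one joint sample, one value per parameter
batch = mvn.sample(size=1000)  # 1000 joint samples, one row per sample
```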
- ln_prob(value)[source]
Get the log-probability of a sample. For bounded priors the probability will not be properly normalised.
- Parameters:
- value: array_like
A 1d vector of the sample, or 2d array of sample values with shape NxM, where N is the number of samples and M is the number of parameters.
- prob(samp)[source]
Get the probability of a sample. For bounded priors the probability will not be properly normalised.
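A short sketch of evaluating these two methods, again with the made-up two-parameter distribution used above; note that with bounds the returned values are not renormalised.

```python
import numpy as np
from bilby.core.prior.joint import MultivariateNormalDist

# Illustrative two-parameter distribution (same assumptions as the constructor sketch).
mvn = MultivariateNormalDist(names=["m1", "m2"], mus=[[20.0, 30.0]], covs=[np.eye(2)])

# Single sample: a 1d vector with one value per parameter.
lp = mvn.ln_prob(np.array([20.5, 29.0]))
p = mvn.prob(np.array([20.5, 29.0]))

# Batch: an N x M array (N samples, M parameters) evaluated in one call.
lps = mvn.ln_prob(np.array([[20.5, 29.0],
                            [19.0, 31.0]]))
```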
- rescale(value, **kwargs)[source]
Rescale from a unit hypercube to JointPriorDist. Note that no bounds are applied in the rescale function (child classes need to overwrite the accompanying method _rescale()).
- Parameters:
- value: array
A 1d vector sample (one for each parameter) drawn from a uniform distribution between 0 and 1, or a 2d NxM array of samples where N is the number of samples and M is the number of parameters.
- kwargs: dict
All keyword arguments that need to be passed to the _rescale method; these are passed through by the JointPrior rescale methods for each parameter.
- Returns:
- array:
A vector sample drawn from the multivariate Gaussian distribution.
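As a sketch (same illustrative distribution as above), rescale maps draws from the unit hypercube, as used by nested samplers, to samples from the distribution:

```python
import numpy as np
from bilby.core.prior.joint import MultivariateNormalDist

# Illustrative two-parameter distribution (same assumptions as the constructor sketch).
mvn = MultivariateNormalDist(names=["m1", "m2"], mus=[[20.0, 30.0]], covs=[np.eye(2)])

# One point in the unit hypercube -> one joint sample.
theta = mvn.rescale([0.3, 0.7])

# An N x M array of unit-hypercube points -> N joint samples.
rng = np.random.default_rng(42)
thetas = mvn.rescale(rng.uniform(size=(100, 2)))
```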