bilby.core.sampler.ptemcee.Ptemcee
- class bilby.core.sampler.ptemcee.Ptemcee(likelihood, priors, outdir='outdir', label='label', use_ratio=False, check_point_plot=True, skip_import_verification=False, resume=True, nsamples=5000, burn_in_nact=50, burn_in_fixed_discard=0, mean_logl_frac=0.01, thin_by_nact=0.5, autocorr_tol=50, autocorr_c=5, safety=1, autocorr_tau=1, gradient_tau=0.1, gradient_mean_log_posterior=0.1, Q_tol=1.02, min_tau=1, check_point_delta_t=600, threads=1, exit_code=77, plot=False, store_walkers=False, ignore_keys_for_tau=None, pos0='prior', niterations_per_check=5, log10beta_min=None, verbose=True, **kwargs)[source]
Bases:
MCMCSampler
bilby wrapper of ptemcee (https://github.com/willvousden/ptemcee)
All positional and keyword arguments (i.e., the args and kwargs) passed to run_sampler will be propagated to ptemcee.Sampler; see the documentation for that class for further help. Under Other Parameters, we list commonly used kwargs and the bilby defaults.
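For illustration, a minimal end-to-end sketch is shown below; the toy model, data, priors, and sampler settings are illustrative choices, not bilby defaults or documented values.

```python
import bilby
import numpy as np

# Toy linear model and data, purely for illustration
def model(x, m, c):
    return m * x + c

x = np.linspace(0, 1, 100)
y = model(x, 2.0, 1.0) + np.random.normal(0, 0.1, len(x))

likelihood = bilby.core.likelihood.GaussianLikelihood(x, y, model, sigma=0.1)
priors = dict(
    m=bilby.core.prior.Uniform(0, 5, "m"),
    c=bilby.core.prior.Uniform(-2, 2, "c"),
)

# Keyword arguments not consumed by the bilby wrapper (e.g. ntemps, nwalkers,
# Tmax) are forwarded to ptemcee.Sampler
result = bilby.run_sampler(
    likelihood=likelihood,
    priors=priors,
    sampler="ptemcee",
    nsamples=1000,
    ntemps=10,
    nwalkers=100,
    outdir="outdir",
    label="ptemcee_example",
)
```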
- Parameters:
- nsamples: int, (5000)
The requested number of samples. Note, in cases where the autocorrelation parameter is difficult to measure, it is possible to end up with more than nsamples.
- burn_in_nact, thin_by_nact: int, float, (50, 0.5)
The number of burn-in autocorrelation times to discard and the thin-by factor. Increasing burn_in_nact increases the time required for burn-in. Increasing thin_by_nact increases the time required to obtain nsamples.
- burn_in_fixed_discard: int (0)
A fixed number of samples to discard for burn-in
- mean_logl_frac: float, (0.01)
The maximum fractional change in the mean log-likelihood to accept
- autocorr_tol: int, (50)
The minimum number of autocorrelation times needed to trust the estimate of the autocorrelation time.
- autocorr_c: int, (5)
The step size for the window search used by emcee.autocorr.integrated_time
- safety: int, (1)
A multiplicative factor for the estimated autocorrelation. Useful for cases where non-convergence can be observed by eye but the automated tools are failing.
- autocorr_tau: int, (1)
The number of autocorrelation times to use in assessing if the autocorrelation time is stable.
- gradient_tau: float, (0.1)
The maximum (smoothed) local gradient of the ACT estimate to allow. This ensures the ACT estimate is stable before finishing sampling.
- gradient_mean_log_posterior: float, (0.1)
The maximum (smoothed) local gradient of the mean log-posterior to allow. This ensures the mean log-posterior is stable before finishing sampling.
- Q_tol: float (1.02)
The maximum between-chain to within-chain tolerance allowed (akin to the Gelman-Rubin statistic).
- min_tau: int, (1)
A minimum tau (autocorrelation time) to accept.
- check_point_delta_t: float, (600)
The period with which to checkpoint (in seconds).
- threads: int, (1)
If threads > 1, a MultiPool object is set up and used.
- exit_code: int, (77)
The code on which the sampler exits.
- store_walkers: bool (False)
If True, store the unthinned, unburnt chains in the result. Note, this is not recommended for cases where tau is large.
- ignore_keys_for_tau: str
A pattern used to ignore keys in estimating the autocorrelation time.
- pos0: str, list, np.ndarray, dict
If a string, one of “prior” or “minimize”. For “prior”, the initial positions of the walkers are drawn from the prior. For “minimize”, a scipy.optimize step is applied to all parameters a number of times and the walkers are then initialized from the range of values obtained. If a list, the minimization step is applied to the keys in the list, while the remaining initial points are drawn from the prior. If a numpy array, the shape should be (ntemps, nwalkers, ndim). If a dict, this should be a dictionary with keys matching the search_parameter_keys; each entry should be an array with shape (ntemps, nwalkers). See the sketch after the Other Parameters list below for a dict example.
- niterations_per_check: int (5)
The number of iteration steps to take between checks of the ACT. This effectively pre-thins the chains. Larger values reduce the per-evaluation overhead, but if the value is made too large the pre-thinning may be overly aggressive, effectively wasting compute time. If you see tau=1, then niterations_per_check is likely too large.
- Other Parameters:
- nwalkers: int, (200)
The number of walkers
- nsteps: int, (100)
The number of steps to take
- ntemps: int (10)
The number of temperatures used by ptemcee
- Tmax: float
The maximum temperature
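As referenced in the pos0 description above, a dictionary of starting positions can be built by hand. The sketch below assumes a two-parameter PriorDict; the names m and c and the choice of drawing from the prior are illustrative.

```python
import numpy as np
import bilby

ntemps, nwalkers = 10, 100
priors = bilby.core.prior.PriorDict(dict(
    m=bilby.core.prior.Uniform(0, 5, "m"),
    c=bilby.core.prior.Uniform(-2, 2, "c"),
))

# One array of shape (ntemps, nwalkers) per search parameter key
pos0 = {
    key: np.vstack([priors[key].sample(nwalkers) for _ in range(ntemps)])
    for key in priors
}

# pos0 is then passed through run_sampler, e.g.
# bilby.run_sampler(..., sampler="ptemcee", pos0=pos0, ntemps=ntemps, nwalkers=nwalkers)
```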
- __init__(likelihood, priors, outdir='outdir', label='label', use_ratio=False, check_point_plot=True, skip_import_verification=False, resume=True, nsamples=5000, burn_in_nact=50, burn_in_fixed_discard=0, mean_logl_frac=0.01, thin_by_nact=0.5, autocorr_tol=50, autocorr_c=5, safety=1, autocorr_tau=1, gradient_tau=0.1, gradient_mean_log_posterior=0.1, Q_tol=1.02, min_tau=1, check_point_delta_t=600, threads=1, exit_code=77, plot=False, store_walkers=False, ignore_keys_for_tau=None, pos0='prior', niterations_per_check=5, log10beta_min=None, verbose=True, **kwargs)[source]
- __call__(*args, **kwargs)
Call self as a function.
Methods
- __init__(likelihood, priors[, outdir, ...])
- calc_likelihood_count()
- calculate_autocorrelation(samples[, c]): Uses the emcee.autocorr module to estimate the autocorrelation
- check_draw(theta[, warning]): Checks if the draw will generate an infinite prior or likelihood
- get_expected_outputs([outdir, label]): Get lists of the expected outputs directories and files.
- get_initial_points_from_prior([npoints]): Method to draw a set of live points from the prior
- get_pos0(): Master logic for setting pos0
- get_pos0_from_array()
- get_pos0_from_dict(): Initialize the starting points from a passed dictionary.
- get_pos0_from_minimize([minimize_list]): Draw the initial positions using an initial minimization step
- get_pos0_from_prior(): Draw the initial positions from the prior
- get_random_draw_from_prior(): Get a random draw from the prior distribution
- get_zero_array()
- get_zero_chain_array()
- log_likelihood(theta)
- log_prior(theta)
- print_nburn_logging_info(): Prints logging info as to how nburn was calculated
- prior_transform(theta): Prior transform method that is passed into the external sampler.
- run_sampler(*args, **kwargs): A template method to run in subclasses
- setup_sampler(): Either initialize the sampler or read in the resume file
- write_current_state([plot])
- write_current_state_and_exit([signum, frame]): Make sure that if a pool of jobs is running only the parent tries to checkpoint and exit.
Attributes
- abbreviation
- check_point_equiv_kwargs
- constraint_parameter_keys: List of parameters providing prior constraints
- default_kwargs
- external_sampler_name
- fixed_parameter_keys: List of parameter keys that are not being sampled
- hard_exit
- kwargs: Container for the kwargs.
- nburn_equiv_kwargs
- ndim: Number of dimensions of the search parameter space
- npool
- npool_equiv_kwargs
- nwalkers_equiv_kwargs
- sampler_function_kwargs: Kwargs passed to ptemcee.Sampler.sample()
- sampler_init_kwargs: Kwargs passed to initialize ptemcee.Sampler()
- sampler_name
- sampling_seed_equiv_kwargs
- sampling_seed_key: Name of keyword argument for setting the sampling seed for the specific sampler.
- search_parameter_keys: List of parameter keys that are being sampled
- calculate_autocorrelation(samples, c=3)[source]
Uses the emcee.autocorr module to estimate the autocorrelation
- Parameters:
- samples: array_like
A chain of samples.
- c: float
The minimum number of autocorrelation times needed to trust the estimate (default: 3). See emcee.autocorr.integrated_time.
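For reference, a sketch of the underlying emcee call is shown below; the chain is synthetic and tol=0 is only set so the short example runs without raising emcee's convergence error.

```python
import numpy as np
from emcee.autocorr import integrated_time

# A synthetic, correlated chain purely for illustration
rng = np.random.default_rng(0)
noise = rng.normal(size=5000)
chain = np.convolve(noise, np.ones(20) / 20, mode="same")

# c is the window-search step size; bilby passes c=autocorr_c
tau = integrated_time(chain, c=5, tol=0)
print(tau)
```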
- check_draw(theta, warning=True)[source]
Checks if the draw will generate an infinite prior or likelihood
Also catches the output of numpy.nan_to_num.
- Parameters:
- theta: array_like
Parameter values at which to evaluate likelihood
- warning: bool
Whether or not to print a warning
- Returns:
- bool
True if the likelihood and prior are finite, False otherwise
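Conceptually, the check amounts to verifying that both the prior and the likelihood are finite at the proposed point; the sketch below uses an illustrative likelihood and prior rather than bilby's internal implementation.

```python
import numpy as np
import bilby

# Illustrative toy likelihood and prior
def model(x, m):
    return m * x

x = np.linspace(0, 1, 10)
y = model(x, 2.0)
likelihood = bilby.core.likelihood.GaussianLikelihood(x, y, model, sigma=0.1)
priors = bilby.core.prior.PriorDict(dict(m=bilby.core.prior.Uniform(0, 5, "m")))
keys = ["m"]

def draw_is_finite(theta):
    """Return True only if both the prior and the likelihood are finite."""
    params = dict(zip(keys, theta))
    if not np.isfinite(priors.ln_prob(params)):
        return False
    likelihood.parameters.update(params)
    return np.isfinite(likelihood.log_likelihood())

print(draw_is_finite([2.5]))   # True: inside the prior support
print(draw_is_finite([10.0]))  # False: zero prior probability
```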
- property constraint_parameter_keys
list: List of parameters providing prior constraints
- property fixed_parameter_keys
list: List of parameter keys that are not being sampled
- classmethod get_expected_outputs(outdir=None, label=None)[source]
Get lists of the expected outputs directories and files.
These are used by bilby_pipe when transferring files via HTCondor.
- Parameters:
- outdir: str
The output directory.
- label: str
The label for the run.
- Returns:
- list
List of file names.
- list
List of directory names. Will always be empty for ptemcee.
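A sketch of calling this classmethod directly; the outdir and label values are arbitrary.

```python
from bilby.core.sampler.ptemcee import Ptemcee

filenames, directories = Ptemcee.get_expected_outputs(outdir="outdir", label="my_run")
print(filenames)    # expected checkpoint/resume file(s) under outdir
print(directories)  # always an empty list for ptemcee
```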
- get_initial_points_from_prior(npoints=1)[source]
Method to draw a set of live points from the prior
This iterates over draws from the prior until all the samples have a finite prior and likelihood (relevant for constrained priors).
- Parameters:
- npoints: int
The number of values to return
- Returns:
- unit_cube, parameters, likelihood: tuple of array_like
unit_cube (nlive, ndim) is an array of the prior samples from the unit cube, parameters (nlive, ndim) is the unit_cube array transformed to the target space, while likelihood (nlive) are the likelihood evaluations.
- get_pos0_from_dict()[source]
Initialize the starting points from a passed dictionary.
The pos0 passed to the Sampler should be a dictionary with keys matching the search_parameter_keys. Each entry should have shape (ntemps, nwalkers).
- get_pos0_from_minimize(minimize_list=None)[source]
Draw the initial positions using an initial minimization step
See pos0 in the class initialization for details.
- Returns:
- pos0: list
The initial positions of the walkers, with shape (ntemps, nwalkers, ndim)
- get_pos0_from_prior()[source]
Draw the initial positions from the prior
- Returns:
- pos0: list
The initial positions of the walkers, with shape (ntemps, nwalkers, ndim)
- get_random_draw_from_prior()[source]
Get a random draw from the prior distribution
- Returns:
- draw: array_like
An ndim-length array of values drawn from the prior. Parameters with delta-function (or fixed) priors are not returned
- property kwargs
dict: Container for the kwargs. Has more sophisticated logic in subclasses
- log_likelihood(theta)[source]
- Parameters:
- theta: list
List of values for the likelihood parameters
- Returns:
- float: Log-likelihood or log-likelihood-ratio given the current likelihood.parameter values
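In essence, the call maps the ordered theta values onto the likelihood's named parameters; the sketch below is illustrative (toy likelihood and parameter keys), not bilby's internal code.

```python
import numpy as np
import bilby

def model(x, m, c):
    return m * x + c

x = np.linspace(0, 1, 50)
y = model(x, 2.0, 1.0)
likelihood = bilby.core.likelihood.GaussianLikelihood(x, y, model, sigma=0.1)
search_parameter_keys = ["m", "c"]

def log_likelihood(theta):
    # Map the ordered theta values onto the named likelihood parameters;
    # bilby returns the log-likelihood-ratio instead when use_ratio=True
    likelihood.parameters.update(dict(zip(search_parameter_keys, theta)))
    return likelihood.log_likelihood()

print(log_likelihood([2.0, 1.0]))
```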
- log_prior(theta)[source]
- Parameters:
- theta: list
List of values for the sampled parameters
- Returns:
- float: Joint ln prior probability of theta
- property ndim
int: Number of dimensions of the search parameter space
- prior_transform(theta)[source]
Prior transform method that is passed into the external sampler.
- Parameters:
- theta: list
List of sampled values on a unit interval
- Returns:
- list: Properly rescaled sampled values
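A sketch of the rescaling this performs, using PriorDict.rescale; the two-parameter prior is an illustrative assumption.

```python
import bilby

priors = bilby.core.prior.PriorDict(dict(
    m=bilby.core.prior.Uniform(0, 5, "m"),
    c=bilby.core.prior.Uniform(-2, 2, "c"),
))
search_parameter_keys = ["m", "c"]

# Unit-interval samples are mapped to the physical prior ranges,
# e.g. 0.5 -> 2.5 for m and 0.5 -> 0.0 for c
print(priors.rescale(search_parameter_keys, [0.5, 0.5]))
```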
- property sampler_function_kwargs
Kwargs passed to ptemcee.Sampler.sample()
- property sampler_init_kwargs
Kwargs passed to initialize ptemcee.Sampler()
- sampling_seed_key = None
Name of keyword argument for setting the sampling seed for the specific sampler. If a specific sampler does not have a sampling seed option, then it should be left as None.
- property search_parameter_keys
list: List of parameter keys that are being sampled
- write_current_state_and_exit(signum=None, frame=None)[source]
Make sure that if a pool of jobs is running only the parent tries to checkpoint and exit. Only the parent has a ‘pool’ attribute.
For samplers that must hard exit (typically due to a non-Python process), use os._exit, which cannot be excepted. Other samplers exiting can be caught as a SystemExit.
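As a generic illustration of the checkpoint-and-exit pattern described above (not bilby's implementation; the class, attribute handling, and signal choice are assumptions):

```python
import os
import signal

class CheckpointingSampler:
    """Generic sketch: only the parent, which owns the pool, checkpoints and exits."""

    exit_code = 77  # mirrors the default exit_code above

    def write_current_state_and_exit(self, signum=None, frame=None):
        # Only the parent process has a `pool` attribute; children simply return
        if not hasattr(self, "pool"):
            return
        print(f"Caught signal {signum}; writing checkpoint then exiting")
        # ... write the resume file here ...
        os._exit(self.exit_code)  # hard exit, cannot be caught as SystemExit

sampler = CheckpointingSampler()
sampler.pool = None  # pretend this is the parent that set up the (Multi)Pool
signal.signal(signal.SIGTERM, sampler.write_current_state_and_exit)
```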