leaspy.samplers.PopulationMetropolisHastingsSampler

class PopulationMetropolisHastingsSampler(name: str, shape: tuple, *, scale: float | FloatTensor, random_order_dimension: bool = True, mean_acceptation_rate_target_bounds: Tuple[float, float] = (0.2, 0.4), adaptive_std_factor: float = 0.1, **base_sampler_kws)

Bases: AbstractPopulationGibbsSampler

Metropolis-Hastings sampler for population variables.

Note

All values are sampled at once. This considerably speeds up sampling, but usually requires more iterations.

Parameters:
name : str

The name of the random variable to sample.

shape : tuple of int

The shape of the random variable to sample.

scale : float > 0 or torch.FloatTensor > 0

An approximate scale for the variable. It is used to scale the initial adaptive std-dev used by the sampler. An extra 1% factor is applied on top of this scale (STD_SCALE_FACTOR). Note that if you pass a torch tensor, its shape should be compatible with the shape of the variable.

random_order_dimension : bool (default True)

This parameter controls whether the order of indices is randomized during the sampling loop. The article https://proceedings.neurips.cc/paper/2016/hash/e4da3b7fbbce2345d7772b0674a318d5-Abstract.html gives a rationale for activating this flag.

mean_acceptation_rate_target_bounds : tuple[float, float] with 0 < lower_bound < upper_bound < 1

Bounds on the mean acceptation rate. Outside this range, the adaptation of the sampler std-dev is triggered so as to maintain a target acceptation rate between these two bounds (e.g. ~30%).

adaptive_std_factor : float in ]0, 1[

Factor by which the sampler std-dev is increased or decreased when the mean acceptation rate falls outside the custom bounds: the std-dev is multiplied by 1 - factor if the rate is too low, and by 1 + factor if it is too high (see the sketch after this parameter list).

**base_sampler_kws

Keyword arguments passed to AbstractSampler init method. In particular, you may pass the acceptation_history_length hyperparameter.
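
As a minimal sketch of the signature above (the variable name "g", its shape, and the numeric values are illustrative assumptions; the import path follows the module path at the top of this page), construction and the std-dev adaptation rule described for adaptive_std_factor could look as follows:

>>> import torch
>>> from leaspy.samplers import PopulationMetropolisHastingsSampler
>>> sampler = PopulationMetropolisHastingsSampler(
...     "g",        # name of the population variable to sample
...     (3,),       # shape of the variable
...     scale=0.1,  # approximate scale, used for the initial adaptive std-dev
...     mean_acceptation_rate_target_bounds=(0.2, 0.4),
...     adaptive_std_factor=0.1,
... )
>>> def adapt_std(std, mean_rate, bounds=(0.2, 0.4), factor=0.1):
...     # Illustration of the rule described above, not the sampler's internal code:
...     # shrink the proposal std-dev when the mean acceptation rate is too low,
...     # widen it when the rate is too high, leave it unchanged otherwise.
...     lower, upper = bounds
...     if mean_rate < lower:
...         return std * (1 - factor)
...     if mean_rate > upper:
...         return std * (1 + factor)
...     return std
>>> new_std = adapt_std(torch.tensor(0.01), 0.1)  # a 10% rate is below the bounds, so the std-dev shrinks to 0.009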

Attributes:
shape_acceptation

Shape of the acceptation tensor for a single iteration.

shape_adapted_std

Shape of the adaptive variance.

Methods

sample(data, model, realizations, ...)

For each dimension (1D or 2D) of the population variable, compute current attachment and regularity.

validate_scale(scale)

Validate user provided scale in float or torch.Tensor form.

sample(data: Dataset, model: AbstractModel, realizations: CollectionRealization, temperature_inv: float, **attachment_computation_kws) → Tuple[Tensor, Tensor]

For each dimension (1D or 2D) of the population variable, compute current attachment and regularity.

Propose a new value for the given dimension of the given population variable, and compute new attachment and regularity.

Do an MH step: keep the proposal if it is better, or, if it is worse, keep it with some probability (see the sketch after this method's documentation).

Parameters:
data : Dataset

Dataset used for sampling.

model : AbstractModel

Model for which to sample a random variable.

realizations : CollectionRealization

Realization state.

temperature_inv : float > 0

The temperature to use.

**attachment_computation_kws

Currently not used for population parameters.

Returns:
attachment, regularity_var : torch.Tensor 0D (scalars)

The attachment and regularity (only for the current variable) at the end of this sampling step (summed over all individuals).
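
The decision described above follows the standard Metropolis rule. As a generic, hedged sketch (not leaspy's exact internals; in particular, how temperature_inv enters the compared quantity is left out here as an assumption), the accept/reject step could be written as:

>>> import torch
>>> def metropolis_keep(old_loss: torch.Tensor, new_loss: torch.Tensor) -> bool:
...     # Keep the proposal if it improves the (attachment + regularity) loss;
...     # otherwise keep it anyway with probability exp(old_loss - new_loss).
...     if new_loss <= old_loss:
...         return True
...     return bool(torch.rand(()) < torch.exp(old_loss - new_loss))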

property shape_acceptation: Tuple[int, ...]

Return the shape of acceptation tensor for a single iteration.

Returns:
tuple of int

The shape of the acceptation history.

property shape_adapted_std: tuple

Shape of the adaptive variance.

validate_scale(scale: float | Tensor) → Tensor

Validate user provided scale in float or torch.Tensor form.

If necessary, the scale is cast to a torch.Tensor.

Parameters:
scale : float or torch.Tensor

The scale to be validated.

Returns:
torch.Tensor

Valid scale.
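
For illustration (the constructor arguments are the same hypothetical values as in the sketch above; the exact shape and dtype of the returned tensor are not specified on this page):

>>> import torch
>>> from leaspy.samplers import PopulationMetropolisHastingsSampler
>>> sampler = PopulationMetropolisHastingsSampler("g", (3,), scale=0.1)
>>> validated = sampler.validate_scale(0.5)  # a float is cast to a torch.Tensor
>>> isinstance(validated, torch.Tensor)
True
>>> _ = sampler.validate_scale(torch.tensor([0.1, 0.2, 0.3]))  # a tensor input is validated and returned as a tensor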