leaspy.samplers.PopulationFastGibbsSampler
- class PopulationFastGibbsSampler(name: str, shape: tuple, *, scale: float | FloatTensor, random_order_dimension: bool = True, mean_acceptation_rate_target_bounds: Tuple[float, float] = (0.2, 0.4), adaptive_std_factor: float = 0.1, **base_sampler_kws)
Bases:
AbstractPopulationGibbsSampler
Fast Gibbs sampler for population variables.
Note
Sampling is batched along all dimensions except the first one, which speeds up the sampling process for two-dimensional parameters.
- Parameters:
- name
str
The name of the random variable to sample.
- shape
tuple of int
The shape of the random variable to sample.
- scale
float > 0 or torch.FloatTensor > 0
An approximate scale for the variable. It is used to scale the initial adaptive std-dev used in the sampler. An extra 1% factor is applied on top of this scale (STD_SCALE_FACTOR). Note that if you pass a torch tensor, its shape should be compatible with the shape of the variable.
- random_order_dimension
bool (default True)
Whether to randomize the order of indices during the sampling loop. The article https://proceedings.neurips.cc/paper/2016/hash/e4da3b7fbbce2345d7772b0674a318d5-Abstract.html gives a rationale for activating this flag.
- mean_acceptation_rate_target_bounds
tuple[lower_bound: float, upper_bound: float] with 0 < lower_bound < upper_bound < 1
Bounds on the mean acceptation rate. Outside this range, adaptation of the sampler's std-dev is triggered so as to maintain a target acceptation rate between these two bounds (e.g. ~30%).
- adaptive_std_factor
float in ]0, 1[
Factor by which the sampler's std-dev is increased or decreased when the mean acceptation rate falls outside the custom bounds: the std-dev is multiplied by 1 - factor if the rate is too low, and by 1 + factor if it is too high.
- **base_sampler_kws
Keyword arguments passed to AbstractSampler init method. In particular, you may pass the acceptation_history_length hyperparameter.
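The adaptive std-dev rule described above (multiply by 1 - factor when the mean acceptation rate drops below the lower bound, by 1 + factor when it exceeds the upper bound) can be sketched in plain Python. The function name `adapt_std` and the way the mean rate is passed in are illustrative assumptions, not part of the leaspy API.

```python
# Illustrative sketch of the adaptive std-dev rule; adapt_std is a
# hypothetical name, not a leaspy method.

def adapt_std(std: float, mean_rate: float,
              bounds: tuple = (0.2, 0.4),
              adaptive_std_factor: float = 0.1) -> float:
    """Adapt the proposal std-dev so that the mean acceptation rate
    stays within the target bounds."""
    lower, upper = bounds
    if mean_rate < lower:
        # Too few acceptations: shrink the proposal steps.
        return std * (1 - adaptive_std_factor)
    if mean_rate > upper:
        # Too many acceptations: enlarge the proposal steps.
        return std * (1 + adaptive_std_factor)
    return std  # rate in target range: no adaptation
```

For instance, with the default bounds and factor, a std-dev of 1.0 becomes 0.9 when the rate is 0.1, 1.1 when the rate is 0.5, and stays 1.0 when the rate is 0.3.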
- Attributes:
- shape_acceptation
Shape of the acceptation tensor for a single iteration.
- shape_adapted_std
Shape of the adaptive variance.
Methods
- sample(data, model, realizations, ...)
For each dimension (1D or 2D) of the population variable, compute current attachment and regularity.
- validate_scale(scale)
Validate user-provided scale in float or torch.Tensor form.

- sample(data: Dataset, model: AbstractModel, realizations: CollectionRealization, temperature_inv: float, **attachment_computation_kws) → Tuple[Tensor, Tensor]
For each dimension (1D or 2D) of the population variable, compute current attachment and regularity.
Propose a new value for the given dimension of the given population variable, and compute new attachment and regularity.
Do a Metropolis-Hastings step: keep the proposal if it is better, or, if it is worse, keep it with some probability.
- Parameters:
- data
Dataset
Dataset used for sampling.
- model
AbstractModel
Model for which to sample a random variable.
- realizations
CollectionRealization
Realization state.
- temperature_inv
float > 0
The inverse temperature to use.
- **attachment_computation_kws
Currently not used for population parameters.
- Returns:
- attachment, regularity_var
torch.Tensor, 0D (scalars)
The attachment and regularity (only for the current variable) at the end of this sampling step (summed over all individuals).
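The Metropolis-Hastings step described above ("keep if better, or if worse with a probability") can be sketched generically, with the inverse temperature tempering the acceptance probability. This is an assumption-laden sketch, not leaspy's actual implementation: here "energy" stands in for the sum of attachment and regularity (negative log-likelihood terms), and `mh_accept` is a hypothetical name.

```python
import math
import random

# Generic Metropolis-Hastings acceptance sketch (not leaspy's code).
# "energy" stands for attachment + regularity (negative log-likelihood);
# temperature_inv tempers the acceptance probability.

def mh_accept(old_energy: float, new_energy: float,
              temperature_inv: float = 1.0,
              rng: random.Random = random) -> bool:
    """Accept the proposal if it is better; otherwise accept it with
    probability exp(-temperature_inv * (new_energy - old_energy))."""
    if new_energy <= old_energy:
        return True  # better (or equal): always keep
    alpha = math.exp(-temperature_inv * (new_energy - old_energy))
    return rng.random() < alpha
```

A smaller `temperature_inv` (i.e. a higher temperature, as used in annealing schemes) flattens the acceptance probability and makes worse proposals more likely to be kept.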
- property shape_acceptation: Tuple[int, ...]
Return the shape of acceptation tensor for a single iteration.
- validate_scale(scale: float | Tensor) → Tensor
Validate user-provided scale in float or torch.Tensor form. If necessary, the scale is cast to a torch.Tensor.
- Parameters:
- scale
float or torch.Tensor
The scale to be validated.
The scale to be validated.
- Returns:
torch.Tensor
Valid scale.
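The validation logic described above (accept a strictly positive float or tensor, casting a scalar to tensor form) can be illustrated with a torch-free sketch. The real method operates on torch tensors; here a list of floats stands in for the tensor, and the function name is reused only for illustration.

```python
# Torch-free sketch of the validate_scale logic described above.
# A list of floats stands in for torch.Tensor; this is not leaspy's code.

def validate_scale(scale):
    """Check that the scale is strictly positive; cast a scalar to a
    list of floats (mimicking the float -> torch.Tensor cast)."""
    if isinstance(scale, (int, float)):
        scale = [float(scale)]  # cast scalar, as float -> torch.Tensor
    if any(s <= 0 for s in scale):
        raise ValueError("scale must be > 0")
    return scale
```

As with the real method, a non-positive scale is rejected rather than silently corrected.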