leaspy.samplers.IndividualGibbsSampler

class IndividualGibbsSampler(name: str, shape: tuple, *, n_patients: int, scale: float | FloatTensor, random_order_dimension: bool = True, mean_acceptation_rate_target_bounds: Tuple[float, float] = (0.2, 0.4), adaptive_std_factor: float = 0.1, **base_sampler_kws)

Bases: GibbsSamplerMixin, AbstractIndividualSampler

Gibbs sampler for individual variables.

Individual variables are handled with a grouped Gibbs sampler. There is currently no other sampler available for individual variables.

Parameters:
name : str

The name of the random variable to sample.

shape : tuple of int

The shape of the random variable to sample.

n_patients : int

Number of patients.

scale : float > 0 or torch.FloatTensor > 0

An approximate scale for the variable. It will be used to scale the initial adaptive std-dev used by the sampler. An extra 1% factor will be applied on top of this scale (STD_SCALE_FACTOR). Note that if you pass a torch tensor, its shape should be compatible with the shape of the variable.

random_order_dimension : bool (default True)

This parameter controls whether the order of indices is randomized during the sampling loop. The article https://proceedings.neurips.cc/paper/2016/hash/e4da3b7fbbce2345d7772b0674a318d5-Abstract.html gives a rationale for activating this flag.

mean_acceptation_rate_target_bounds : tuple[float, float] with 0 < lower_bound < upper_bound < 1

Bounds on the mean acceptation rate. Outside this range, the adaptation of the std-dev of the sampler is triggered so as to maintain a target acceptation rate between these two bounds (e.g. ~30%).

adaptive_std_factor : float in ]0, 1[

Factor by which the std-dev of the sampler is decreased or increased when the mean acceptation rate falls outside the target bounds: the std-dev is multiplied by 1 - factor if the rate is too low, and by 1 + factor if it is too high.

**base_sampler_kws

Keyword arguments passed to the AbstractSampler init method. In particular, you may pass the acceptation_history_length hyperparameter.
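The adaptation rule described for adaptive_std_factor and mean_acceptation_rate_target_bounds can be sketched in plain Python. This is a hypothetical helper illustrating the documented behavior, not leaspy's actual implementation:

```python
def adapt_std(std: float, mean_acceptation_rate: float,
              bounds: tuple = (0.2, 0.4),
              adaptive_std_factor: float = 0.1) -> float:
    """Adapt the proposal std-dev when the mean acceptation rate
    leaves the target bounds (illustrative sketch, not leaspy code)."""
    lower, upper = bounds
    if mean_acceptation_rate < lower:
        # Too few proposals accepted: proposals are too wide, shrink the std.
        return std * (1 - adaptive_std_factor)
    if mean_acceptation_rate > upper:
        # Too many proposals accepted: proposals are too timid, widen the std.
        return std * (1 + adaptive_std_factor)
    return std  # within target bounds: leave the std unchanged
```

With the defaults, a 10% acceptation rate shrinks the std-dev by 10%, a 50% rate grows it by 10%, and a ~30% rate leaves it untouched.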

Attributes:
shape_acceptation
shape_adapted_std

Shape of adaptive variance.

Methods

sample(data, model, realizations, ...)

For each individual variable, compute current patient-batched attachment and regularity.

validate_scale(scale)

Validate user provided scale in float or torch.Tensor form.

sample(data: Dataset, model: AbstractModel, realizations: CollectionRealization, temperature_inv: float, **attachment_computation_kws) Tuple[Tensor, Tensor]

For each individual variable, compute current patient-batched attachment and regularity.

Propose a new value for the individual variable, and compute new patient-batched attachment and regularity.

Do an MH step, keeping the proposal if it is better, or with some probability if it is worse.

Parameters:
data : Dataset
model : AbstractModel
realizations : CollectionRealization
temperature_inv : float > 0
**attachment_computation_kws

Optional keyword arguments for attachment computations. As of now, it is only used for individual variables, and only attribute_type. It indicates whether to compute attachments from the MCMC toolbox (esp. during fit) or from regular model parameters (esp. during personalization in mean/mode realization).

Returns:
attachment, regularity_var : torch.Tensor 1D (n_individuals,)

The attachment and regularity (only for the current variable) at the end of this sampling step, per individual.
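The acceptance decision of the MH step described above can be sketched as follows, where "energy" stands for the sum of attachment and regularity (lower is better) and temperature_inv is the inverse temperature. This is an illustrative sketch of the standard Metropolis-Hastings rule, not leaspy's actual code:

```python
import math
import random

def mh_step(current_energy: float, proposed_energy: float,
            temperature_inv: float = 1.0, rng=random) -> bool:
    """One Metropolis-Hastings acceptance decision (illustrative sketch).

    Keeps the proposal if it improves the energy; otherwise keeps it
    with probability exp(-temperature_inv * (proposed - current)).
    """
    delta = proposed_energy - current_energy
    if delta <= 0:
        return True  # better (or equal) proposal: always accept
    # worse proposal: accept with a probability decaying with the energy gap
    return rng.random() < math.exp(-temperature_inv * delta)
```

In the sampler this decision is made per individual, on patient-batched attachment and regularity tensors rather than scalars.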

property shape_acceptation: Tuple[int, ...]

Return the shape of acceptation tensor for a single iteration.

Returns:
tuple of int

The shape of the acceptation history.

property shape_adapted_std: tuple

Shape of adaptive variance.

validate_scale(scale: float | Tensor) Tensor

Validate user provided scale in float or torch.Tensor form.

Scale of variable should always be positive (component-wise if multidimensional).

Parameters:
scale : float or torch.Tensor

The scale to be validated.

Returns:
torch.Tensor

Valid scale.
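The positivity contract of validate_scale can be sketched without torch, using plain Python for self-containment. This is a hypothetical re-implementation of the documented behavior, not leaspy's actual method (which returns a torch.Tensor):

```python
from numbers import Real

def validate_scale(scale):
    """Check that a scale is positive component-wise and return it
    as a list of floats (illustrative sketch, not leaspy code)."""
    # Accept a single number or any iterable of numbers.
    values = [scale] if isinstance(scale, Real) else list(scale)
    if not all(isinstance(v, Real) and v > 0 for v in values):
        raise ValueError(f"Scale of variable must be positive, got {scale!r}")
    return [float(v) for v in values]
```

A scale of 0 or a tensor with any non-positive component would be rejected, since the adaptive std-dev is initialized proportionally to this scale.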