leaspy.algo.utils.samplers.gibbs_sampler.GibbsSampler

class GibbsSampler(info: dict, n_patients: int, *, scale: Union[float, FloatTensor], random_order_dimension: bool = True, mean_acceptation_rate_target_bounds: Tuple[float, float] = (0.2, 0.4), adaptive_std_factor: float = 0.1, sampler_type: str = 'Gibbs', **base_sampler_kws)

Bases: AbstractSampler

Gibbs sampler class.

Parameters
info : dict[str, Any]

The dictionary describing the random variable to sample. It should contain the following entries:

  • name : str

  • shape : tuple[int, …]

  • type : ‘population’ or ‘individual’

n_patients : int > 0

Number of patients (useful for individual variables)

scale : float > 0 or torch.FloatTensor > 0

An approximate scale for the variable. It will be used to scale the initial adaptive std-dev used in the sampler. An extra factor will be applied on top of this scale (cf. hyperparameters).

Note that if you pass a torch tensor, its shape should be compatible with the shape of the variable.

random_order_dimension : bool (default True)

Whether to randomize the order of coordinates during the sampling loop (only for population variables, since individual variables are sampled as a group). The article at https://proceedings.neurips.cc/paper/2016/hash/e4da3b7fbbce2345d7772b0674a318d5-Abstract.html gives a rationale for activating this flag.

mean_acceptation_rate_target_bounds : tuple[lower_bound: float, upper_bound: float] with 0 < lower_bound < upper_bound < 1

Bounds on the mean acceptation rate. Outside this range, adaptation of the sampler's std-dev is triggered so as to maintain a target acceptation rate between these two bounds (e.g. ~30%).

adaptive_std_factor : float in ]0, 1[

Factor by which we increase or decrease the std-dev of the sampler when the mean acceptation rate is outside the target bounds. The std-dev is multiplied by 1 - factor when the rate is too low, and by 1 + factor when it is too high. A schematic sketch of this adaptation is given after the Attributes section below.

sampler_type : str

If 'Gibbs', sampling is done iteratively for each coordinate value. If 'FastGibbs', sampling is done in batches along all dimensions except the first one, which speeds up the sampling process for 2-dimensional parameters. If 'Metropolis-Hastings', sampling is done for all values at once, which speeds up sampling considerably but usually requires more iterations. <!> Types other than 'Gibbs' are only supported for population variables for now, since individual variables are handled with a grouped Gibbs sampler.

**base_sampler_kws

Keyword arguments passed to AbstractSampler init method. In particular, you may pass the acceptation_history_length hyperparameter.
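A minimal construction sketch, based only on the signature and import path shown above; the variable name, shape and numeric values below are illustrative placeholders, not taken from Leaspy itself.

    from leaspy.algo.utils.samplers.gibbs_sampler import GibbsSampler

    # Hypothetical description of the random variable to sample
    info = {
        "name": "xi",           # placeholder variable name
        "shape": (1,),          # shape of the random variable
        "type": "individual",   # 'population' or 'individual'
    }

    sampler = GibbsSampler(
        info,
        n_patients=100,                              # placeholder cohort size
        scale=0.5,                                   # approximate scale (float or tensor of compatible shape)
        random_order_dimension=True,
        mean_acceptation_rate_target_bounds=(0.2, 0.4),
        adaptive_std_factor=0.1,
        sampler_type="Gibbs",                        # or 'FastGibbs' / 'Metropolis-Hastings'
    )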

Raises
LeaspyInputError
Attributes
In addition to the attributes present in AbstractSampler:
std : torch.FloatTensor

Adaptive std-dev of the variable.

sampler_type : str

Sampler type : Gibbs, FastGibbs or Metropolis-Hastings
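For intuition only, here is a schematic sketch of how the adaptive std-dev could be updated from the per-coordinate mean acceptation rate, using the target bounds and adaptive factor described in the Parameters section. This is an illustration of the mechanism, not the actual Leaspy implementation.

    import torch

    def adapt_std(std: torch.Tensor, mean_acceptation_rate: torch.Tensor,
                  bounds=(0.2, 0.4), factor=0.1) -> torch.Tensor:
        # Shrink the proposal std-dev where the mean acceptation rate is below
        # the lower bound, enlarge it where the rate is above the upper bound,
        # and leave it unchanged in between.
        lower, upper = bounds
        std = std.clone()
        std[mean_acceptation_rate < lower] *= 1 - factor
        std[mean_acceptation_rate > upper] *= 1 + factor
        return std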

Methods

sample(dataset, model, realizations, ...)

Sample new realization (either population or individual) for a given realization state, dataset, model and temperature

sample(dataset: Dataset, model: AbstractModel, realizations: CollectionRealization, temperature_inv: float, **attachment_computation_kws) → Tuple[FloatTensor, FloatTensor]

Sample new realization (either population or individual) for a given realization state, dataset, model and temperature

<!> Modifies the realizations object in-place, as well as the model through its update_MCMC_toolbox method (for population variables).

Parameters
dataset : Dataset

Dataset object built from the Leaspy Data object, the model and the algorithm.

model : AbstractModel

Model for loss computations and updates

realizations : CollectionRealization

Contains the current state & information of all the variables of interest.

temperature_inv : float > 0

Inverse of the temperature used in tempered MCMC-SAEM

**attachment_computation_kws

Optional keyword arguments for attachment computations. As of now, it is only used for individual variables, and only for attribute_type. It is used to know whether to compute attachments from the MCMC toolbox (esp. during fit) or from the regular model parameters (esp. during personalization in mean/mode realization).

Returns
attachment, regularity_var : torch.FloatTensor 0D (population variable) or 1D (individual variable, with length n_individuals)

The attachment and regularity (only for the current variable) at the end of this sampling step (globally or per individual, depending on variable type).
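As an illustration of how sample is called within a sampling loop, here is a hedged sketch. The dataset, model and realizations objects are assumed to come from an existing Leaspy MCMC-SAEM setup and are not constructed here; run_sampling_step is a hypothetical helper, not part of the Leaspy API.

    def run_sampling_step(sampler, dataset, model, realizations, temperature_inv=1.0):
        # One sampling step: realizations (and, for population variables, the
        # model through its MCMC toolbox) are modified in-place by `sample`.
        attachment, regularity_var = sampler.sample(
            dataset, model, realizations, temperature_inv
        )
        # attachment / regularity_var are 0D for population variables,
        # or 1D (length n_individuals) for individual variables.
        return attachment, regularity_var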