evox.algorithms.so.es_variants.persistent_es¶
Module Contents¶
Classes¶
PersistentES – The implementation of the Persistent ES algorithm.
API¶
- class evox.algorithms.so.es_variants.persistent_es.PersistentES(pop_size: int, center_init: torch.Tensor, optimizer: Literal["adam"] | None = None, lr: float = 0.05, sigma: float = 0.03, T: int = 100, K: int = 10, sigma_decay: float = 1.0, sigma_limit: float = 0.01, device: torch.device | None = None)[source]¶
Bases:
evox.core.Algorithm
The implementation of the Persistent ES algorithm.
Reference: Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies (http://proceedings.mlr.press/v139/vicol21a.html)
This implementation is inspired by and adapted from the algorithmic implementation in evosax: https://github.com/RobertTLange/evosax
Initialization
Initialize the Persistent-ES algorithm with the given parameters.
- Parameters:
pop_size – The size of the population.
center_init – The initial center of the population. Must be a 1D tensor.
optimizer – The optimizer to use. Defaults to None. Currently, only "adam" or None is supported.
lr – The learning rate for the optimizer. Defaults to 0.05.
sigma – The standard deviation of the noise. Defaults to 0.03.
T – The inner problem length. Defaults to 100.
K – The number of inner problems. Defaults to 10.
sigma_decay – The decay factor for the standard deviation. Defaults to 1.0.
sigma_limit – The minimum value for the standard deviation. Defaults to 0.01.
device – The device to use for the tensors. Defaults to None.
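To make the parameters concrete, here is a minimal NumPy sketch of the antithetic gradient estimator at the core of Persistent ES (Vicol et al., 2021). This is not the evox API: the names (`inner_loss`, `xi`), the stand-in quadratic inner problem, and the chosen constants are all illustrative assumptions. The key idea is that each population member's perturbations are accumulated across truncated unrolls, which is what removes the bias of truncated ES.

```python
import numpy as np

# Hedged sketch of the antithetic Persistent ES gradient estimator
# (Vicol et al., 2021) -- NOT the evox API. inner_loss, xi, and the
# stand-in quadratic inner problem are illustrative assumptions.

rng = np.random.default_rng(0)
dim, pop = 3, 256            # pop must be even for antithetic pairs
sigma, lr = 0.1, 0.02        # illustrative values, not the class defaults
theta = np.ones(dim)         # outer parameters (the "center")
xi = np.zeros((pop, dim))    # perturbation accumulators, persistent across unrolls

def inner_loss(params):
    # Stand-in for the loss of one truncated unroll of the inner problem.
    return float(np.sum(params ** 2))

for _ in range(5):  # truncated unrolls; full PES resets xi when an inner problem of length T restarts
    half = rng.normal(0.0, sigma, (pop // 2, dim))
    eps = np.concatenate([half, -half])           # antithetic sampling
    xi += eps                                     # persist past perturbations
    losses = np.array([inner_loss(theta + e) for e in eps])
    # PES estimate: accumulated perturbations weighted by the fresh losses
    grad = (xi * losses[:, None]).mean(axis=0) / sigma**2
    theta -= lr * grad                            # plain SGD step (no Adam here)
```

In the class above, `sigma` plays the role of the noise scale used for `eps`, `lr` drives the outer update (optionally through Adam), and `T`/`K` control how the inner problem is split into truncated unrolls.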