evox.problems.hpo_wrapper#

Module Contents#

Classes#

HPOMonitor

The base class for hyperparameter optimization (HPO) monitors used in HPOProblem.workflow.monitor.

HPOFitnessMonitor

The monitor for hyperparameter optimization (HPO) that records the best fitness found so far during the optimization process.

HPOProblemWrapper

The problem wrapper for hyperparameter optimization (HPO).

API#

class evox.problems.hpo_wrapper.HPOMonitor[source]#

Bases: evox.core.Monitor, abc.ABC

The base class for hyperparameter optimization (HPO) monitors used in HPOProblem.workflow.monitor.

Initialization

Initialize the ModuleBase.

Parameters:
  • *args – Variable length argument list, passed to the parent class initializer.

  • **kwargs – Arbitrary keyword arguments, passed to the parent class initializer.

Attributes:
  • static_names (list) – A list to store static member names.

abstract tell_fitness() → torch.Tensor[source]#

Get the best fitness found so far in the optimization process being monitored.

Returns:

The best fitness so far.
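For illustration, a minimal sketch of a concrete subclass. It assumes a subclass only needs to override the tell_fitness hook documented here; the pre_tell hook and the running-minimum logic mirror HPOFitnessMonitor below and are not prescribed by this base class.

import torch
from evox.problems.hpo_wrapper import HPOMonitor

class MinFitnessMonitor(HPOMonitor):
    # Hypothetical monitor that tracks the smallest scalar fitness seen so far.
    def __init__(self):
        super().__init__()
        self.best = torch.tensor(float("inf"))

    def pre_tell(self, fitness: torch.Tensor):
        # Assumes 1D (single-objective) fitness; keep the running minimum.
        self.best = torch.minimum(self.best, fitness.min())

    def tell_fitness(self) -> torch.Tensor:
        # Report the best (lowest) fitness observed so far.
        return self.best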

class evox.problems.hpo_wrapper.HPOFitnessMonitor(multi_obj_metric: Optional[Callable] = None)[source]#

Bases: evox.problems.hpo_wrapper.HPOMonitor

The monitor for hyperparameter optimization (HPO) that records the best fitness found so far during the optimization process.

Initialization

Initialize the HPO fitness monitor.

Parameters:

multi_obj_metric – The metric function used for multi-objective optimization; it is ignored in single-objective optimization. Currently only “IGD” and “HV” are supported for multi-objective optimization. Defaults to None.
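A sketch of constructing the monitor for both cases; the igd helper and the reference front pf are hypothetical stand-ins for whichever IGD implementation is paired with the monitor.

# Single-objective: no metric is needed.
monitor = HPOFitnessMonitor()

# Multi-objective: pass a callable that reduces a 2D fitness tensor to a
# scalar score, e.g. an IGD metric against a reference front `pf`
# (hypothetical helper).
mo_monitor = HPOFitnessMonitor(multi_obj_metric=lambda fit: igd(fit, pf))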

pre_tell(fitness: torch.Tensor)[source]#

Update the best fitness value found so far based on the provided fitness tensor and multi-objective metric.

Parameters:

fitness – A tensor representing fitness values. It can be either a 1D tensor for single-objective optimization or a 2D tensor for multi-objective optimization.

Raises:

AssertionError – If the dimensionality of the fitness tensor is not 1 or 2.
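A small shape sketch of this contract, assuming the single-objective case keeps a running minimum (the values are made up):

import torch

monitor = HPOFitnessMonitor()
monitor.pre_tell(torch.tensor([0.3, 0.1, 0.7]))  # 1D fitness: single-objective
monitor.pre_tell(torch.tensor([0.4, 0.2, 0.9]))  # running best is updated
monitor.tell_fitness()                           # expected tensor(0.1), assuming minimization
# monitor.pre_tell(torch.rand(4, 2, 3))          # 3D fitness raises AssertionError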

tell_fitness() → torch.Tensor[source]#

Get the best fitness found so far in the optimization process being monitored.

Returns:

The best fitness so far.

class evox.problems.hpo_wrapper.HPOProblemWrapper(iterations: int, num_instances: int, workflow: evox.core.Workflow, copy_init_state: bool = True)[source]#

Bases: evox.core.Problem

The problem wrapper for hyperparameter optimization (HPO).

Usage

algo = SomeAlgorithm(...)
algo.setup(...)
prob = SomeProblem(...)
prob.setup(...)
monitor = HPOFitnessMonitor()
workflow = StdWorkflow()
workflow.setup(algo, prob, monitor=monitor)
hpo_prob = HPOProblemWrapper(iterations=..., num_instances=..., workflow=workflow)
params = hpo_prob.get_init_params()
hpo_prob.evaluate(params) # execute the evaluation
# ...

Initialization

Initialize the HPO problem wrapper.

Parameters:
  • iterations – The number of iterations to be executed in the optimization process.

  • num_instances – The number of instances to be executed in parallel in the optimization process.

  • workflow – The workflow to be used in the optimization process. Must be wrapped by core.jit_class.

copy_init_state – Whether to copy the workflow’s initial state for each evaluation. Defaults to True. If the workflow contains operations that modify tensors of the initial state in place, this must be set to True; otherwise it can be set to False to save memory (see the sketch below).
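For example, if the wrapped workflow updates its state purely functionally (no in-place tensor writes), the copy can be skipped; a hypothetical instantiation mirroring the Usage block above:

hpo_prob = HPOProblemWrapper(
    iterations=100,         # run each workflow instance for 100 steps
    num_instances=8,        # evaluate 8 hyperparameter sets in parallel
    workflow=workflow,
    copy_init_state=False,  # safe only without in-place state mutation
)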

evaluate(hyper_parameters: Dict[str, torch.nn.Parameter])[source]#

Evaluate the fitness (as reported by the internal workflow’s monitor) of the batch of hyperparameters by running the internal workflow.

Parameters:

hyper_parameters – The hyperparameters to evaluate.

Returns:

The final fitness of the hyperparameters.
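A sketch of building the batch by hand, assuming each entry is batched along a leading dimension of size num_instances; the "algorithm.lr" key is hypothetical (real keys come from get_init_params below).

import torch

num_instances = 8
# One learning rate per parallel workflow instance (hypothetical key).
params = {"algorithm.lr": torch.nn.Parameter(torch.rand(num_instances))}
fitness = hpo_prob.evaluate(params)  # expected shape: (num_instances,)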

get_init_params()[source]#

Return the initial hyperparameter dictionary of the underlying workflow.
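In practice it is easiest to start from this dictionary, since its keys and leading num_instances batch dimension already match what evaluate expects; a sketch (the perturbation scale is arbitrary):

import torch

params = hpo_prob.get_init_params()
# Perturb each hyperparameter around its initial value, keeping the
# leading batch dimension intact.
params = {k: torch.nn.Parameter(v + 0.1 * torch.randn_like(v))
          for k, v in params.items()}
fitness = hpo_prob.evaluate(params)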