evox.problems.hpo_wrapper

Module Contents

Classes

- HPOMonitor – The base class for hyper parameter optimization (HPO) monitors used in HPOProblem.workflow.monitor.
- HPOFitnessMonitor – The monitor for hyper parameter optimization (HPO) that records the best fitness found so far in the optimization process.
- HPOProblemWrapper – The problem for hyper parameter optimization (HPO).

API
- class evox.problems.hpo_wrapper.HPOMonitor(num_repeats: int = 1, fit_aggregation: Optional[Callable[[torch.Tensor, int], torch.Tensor]] = _mean_fit_aggregation)[source]
Bases: evox.core.Monitor, abc.ABC
The base class for hyper parameter optimization (HPO) monitors used in HPOProblem.workflow.monitor.
Initialization
Initialize the ModuleBase.
- Parameters:
*args – Variable length argument list, passed to the parent class initializer.
**kwargs – Arbitrary keyword arguments, passed to the parent class initializer.
Attributes: static_names (list): A list to store static member names.
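Concrete monitors subclass this base and receive fitness values through hooks such as pre_tell (documented for HPOFitnessMonitor below). The following is a minimal, hypothetical sketch of a single-objective monitor that tracks the running best fitness; it is illustrative only, and a production monitor may need to implement additional hooks required by the base class:

import torch
from evox.problems.hpo_wrapper import HPOMonitor

class BestFitnessTracker(HPOMonitor):
    # Hypothetical subclass: keeps the minimum fitness value seen
    # across all iterations of the inner workflow.
    def __init__(self, num_repeats: int = 1):
        super().__init__(num_repeats=num_repeats)
        self.best_fitness = None

    def pre_tell(self, fitness: torch.Tensor):
        # fitness is a 1D tensor (single-objective), one value per individual.
        batch_best = fitness.min()
        if self.best_fitness is None:
            self.best_fitness = batch_best
        else:
            self.best_fitness = torch.minimum(self.best_fitness, batch_best)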
- class evox.problems.hpo_wrapper.HPOFitnessMonitor(num_repeats: int = 1, fit_aggregation: Optional[Callable[[torch.Tensor, int], torch.Tensor]] = _mean_fit_aggregation, multi_obj_metric: Optional[Callable] = None)[source]
Bases: evox.problems.hpo_wrapper.HPOMonitor
The monitor for hyper parameter optimization (HPO) that records the best fitness found so far in the optimization process.
Initialization
Initialize the HPO fitness monitor.
- Parameters:
multi_obj_metric – The metric function to use for multi-objective optimization; unused in single-objective optimization. Currently only "IGD" and "HV" are supported for multi-objective optimization. Defaults to None.
- pre_tell(fitness: torch.Tensor)[source]
Update the best fitness value found so far based on the provided fitness tensor and multi-objective metric.
- Parameters:
fitness – A tensor of fitness values: a 1D tensor for single-objective optimization or a 2D tensor for multi-objective optimization.
- Raises:
AssertionError – If the fitness tensor is neither 1D nor 2D.
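For single-objective runs the defaults suffice; for multi-objective runs a metric callable must be supplied. A hedged sketch follows: since the annotation expects a callable rather than a string, the hv function below is a hypothetical stand-in for the library's hypervolume ("HV") metric.

import torch
from evox.problems.hpo_wrapper import HPOFitnessMonitor

# Single-objective HPO: the defaults are enough.
monitor = HPOFitnessMonitor()

# Multi-objective HPO: reduce a 2D fitness tensor
# (population x objectives) to a scalar score.
def hv(fit: torch.Tensor) -> torch.Tensor:
    # Hypothetical hypervolume against a fixed reference point.
    ref = torch.ones(fit.size(-1))
    return torch.prod(torch.clamp(ref - fit, min=0.0), dim=-1).sum()

mo_monitor = HPOFitnessMonitor(multi_obj_metric=hv)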
- class evox.problems.hpo_wrapper.HPOProblemWrapper(iterations: int, num_instances: int, workflow: evox.core.Workflow, num_repeats: int = 1, copy_init_state: bool = False)[source]
Bases: evox.core.Problem
The problem for hyper parameter optimization (HPO).
Example
algo = SomeAlgorithm(...)
prob = SomeProblem(...)
monitor = HPOFitnessMonitor()
workflow = StdWorkflow(algo, prob, monitor=monitor)
hpo_prob = HPOProblemWrapper(iterations=..., num_instances=...)
params = hpo_prob.get_init_params()
# alter `params` ...
hpo_prob.evaluate(params)  # execute the evaluation
# ...
Initialization
Initialize the HPO problem wrapper.
- Parameters:
iterations – The number of iterations to be executed in the optimization process.
num_instances – The number of instances to be executed in parallel in the optimization process, i.e., the population size of the outer algorithm.
workflow – The workflow to be used in the optimization process. Must be wrapped by core.jit_class.
num_repeats – The number of times to repeat the evaluation process for each instance. Defaults to 1.
copy_init_state – Whether to copy the initial state of the workflow for each evaluation. Defaults to False. If your workflow contains operations that modify the tensor(s) of the initial state in-place, this should be set to True; otherwise, it can be left as False to save memory.
- evaluate(hyper_parameters: Dict[str, torch.nn.Parameter])[source]
Evaluate the fitness (given by the internal workflow’s monitor) of the batch of hyper parameters by running the internal workflow.
- Parameters:
hyper_parameters – The hyper parameters to evaluate.
- Returns:
The final fitness of the hyper parameters.
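Evaluation is batched: every value in the dictionary carries a leading dimension of size num_instances, one row per parallel instance of the inner workflow. A minimal sketch, assuming the workflow object and the iteration budget come from your own setup (as in the Example above):

import torch

hpo_prob = HPOProblemWrapper(iterations=100, num_instances=8, workflow=workflow)
params = hpo_prob.get_init_params()
# `params` maps hyper parameter names to batched torch.nn.Parameter
# values; perturb each row to obtain 8 candidate configurations.
params = {
    k: torch.nn.Parameter(v + 0.01 * torch.randn_like(v))
    for k, v in params.items()
}
fitness = hpo_prob.evaluate(params)  # one fitness entry per instance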
- class evox.problems.hpo_wrapper.HPOData[source]
Bases: typing.NamedTuple
- workflow_step: Callable[[Dict[str, torch.Tensor]], Tuple[Dict[str, torch.Tensor]]]
- compiled_workflow_step: Callable[[Dict[str, torch.Tensor]], Tuple[Dict[str, torch.Tensor]]]
- state_keys: List[str]
- buffer_keys: Optional[List[str]]