classy.optim.factories

Classes

AdafactorWithWarmupFactory

class AdafactorWithWarmupFactory()

Factory for the Adafactor optimizer with a warmup learning-rate scheduler. Reference paper for Adafactor: https://arxiv.org/abs/1804.04235

__init__

def __init__(
    lr: float,
    warmup_steps: int,
    total_steps: int,
    weight_decay: float,
    no_decay_params: Optional[List[str]],
)
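
A hedged construction sketch (hyperparameter values are illustrative, and the "bias"/"LayerNorm.weight" names are only an assumed convention); the Adagrad, AdamW, and Adam warmup factories below share the same constructor:

from classy.optim.factories import AdafactorWithWarmupFactory

factory = AdafactorWithWarmupFactory(
    lr=1e-3,
    warmup_steps=1_000,
    total_steps=10_000,
    weight_decay=0.01,
    no_decay_params=["bias", "LayerNorm.weight"],  # assumed naming convention
)
# Per the Factory contract documented further down, the factory is later
# called with the module to be optimized inside configure_optimizers.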

AdagradWithWarmupFactory

class AdagradWithWarmupFactory()

Factory for the Adagrad optimizer with a warmup learning-rate scheduler. Reference paper for Adagrad: https://jmlr.org/papers/v12/duchi11a.html

__init__

def __init__(
    lr: float,
    warmup_steps: int,
    total_steps: int,
    weight_decay: float,
    no_decay_params: Optional[List[str]],
)

AdamWWithWarmupFactory

class AdamWWithWarmupFactory()

Factory for the AdamW optimizer with a warmup learning-rate scheduler. Reference paper for AdamW: https://arxiv.org/abs/1711.05101

__init__

def __init__(
    lr: float,
    warmup_steps: int,
    total_steps: int,
    weight_decay: float,
    no_decay_params: Optional[List[str]],
)

AdamWithWarmupFactory

class AdamWithWarmupFactory()

Factory for the Adam optimizer with a warmup learning-rate scheduler. Reference paper for Adam: https://arxiv.org/abs/1412.6980

__init__

def __init__(
    lr: float,
    warmup_steps: int,
    total_steps: int,
    weight_decay: float,
    no_decay_params: Optional[List[str]],
)

Factory

class Factory()

Factory interface that allows for simple instantiation of optimizers and schedulers for PyTorch Lightning. This class is essentially a work-around for lazy instantiation:

* all params except the module to be optimized are received in __init__
* the actual instantiation of optimizers and schedulers takes place in the __call__ method, where the module to be optimized is provided

__call__ will be invoked in the configure_optimizers hook of LightningModule-s and its return object directly returned. As such, the return type of __call__ can be any of those allowed by configure_optimizers, namely:

* Single optimizer
* List or Tuple - List of optimizers
* Two lists - The first list has multiple optimizers, the second a list of LR schedulers (or lr_dict)
* Dictionary, with an ‘optimizer’ key, and (optionally) a ‘lr_scheduler’ key whose value is a single LR scheduler or lr_dict
* Tuple of dictionaries as described, with an optional ‘frequency’ key
* None - Fit will run without any optimizer
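
A minimal sketch of this lazy-instantiation pattern, assuming subclasses receive their hyperparameters in __init__ and build the optimizer in __call__ (exact base-class hooks may differ):

import torch

from classy.optim.factories import Factory


class SGDFactory(Factory):
    """Hypothetical subclass returning a single optimizer."""

    def __init__(self, lr: float):
        self.lr = lr

    def __call__(self, module: torch.nn.Module):
        # Any return value accepted by configure_optimizers is allowed;
        # here, a single optimizer.
        return torch.optim.SGD(module.parameters(), lr=self.lr)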

Subclasses (2)

RAdamFactory

class RAdamFactory()

Factory for the RAdam optimizer. Reference paper for RAdam: https://arxiv.org/abs/1908.03265

__init__

def __init__(
    lr: float,
    weight_decay: float,
    no_decay_params: Optional[List[str]],
)
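
A usage sketch of wiring a factory into a LightningModule, following the Factory contract above (the module class and hyperparameter values are assumptions):

import pytorch_lightning as pl
import torch

from classy.optim.factories import RAdamFactory


class MyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(16, 4)
        self.optim_factory = RAdamFactory(
            lr=1e-3, weight_decay=0.01, no_decay_params=None
        )

    def configure_optimizers(self):
        # The factory's return value is passed through unchanged.
        return self.optim_factory(self)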

TorchFactory

class TorchFactory()

Simple factory wrapping standard PyTorch optimizers and schedulers.

WeightDecayOptimizer

class WeightDecayOptimizer()

Base factory that adds weight-decay handling on top of the Factory interface described above: weight_decay and no_decay_params are received in __init__, and group_params splits a module's parameters into groups based on those settings.

Subclasses (5)

__init__

def __init__(
    weight_decay: float,
    no_decay_params: Optional[List[str]],
)

group_params

def group_params(
    self,
    module: torch.nn.modules.module.Module,
) -> list
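
A sketch of how the grouping might be exercised, using RAdamFactory (which takes the same weight_decay / no_decay_params arguments and is assumed here to inherit group_params) and assuming the returned list holds torch-style parameter-group dicts in which parameters matching no_decay_params have weight decay disabled:

import torch

from classy.optim.factories import RAdamFactory

model = torch.nn.Linear(16, 4)
factory = RAdamFactory(lr=1e-3, weight_decay=0.01, no_decay_params=["bias"])

param_groups = factory.group_params(model)
# Under the assumption above, each entry is a dict with a "params" key and an
# appropriate "weight_decay" value, ready to be passed to a torch optimizer.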