
Optimizers - Keras
Base Optimizer API: these methods and attributes are common to all Keras optimizers. Optimizer class: keras.optimizers.Optimizer()
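For illustration, a minimal sketch (using SGD as a stand-in for any concrete subclass) of attributes and methods shared through the base Optimizer API; the values are arbitrary:

```python
import keras

# Any concrete optimizer exposes the shared base-Optimizer API.
opt = keras.optimizers.SGD(learning_rate=0.01)

print(opt.learning_rate)     # current learning rate (a backend variable)
opt.learning_rate = 0.005    # hyperparameters can be reassigned in place

config = opt.get_config()                             # serializable configuration dict
restored = keras.optimizers.SGD.from_config(config)   # rebuild an equivalent optimizer
```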
Adam - Keras
learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.
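A short sketch of the three accepted forms of the learning_rate argument, shown here with Adam; the specific numbers are placeholders:

```python
import keras

# 1. A plain float.
opt_float = keras.optimizers.Adam(learning_rate=1e-3)

# 2. A LearningRateSchedule instance (the rate then decays with the training step).
schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=10_000, decay_rate=0.9
)
opt_schedule = keras.optimizers.Adam(learning_rate=schedule)

# 3. A callable that takes no arguments and returns the value to use.
opt_callable = keras.optimizers.Adam(learning_rate=lambda: 1e-3)
```

The same three forms apply to the learning_rate argument of SGD, Muon, Lamb, and the other optimizers listed below.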
SGD - Keras
learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.
Optimizers - Keras
Optimizers: SGD, RMSprop, Adam, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl. apply_gradients method: Optimizer.apply_gradients(grads_and_vars, name=None, …)
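A sketch of how apply_gradients is typically used in a custom training step, assuming the TensorFlow backend so that tf.GradientTape is available; the model and data here are toy placeholders:

```python
import keras
import tensorflow as tf  # gradient computation assumes the TensorFlow backend

model = keras.Sequential([keras.layers.Dense(1)])
optimizer = keras.optimizers.SGD(learning_rate=0.01)
loss_fn = keras.losses.MeanSquaredError()

x = tf.random.normal((32, 4))
y = tf.random.normal((32, 1))

with tf.GradientTape() as tape:
    loss = loss_fn(y, model(x, training=True))

grads = tape.gradient(loss, model.trainable_weights)
# apply_gradients expects an iterable of (gradient, variable) pairs.
optimizer.apply_gradients(zip(grads, model.trainable_weights))
```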
Muon - Keras
learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use. The learning rate.
LossScaleOptimizer - Keras
If wrapping a tf.keras.optimizers.Optimizer, hyperparameters can be accessed and set on the LossScaleOptimizer, which will be delegated to the wrapped optimizer.
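A minimal sketch of that delegation, assuming TensorFlow's tf.keras.mixed_precision.LossScaleOptimizer wrapping an SGD instance (the values are arbitrary):

```python
import tensorflow as tf

inner = tf.keras.optimizers.SGD(learning_rate=0.1)
opt = tf.keras.mixed_precision.LossScaleOptimizer(inner)

# Reads and writes on the wrapper are delegated to the wrapped optimizer.
print(opt.learning_rate)    # reflects inner.learning_rate
opt.learning_rate = 0.05    # updates inner.learning_rate as well
print(inner.learning_rate)
```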
Lamb - Keras
learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.
Model training APIs - Keras
Arguments: optimizer: String (name of optimizer) or optimizer instance. See keras.optimizers. loss: Loss function. May be a string (name of loss function), or a keras.losses.Loss instance. See …
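A sketch of both forms accepted by compile, string names versus configured instances; the toy model is a placeholder:

```python
import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10),
])

# Optimizer and loss given by string name.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Or pass configured instances for finer control.
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
```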
ExponentialDecay - Keras
The learning rate schedule is also serializable and deserializable using keras.optimizers.schedules.serialize and keras.optimizers.schedules.deserialize.
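A short sketch of that round trip with an ExponentialDecay schedule; the decay settings are placeholders:

```python
import keras

schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1, decay_steps=100_000, decay_rate=0.96, staircase=True
)

config = keras.optimizers.schedules.serialize(schedule)    # configuration dict
restored = keras.optimizers.schedules.deserialize(config)  # equivalent schedule object

# Schedules are callables on the training step.
print(restored(0), restored(100_000))
```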
Keras documentation: KerasTuner
KerasTuner is an easy-to-use, scalable hyperparameter optimization framework that solves the pain points of hyperparameter search. Easily configure …
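A minimal sketch of a KerasTuner search over an optimizer's learning rate and a layer width; the search space and the commented-out data hooks are illustrative placeholders:

```python
import keras
import keras_tuner

def build_model(hp):
    model = keras.Sequential([
        keras.layers.Dense(hp.Int("units", min_value=32, max_value=256, step=32),
                           activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(
            learning_rate=hp.Float("lr", min_value=1e-4, max_value=1e-2, sampling="log")
        ),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

tuner = keras_tuner.RandomSearch(build_model, objective="val_accuracy", max_trials=5)
# tuner.search(x_train, y_train, validation_data=(x_val, y_val), epochs=3)
```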