Learning Rate Scheduling #

Why Learning Rate Scheduling? #

text
┌─────────────────────────────────────────────────────────────┐
│            Why learning rate scheduling matters             │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  Problems with a fixed learning rate:                       │
│  ├── Too large: training is unstable, hard to converge      │
│  ├── Too small: convergence is slow                         │
│  └── May oscillate late in training                         │
│                                                             │
│  With a schedule:                                           │
│  ├── Early: large learning rate, fast convergence           │
│  ├── Middle: gradually reduced for finer adjustment         │
│  └── Late: small learning rate, stable convergence          │
│                                                             │
└─────────────────────────────────────────────────────────────┘
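
The sections below cover the two mechanisms Keras provides: schedule objects from keras.optimizers.schedules, which the optimizer evaluates at every training step, and callbacks, which adjust the rate between epochs and can react to validation metrics.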

Built-in Learning Rate Schedules #

ExponentialDecay #

python
import keras

# Multiply the rate by decay_rate once per decay_steps steps;
# staircase=False gives a smooth curve, True gives discrete drops.
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1,
    decay_steps=1000,
    decay_rate=0.9,
    staircase=False
)

optimizer = keras.optimizers.SGD(learning_rate=lr_schedule)
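
A schedule object is just a callable from the global step to a rate, here following lr(step) = initial_learning_rate * decay_rate ** (step / decay_steps). A quick sanity check (exact printing depends on your backend's tensor type):

python
# With the settings above the rate shrinks by a factor of 0.9
# every 1000 steps:
for step in [0, 1000, 2000, 5000]:
    print(step, float(lr_schedule(step)))
# -> 0.1, 0.09, 0.081, ~0.059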

PiecewiseConstantDecay #

python
import keras

# Four rates over three boundaries: 0.1 until step 1000, then 0.05,
# then 0.01, and 0.001 after step 10000.
lr_schedule = keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[1000, 5000, 10000],
    values=[0.1, 0.05, 0.01, 0.001]
)

optimizer = keras.optimizers.SGD(learning_rate=lr_schedule)
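
The boundaries and values lists pair up as step intervals: values[0] applies up to and including the first boundary, and each later value takes over after the previous boundary. A sketch of the mapping above:

python
# step <= 1000          -> 0.1
# 1000 < step <= 5000   -> 0.05
# 5000 < step <= 10000  -> 0.01
# step > 10000          -> 0.001
print(float(lr_schedule(500)), float(lr_schedule(7500)))  # 0.1 0.01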

PolynomialDecay #

python
import keras

# Decay from 0.1 to 0.0001 over 10000 steps; power=0.5 gives a
# square-root curve (fast early decay that flattens out).
lr_schedule = keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=0.1,
    decay_steps=10000,
    end_learning_rate=0.0001,
    power=0.5
)

optimizer = keras.optimizers.SGD(learning_rate=lr_schedule)
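
PolynomialDecay interpolates between the initial and end rates, and after decay_steps the rate simply stays at end_learning_rate. With these settings:

python
# lr(step) = (0.1 - 0.0001) * (1 - min(step, 10000) / 10000) ** 0.5 + 0.0001
print(float(lr_schedule(20000)))  # 0.0001, clamped after decay_steps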

CosineDecay #

python
import keras

# Half a cosine wave from 0.1 down to alpha * initial_learning_rate
# (here 0) over 1000 steps.
lr_schedule = keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=0.1,
    decay_steps=1000,
    alpha=0.0
)

optimizer = keras.optimizers.SGD(learning_rate=lr_schedule)
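
Recent versions (Keras 3 and TF 2.13+) also let CosineDecay perform the linear warmup itself; verify that your version supports these parameters before relying on them:

python
# Assumed newer signature: warm up linearly for 1000 steps, then
# cosine-decay from the 0.1 peak over the remaining 9000 steps.
lr_schedule = keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=0.0,   # rate at the very first step
    decay_steps=9000,            # length of the cosine phase
    warmup_target=0.1,           # peak reached after warmup
    warmup_steps=1000
)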

CosineDecayRestarts #

python
import keras

# Cosine decay with warm restarts (SGDR): the rate periodically jumps
# back up to a fresh peak instead of decaying monotonically.
lr_schedule = keras.optimizers.schedules.CosineDecayRestarts(
    initial_learning_rate=0.1,
    first_decay_steps=1000,
    t_mul=2.0,    # each cycle is t_mul times longer than the last
    m_mul=0.9,    # each cycle's peak is m_mul times the last peak
    alpha=0.0
)

optimizer = keras.optimizers.SGD(learning_rate=lr_schedule)
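
Each restart resets the rate to a fresh peak, which can help the optimizer escape poor local minima. Here t_mul=2.0 makes every cycle twice as long as the previous one (1000, 2000, 4000, ... steps) and m_mul=0.9 scales each cycle's peak down (0.1, 0.09, 0.081, ...).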

Scheduling with Callbacks #

ReduceLROnPlateau #

python
import keras

# Halve the learning rate whenever val_loss fails to improve for
# 5 consecutive epochs, but never drop below 1e-6.
reduce_lr = keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss',
    factor=0.5,
    patience=5,
    min_lr=1e-6,
    verbose=1
)

model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    epochs=100,
    callbacks=[reduce_lr]
)
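
Unlike the schedule objects above, which follow a fixed plan, ReduceLROnPlateau reacts to the actual training curve: the rate only drops when the monitored metric genuinely stalls.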

LearningRateScheduler #
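
LearningRateScheduler calls your function at the start of every epoch, passing the epoch index and the current learning rate, and applies whatever the function returns: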

python
import keras

def lr_schedule(epoch, lr):
    # `lr` is the rate carried over from the previous epoch, so multiply
    # only at the boundary epochs; returning lr * 0.1 on every epoch in
    # a range would compound the decay epoch after epoch.
    if epoch in (10, 20):
        return lr * 0.1
    return lr

lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule, verbose=1)

model.fit(x_train, y_train, epochs=30, callbacks=[lr_callback])

Custom Learning Rate Schedules #

python
import keras
import math

class CosineAnnealingScheduler(keras.callbacks.Callback):
    """Linear warmup followed by cosine annealing, applied once per epoch."""

    def __init__(self, initial_lr, total_epochs, warmup_epochs=0):
        super().__init__()
        self.initial_lr = initial_lr
        self.total_epochs = total_epochs
        self.warmup_epochs = warmup_epochs

    def on_epoch_begin(self, epoch, logs=None):
        if epoch < self.warmup_epochs:
            # Ramp up linearly from initial_lr / warmup_epochs to initial_lr.
            lr = self.initial_lr * (epoch + 1) / self.warmup_epochs
        else:
            # Cosine-anneal from initial_lr down to 0 over the remaining epochs.
            progress = (epoch - self.warmup_epochs) / (self.total_epochs - self.warmup_epochs)
            lr = self.initial_lr * 0.5 * (1 + math.cos(math.pi * progress))

        # Keras 3: set the rate through the optimizer's learning_rate property
        # (the older keras.backend.set_value idiom is tf.keras 2.x only).
        self.model.optimizer.learning_rate = lr
        print(f'\nEpoch {epoch + 1}: Learning rate = {lr:.6f}')

model.fit(
    x_train, y_train,
    epochs=100,
    callbacks=[CosineAnnealingScheduler(0.1, 100, warmup_epochs=5)]
)
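
The callback above only updates the rate once per epoch. For per-step control you can instead subclass keras.optimizers.schedules.LearningRateSchedule and pass the instance straight to the optimizer. The sketch below assumes Keras 3 (it uses keras.ops); the class name WarmupCosine and its parameters are illustrative, not a library API:

python
import math

import keras
from keras import ops

class WarmupCosine(keras.optimizers.schedules.LearningRateSchedule):
    """Illustrative per-step warmup + cosine schedule (not a built-in)."""

    def __init__(self, peak_lr, warmup_steps, total_steps):
        self.peak_lr = peak_lr
        self.warmup_steps = warmup_steps
        self.total_steps = total_steps

    def __call__(self, step):
        step = ops.cast(step, "float32")
        # Linear ramp up to peak_lr over the first warmup_steps steps.
        warmup_lr = self.peak_lr * (step + 1.0) / self.warmup_steps
        # Cosine decay from peak_lr to 0 over the remaining steps.
        progress = (step - self.warmup_steps) / (self.total_steps - self.warmup_steps)
        progress = ops.clip(progress, 0.0, 1.0)
        cosine_lr = self.peak_lr * 0.5 * (1.0 + ops.cos(math.pi * progress))
        return ops.where(step < self.warmup_steps, warmup_lr, cosine_lr)

    def get_config(self):
        # Required for the schedule to survive model saving/loading.
        return {"peak_lr": self.peak_lr,
                "warmup_steps": self.warmup_steps,
                "total_steps": self.total_steps}

optimizer = keras.optimizers.SGD(learning_rate=WarmupCosine(0.1, 500, 10000))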

Next Steps #

Now that you have learning rate scheduling down, continue with Model Checkpoints to learn how to save your best model!

Last updated: 2026-04-04