
Clarification and Potential Issue with EarlyStopping Mechanism #26

@Zhu-Luyu

Description


I have two concerns regarding the implementation of the EarlyStopping mechanism in your project:

  1. Adjustment of the delta value after reducing the learning rate: after the patience threshold is met and the learning rate is reduced to one-tenth of its original value, the delta passed to EarlyStopping changes from -0.001 to -0.002. Could you clarify the rationale for making delta more stringent (-0.002) after the learning-rate drop? This appears to require a larger improvement than before for an epoch not to be counted as "limited improvement", yet a smaller learning rate typically yields smaller, more gradual gains in accuracy, so I would have expected the tolerance to be relaxed instead.
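For reference, here is how delta enters the improvement test in the common PyTorch early-stopping recipe (this is an assumption about how the project's EarlyStopping works; the actual implementation may differ). The numbers below just illustrate how shifting delta from -0.001 to -0.002 moves the threshold:

```python
def counts_as_improvement(score, best_score, delta):
    # Common recipe: an epoch consumes patience ("limited improvement")
    # when score < best_score + delta; otherwise the checkpoint is saved
    # and the patience counter resets.
    return score >= best_score + delta

best = 0.950
# with delta = -0.001, a drop of up to 0.001 still resets patience
print(counts_as_improvement(0.9495, best, -0.001))  # True
# with delta = -0.002, the threshold shifts to best - 0.002
print(counts_as_improvement(0.9485, best, -0.002))  # True
print(counts_as_improvement(0.9475, best, -0.002))  # False
```

Whether the change reads as "more stringent" or "more lenient" depends on exactly which side of this comparison the project's code implements, which is part of what this issue asks the authors to clarify.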

  2. Potential issue with resetting self.score_max when EarlyStopping is re-instantiated: when train.py executes early_stopping = EarlyStopping(patience=opt.earlystop_epoch, delta=-0.002, verbose=True), the new instance's self.score_max is reset to -np.Inf. As a result, the first checkpoint saved after re-instantiation is treated as the "best" even if it is worse than the weights saved before the reset, because the previous best score is no longer retained. Shouldn't self.score_max be carried over across re-instantiations so that only genuinely better model states overwrite the saved checkpoint? This looks like a bug, since it defeats the purpose of tracking the best model performance across the whole training run.
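The second concern can be reproduced with a minimal sketch of the usual early-stopping recipe (again an assumption; the project's class may differ in details). A fresh instance immediately accepts a worse score as "best", while seeding the new instance with the previous best, a hypothetical workaround, avoids the overwrite:

```python
import numpy as np

class EarlyStopping:
    """Minimal sketch of the common recipe; checkpoints are modeled
    as an in-memory list instead of files on disk."""
    def __init__(self, patience=5, delta=0.0):
        self.patience = patience
        self.delta = delta
        self.counter = 0
        self.best_score = None
        self.score_max = -np.inf   # reset on every (re-)instantiation
        self.saved = []            # stand-in for checkpoint files

    def __call__(self, score):
        if self.best_score is None or score >= self.best_score + self.delta:
            self.best_score = score
            self.score_max = score
            self.saved.append(score)   # "save checkpoint"
            self.counter = 0
        else:
            self.counter += 1

# phase 1: validation accuracy reached 0.95 before the LR drop
es = EarlyStopping(patience=3, delta=-0.001)
es(0.95)

# train.py then re-instantiates EarlyStopping after reducing the LR:
es = EarlyStopping(patience=3, delta=-0.002)
es(0.90)             # score_max was reset to -inf, so 0.90 is "best" again
print(es.score_max)  # a worse checkpoint overwrites the 0.95 one

# hypothetical workaround: carry the previous best into the new instance
es2 = EarlyStopping(patience=3, delta=-0.002)
es2.best_score = es2.score_max = 0.95
es2(0.90)            # 0.90 < 0.95 - 0.002, so no checkpoint is written
print(es2.score_max)
```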

Looking forward to your insights on these points.
