Beyond Gradient Averaging in Parallel Optimization: Improved Robustness through Gradient Agreement Filtering
This paper introduces Gradient Agreement Filtering (GAF), a novel method for improving gradient aggregation in distributed deep learning optimization. Traditional distributed training averages micro-batch gradients into a macro-batch gradient; however, in later stages of training the micro-gradients can become nearly orthogonal or even negatively correlated, and blindly averaging such conflicting updates contributes to overfitting. GAF reduces gradient variance by computing the cosine distance between micro-gradients and filtering out conflicting updates before they are averaged. Experiments on image classification benchmarks such as CIFAR-100 and CIFAR-100N-Fine show that GAF significantly improves validation accuracy, even with smaller micro-batch sizes, achieving up to an 18.2% improvement over traditional averaging while reducing computational cost.
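The filtering idea described above can be sketched in a few lines. The snippet below is a hypothetical illustration (not the paper's reference implementation): it folds each micro-gradient into a running aggregate only when its cosine distance to the current aggregate falls below a threshold, so orthogonal or opposing updates are discarded. The function name, threshold default, and aggregation rule are assumptions for illustration.

```python
import numpy as np

def cosine_distance(a, b):
    # Cosine distance = 1 - cosine similarity:
    # 0 = perfectly aligned, 1 = orthogonal, 2 = directly opposed.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def gradient_agreement_filter(micro_grads, threshold=1.0):
    """Aggregate micro-batch gradients, skipping ones that disagree.

    Accepts a micro-gradient only if its cosine distance to the running
    aggregate is below `threshold` (the default 1.0 rejects gradients that
    are orthogonal or worse). Returns the filtered average and the number
    of micro-gradients that were kept.
    """
    agg = micro_grads[0].astype(float)
    kept = 1
    for g in micro_grads[1:]:
        if cosine_distance(agg, g) < threshold:
            # Running mean over the accepted micro-gradients only.
            agg = (agg * kept + g) / (kept + 1)
            kept += 1
    return agg, kept

# Example: two roughly aligned gradients and one opposing gradient.
grads = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([-1.0, 0.0])]
agg, kept = gradient_agreement_filter(grads)
print(kept)  # the opposing gradient is filtered out
```

In a real data-parallel setup, each worker's flattened gradient vector would play the role of one `micro_grads` entry, and the filtered aggregate would be fed to the optimizer step in place of the plain mean.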