Optimization Tricks: momentum, batch-norm, and more | Lecture 10

Batch normalization is a technique for improving the performance and training stability of artificial neural networks. It provides any layer in a network with inputs that have zero mean and unit variance, normalizing the activations by adjusting and scaling them over each mini-batch. The technique was introduced in a 2015 paper by Ioffe and Szegedy.
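The lecture itself does not include code, but the core operation is easy to sketch. Below is a minimal NumPy illustration (not the lecture's own implementation) of the batch-norm forward pass at training time; the names gamma, beta, and eps are assumed here, and a real layer would also keep running statistics for use at inference time.

import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch to zero mean / unit variance per feature,
    then rescale with the learnable parameters gamma and beta.

    x:     (batch_size, num_features) pre-activation values
    gamma: (num_features,) learnable scale
    beta:  (num_features,) learnable shift
    """
    mean = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean / unit variance
    return gamma * x_hat + beta              # learnable scale and shift

# Example: normalize a batch of 4 samples with 3 features each
x = np.random.randn(4, 3) * 10 + 5           # deliberately far from zero mean / unit variance
gamma = np.ones(3)
beta = np.zeros(3)
y = batch_norm_forward(x, gamma, beta)
print(y.mean(axis=0), y.var(axis=0))         # approximately 0 and 1

The scale gamma and shift beta let the network undo the normalization if that is what training prefers, so normalizing does not reduce the layer's representational power.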

 

Highlights:
Stochastic Gradient Descent
Momentum Algorithm
Learning Rate Schedules
Adaptive Methods: AdaGrad, RMSProp, and Adam
Internal Covariate Shift
Batch Normalization
Weight Initialization
Local Minima
Saddle Points
