Lecture 7 | Training Neural Networks II - Stanford University School of Engineering - Deep Learning Open Course - Cupoy

Lecture 7 continues our discussion of practical issues for training neural networks. We cover the update rules commonly used to optimize neural networks during training, as well as strategies for regularizing large neural networks, including dropout. We also discuss transfer learning and finetuning.

Keywords: Optimization, momentum, Nesterov momentum, AdaGrad, RMSProp, Adam, second-order optimization, L-BFGS, ensembles, regularization, dropout, data augmentation, transfer learning, finetuning

Slides: http://cs231n.stanford.edu/slides/201...
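
As a rough illustration of the update rules and dropout named in the keywords, here is a minimal NumPy sketch. The function names, hyperparameter defaults, and the inverted-dropout convention (p = keep probability) are assumptions chosen for illustration, not code taken from the lecture or its slides.

import numpy as np

def sgd_momentum(w, dw, v, lr=1e-3, rho=0.9):
    # SGD with momentum: accumulate a velocity vector and step along it.
    v = rho * v - lr * dw
    w = w + v
    return w, v

def rmsprop(w, dw, cache, lr=1e-3, decay=0.99, eps=1e-8):
    # RMSProp: per-parameter step sizes from a moving average of squared gradients.
    cache = decay * cache + (1 - decay) * dw ** 2
    w = w - lr * dw / (np.sqrt(cache) + eps)
    return w, cache

def adam(w, dw, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam: momentum on the gradient plus RMSProp-style scaling, with bias correction.
    m = beta1 * m + (1 - beta1) * dw
    v = beta2 * v + (1 - beta2) * dw ** 2
    m_hat = m / (1 - beta1 ** t)  # correct the first-moment estimate's bias toward zero
    v_hat = v / (1 - beta2 ** t)  # correct the second-moment estimate's bias toward zero
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

def inverted_dropout(x, p=0.5, train=True):
    # Inverted dropout (an assumed convention here): keep each unit with
    # probability p at train time and divide by p, so the test-time forward
    # pass needs no rescaling.
    if not train:
        return x
    mask = (np.random.rand(*x.shape) < p) / p
    return x * mask

In practice, Adam with its default hyperparameters is often a reasonable first choice, which is one reason the lecture covers it alongside momentum and RMSProp.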