This talk revolves around Polyak’s momentum gradient descent method, commonly known as ‘momentum’. Its stochastic variant, stochastic gradient descent (SGD) with momentum, is one of the most widely used optimization methods in deep learning. Throughout the talk we will study a number of important properties of this versatile method, and see how this understanding can be used to engineer better deep learning systems.
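For reference, Polyak’s (heavy-ball) momentum adds a fraction of the previous step to the usual gradient step, i.e. x_{k+1} = x_k − α∇f(x_k) + β(x_k − x_{k−1}). Below is a minimal NumPy sketch on a toy quadratic; the step size, momentum coefficient, and test problem are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def heavy_ball_step(x, x_prev, grad, lr=0.01, beta=0.9):
    """One Polyak (heavy-ball) momentum update:
    x_{k+1} = x_k - lr * grad(x_k) + beta * (x_k - x_{k-1})."""
    return x - lr * grad + beta * (x - x_prev)

# Illustrative use on a toy quadratic f(x) = 0.5 * x^T A x (assumed example).
A = np.diag([1.0, 10.0])              # mildly ill-conditioned quadratic
x = x_prev = np.array([1.0, 1.0])
for _ in range(200):
    g = A @ x                         # gradient of the quadratic at x
    x, x_prev = heavy_ball_step(x, x_prev, g), x
print(x)                              # approaches the minimizer at the origin
```

In the stochastic setting discussed in the talk, the exact gradient above would be replaced by a mini-batch estimate, which is the momentum SGD variant used in deep learning.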