
Over the past year, RBC Borealis has released an extensive series of research tutorials exploring infinite-width networks from different viewpoints. Each tutorial considers either gradient descent or a fully Bayesian approach, focusing on either the network weights or the output function (Figure 1).
The tutorials explore concepts like the Neural Tangent Kernel, Bayesian Neural Networks, and Neural Network Gaussian Processes. They are designed for people with no background in these topics, making them a valuable resource for anyone looking to deepen their knowledge of deep learning.
Explore the full collection below!

Figure 1. Four approaches to model fitting. We can either consider gradient descent (top row) or a Bayesian approach (bottom row). For either, we can consider the parameter space (left column) or the function space (right column). The first blog in the series concerns gradient descent (top row) for the linear regression model. Parts II and III of the series concern gradient descent in neural network models, which leads to the Neural Tangent Kernel (NTK). Subsequent parts concern the Bayesian approach (bottom row) for linear regression and neural networks, which leads to Bayesian neural networks (parameter space) and neural network Gaussian processes (function space). Figure inspired by $\href{https://www.youtube.com/watch?v=fcpI5z9q91A&t=1394s}{\color{blue}Sohl-Dickstein (2021).}$

Gradient Flow
This blog considers gradient descent in linear models; surprisingly, we can write closed-form expressions for the evolution of the parameters and the function itself. By analyzing these expressions, we can gain insights into the trainability and convergence speed of the model.
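As a flavour of the kind of result derived there (a minimal sketch in notation assumed here, not taken verbatim from the tutorial): for a linear model $f(\mathbf{x}) = \mathbf{x}^{T}\boldsymbol{\theta}$ with least squares loss $\tfrac{1}{2}\lVert\mathbf{X}\boldsymbol{\theta}-\mathbf{y}\rVert^{2}$ and minimizer $\boldsymbol{\theta}^{*}$, gradient flow gives
$$\frac{d\boldsymbol{\theta}}{dt} = -\mathbf{X}^{T}\bigl(\mathbf{X}\boldsymbol{\theta}-\mathbf{y}\bigr) \quad\Longrightarrow\quad \boldsymbol{\theta}(t) = \boldsymbol{\theta}^{*} + e^{-\mathbf{X}^{T}\mathbf{X}\,t}\bigl(\boldsymbol{\theta}_{0}-\boldsymbol{\theta}^{*}\bigr),$$
so each eigendirection of $\mathbf{X}^{T}\mathbf{X}$ converges at a rate set by its eigenvalue.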

The Neural Tangent Kernel
In this blog, we show that, in the infinite width limit, a neural network behaves as if it is linear, and its training dynamics can be captured by the Neural Tangent Kernel (NTK).
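For reference, the kernel in question (in notation assumed here) is built from the gradients of the network output with respect to its parameters,
$$k_{\mathrm{NTK}}(\mathbf{x},\mathbf{x}') = \nabla_{\boldsymbol{\theta}} f(\mathbf{x};\boldsymbol{\theta})^{T}\,\nabla_{\boldsymbol{\theta}} f(\mathbf{x}';\boldsymbol{\theta}),$$
and the key result is that, as the width grows, this kernel remains essentially constant at its initialization value throughout training, which is what makes the dynamics effectively linear in the parameters.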

Neural Tangent Kernel Applications
This blog explores the implications of the Neural Tangent Kernel (NTK). We present the expressions for the evolution of the residuals, loss, and parameters for neural networks with a least squares loss and analyze these expressions to gain insights into trainability and convergence speed.
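A minimal sketch of the central expression (assuming a least squares loss, with $\mathbf{r}(t)$ the vector of training residuals and $\mathbf{K}$ the NTK Gram matrix on the training inputs):
$$\mathbf{r}(t) \approx e^{-\mathbf{K}t}\,\mathbf{r}(0),$$
so error components along the large-eigenvalue eigenvectors of $\mathbf{K}$ are fitted quickly, while small-eigenvalue directions converge slowly.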

Bayesian Machine Learning: Parameter Space
In this blog, we investigate Bayesian methods for linear models with a least squares loss from the parameter space perspective.
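For orientation, the standard closed-form answer under the usual assumptions (a Gaussian prior $\boldsymbol{\theta}\sim\mathcal{N}(\mathbf{0},\sigma_{p}^{2}\mathbf{I})$ and Gaussian observation noise with variance $\sigma^{2}$; the tutorial's notation may differ) is
$$p(\boldsymbol{\theta}\mid\mathbf{X},\mathbf{y}) = \mathcal{N}\!\Bigl(\boldsymbol{\theta};\;\tfrac{1}{\sigma^{2}}\mathbf{A}^{-1}\mathbf{X}^{T}\mathbf{y},\;\mathbf{A}^{-1}\Bigr), \qquad \mathbf{A} = \tfrac{1}{\sigma^{2}}\mathbf{X}^{T}\mathbf{X} + \tfrac{1}{\sigma_{p}^{2}}\mathbf{I},$$
so the posterior over the weights is itself Gaussian and predictions follow by integrating over it.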

Bayesian Machine Learning: Function Space
This blog explores the Bayesian approach from the perspective of function space.
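In this view (sketched here under the same Gaussian assumptions as above), the prediction at a test input $\mathbf{x}_{*}$ depends on the data only through a kernel, e.g. $k(\mathbf{x},\mathbf{x}') = \sigma_{p}^{2}\,\mathbf{x}^{T}\mathbf{x}'$ for the linear model:
$$p(y_{*}\mid\mathbf{x}_{*},\mathbf{X},\mathbf{y}) = \mathcal{N}\!\bigl(y_{*};\;\mathbf{k}_{*}^{T}(\mathbf{K}+\sigma^{2}\mathbf{I})^{-1}\mathbf{y},\;k_{**} - \mathbf{k}_{*}^{T}(\mathbf{K}+\sigma^{2}\mathbf{I})^{-1}\mathbf{k}_{*} + \sigma^{2}\bigr),$$
where $\mathbf{K}$ holds kernel values between training inputs, $\mathbf{k}_{*}$ holds values between the test and training inputs, and $k_{**}=k(\mathbf{x}_{*},\mathbf{x}_{*})$.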

Bayesian Neural Networks
In this blog, we consider Bayesian learning for neural networks from the parameter-space perspective and see why, from this perspective, exact inference is intractable.
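Concretely (in generic notation), the quantity we would like is the posterior predictive distribution
$$p(y_{*}\mid\mathbf{x}_{*},\mathcal{D}) = \int p(y_{*}\mid\mathbf{x}_{*},\boldsymbol{\theta})\,p(\boldsymbol{\theta}\mid\mathcal{D})\,d\boldsymbol{\theta},$$
and for a nonlinear network of finite width neither the posterior $p(\boldsymbol{\theta}\mid\mathcal{D})$ nor this integral is available in closed form, so approximations are required.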

Neural Network Gaussian Processes
This blog applies Bayesian machine learning to neural networks from the function space perspective. We will see that this is tractable; we can make exact predictions from a deep neural network with infinite width, which are accompanied by estimates of uncertainty. These models are referred to as neural network Gaussian processes.
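As a rough sketch of where the kernel comes from (assuming a fully connected network with activation function $a[\cdot]$, input dimension $D$, and weight/bias variances $\sigma_{w}^{2}, \sigma_{b}^{2}$; the tutorial's notation may differ), the covariance is built up layer by layer:
$$K^{(0)}(\mathbf{x},\mathbf{x}') = \sigma_{b}^{2} + \sigma_{w}^{2}\,\frac{\mathbf{x}^{T}\mathbf{x}'}{D}, \qquad K^{(l)}(\mathbf{x},\mathbf{x}') = \sigma_{b}^{2} + \sigma_{w}^{2}\,\mathbb{E}_{f\sim\mathcal{GP}(0,\,K^{(l-1)})}\bigl[a[f(\mathbf{x})]\,a[f(\mathbf{x}')]\bigr],$$
and the infinite-width network output is a Gaussian process with this covariance, so the exact predictive formulas of Gaussian process regression apply.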