Training a Neural Network, Part 2
We now have a clear goal: minimize the loss of the neural network. We know we can change the network’s weights and biases to influence its predictions, but how do we do so in a way that decreases loss?
This section uses a bit of multivariable calculus. If you’re not comfortable with calculus, feel free to skip over the math parts.
For simplicity, let’s pretend we only have Alice in our dataset:
| Name | Weight (minus 135) | Height (minus 66) | Gender |
| --- | --- | --- | --- |
| Alice | -2 | -1 | 1 |
Then the mean squared error loss is just Alice’s squared error:
$$\text{MSE} = \frac{1}{1} \sum_{i=1}^{1} (y_{true} - y_{pred})^2 = (y_{true} - y_{pred})^2 = (1 - y_{pred})^2$$
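To make this concrete, here’s a minimal sketch in Python (the 0.5 prediction is just a made-up example value) showing that with a single sample, the MSE is exactly that sample’s squared error:

```python
import numpy as np

def mse_loss(y_true, y_pred):
  # y_true and y_pred are numpy arrays of the same length.
  return ((y_true - y_pred) ** 2).mean()

y_true = np.array([1])     # Alice is Female (1)
y_pred = np.array([0.5])   # an arbitrary example prediction
print(mse_loss(y_true, y_pred))  # 0.25
print((1 - 0.5) ** 2)            # 0.25, the same thing
```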
Another way to think about loss is as a function of weights and biases. Let’s label each weight and bias in our network: neuron $h_1$ gets weights $w_1, w_2$ and bias $b_1$, neuron $h_2$ gets weights $w_3, w_4$ and bias $b_2$, and the output neuron $o_1$ gets weights $w_5, w_6$ and bias $b_3$.
Then, we can write loss as a multivariable function:

$$L(w_1, w_2, w_3, w_4, w_5, w_6, b_1, b_2, b_3)$$
Imagine we wanted to tweak $w_1$. How would loss $L$ change if we changed $w_1$? That’s a question the partial derivative $\frac{\partial L}{\partial w_1}$ can answer. How do we calculate it?
Here’s where the math starts to get more complex. Don’t be discouraged! I recommend getting a pen and paper to follow along - it’ll help you understand.
To start, let’s rewrite the partial derivative in terms of $\frac{\partial y_{pred}}{\partial w_1}$ instead:
$$\frac{\partial L}{\partial w_1} = \frac{\partial L}{\partial y_{pred}} * \frac{\partial y_{pred}}{\partial w_1}$$

This works because of the Chain Rule. We can calculate $\frac{\partial L}{\partial y_{pred}}$ because we computed $L = (1 - y_{pred})^2$ above:
$$\frac{\partial L}{\partial y_{pred}} = \frac{\partial (1 - y_{pred})^2}{\partial y_{pred}} = -2(1 - y_{pred})$$

Now, let’s figure out what to do with $\frac{\partial y_{pred}}{\partial w_1}$. Just like before, let $h_1, h_2, o_1$ be the outputs of the neurons they represent. Then
$$y_{pred} = o_1 = f(w_5 h_1 + w_6 h_2 + b_3)$$

$f$ is the sigmoid activation function, remember? Since $w_1$ only affects $h_1$ (not $h_2$), we can write
$$\frac{\partial y_{pred}}{\partial w_1} = \frac{\partial y_{pred}}{\partial h_1} * \frac{\partial h_1}{\partial w_1}$$

$$\frac{\partial y_{pred}}{\partial h_1} = w_5 * f'(w_5 h_1 + w_6 h_2 + b_3)$$

More Chain Rule. We do the same thing for $\frac{\partial h_1}{\partial w_1}$:
$$h_1 = f(w_1 x_1 + w_2 x_2 + b_1)$$

$$\frac{\partial h_1}{\partial w_1} = x_1 * f'(w_1 x_1 + w_2 x_2 + b_1)$$

You guessed it, Chain Rule. $x_1$ here is weight, and $x_2$ is height. This is the second time we’ve seen $f'(x)$ (the derivative of the sigmoid function) now! Let’s derive it:
$$f(x) = \frac{1}{1 + e^{-x}}$$

$$f'(x) = \frac{e^{-x}}{(1 + e^{-x})^2} = f(x) * (1 - f(x))$$

We’ll use this nice form for $f'(x)$ later.
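If you don’t want to take that derivation on faith, here’s a minimal sketch that checks the identity numerically against a central-difference approximation (the test points are arbitrary):

```python
import numpy as np

def sigmoid(x):
  return 1 / (1 + np.exp(-x))

def deriv_sigmoid(x):
  # The identity we just derived: f'(x) = f(x) * (1 - f(x))
  fx = sigmoid(x)
  return fx * (1 - fx)

h = 1e-6
for x in [-3.0, 0.0, 0.0948, 2.0]:
  numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
  print(x, deriv_sigmoid(x), numeric)  # the last two columns should match closely
```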
We’re done! We’ve managed to break down $\frac{\partial L}{\partial w_1}$ into several parts we can calculate:
$$\frac{\partial L}{\partial w_1} = \frac{\partial L}{\partial y_{pred}} * \frac{\partial y_{pred}}{\partial h_1} * \frac{\partial h_1}{\partial w_1}$$
This system of calculating partial derivatives by working backwards is known as backpropagation, or “backprop”.
Phew. That was a lot of symbols - it’s alright if you’re still a bit confused. Let’s do an example to see this in action!
Example: Calculating the Partial Derivative
We’re going to continue pretending only Alice is in our dataset:
| Name | Weight (minus 135) | Height (minus 66) | Gender |
| --- | --- | --- | --- |
| Alice | -2 | -1 | 1 |
Let’s initialize all the weights to 1 and all the biases to 0. If we do a feedforward pass through the network, we get:
$$h_1 = f(w_1 x_1 + w_2 x_2 + b_1) = f(-2 + -1 + 0) = 0.0474$$

$$h_2 = f(w_3 x_1 + w_4 x_2 + b_2) = 0.0474$$

$$o_1 = f(w_5 h_1 + w_6 h_2 + b_3) = f(0.0474 + 0.0474 + 0) = 0.524$$
The network outputs $y_{pred} = 0.524$, which doesn’t strongly favor Male ($0$) or Female ($1$). Let’s calculate $\frac{\partial L}{\partial w_1}$:
$$\frac{\partial L}{\partial w_1} = \frac{\partial L}{\partial y_{pred}} * \frac{\partial y_{pred}}{\partial h_1} * \frac{\partial h_1}{\partial w_1}$$

$$\frac{\partial L}{\partial y_{pred}} = -2(1 - y_{pred}) = -2(1 - 0.524) = -0.952$$

$$\frac{\partial y_{pred}}{\partial h_1} = w_5 * f'(w_5 h_1 + w_6 h_2 + b_3) = 1 * f'(0.0474 + 0.0474 + 0) = f(0.0948) * (1 - f(0.0948)) = 0.249$$

$$\frac{\partial h_1}{\partial w_1} = x_1 * f'(w_1 x_1 + w_2 x_2 + b_1) = -2 * f'(-2 + -1 + 0) = -2 * f(-3) * (1 - f(-3)) = -0.0904$$

$$\frac{\partial L}{\partial w_1} = -0.952 * 0.249 * -0.0904 = 0.0214$$
Reminder: we derived $f'(x) = f(x) * (1 - f(x))$ for our sigmoid activation function earlier.
We did it! This tells us that if we were to increase $w_1$, $L$ would increase a tiiiny bit as a result.
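Here’s a short sketch that reproduces this hand calculation in code, using the same starting values (all weights 1, all biases 0) and only Alice’s data, so you can verify the numbers yourself:

```python
import numpy as np

def sigmoid(x):
  return 1 / (1 + np.exp(-x))

def deriv_sigmoid(x):
  fx = sigmoid(x)
  return fx * (1 - fx)

# Alice's shifted data and our initial parameters
x1, x2, y_true = -2, -1, 1
w1 = w2 = w3 = w4 = w5 = w6 = 1.0
b1 = b2 = b3 = 0.0

# Feedforward
sum_h1 = w1 * x1 + w2 * x2 + b1
h1 = sigmoid(sum_h1)
sum_h2 = w3 * x1 + w4 * x2 + b2
h2 = sigmoid(sum_h2)
sum_o1 = w5 * h1 + w6 * h2 + b3
y_pred = sigmoid(sum_o1)                      # ~0.524

# The three chain-rule factors
d_L_d_ypred = -2 * (y_true - y_pred)          # ~-0.952
d_ypred_d_h1 = w5 * deriv_sigmoid(sum_o1)     # ~0.249
d_h1_d_w1 = x1 * deriv_sigmoid(sum_h1)        # ~-0.0904

print(d_L_d_ypred * d_ypred_d_h1 * d_h1_d_w1)  # ~0.0214
```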
Training: Stochastic Gradient Descent
We have all the tools we need to train a neural network now! We’ll use an optimization algorithm called stochastic gradient descent (SGD) that tells us how to change our weights and biases to minimize loss. It’s basically just this update equation:
$$w_1 \leftarrow w_1 - \eta \frac{\partial L}{\partial w_1}$$
$\eta$ is a constant called the learning rate that controls how fast we train. All we’re doing is subtracting $\eta \frac{\partial L}{\partial w_1}$ from $w_1$:
- If $\frac{\partial L}{\partial w_1}$ is positive, $w_1$ will decrease, which makes $L$ decrease.
- If $\frac{\partial L}{\partial w_1}$ is negative, $w_1$ will increase, which makes $L$ decrease.
If we do this for every weight and bias in the network, the loss will slowly decrease and our network will improve.
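In code, a single update is just that subtraction. Here’s a tiny sketch plugging in the gradient we computed for $w_1$ above and an example learning rate of 0.1:

```python
learn_rate = 0.1      # the learning rate, eta
w1 = 1.0              # current value of w1
d_L_d_w1 = 0.0214     # the partial derivative we computed above

w1 -= learn_rate * d_L_d_w1
print(w1)  # 0.99786 - w1 decreased a little, since the gradient was positive
```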
Our training process will look like this:
- Choose one sample from our dataset. This is what makes it stochastic gradient descent - we only operate on one sample at a time.
- Calculate all the partial derivatives of loss with respect to weights or biases (e.g. $\frac{\partial L}{\partial w_1}$, $\frac{\partial L}{\partial w_2}$, etc).
- Use the update equation to update each weight and bias.
- Go back to step 1.
Let’s see it in action!
Code: A Complete Neural Network
It’s finally time to implement a complete neural network:
| Name | Weight (minus 135) | Height (minus 66) | Gender |
| --- | --- | --- | --- |
| Alice | -2 | -1 | 1 |
| Bob | 25 | 6 | 0 |
| Charlie | 17 | 4 | 0 |
| Diana | -15 | -6 | 1 |
```python
import numpy as np

def sigmoid(x):
  # Sigmoid activation function: f(x) = 1 / (1 + e^(-x))
  return 1 / (1 + np.exp(-x))

def deriv_sigmoid(x):
  # Derivative of sigmoid: f'(x) = f(x) * (1 - f(x))
  fx = sigmoid(x)
  return fx * (1 - fx)

def mse_loss(y_true, y_pred):
  # y_true and y_pred are numpy arrays of the same length.
  return ((y_true - y_pred) ** 2).mean()

class OurNeuralNetwork:
  '''
  A neural network with:
    - 2 inputs
    - a hidden layer with 2 neurons (h1, h2)
    - an output layer with 1 neuron (o1)

  *** DISCLAIMER ***:
  The code below is intended to be simple and educational, NOT optimal.
  Real neural net code looks nothing like this. DO NOT use this code.
  Instead, read/run it to understand how this specific network works.
  '''
  def __init__(self):
    # Weights
    self.w1 = np.random.normal()
    self.w2 = np.random.normal()
    self.w3 = np.random.normal()
    self.w4 = np.random.normal()
    self.w5 = np.random.normal()
    self.w6 = np.random.normal()

    # Biases
    self.b1 = np.random.normal()
    self.b2 = np.random.normal()
    self.b3 = np.random.normal()

  def feedforward(self, x):
    # x is a numpy array with 2 elements.
    h1 = sigmoid(self.w1 * x[0] + self.w2 * x[1] + self.b1)
    h2 = sigmoid(self.w3 * x[0] + self.w4 * x[1] + self.b2)
    o1 = sigmoid(self.w5 * h1 + self.w6 * h2 + self.b3)
    return o1

  def train(self, data, all_y_trues):
    '''
    - data is a (n x 2) numpy array, n = # of samples in the dataset.
    - all_y_trues is a numpy array with n elements.
      Elements in all_y_trues correspond to those in data.
    '''
    learn_rate = 0.1
    epochs = 1000  # number of times to loop through the entire dataset

    for epoch in range(epochs):
      for x, y_true in zip(data, all_y_trues):
        # --- Do a feedforward (we'll need these values later)
        sum_h1 = self.w1 * x[0] + self.w2 * x[1] + self.b1
        h1 = sigmoid(sum_h1)

        sum_h2 = self.w3 * x[0] + self.w4 * x[1] + self.b2
        h2 = sigmoid(sum_h2)

        sum_o1 = self.w5 * h1 + self.w6 * h2 + self.b3
        o1 = sigmoid(sum_o1)
        y_pred = o1

        # --- Calculate partial derivatives.
        # --- Naming: d_L_d_w1 represents "partial L / partial w1"
        d_L_d_ypred = -2 * (y_true - y_pred)

        # Neuron o1
        d_ypred_d_w5 = h1 * deriv_sigmoid(sum_o1)
        d_ypred_d_w6 = h2 * deriv_sigmoid(sum_o1)
        d_ypred_d_b3 = deriv_sigmoid(sum_o1)

        d_ypred_d_h1 = self.w5 * deriv_sigmoid(sum_o1)
        d_ypred_d_h2 = self.w6 * deriv_sigmoid(sum_o1)

        # Neuron h1
        d_h1_d_w1 = x[0] * deriv_sigmoid(sum_h1)
        d_h1_d_w2 = x[1] * deriv_sigmoid(sum_h1)
        d_h1_d_b1 = deriv_sigmoid(sum_h1)

        # Neuron h2
        d_h2_d_w3 = x[0] * deriv_sigmoid(sum_h2)
        d_h2_d_w4 = x[1] * deriv_sigmoid(sum_h2)
        d_h2_d_b2 = deriv_sigmoid(sum_h2)

        # --- Update weights and biases
        # Neuron h1
        self.w1 -= learn_rate * d_L_d_ypred * d_ypred_d_h1 * d_h1_d_w1
        self.w2 -= learn_rate * d_L_d_ypred * d_ypred_d_h1 * d_h1_d_w2
        self.b1 -= learn_rate * d_L_d_ypred * d_ypred_d_h1 * d_h1_d_b1

        # Neuron h2
        self.w3 -= learn_rate * d_L_d_ypred * d_ypred_d_h2 * d_h2_d_w3
        self.w4 -= learn_rate * d_L_d_ypred * d_ypred_d_h2 * d_h2_d_w4
        self.b2 -= learn_rate * d_L_d_ypred * d_ypred_d_h2 * d_h2_d_b2

        # Neuron o1
        self.w5 -= learn_rate * d_L_d_ypred * d_ypred_d_w5
        self.w6 -= learn_rate * d_L_d_ypred * d_ypred_d_w6
        self.b3 -= learn_rate * d_L_d_ypred * d_ypred_d_b3

      # --- Calculate total loss at the end of each epoch
      if epoch % 10 == 0:
        y_preds = np.apply_along_axis(self.feedforward, 1, data)
        loss = mse_loss(all_y_trues, y_preds)
        print("Epoch %d loss: %.3f" % (epoch, loss))

# Define dataset
data = np.array([
  [-2, -1],   # Alice
  [25, 6],    # Bob
  [17, 4],    # Charlie
  [-15, -6],  # Diana
])
all_y_trues = np.array([
  1,  # Alice
  0,  # Bob
  0,  # Charlie
  1,  # Diana
])

# Train our neural network!
network = OurNeuralNetwork()
network.train(data, all_y_trues)
```
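Once training finishes, we can call `feedforward` on new samples to make predictions. The two people below are hypothetical examples (their weights and heights are made up), shifted by the same 135 lb and 66 in as the training data; this assumes the code above has already been run:

```python
# Make some predictions with the trained network
person_a = np.array([-7, -3])  # 128 pounds, 63 inches
person_b = np.array([20, 2])   # 155 pounds, 68 inches
print("Person A: %.3f" % network.feedforward(person_a))  # should end up close to 1 (Female)
print("Person B: %.3f" % network.feedforward(person_b))  # should end up close to 0 (Male)
```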