Training a Neural Network, 1
Say we have the following measurements:
| Name | Weight (lb) | Height (in) | Gender |
| --- | --- | --- | --- |
| Alice | 133 | 65 | F |
| Bob | 160 | 72 | M |
| Charlie | 152 | 70 | M |
| Diana | 120 | 60 | F |
To make the numbers easier to work with, we shift each weight down by 135 and each height down by 66, and encode Gender numerically (1 = Female, 0 = Male):

| Name | Weight (minus 135) | Height (minus 66) | Gender |
| --- | --- | --- | --- |
| Alice | -2 | -1 | 1 |
| Bob | 25 | 6 | 0 |
| Charlie | 17 | 4 | 0 |
| Diana | -15 | -6 | 1 |
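As a quick sketch (assuming NumPy, which the loss code later in this post also uses), the shifted dataset above can be built directly from the raw measurements:

```python
import numpy as np

# Raw measurements: [weight (lb), height (in)] for Alice, Bob, Charlie, Diana
data = np.array([
    [133, 65],
    [160, 72],
    [152, 70],
    [120, 60],
])

# Shift weight by 135 and height by 66 to center the values near zero
shifted = data - np.array([135, 66])
# Each row of `shifted` is [weight minus 135, height minus 66]

# Gender labels: 1 = Female, 0 = Male
y_true = np.array([1, 0, 0, 1])
```

The variable names here (`data`, `shifted`) are just for illustration; the point is that the shift is a single elementwise subtraction.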
We’ll use the mean squared error (MSE) loss:

MSE = (1/n) Σᵢ₌₁ⁿ (y_true − y_pred)²
Let’s break this down:

- n is the number of samples, which is 4 (Alice, Bob, Charlie, Diana).
- y represents the variable being predicted, which is Gender.
- y_true is the true value of the variable (the “correct answer”). For example, y_true for Alice would be 1 (Female).
- y_pred is the predicted value of the variable. It’s whatever our network outputs.
(y_true − y_pred)² is known as the squared error. Our loss function simply takes the average over all squared errors (hence the name mean squared error). The better our predictions are, the lower our loss will be!
Better predictions = Lower loss.

Training a network = trying to minimize its loss.
An Example Loss Calculation
Let’s say our network always outputs 0. In other words, it’s confident all humans are Male 🤔. What would our loss be?
| Name | y_true | y_pred | (y_true − y_pred)² |
| --- | --- | --- | --- |
| Alice | 1 | 0 | 1 |
| Bob | 0 | 0 | 0 |
| Charlie | 0 | 0 | 0 |
| Diana | 1 | 0 | 1 |
MSE = (1/4)(1 + 0 + 0 + 1) = 0.5
Code: MSE Loss
Here’s some code to calculate loss for us:
```python
import numpy as np

def mse_loss(y_true, y_pred):
    # y_true and y_pred are numpy arrays of the same length.
    return ((y_true - y_pred) ** 2).mean()

y_true = np.array([1, 0, 0, 1])
y_pred = np.array([0, 0, 0, 0])

print(mse_loss(y_true, y_pred))  # 0.5
```
