Combining Neurons into a Neural Network
A neural network is nothing more than a bunch of neurons connected together. Here’s what a simple neural network might look like:
This network has 2 inputs, a hidden layer with 2 neurons (h1 and h2), and an output layer with 1 neuron (o1). A hidden layer is any layer between the input (first) layer and the output (last) layer; a network can have more than one. Notice that the inputs for o1 are the outputs from h1 and h2 — that’s what makes this a network.
An Example: Feedforward
Let’s use the network pictured above and assume all neurons have the same weights w = [0, 1], the same bias b = 0, and the same sigmoid activation function. Let h1, h2, and o1 denote the outputs of the neurons they represent.
What happens if we pass in the input x = [2, 3]?
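Working through the feedforward computation with w = [0, 1], b = 0, and the sigmoid function f:

$$
h_1 = h_2 = f(w \cdot x + b) = f((0 \times 2) + (1 \times 3) + 0) = f(3) = 0.9526
$$

$$
o_1 = f(w \cdot [h_1, h_2] + b) = f((0 \times h_1) + (1 \times h_2) + 0) = f(0.9526) = 0.7216
$$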
The output of the neural network for the input x = [2, 3] is 0.7216. Pretty simple, right?
A neural network can have any number of layers with any number of neurons in those layers. The basic idea stays the same: feed the input(s) forward through the neurons in the network to get the output(s) at the end. For simplicity, we’ll keep using the network pictured above for the rest of this post.
Coding a Neural Network: Feedforward
Let’s implement feedforward for our neural network. Here’s the image of the network again for reference:
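A minimal sketch of what this might look like in Python, assuming a Neuron class and sigmoid function like those used for the single-neuron example earlier (included here so the snippet runs on its own):

```python
import numpy as np

def sigmoid(x):
  # Sigmoid activation function: f(x) = 1 / (1 + e^(-x))
  return 1 / (1 + np.exp(-x))

class Neuron:
  # Assumed to mirror the single-neuron class from earlier in the post
  def __init__(self, weights, bias):
    self.weights = weights
    self.bias = bias

  def feedforward(self, inputs):
    # Weight the inputs, add the bias, then apply the activation function
    total = np.dot(self.weights, inputs) + self.bias
    return sigmoid(total)

class OurNeuralNetwork:
  '''
  A neural network with:
    - 2 inputs
    - a hidden layer with 2 neurons (h1, h2)
    - an output layer with 1 neuron (o1)
  Every neuron has the same weights and bias:
    - w = [0, 1]
    - b = 0
  '''
  def __init__(self):
    weights = np.array([0, 1])
    bias = 0

    self.h1 = Neuron(weights, bias)
    self.h2 = Neuron(weights, bias)
    self.o1 = Neuron(weights, bias)

  def feedforward(self, x):
    out_h1 = self.h1.feedforward(x)
    out_h2 = self.h2.feedforward(x)

    # The inputs for o1 are the outputs from h1 and h2
    out_o1 = self.o1.feedforward(np.array([out_h1, out_h2]))
    return out_o1

network = OurNeuralNetwork()
x = np.array([2, 3])
print(network.feedforward(x))  # ≈ 0.7216
```

Running this prints roughly 0.7216, matching the result we computed by hand above.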