Neural Networks
Neural Networks consist of nodes ("neurons") arranged in layers and connected to one another (despite the name, artificial neural networks have little in common with actual human brains).
Neural Networks usually consist of at least three layers. The first layer is always the input layer; the number of nodes in this layer is the number of input arguments the network takes. The last layer is the output layer, and the number of nodes there is the number of values the network produces as output.
There is always at least one "hidden" layer between the input layer and the output layer. The hidden layers are where the intermediate computation on the values happens.
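As a rough sketch of what the layer structure above might look like in code (the 3-4-2 architecture here is a made-up example, not anything standard):

```python
import random

# Hypothetical architecture: 3 inputs, one hidden layer of 4 neurons, 2 outputs.
layer_sizes = [3, 4, 2]

# One weight matrix and one bias vector per connection between adjacent layers.
weights = [
    [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
]
biases = [[0.0] * n_out for n_out in layer_sizes[1:]]

print(len(weights))  # 2 sets of connections: input->hidden, hidden->output
```

The input layer itself holds no weights, which is why a network with three layers only needs two weight matrices.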
The connections between the neurons have "weights" that multiply the values passing through them. The neurons themselves often have "biases", values that are added to the neuron's input.
After the values have been scaled by the weights and shifted by the biases, each neuron applies an "activation function" to further change the value.
A popular activation function is ReLU, which changes negative values to zero.
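A single neuron's computation, as described above, can be sketched like this (the input values, weights, and bias are made-up numbers):

```python
def relu(x):
    # ReLU: negative values become zero, positive values pass through.
    return max(0.0, x)

def neuron(inputs, weights, bias):
    # Each input is multiplied by its connection's weight, the bias is
    # added on top, and the activation function is applied to the sum.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return relu(total)

print(relu(-2.5))                             # 0.0
print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))  # relu(0.5 - 0.5 + 0.1) = 0.1
```

A full layer is just many of these neurons sharing the same inputs, each with its own weights and bias.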
You can "train" a Neural Network to make it's outputs more accurate by adjusting it's weights and biases. Many people use Back Propagation to achieve this.
To train a Neural Network, you need a dataset: inputs paired with their expected outputs. Training methods then try to bring the actual outputs as close as possible to the expected ones.
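A minimal sketch of that idea, assuming a single neuron with no activation function and plain gradient descent rather than full backpropagation (the dataset and learning rate are made up):

```python
# Hypothetical tiny dataset: inputs paired with expected outputs (here, y = 2x).
dataset = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

weight, bias = 0.0, 0.0   # start untrained
learning_rate = 0.05

for _ in range(200):
    for x, expected in dataset:
        actual = weight * x + bias   # forward pass
        error = actual - expected    # how far off the network is
        # Gradient descent step: nudge weight and bias to shrink the error.
        weight -= learning_rate * error * x
        bias -= learning_rate * error

print(round(weight, 2), round(bias, 2))  # close to 2.0 and 0.0
```

Backpropagation generalises this: it computes the same kind of error gradient for every weight and bias in every layer, not just one neuron.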
Afterwards, you would "test" your neural network on new, unseen data from a new dataset.