Neural Network Using Backpropagation in Python





Link to github repo: https://github.com/geeksnome/machine-learning-made-easy/blob/master/backpropogation.py

Support me on Patreon: https://www.patreon.com/ajays97

Our facebook page: https://www.facebook.com/geeksnome
Our Twitter page: https://www.twitter.com/geeksnome
Our Instagram: https://www.instagram.com/geeksnome
Our Blog: http://geeksnome.com


Comment List

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    It's very bad practice to name activation-function results in terms of z. It creates confusion and slows learning.

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Very simple and excellent, thank you sir, but the learning rate and bias concepts are missing. Please give us one more that includes those.
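
    A minimal sketch of how bias terms and a learning rate could be added to a 2-3-1 network like the one in the video. All names and values here are illustrative, not taken from the repo:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 2-3-1 network with explicit biases and a learning rate.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((2, 3)), np.zeros((1, 3))
W2, b2 = rng.standard_normal((3, 1)), np.zeros((1, 1))
lr = 0.5  # learning rate: scales every weight and bias update

X = np.array([[2.0, 9.0], [1.0, 5.0], [3.0, 6.0]]) / 10.0
y = np.array([[0.92], [0.86], [0.89]])

for _ in range(1000):
    # forward pass: biases are simply added after each dot product
    a1 = sigmoid(X @ W1 + b1)
    out = sigmoid(a1 @ W2 + b2)
    # backward pass (chain rule through each sigmoid)
    out_delta = (y - out) * out * (1 - out)
    a1_delta = (out_delta @ W2.T) * a1 * (1 - a1)
    # updates, scaled by the learning rate; biases get the summed deltas
    W2 += lr * a1.T @ out_delta
    b2 += lr * out_delta.sum(axis=0, keepdims=True)
    W1 += lr * X.T @ a1_delta
    b1 += lr * a1_delta.sum(axis=0, keepdims=True)
```

    Without the learning rate, every update takes a full-sized gradient step; lr lets you shrink (or grow) the step, and the biases let each neuron shift its activation threshold independently of the inputs.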

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Is a hidden layer the same as a softmax layer?

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Why are biases not added?

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Thank you

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    It's really helpful for me. Thank you

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    A small note: the Weight 1 matrix is actually 2×3, not 3×2, since you transposed the matrix implicitly in the __init__ function when you switched the inputs and neurons. That is, for the weight matrix, the rows become columns and the columns become rows (Columns x Rows), not Rows x Columns as usual. The dot-product rule for matrices says that the number of columns (second dimension) of the first matrix must match the number of rows (first dimension) of the second matrix. In the example in the video, the input size is 2 and the weights correspond to the number of neurons in the hidden layer, so the matrix size is 2 inputs by 3 neurons, [2][3], followed by a dot product with [3][1] for the output layer. Notice that the 3 columns of the first matrix match the 3 rows of the second.

    For the output, the second rule of matrix multiplication says that the size of the result is the rows of the first matrix by the columns of the second matrix. In the example above, the output matrix will have size [2][1]: 2 rows (samples), one column (output neuron).
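
    The shape bookkeeping described above can be checked directly in NumPy. The shapes below follow the 2-input, 3-hidden-neuron, 1-output example from the comment:

```python
import numpy as np

X = np.random.rand(2, 2)   # 2 samples, 2 input features
W1 = np.random.rand(2, 3)  # 2 inputs x 3 hidden neurons
W2 = np.random.rand(3, 1)  # 3 hidden neurons x 1 output

hidden = X @ W1            # (2, 2) @ (2, 3) -> (2, 3): inner 2s match
output = hidden @ W2       # (2, 3) @ (3, 1) -> (2, 1): inner 3s match

print(hidden.shape)  # (2, 3)
print(output.shape)  # (2, 1)
```
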

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    How does np.dot(X, self.W1) work here? X is 3×2 and W1 is also 3×2.
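
    For what it's worth, np.dot raises an error when the inner dimensions don't match, which makes claims like this easy to check. If W1 really were 3×2 the call would fail; a 2×3 W1 works:

```python
import numpy as np

X = np.random.rand(3, 2)
W1_bad = np.random.rand(3, 2)   # same shape as X: inner dims 2 and 3 don't match
W1_good = np.random.rand(2, 3)  # inner dims match: (3, 2) @ (2, 3)

try:
    np.dot(X, W1_bad)
except ValueError as err:
    print("shape mismatch:", err)

print(np.dot(X, W1_good).shape)  # (3, 3)
```
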

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    This was really informative, sir, but could you try a larger sample space and make sure the code is not memorizing? Cheers.

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Thank you ❤ I knew the concept but wasn't able to code the matrices; now I am able to.

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Why didn't you optimize the loss function? You did (y - output); shouldn't it have been (y - output)**2? The theory behind the choice of loss or cost function is fascinating.
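
    One likely reason (y - output) appears without the square: if the loss is L = 0.5 * (y - output)**2, its derivative with respect to the output is -(y - output), so the un-squared error is exactly the term the gradient update needs. A small illustration (my own, not the video's code):

```python
import numpy as np

y = np.array([0.9, 0.1, 0.5])
output = np.array([0.8, 0.3, 0.5])

# squared-error loss per sample, and its gradient w.r.t. the output
loss = 0.5 * (y - output) ** 2   # what training minimizes; total ~0.025
grad = -(y - output)             # dL/d(output): the un-squared error, sign-flipped
```

    So the squared loss is what is being minimized; (y - output) is just what falls out of the chain rule when you differentiate it.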

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    The font is too small to read.

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Hello sir, could you explain where the learning rate and the biases are?

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Please give an example using a real dataset.

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Sir, my question is how to calculate weights for fake versus real news (titles).

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    When you try to scale this from one hidden layer to n hidden layers, you will realize that his explanation is as good as a dead horse.

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Never thought this would happen! :-}

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Sir, why didn't you use a learning rate?

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    It was really very helpful! But can you please explain why you evaluated output_delta separately? In some videos I have found, they set output_delta = output_error.
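
    A likely explanation: with a sigmoid output, the chain rule multiplies the raw error by the sigmoid's derivative s * (1 - s), so the delta and the error differ. Tutorials where output_delta = output_error are typically using a linear output layer (or folding the derivative in elsewhere). A sketch of the usual sigmoid pattern, with illustrative values:

```python
import numpy as np

def sigmoid_prime(s):
    # derivative of the sigmoid, written in terms of its output s
    return s * (1.0 - s)

y = np.array([[1.0], [0.0]])
output = np.array([[0.7], [0.4]])

output_error = y - output                            # raw error: how far off we are
output_delta = output_error * sigmoid_prime(output)  # error pushed back through the sigmoid
```
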

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Nice. It's really helpful for understanding backpropagation.

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Where is the git repo link?

  • GeeksNome - Hacking, Programming & Technology
    December 7, 2020

    Nice, dude.
