Create a Simple Neural Network in Python from Scratch
In this video I'll show you how an artificial neural network works, and how you can make one yourself in Python. In the next video we'll make one that's usable, but if you want, that code can already be found on GitHub. I recommend watching at 1.5x speed, unless you're coding along. A minimal code sketch of the idea is included after the links below.
Coding begins at 2:30
Part 2: https://www.youtube.com/watch?v=Py4xvZx-A1E
Github code for full neural community: https://github.com/jonasbostoen/simple-neural-network.py
Additional reading:
♦ https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1
♦ https://iamtrask.github.io/2015/07/12/basic-python-network/
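For reference, here's a minimal sketch of the kind of single-neuron network the video builds, using the four training rows from the Medium article linked above. The variable names are my own, so they may not match the video exactly.

import numpy as np

def sigmoid(x):
    # Squash any real number into the (0, 1) range
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # x is assumed to already be a sigmoid output, so s'(z) = s(z)(1 - s(z)) = x(1 - x)
    return x * (1 - x)

# 4 training examples, 3 inputs each
training_inputs = np.array([[0, 0, 1],
                            [1, 1, 1],
                            [1, 0, 1],
                            [0, 1, 1]])
training_outputs = np.array([[0, 1, 1, 0]]).T   # shape (4, 1)

np.random.seed(1)
synaptic_weights = 2 * np.random.random((3, 1)) - 1   # random weights in (-1, 1)

for _ in range(10000):
    outputs = sigmoid(np.dot(training_inputs, synaptic_weights))                   # (4, 1)
    error = training_outputs - outputs                                             # (4, 1)
    adjustments = np.dot(training_inputs.T, error * sigmoid_derivative(outputs))   # (3, 1)
    synaptic_weights += adjustments

print("Weights after training:\n", synaptic_weights)
print("New situation [1, 0, 0]:", sigmoid(np.dot(np.array([1, 0, 0]), synaptic_weights)))

Run it and the output for the new situation should come out close to 1.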
In the next video we’re going to be making a blockchain in JavaScript, so subscribe if you’re interested in that stuff!
Very underrated channel..
Very clean! VERY NICE! 🙏😍 Great Video! 😊😊😊 thank you
Finally, free code
WHERE IS the bias?
SUPERB
Man, this was so to the point! Thanks for your efforts. Best NN basics tutorial I've found so far! Very very useful!
Why do you need to transpose the arrays?
Hey guys, I have a problem when I run the code at line 34: the synaptic_weights operand with shape (3, 1) doesn't match the np.dot result's shape (3, 4). Could anyone help me?
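In case it helps anyone hitting the same shape errors: the weight update has to come out as a (3, 1) array to match synaptic_weights, which is why the input layer is transposed before the dot product. A minimal shape check, assuming the same 4x3 training matrix as in the video:

import numpy as np

input_layer = np.array([[0, 0, 1],
                        [1, 1, 1],
                        [1, 0, 1],
                        [0, 1, 1]])              # shape (4, 3): 4 examples, 3 inputs
per_example_term = np.random.random((4, 1))      # stand-in for error * sigmoid_derivative(outputs)

print(np.dot(input_layer.T, per_example_term).shape)   # (3, 1) -- matches synaptic_weights
# np.dot(input_layer, per_example_term) would raise: shapes (4, 3) and (4, 1) are not aligned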
Please show me code for a system that diagnoses computer faults.
Hello. What is the name of the compiler you are using to enter the code?
does anyone know where the T comes from on the line with the synaptic weights?
Why make computation-heavy algorithms in an interpreted language like Python? Are you a masochist who wants to wait years for any reasonably sized NN to learn something?
I can't quite get what you mean by a 3-by-1 matrix for the synaptic weights. In the data you presented at the beginning there are 4 training examples, but 3 inputs. I don't understand the difference between training examples (4) and inputs (3). Thanks in advance.
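In case it helps: each row of the training matrix is one training example and each column is one input, and there is one weight per input (not per example), which is where the 3-by-1 weight matrix comes from. A quick way to see the shapes, assuming the rows from the linked article:

import numpy as np

training_inputs = np.array([[0, 0, 1],    # example 1
                            [1, 1, 1],    # example 2
                            [1, 0, 1],    # example 3
                            [0, 1, 1]])   # example 4
synaptic_weights = np.random.random((3, 1))   # one weight per input column

print(training_inputs.shape)    # (4, 3): 4 training examples x 3 inputs
print(synaptic_weights.shape)   # (3, 1): 3 weights, one per input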
This video is 100% gold, thank you !
The derivative of the sigmoid you have used is wrong!
s'(x) = s(x)(1 - s(x)), not s'(x) = x(1 - x)
https://math.stackexchange.com/questions/78575/derivative-of-sigmoid-function-sigma-x-frac11e-x
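For what it's worth, the video's version is probably fine: if I'm reading the code right, the function is applied to values that have already been passed through the sigmoid, so x(1 - x) is just s(z)(1 - s(z)) with x standing for s(z). A quick numerical check (function names are mine):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_derivative_from_output(x):
    # Only valid when x is already a sigmoid output, i.e. x = sigmoid(z)
    return x * (1 - x)

z = np.linspace(-5, 5, 11)
exact = sigmoid(z) * (1 - sigmoid(z))                      # s'(z) by the chain rule
shortcut = sigmoid_derivative_from_output(sigmoid(z))      # the video-style shortcut
print(np.allclose(exact, shortcut))                        # True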
ValueError: shapes (4,1) and (3,1) not aligned: 1 (dim 1) != 3 (dim 0). How did you do that dot product? It is not possible.
you are so cool. how do you learn different things so fast?
Excellent explanation!!!!!! Thank you very much
Very good explanation, but I didn't understand at all the math that produces the 3 weight adjustments from a three-input array and the errors.
Output = array[1[1]].value
Lol just kidding. This was a great video and I understood a ton
1:20 We need a little math. Bruh no pleaseee math anhhhhhhhhh
Hello people, unfortunately I didn't get it, but I want to tell you that somebody has stolen your video:
https://www.youtube.com/watch?v=WFYxpi3O950&ab_channel=%D0%A5%D0%B0%D1%83%D0%B4%D0%B8%D0%A5%D0%BE%E2%84%A2-%D0%9F%D1%80%D0%BE%D1%81%D1%82%D0%BE%D0%BE%D0%BC%D0%B8%D1%80%D0%B5IT%21
Hello from Russia
But your derivative is wrong. The derivative of a sigmoid function s(x) is equal to s'(x) = s(x)(1 – s(x)).
What does ø mean?
thank you
I don't really understand anything in this video. When I try to change anything, I get an error. What should I do?
How did you manage to get your updated weights matrix to come out as (3, 1), when the dot product of the input layer and the adjustments is (3, 4)? I have the exact same code written (I've checked like 100 times) and this is the error I've gotten every time.
Great Stuff, Keep it up !!
My friend, your explanation in 15 minutes gave more clarity to me than hours of crash course tutorials online. So simple and well explained. Awesome stuff my man!
Please help: why is it that when I change the 'rule' for outputting a 1 so that all inputs need to be 1, I get weird outputs like this?
Outputs:
[[4.48411213e-07]
[9.91498793e-01]
[7.17983933e-03]
[7.17983933e-03]]
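Those numbers may be less weird than they look: rounded, they read as [0, 1, 0, 0], which is exactly what the "all inputs must be 1" rule gives for the four training rows (only the second row is all ones), assuming the same rows as in the linked article. The sigmoid just never reaches exactly 0 or 1. A quick check:

import numpy as np

outputs = np.array([[4.48411213e-07],
                    [9.91498793e-01],
                    [7.17983933e-03],
                    [7.17983933e-03]])
print(np.round(outputs).astype(int).ravel())   # [0 1 0 0]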
It is like your voice is putting your face to sleep. ;-;
Can anyone explain what transpose does?
9:37
Why did you write x * (1 – x) as the sigmoid derivative?
Isn't it sigmoid(x)(1 – sigmoid(x)) ?
Goddamn this is good.
Thanks for explaining .. Nice job!
Nice work! Finally found someone that can teach the way I can understand it..
I subscribed and look forward to watching all your videos!
I keep getting this error. I copied the code verbatim so i don't know what it is.
"Module 'numpy.random' has no 'random' member"
Hi guys, can any one of you tell me where the learning rate fits into the formula he used?
I know the synaptic weights are usually updated as W -= learning_rate * gradient, but he isn't using a learning rate anywhere. It's a bit confusing for me, as most other sources update the weights that way. Any help is appreciated; thanks in advance.
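As far as I can tell, the video's update is effectively W += adjustments, which is the same as using a learning rate of 1. If you want an explicit learning rate, it slots in like this; the variable names are mine and the random arrays below are just stand-ins for the real values inside the training loop:

import numpy as np

learning_rate = 0.5   # 1.0 reproduces the plain "weights += adjustments" behaviour

synaptic_weights = 2 * np.random.random((3, 1)) - 1
training_inputs = np.random.randint(0, 2, (4, 3))   # stand-in for the real training matrix
error_term = np.random.random((4, 1))               # stand-in for error * sigmoid_derivative(outputs)

adjustments = np.dot(training_inputs.T, error_term)   # (3, 1), same shape as the weights
synaptic_weights += learning_rate * adjustments       # scaled update instead of += adjustments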
Am I the only one that does not understand the math?
Um, aren't you just basically generating a formula such that x*w1 + y*w2 + z*w3 = x?
So wouldn't it obviously just eventually make w1 = 1, w2 = 0, w3 = 0?
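Roughly, yes: the training data is consistent with "output = first input", but because the prediction goes through a sigmoid the weights don't converge to literally (1, 0, 0); training instead pushes the first weight strongly positive so that sigmoid(w·x) lands near 0 or 1. A compact way to see it, reusing the same setup as the sketch near the top (my own variable names):

import numpy as np

sigmoid = lambda x: 1 / (1 + np.exp(-x))

X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T

np.random.seed(1)
w = 2 * np.random.random((3, 1)) - 1
for _ in range(10000):
    out = sigmoid(X @ w)
    w += X.T @ ((y - out) * out * (1 - out))   # gradient-style update, as in the video

print(w.ravel())                # the first weight dominates; it is not simply (1, 0, 0)
print(sigmoid(X @ w).ravel())   # predictions still land close to the targets 0, 1, 1, 0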
Why do you transpose the input_layer?
How does the sigmoid derivative work? I don't understand the math behind it.
Thanks!
Hey, you can't say the new situation's output is 1 just because the third input is 1 in all four scenarios. Let's say:
Input 1 = A
Input 2 = B
Input 3 = C
If the boolean logic is A AND C, the first four outputs are okay, but the new situation's output should be zero instead of 1.
(This question is not really about neural networks, but I'm a little confused about how you decided the output for the new situation.)
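Good catch, and I think the honest answer is that the training data can't tell those two rules apart: the third input is 1 in every training row, so "output = input 1" and "output = input 1 AND input 3" label all four rows identically, and the network just happens to generalise one way. A quick check, assuming the rows from the linked article:

training_rows = [(0, 0, 1), (1, 1, 1), (1, 0, 1), (0, 1, 1)]
for a, b, c in training_rows:
    print((a, b, c), "first-input rule:", a, " A AND C rule:", a & c)   # identical on every row

a, b, c = (1, 0, 0)   # the new situation
print((a, b, c), "first-input rule:", a, " A AND C rule:", a & c)       # 1 vs 0 -- the rules disagree here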
How did you specify the weights? Please explain the logic.
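If it's the same setup as the linked article, the starting weights aren't specified by hand at all: they're random values in (-1, 1) from a seeded generator, and training then adjusts them. A minimal sketch of that initialisation:

import numpy as np

np.random.seed(1)                                     # same seed -> same "random" starting point
synaptic_weights = 2 * np.random.random((3, 1)) - 1   # uniform in (-1, 1), one weight per input
print(synaptic_weights)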
This guy forgot Euler's number in the sigmoid function. SMH.
What's the name of the theme you're using?
I would've appreciated it if your accent were clearer.