Building a Neural Network from Scratch in Python
We’re going to use Python to build a simple 3-layer feedforward neural network to predict the next number in a sequence. We’ll go over the concepts involved, the theory, and the applications. Led by: Siraj Raval of Sirajology
https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A
Great video.
I also have a recommendation for another channel for learning neural networks, in case it suits you hehe.
https://youtu.be/vyAsO_fzNF8
Far better to learn neural networks from Sentdex. He has a channel and also a website. He gives very good and simple explanations for everything he does.
This is so bad. I don't know how these people are allowed to teach.
Excellent!
Siraj Raval plagiarized, got away with it, and even made money, I guess
https://iamtrask.github.io/2015/07/12/basic-python-network/
I feel like he doesn't give a good explanation for using dot and transpose. I took matrix algebra, but it didn't focus on implementation, and I'd love an idea of why and how to apply those concepts.
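For anyone stuck on the same dot/transpose question: the dot products carry a batch of samples forward through each weight matrix, and the transposes in the backward pass simply reverse each mapping so the shapes line up for the chain rule. A minimal sketch of just the shape bookkeeping (the names mirror the `syn0`/`syn1` code posted below in this thread, but the values here are dummies of my own, not the video's code):

```python
import numpy as np

# Shapes: 4 samples, 3 input features, 4 hidden units, 1 output.
l0 = np.ones((4, 3))       # input batch
syn0 = np.ones((3, 4))     # input -> hidden weights
syn1 = np.ones((4, 1))     # hidden -> output weights

l1 = l0.dot(syn0)          # (4,3) @ (3,4) -> (4,4): one hidden row per sample
l2 = l1.dot(syn1)          # (4,4) @ (4,1) -> (4,1): one output per sample
print(l1.shape, l2.shape)  # (4, 4) (4, 1)

# In the backward pass, transposing reverses each mapping:
l2_delta = np.ones((4, 1))        # stand-in for the output-layer delta
l1_error = l2_delta.dot(syn1.T)   # (4,1) @ (1,4) -> (4,4): back to hidden shape
grad_syn1 = l1.T.dot(l2_delta)    # (4,4) @ (4,1) -> (4,1): matches syn1's shape
print(l1_error.shape, grad_syn1.shape)  # (4, 4) (4, 1)
```

The rule of thumb: `W.T` maps errors backward through the same connection that `W` maps activations forward, and `activations.T.dot(deltas)` always produces a gradient with exactly the weight matrix's shape.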
Maybe I missed something but I'm wondering why there is no backpropagation function for an L1_error like there is for L2_error (line 40). I'm new to both Python and Neural Networks but that seems odd to me. Would you (or anyone) mind helping me understand why backpropagation is not required for L1? Thanks
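One hedged answer to the question above: the backpropagation for layer 1 is there, it just isn't labelled. The line `l1_error = l2_delta.dot(syn1.T)` is exactly the step that propagates the output error backward through `syn1`; there is no step for `l0` because `l0` is the raw input and has no incoming weights to adjust. A small sketch of one backward pass (my own illustration, using the same variable names as the code posted in this thread):

```python
import numpy as np

def nonlin(x, deriv=False):
    # sigmoid; with deriv=True, x is assumed to already be a sigmoid output
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

np.random.seed(1)
x = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])
syn0 = 2 * np.random.random((3, 4)) - 1
syn1 = 2 * np.random.random((4, 1)) - 1

l1 = nonlin(x.dot(syn0))
l2 = nonlin(l1.dot(syn1))

l2_error = y - l2                     # error measured at the output layer
l2_delta = l2_error * nonlin(l2, deriv=True)
l1_error = l2_delta.dot(syn1.T)       # <-- this IS layer 1's backpropagation:
                                      # it sends l2's delta backward through syn1
l1_delta = l1_error * nonlin(l1, deriv=True)
# The chain of deltas stops here: x is the input, not a layer with weights,
# so there is nothing further back to propagate an error to.
print(l1_error.shape)  # (4, 4): one error value per hidden unit per sample
```

In other words, every layer with incoming weights gets an error term; the input layer does not need one.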
Hey I followed along recreating the python program as he went. I double checked to make sure we typed the same things. However, my error goes down a lot slower than his. Why might this be?
'''
The code, in case someone needs it.
This is a three-layer neural network.
Python 3.8.0, NumPy 1.17.3
'''
import numpy as np

# sigmoid function; with deriv=True it returns the sigmoid's derivative,
# assuming x is already a sigmoid output
def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

x = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])  # input data
y = np.array([[0],
              [1],
              [1],
              [0]])  # classes

np.random.seed(1)  # seed so the run is reproducible

# synapses (weights)
syn0 = 2 * np.random.random((3, 4)) - 1
syn1 = 2 * np.random.random((4, 1)) - 1

# training
for j in range(60000):
    # forward pass through the layers
    l0 = x  # the input layer is just the data
    l1 = nonlin(np.dot(l0, syn0))
    l2 = nonlin(np.dot(l1, syn1))

    # backpropagation
    l2_error = y - l2
    if (j % 10000) == 0:
        print('Error: ' + str(np.mean(np.abs(l2_error))))

    # calculate deltas
    l2_delta = l2_error * nonlin(l2, deriv=True)
    l1_error = l2_delta.dot(syn1.T)
    l1_delta = l1_error * nonlin(l1, deriv=True)

    # update weights (synapses)
    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)

print('Output after training is done')
print(l2)
line 43 there are 3 opening brackets but only 2 closing brackets, yet his code runs…this is amazing.
Good training
Why would someone mixup l with 1, they’re so far apart!! Doesn’t make any sense.
Is that potato 🥔?
why is siraj so dumb in this video
All this is cool, but can someone explain what this actually means? Like, what do the x and y arrays represent?
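A hedged reading of the data in the code above: each row of `x` is one training example with three binary features, and the matching row of `y` is that example's target class. The labels happen to equal the XOR of the first two features, and the third column is constant 1, acting like a bias input:

```python
import numpy as np

x = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])               # each row = one example, 3 binary features
y = np.array([[0], [1], [1], [0]])      # each row = that example's target class

# The pattern: the label is XOR of the first two features;
# the third column is always 1 and serves as a bias term.
xor = (x[:, 0] ^ x[:, 1]).reshape(-1, 1)
print(np.array_equal(xor, y))  # True
```

XOR is the classic demo dataset here because it is not linearly separable, so a network with no hidden layer cannot learn it, while this 3-layer network can.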
I'm learning more on this. I'm just getting started; I've learned a lot of Python.
https://iamtrask.github.io/2015/07/12/basic-python-network/
NumPy isn't from scratch
Very good Video
Can i get a copy of the code please?
at least we know the first words post-singularity already
in a somber tone: “holy shit it worked”
Why is deriv x*(1-x)? Filipe from Brazil
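A possible answer: the sigmoid σ(z) = 1/(1+e^(−z)) has the derivative σ′(z) = σ(z)·(1−σ(z)). In the code above, `l1` and `l2` already store σ(z) rather than z, so calling `nonlin(l1, deriv=True)` computes the derivative directly from the activation as x*(1−x). A quick numerical check of this identity (my own sketch, not from the video):

```python
import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)   # valid only when x is already a sigmoid OUTPUT
    return 1 / (1 + np.exp(-x))

z = 0.7           # an arbitrary pre-activation value
a = nonlin(z)     # a = sigmoid(z)

# Analytic derivative via the identity: sigmoid'(z) = a * (1 - a)
analytic = nonlin(a, deriv=True)

# Numerical derivative via central finite differences
h = 1e-6
numeric = (nonlin(z + h) - nonlin(z - h)) / (2 * h)

print(abs(analytic - numeric) < 1e-8)  # True
```

The identity is also why the code never needs to keep the pre-activation z around: the activation alone is enough to compute the gradient.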
Wow this is really bad.
3:08 "Tokyo sex without a condom"
Hellllllll yeah Siraj… get that pussy bro
After racking my brain for an entire day over the logic of the dot product of synapses and inputs, and the order in which they should be multiplied, I really didn't know whether I was right or wrong. What I first understood was that when initializing the dimensions of the synapses, say syn0 for layer 1, the dimensions should be (4,3): each row denoting a neuron in layer 2, with 3 columns for the 3 input features. That made sense to me, but it doesn't produce a single value at the end of all the layers. Now I realize (thank you, I got it now) that the reverse is right, because (3,4) finalizes to a single value. Thank you SIRAJ for indirectly helping millions of amateurs like me… ;>)
Can someone have a look on my code and tell me why memory overflow happens after 3 loops?
https://paste.ofcode.org/HCPyVfbTkXNZne7biuxamV
code is ugly, explanations are wrong, and using numpy is not "from scratch"
its siraj
from basic neural network to tensorflow to rnn to cnn. this helps a lot! thanks!
RE: the first example. This is great. The code compiles, and I get the same numbers. What I'm a bit confused about is why there is no learning rate and is this applying back propagation?
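On the question above: yes, this is backpropagation via the delta rule, and the reason no learning rate appears is that the weight updates implicitly use a rate of 1.0. Here is a hedged sketch of the same network with the rate made explicit as `alpha` (a name of my own choosing, not from the video):

```python
import numpy as np

def nonlin(x, deriv=False):
    # sigmoid; with deriv=True, x is assumed to be a sigmoid output
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

np.random.seed(1)
x = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])
syn0 = 2 * np.random.random((3, 4)) - 1
syn1 = 2 * np.random.random((4, 1)) - 1

alpha = 0.5  # explicit learning rate; the original code implicitly uses 1.0
for j in range(60000):
    l1 = nonlin(x.dot(syn0))
    l2 = nonlin(l1.dot(syn1))
    l2_delta = (y - l2) * nonlin(l2, deriv=True)      # backprop: output layer
    l1_delta = l2_delta.dot(syn1.T) * nonlin(l1, deriv=True)  # backprop: hidden
    syn1 += alpha * l1.T.dot(l2_delta)                # scaled gradient steps
    syn0 += alpha * x.T.dot(l1_delta)

print(np.mean(np.abs(y - l2)) < 0.02)  # the network has fit the training set
```

Smaller values of `alpha` converge more slowly but more stably, which is presumably why omitting it (i.e. using 1.0) still works on a toy problem this small.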
Facebook group for artificial intelligence, machine learning techniques, and deep learning: "Deepmind and Machine learning"
https://www.facebook.com/groups/1293764990678452/
People talking shit about Siraj, listen: nobody is perfect. Have you ever programmed? Errors come and go, and the process takes time. It's hard to stand in front of an audience and do live programming; it's just hard, imagine it. He tried to explain, and his audience was not noobs. He at least tried, which you didn't, so stay cool, study the basics again, come back here, and see the difference. He is an awesome teacher and an inspiration to a lot of people, so instead of pointing out negatives, point out positives. That's what you're supposed to do when learning.
3:37 neural network tutorial
I was wondering why everybody trying to teach neural nets starts with XOR and ends with the same XOR! Is there anything else you can use them for? After all, there is no point in repeating the same problem over and over again. Please change the subject.
I believe this guy memorized the code and doesn't understand how it works. Look at his typos and his shallow explanation of every step.
Awesome Video! Fun and exciting and yes not perfect.
Well now, numpy isn't like os or random or time, etc. It's a package that doesn't come by default with a Python installation. So by importing it without any explanation of how to use it, where to get it, or how to install it, you make a lot of leaps here and take a lot for granted. And you truly are building on top of other people's work far more profoundly than when using default packages. In fact, not everyone uses Python, so this commentary about "abstracting everything" is just bullshit; this is a specific implementation.
WHERE MAH PEP8 @ BRO
The typing is terrible and his explanations are total crap.
YeS SiR !!!!!!!!!!!!!!!!
Yes he is siraj, because that code in the video is also on the video uploaded by siraj
How do you draw the node/layers picture based off the input and output matrices?
Audio is so BAD
If I change the x matrix, the output has a big error. Can anyone solve the problem?