Building our Neural Network – Deep Learning and Neural Networks with Python and Pytorch p.3





In this tutorial, we’re going to focus on actually creating a neural network

Text-based tutorials and sample code: https://pythonprogramming.net/building-deep-learning-neural-network-pytorch/
Linode Cloud GPUs $20 credit: https://linode.com/sentdex

Channel membership: https://www.youtube.com/channel/UCfzlCWGWYyIQ0aLC5w48gBQ/join
Discord: https://discord.gg/sentdex
Support the content: https://pythonprogramming.net/support-donate/
Twitter: https://twitter.com/sentdex
Instagram: https://instagram.com/sentdex
Facebook: https://www.facebook.com/pythonprogramming.net/
Twitch: https://www.twitch.tv/sentdex


Comment List

  • sentdex
    November 16, 2020

    A bit confused by the output; I thought it should be values between 0 and 1 that add up to 1.

  • sentdex
    November 16, 2020

    Appreciate your effort, bro.
    I get an error while coding that says name 'self' is not defined when I execute these lines of code: X = torch.rand(28, 28) and X = X.view(-1, 28*28). Could you help me please?

  • sentdex
    November 16, 2020

    This is exactly what I wanted… nice presentation, sir.

  • sentdex
    November 16, 2020

    Snowden is the best teacher of DL on YouTube.

  • sentdex
    November 16, 2020

    10:25 idk if someone already said this in the comments, but if you just don't use an activation function, the NN just becomes a huge linear regression. Mathematically, having multiple fc layers with no activation function is equivalent to having one HUGE fc layer.
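
    A minimal sketch (plain PyTorch, small layer sizes chosen only for illustration) showing that stacked Linear layers with no activation collapse into a single linear map:

    import torch
    import torch.nn as nn

    fc1 = nn.Linear(4, 8, bias=False)
    fc2 = nn.Linear(8, 3, bias=False)

    x = torch.rand(1, 4)
    stacked = fc2(fc1(x))                          # two "layers", no activation
    collapsed = x @ (fc2.weight @ fc1.weight).t()  # one equivalent linear map
    print(torch.allclose(stacked, collapsed))      # True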

  • sentdex
    November 16, 2020

    As far as I know, the 784 comes from flattening the image: the image resolution is 28x28, and 28*28 = 784, so we get a [1, 784] tensor in this case. This value will vary according to your dataset's input image resolution.
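
    A quick sketch of that flattening step (shapes only, nothing beyond standard PyTorch assumed):

    import torch

    img = torch.rand(28, 28)       # one 28x28 image
    flat = img.view(-1, 28 * 28)   # flatten to a batch of one 784-long vector
    print(flat.shape)              # torch.Size([1, 784])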

  • sentdex
    November 16, 2020

    I don't get what nn.Linear(64, 64) returns, object-wise. Is fc1 an instance of a class now? If so, why does it behave as a function later, with self.fc1(x)?
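
    For what it's worth, a small sketch of what is going on: nn.Linear returns an nn.Module instance, and modules define __call__, which (after running any hooks) dispatches to forward(), so the instance can be used like a function:

    import torch
    import torch.nn as nn

    fc1 = nn.Linear(64, 64)
    print(isinstance(fc1, nn.Module))   # True: an object holding weight and bias tensors
    x = torch.rand(1, 64)
    out = fc1(x)                        # fc1.__call__(x), which calls fc1.forward(x)
    print(out.shape)                    # torch.Size([1, 64])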

  • sentdex
    November 16, 2020

    Yes, dim=1. You are putting out a flat list of digits, right? dim=2 for a 2-dimensional image? Isn't it that simple?

  • sentdex
    November 16, 2020

    I didn't understand why the output of each fully connected layer is 64. Can the value be anything, or must it be 64? Does our prediction change with this value? I'm confused!

  • sentdex
    November 16, 2020

    I'm just 17 and I understand everything you say! You are one of the most amazing teachers here on YouTube, thanks!

  • sentdex
    November 16, 2020

    The way you are showing the loss and concluding that it is "decreasing" is merely lucky: you are only showing the loss of that particular batch. In this context, the loss over the whole epoch should be considered:

    epochs = 10
    for epoch in range(epochs):
        epoch_loss = 0.0
        n_samples = 0
        for batch in trainset:
            # the inputs
            x, y = batch
            # zero the parameter gradients
            net.zero_grad()
            # forward pass
            outputs = net(x.view(-1, 28 * 28))
            # for one-hot [0, 1, 0, 0] targets use mean squared error; for class-index targets use nll_loss
            loss = F.nll_loss(outputs, y)
            # back-propagate the loss
            loss.backward()
            # adjust the weights
            optimizer.step()
            # accumulate the loss, weighted by batch size
            epoch_loss += outputs.shape[0] * loss.item()
            n_samples += outputs.shape[0]
        print("Epoch loss:", epoch_loss / n_samples)

  • sentdex
    November 16, 2020

    The little off-shoot about __init__ and super was the clearest and most concise explanation I've seen so far.

  • sentdex
    November 16, 2020

    NotImplementedError Traceback (most recent call last)
    <ipython-input-80-be58ba250973> in <module>
    ----> 1 output = net(x)

  • sentdex
    November 16, 2020

    Actually, if you put dim=0 you can pass the neural net a 28*28 tensor and it will work.

  • sentdex
    November 16, 2020

    Nobody:

    sentdex at 8:28 : "For our For.. or For our Feed Forward…"

    sighs

    "That's a lot of F-words."

  • sentdex
    November 16, 2020

    What! It passes for me even without the view thing there..
    X = torch.rand((28*28));
    net = Net()
    net(X)
    tensor([[-0.1178, 0.2787, -0.2603, 0.0700, 0.2106, 0.0351, 0.0335, -0.0772,
    -0.0831, 0.2576]], grad_fn=<AddmmBackward>)

  • sentdex
    November 16, 2020

    thanks

  • sentdex
    November 16, 2020

    16:40 dim=1 may be explained as "which axis contains all the RVs of the discrete distribution".
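
    A small sketch of that idea (standard PyTorch; batch size and class count picked arbitrarily): with outputs of shape [batch, 10], dim=1 is the axis holding the 10 class scores, so normalizing along dim=1 makes each row sum to 1. The network in the video returns log_softmax, which is why the raw outputs are negative log-probabilities rather than values that sum to 1; exponentiate to recover probabilities.

    import torch
    import torch.nn.functional as F

    logits = torch.rand(2, 10)                 # batch of 2, 10 class scores each
    probs = F.softmax(logits, dim=1)
    print(probs.sum(dim=1))                    # tensor([1., 1.]) up to rounding
    log_probs = F.log_softmax(logits, dim=1)
    print(log_probs.exp().sum(dim=1))          # also tensor([1., 1.])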

  • sentdex
    November 16, 2020

    Great videos! Thanks!

  • sentdex
    November 16, 2020

    Thank you so much! I spent hours in the PyTorch docs, and there were lots of things where I just didn't understand where they came from. Thanks for clearing them up for me. Awesome teacher.

  • sentdex
    November 16, 2020

    Why did you use shuffle=False in the testset?

  • sentdex
    November 16, 2020

    Why does the output not sum to 1 here?

  • sentdex
    November 16, 2020

    Very good tutorial, thanks!!!!!

  • sentdex
    November 16, 2020

    I have a doubt: why are we using an activation function while passing data from the input layer to the first hidden layer?

  • sentdex
    November 16, 2020

    Hello everyone. I am new to deep learning and to python in some way, so I need some guidance from this lovely community. Can someone explain the hierarchy of the PyTorch framework? I am confused about what torchvision is in relation to PyTorch as well as other modules. Please help or refer me to some useful resource. Thanks

  • sentdex
    November 16, 2020

    dis dude dope

  • sentdex
    November 16, 2020

    Man… you are REALLY REALLY BAD at explaining things… you take concepts for granted… the video is messy with your own opinion comments… out-of-context thoughts… please do some editing… that really distracts people watching.
    These are not simple things to take in, especially with duck-typed Python, and your messy explanations make all this extremely hard to understand… people get more confused watching this.

  • sentdex
    November 16, 2020

    The __init__() method of the superclass already calls the forward method that you've created?

  • sentdex
    November 16, 2020

    Wow these are really good!

  • sentdex
    November 16, 2020

    12:40 Shouldn't you be using the activation function on layers 2, 3, and 4? I thought the input is supposed to feed into the first hidden layer, multiplied by the weights, and then be sent through the activation function, which acts as the output for that neuron.
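
    For reference, a minimal sketch of the pattern used in this series (layer sizes as in the video, but the exact code here is a reconstruction): the activation is applied to the output of each hidden layer, and the output layer gets log_softmax instead of relu.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(28 * 28, 64)
            self.fc2 = nn.Linear(64, 64)
            self.fc3 = nn.Linear(64, 64)
            self.fc4 = nn.Linear(64, 10)

        def forward(self, x):
            x = F.relu(self.fc1(x))   # activation after each hidden layer
            x = F.relu(self.fc2(x))
            x = F.relu(self.fc3(x))
            x = self.fc4(x)           # no relu on the output layer
            return F.log_softmax(x, dim=1)

    net = Net()
    print(net(torch.rand(1, 28 * 28)).shape)   # torch.Size([1, 10])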

  • sentdex
    November 16, 2020

    You are amazing at explaining.

  • sentdex
    November 16, 2020

    Dim=1 was pretty neatly explained actually. I haven't come across a clearer explanation than this. "What we want to sum to 1" is gonna stick with me 🙂

  • sentdex
    November 16, 2020

    "it's a lot of f-words" 😂😂

  • sentdex
    November 16, 2020

    Can you please tell me about super().__init__()?

  • sentdex
    November 16, 2020

    Around 3:10, you address super (parent class inheritance). However, here https://github.com/pytorch/examples/blob/master/reinforcement_learning/reinforce.py and here https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html, PyTorch recommends super(Net, self).__init__(), whereas you leave super() empty. What is the effective difference between going with super(Net, self).__init__() vs super().__init__()?
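
    For what it's worth, in Python 3 the two spellings are equivalent inside a class body: the zero-argument super() is shorthand for the explicit form, which is only required in Python 2 or in unusual cases (e.g. calling super outside a method). A tiny sketch:

    import torch.nn as nn

    class Net(nn.Module):
        def __init__(self):
            super().__init__()                   # Python 3 zero-argument form

    class NetExplicit(nn.Module):
        def __init__(self):
            super(NetExplicit, self).__init__()  # equivalent explicit form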
