Neural Network For Handwritten Digits Classification | Deep Learning Tutorial 7 (Tensorflow2.0)





In this video we will build our first neural network in TensorFlow and Python for handwritten digits classification. We will first build a very simple neural network with only an input and an output layer. After that we will add a hidden layer and check how the performance of our model changes.
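For reference, the overall approach described above looks roughly like this. The layer sizes follow the video; small random arrays stand in for the MNIST download so the sketch stays self-contained (in the notebook you would use `keras.datasets.mnist.load_data()` instead):

```python
import numpy as np
from tensorflow import keras

# Stand-in for the MNIST data: 28x28 grayscale images scaled to [0, 1], labels 0-9
x_train = np.random.rand(256, 28, 28).astype("float32")
y_train = np.random.randint(0, 10, size=256)

# Flatten each 28x28 image into a 784-length vector
x_train_f = x_train.reshape(len(x_train), 28 * 28)

# Simple model: input straight to a 10-neuron output layer
simple = keras.Sequential([
    keras.layers.Dense(10, input_shape=(784,), activation="sigmoid")
])

# Same idea with one hidden layer added
hidden = keras.Sequential([
    keras.layers.Dense(100, input_shape=(784,), activation="relu"),
    keras.layers.Dense(10, activation="sigmoid"),
])

for model in (simple, hidden):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train_f, y_train, epochs=1, verbose=0)
```

On the real MNIST data, the hidden-layer variant is the one whose accuracy improvement the video measures.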

Github link for code in this tutorial: https://github.com/codebasics/py/blob/master/DeepLearningML/1_digits_recognition/digits_recognition_neural_network.ipynb

Next video: https://www.youtube.com/watch?v=icZItWxw7AI&list=PLeo1K3hjS3uu7CxAacxVndI4bE_o3BDtO&index=8

Previous video: https://www.youtube.com/watch?v=z-ZR_8BZ1wQ&list=PLeo1K3hjS3uu7CxAacxVndI4bE_o3BDtO&index=6

Deep learning playlist: https://www.youtube.com/playlist?list=PLeo1K3hjS3uu7CxAacxVndI4bE_o3BDtO

Prerequisites for this series:
1: Python tutorials (first 16 videos): https://www.youtube.com/playlist?list=PLeo1K3hjS3uv5U-Lmlnucd7gqF-3ehIh0
2: Pandas tutorials (first 8 videos): https://www.youtube.com/playlist?list=PLeo1K3hjS3uuASpe-1LjfG5f14Bnozjwy
3: Machine learning playlist (first 16 videos): https://www.youtube.com/playlist?list=PLeo1K3hjS3uvCeTYTeyfe0-rN5r8zn9rw

Website: http://codebasicshub.com/
Facebook: https://www.facebook.com/codebasicshub
Twitter: https://twitter.com/codebasicshub
Patreon: https://www.patreon.com/codebasics


Comment List

  • codebasics
    December 11, 2020

    Hi,

    When I write this code, the accuracy is very low. Can you help?

    import tensorflow as tf
    from tensorflow import keras
    import matplotlib.pyplot as plt
    %matplotlib inline
    import numpy as np
    import pandas as pd

    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train_f = x_train.reshape(len(x_train), 28*28)
    x_test_f = x_test.reshape(len(x_test), 28*28)

    model = keras.Sequential([
        keras.layers.Dense(10, input_shape=(784,), activation='sigmoid')
    ])

    model.compile(
        optimizer='adam',
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )

    model.fit(x_train_f, y_train, epochs=5)

    Epoch 1/5
    1875/1875 [==============================] - 1s 428us/step - loss: nan - accuracy: 0.0991
    Epoch 2/5
    1875/1875 [==============================] - 1s 417us/step - loss: nan - accuracy: 0.0987
    Epoch 3/5
    1875/1875 [==============================] - 1s 401us/step - loss: nan - accuracy: 0.0987
    Epoch 4/5
    1875/1875 [==============================] - 1s 407us/step - loss: nan - accuracy: 0.0987
    Epoch 5/5
    1875/1875 [==============================] - 1s 419us/step - loss: nan - accuracy: 0.0987
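The NaN loss in the code above is usually caused by feeding the raw 0–255 MNIST pixel values into the model unscaled; dividing by 255 before training normally fixes it. A minimal sketch of the fix, with a random stand-in array in place of the real download:

```python
import numpy as np

# Stand-in for x_train from keras.datasets.mnist.load_data(): uint8 pixels 0-255
x_train = np.random.randint(0, 256, size=(100, 28, 28)).astype("uint8")

# Flatten AND scale to [0, 1] before calling model.fit
x_train_f = x_train.reshape(len(x_train), 28 * 28) / 255.0
```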

  • codebasics
    December 11, 2020

    Hi Sir, thank you so much for your perfect work. When I follow your code exactly, I have a problem with the confusion matrix:
    cm = tf.math.confusion_matrix(labels=y_test, predictions=y_predicted_labels)
    If I just do this, I can't print the 2-D array. I googled it and tried the following, which works:
    sess = tf.Session()
    with sess.as_default():
        print(sess.run(cm))

    And also this:
    import seaborn as sn
    plt.figure(figsize=(10, 7))
    sn.heatmap(cm, annot=True, fmt='d')
    plt.xlabel('Predicted')
    plt.ylabel('Truth')

    doesn't work either, but the following does:
    sn.heatmap(cm.eval(session=sess), annot=True, fmt='d')

    But I really don't understand why. Could you please give some hints when convenient? Thanks a lot again.
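The `tf.Session()` workaround above suggests that code ran on TensorFlow 1.x (or with eager execution disabled). In TensorFlow 2.x, which this tutorial targets, `tf.math.confusion_matrix` returns an eager tensor that can be printed directly or converted with `.numpy()`, so no session is needed. A small self-contained check:

```python
import tensorflow as tf

# Toy labels and predictions standing in for the notebook's y_test / y_predicted_labels
y_test = [0, 1, 2, 1]
y_predicted_labels = [0, 2, 2, 1]

# Eager tensor in TF 2.x: printable directly, no tf.Session required
cm = tf.math.confusion_matrix(labels=y_test, predictions=y_predicted_labels)
cm_array = cm.numpy()  # plain NumPy array, ready for seaborn's heatmap
```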

  • codebasics
    December 11, 2020

    Thanks for the video… I just changed epochs=10 and then got loss: 0.0123 - accuracy: 0.9962.

  • codebasics
    December 11, 2020

    Hi sir… first of all, thank you a lot for these videos; they are really amazing. Sir, I have a project on radial basis function neural networks. If possible, can you make a video on that?

  • codebasics
    December 11, 2020

    None of the loss functions works except sparse categorical cross-entropy.
    Please make a video on how to choose a loss function.

    With the SGD and Nadam optimizers I found almost the same accuracy as with Adam.

  • codebasics
    December 11, 2020

    Thanks so much for the great lecture you have made.

  • codebasics
    December 11, 2020

    Sir, I have a question for you and hope you can answer: is there a way to choose the best options for our model? In machine learning we have GridSearchCV, and using it we can supply a dictionary of models and pick the best result (like you did in the ML tutorials). Is there a similar way in deep learning? I think it could save time compared with tweaking settings one by one.

  • codebasics
    December 11, 2020

    Sir, I got an error from the model.fit(x_train_flattened, y_train, epochs=5) call: ValueError: input arrays should have the same number of samples as target arrays. Found 10000 input samples and 60000 target samples.
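That ValueError means the features and labels come from different splits: 10000 is the size of the MNIST test set and 60000 the training set, so most likely the test features were passed to fit together with the training labels. A quick sanity check before training catches this, sketched here with zero arrays of MNIST's actual split sizes:

```python
import numpy as np

# Stand-in arrays with MNIST's actual split sizes
x_train_flattened = np.zeros((60000, 784), dtype="float32")
y_train = np.zeros(60000, dtype="int64")
x_test_flattened = np.zeros((10000, 784), dtype="float32")

# Features and labels passed to model.fit must have the same length
assert len(x_train_flattened) == len(y_train)  # correct pairing
assert len(x_test_flattened) != len(y_train)   # the mismatch behind the error
```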

  • codebasics
    December 11, 2020

    You're god for the Beginner !!! The way you explain is way different than anyone else… SUPER AMAZING… HIGHLY RECOMMENDED

  • codebasics
    December 11, 2020

    Love how you figured out the accuracy issue. That makes a huge difference.

  • codebasics
    December 11, 2020

    Optimizer   Loss                              Accuracy (%)
    Adadelta    sparse_categorical_crossentropy   77.66
    Adagrad     sparse_categorical_crossentropy   98.9
    Adam        sparse_categorical_crossentropy   99.7
    Adamax      sparse_categorical_crossentropy   99.95
    Ftrl        sparse_categorical_crossentropy   11.24
    Nadam       sparse_categorical_crossentropy   97.39
    RMSprop     sparse_categorical_crossentropy   98.71
    SGD         sparse_categorical_crossentropy   99.3

  • codebasics
    December 11, 2020

    How would we deal with continuous training with live data?
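One common pattern for that, not covered in this video, is incremental training: keep the compiled model around and call `fit` (or `train_on_batch`) on each new batch as it arrives, since Keras updates the existing weights rather than starting over. A hedged sketch on random stand-in batches:

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(10, input_shape=(784,), activation="sigmoid")
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Simulate a stream of live batches; each call updates the same weights in place
for _ in range(3):
    x_batch = np.random.rand(32, 784).astype("float32")
    y_batch = np.random.randint(0, 10, size=32)
    loss, acc = model.train_on_batch(x_batch, y_batch)
```

In a real deployment you would also periodically evaluate on held-out data, since training only on the most recent stream can drift the model.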

  • codebasics
    December 11, 2020

    Best teacher is here!!!!!

  • codebasics
    December 11, 2020

    No module named '_pywrap_tensorflow_internal'

    I'm getting this error while importing tensorflow in a Jupyter notebook.

  • codebasics
    December 11, 2020

    Excellent work ! Really appreciate your work !
    Please make more videos on CNN and LSTM as well . Thank you ! 🙂

  • codebasics
    December 11, 2020

    Sir, thank you for the tutorial. I got an accuracy of 99.62%.

  • codebasics
    December 11, 2020

    Hi Dhaval,

    It is a very good tutorial. I tried installing Anaconda, Keras, and TensorFlow, but I get the following error when I try to execute the script:

    ImportError: Keras requires TensorFlow 2.2 or higher. Install TensorFlow via `pip install tensorflow`

    Version details:

    Anaconda for Python 3.8.5
    Keras 2.4.3
    TensorFlow 2.3.0

    Can you please tell me how I can solve this issue?

  • codebasics
    December 11, 2020

    loss: 0.0145 - accuracy: 0.9985

    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(100, activation='relu', kernel_initializer='he_uniform'),
        keras.layers.Dense(50, activation='relu', kernel_initializer='glorot_uniform'),
        keras.layers.Dense(10, activation='sigmoid')
    ])

    model.compile(
        optimizer='adam',
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )

    model.fit(X_test, y_test, epochs=10)

  • codebasics
    December 11, 2020

    Thank you so much, Sir!

  • codebasics
    December 11, 2020

    Your videos are a gold mine of knowledge.

  • codebasics
    December 11, 2020

    Hey Dhaval, you made data science easy for me with all these wonderful playlists!! Thank you so much for all your efforts. Got accuracy: 0.1059 for loss = 'mean_squared_error'.

  • codebasics
    December 11, 2020

    Very well planned and simply explained tutorial.
    But I have a question: I have read that for multi-class classification problems, softmax activation is good, so why did you use sigmoid activation here? Thanks.
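For context on this question: sigmoid scores each output neuron independently, while softmax normalizes all ten outputs into a probability distribution summing to 1. Since each image is exactly one digit, `keras.layers.Dense(10, activation='softmax')` is the more standard choice, though taking the argmax over sigmoid outputs still tends to pick the right class, which is why the video's demo works. What softmax computes, in NumPy:

```python
import numpy as np

def softmax(z):
    # Subtract the max first for numerical stability; the result is unchanged
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)  # one probability per class, summing to 1
```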

  • codebasics
    December 11, 2020

    Hi sir, I always appreciate your work and efforts; keep it up. I have a question: why did you use the sigmoid activation function? Are there other functions we could use?

  • codebasics
    December 11, 2020

    How can I predict for X_test_flattened[0] alone, and not for the whole X_test_flattened? Please someone answer…
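On this question: `model.predict` expects a batch, so a single flattened image has to keep its batch dimension. Either index with a slice or add the axis back explicitly. A sketch with a stand-in array (the predict call is left as a comment since it needs the trained model from the notebook):

```python
import numpy as np

X_test_flattened = np.random.rand(10000, 784).astype("float32")  # stand-in

one = X_test_flattened[0]                                  # shape (784,) - predict rejects this
batch_of_one = X_test_flattened[0:1]                       # shape (1, 784)
same_thing = np.expand_dims(X_test_flattened[0], axis=0)   # also (1, 784)

# y = model.predict(batch_of_one)     # would return shape (1, 10)
# predicted_label = np.argmax(y[0])
```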
