Recurrent Neural Networks (RNN) – Deep Learning w/ Python, TensorFlow & Keras p.7
In this part we’re going to be covering recurrent neural networks. The idea of a recurrent neural network is that sequence and order matter. For many operations, they definitely do.
Text tutorials and sample code: https://pythonprogramming.net/recurrent-neural-network-deep-learning-python-tensorflow-keras/
Discord: https://discord.gg/sentdex
Support the content: https://pythonprogramming.net/support-donate/
Twitter: https://twitter.com/sentdex
Facebook: https://www.facebook.com/pythonprogramming.net/
Twitch: https://www.twitch.tv/sentdex
G+: https://plus.google.com/+sentdex
I want to do anomaly detection classification using an RNN in tf.keras, but I have a problem: the accuracy increases while val_accuracy does not change and just stays constant at 50%. My complete code is available on Google Colab: https://colab.research.google.com/drive/1saoNuCxj08JCxZ_7taIjhp8sJEIV-T5U?usp=sharing
Can we have the code?
Either way
I really don't get it: when I run the code, each epoch shows 1875, but when you run it, it shows 60K.
Hey man, I like your video, but actually I love your text editor!! What is it?
@sentdex Question. Your code works beautifully. However, if I try to create the model by passing Sequential a list of the layers (i.e. `[LSTM, Dropout, LSTM, …]`) instead of using `model.add`, I get an input shape error. Do you know why? (For reference, I have this on GitHub under Issue #42986.)
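For anyone hitting the same thing, here is a minimal sketch comparing the two construction styles (layer sizes and the exact stack are illustrative, not necessarily the video's model). The list passed to Sequential must contain layer *instances*, not the bare classes, and the first layer still needs input_shape so Keras can infer the shapes.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

# Style 1: building with model.add, as in the video.
model_a = Sequential()
model_a.add(LSTM(128, input_shape=(28, 28), return_sequences=True))
model_a.add(Dropout(0.2))
model_a.add(LSTM(128))
model_a.add(Dropout(0.2))
model_a.add(Dense(10, activation='softmax'))

# Style 2: passing a list of layer *instances* to Sequential.
# Note they are instantiated (LSTM(...)), not bare classes,
# and the first layer still carries input_shape.
model_b = Sequential([
    LSTM(128, input_shape=(28, 28), return_sequences=True),
    Dropout(0.2),
    LSTM(128),
    Dropout(0.2),
    Dense(10, activation='softmax'),
])
```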
LOVE your work sentdex!! Long time viewer. Just gotta say though…what's up with that coffee mug?! I busted up when I saw that! No pause, no mention at all. Just hold on guys, I'm gonna take a sip here…hahaha! I want that mug. You are epic!
How do you set up an environment for deep learning in the Sublime Text editor?
What is MNIST?? Somebody please answer.
Can you help me with the differences between a conventional LSTM and a Random Connectivity LSTM (RCLSTM) when it comes to code?
How do you combine a CNN and an RNN for a detection problem?? Anybody, help…
You claim that RNNs are for time series but choose a normal supervised learning problem (the MNIST dataset). What, then, is the difference between an LSTM and other deep NN models such as CNNs? I watched this video to clear up this confusion and I am now even more confused. Aren't RNNs supposed to be for time series? I mean real time series. I fail to see how a 28×28-dimensional input is a time series. It's spatial data, not temporal.
Thank you for making this video basic.
I am very new to TensorFlow and Keras in general; I just learned them in the last two weeks.
Thank you.
Why do we use x_train.shape[1:] for the input shape? What does [1:] mean when it comes to shape?
I have a question. If the input is 28 by 28, I am guessing that it will be flattened to (784, 1), but the cells are 128. How does that work? Are we putting x1, x2, x3, …, x128 in at the same time?
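Regarding the two shape questions above, here is a small sketch (assuming the standard Keras MNIST loader): x_train.shape is (60000, 28, 28), so x_train.shape[1:] drops the sample dimension and leaves (28, 28), i.e. 28 timesteps of 28 features. The image is not flattened to (784, 1); the LSTM reads one 28-value row per timestep, and 128 is just the number of hidden units, independent of the input size.

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

print(x_train.shape)      # (60000, 28, 28): samples, timesteps, features
print(x_train.shape[1:])  # (28, 28): everything except the sample dimension

# An LSTM with 128 units maps each 28-feature row (one timestep)
# into a 128-dimensional hidden state; 128 is unrelated to 28 x 28.
lstm = tf.keras.layers.LSTM(128, input_shape=x_train.shape[1:])
```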
You sound a bit like Edward Snowden. Very good explanation (and drawings)!
Great work, thanks; could you post an example of an RNN for NLP?
Most people probably know this, but when he normalizes the data and grabs "255.0" out of the sky, it's because the MNIST dataset gives a 28×28 array for each digit in grayscale, assigning each pixel a shade from 0 to 255: 0 being black space and 255 being white. If you print(x_train[1]) you can tell it is a '0', and you can prove that by printing y_train[1]. Dividing all pixels by 255 scales all image data to between 0 and 1.
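A minimal sketch of the normalization described in that comment (assuming the standard Keras MNIST loader):

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Pixels are 8-bit grayscale values in [0, 255]; dividing by 255.0
# rescales every image into [0.0, 1.0].
x_train = x_train / 255.0
x_test = x_test / 255.0

print(x_train.min(), x_train.max())  # 0.0 1.0
```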
Hi, would you please guide me?
What should I do when I get these errors:
2020-05-29 14:11:57.862761: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-05-29 14:11:58.365592: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-05-29 14:11:59.372621: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-05-29 14:11:59.372700: F tensorflow/core/kernels/cudnn_rnn_ops.cc:1624] Check failed: stream->parent()->GetRnnAlgorithms(&algorithms)
Have I done anything wrong with regard to my code, or is this issue due to the installation of TensorFlow, etc.?
Moreover, I had run my model with LSTM about 1000 times before today, but when I used CuDNNLSTM it didn't work and gave these errors.
Thanks, it was a great video.
I'm a little confused… When you run yours, it shows x/60000 on the left, but I only get 1875. Does anyone know why? I did it exactly the same as he did. Also, with the newer version you don't have to add the custom layer like he did for the GPU.
I am trying this, but I am getting this type of warning and I am unable to use CuDNNLSTM:
WARNING:tensorflow:Layer lstm will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
Please Help!!
Are there any updates now for doing all the same stuff in a better way? You made this video quite a while back!
Can I use this for Image Classification?
Hello sentdex,
I'm working on my Bachelor's final project and would like to implement an RNN to train on some sequential data. I've seen your videos on the topic, but I still don't know how to organize the data. Could you help me with that?
Great videos by the way, keep on going like this!
You could simply say "use Colab", but well done, everything is clear to me.
I have a variable-length input, which is a signal. How can I handle a variable input shape without using padding all the time?
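One common way to handle this in Keras (a sketch, not taken from the video; the layer sizes and NUM_FEATURES are assumptions) is to declare the timestep dimension as None, so each batch may have a different length. Within a single batch the sequences still need equal length, so you either group sequences of the same length or train with a batch size of 1.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

NUM_FEATURES = 1  # e.g. a 1-D signal; adjust to your data

model = Sequential([
    # None in the timestep position accepts sequences of any length.
    LSTM(64, input_shape=(None, NUM_FEATURES)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Different calls can use different sequence lengths without padding,
# as long as all sequences *within* one batch share a length.
short_batch = np.random.rand(8, 100, NUM_FEATURES)
long_batch = np.random.rand(8, 500, NUM_FEATURES)
labels = np.random.randint(0, 2, size=(8, 1))
model.train_on_batch(short_batch, labels)
model.train_on_batch(long_batch, labels)
```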
Note: since TensorFlow 2.0, it will automatically use the cuDNN version if you specify no activation function.
In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage cuDNN kernels by default when a GPU is available, but with conditions: you must use the default activation function, 'tanh'.
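A sketch of the condition those two comments describe (based on the TF 2.x LSTM behaviour they mention; the layer sizes are illustrative): with the default arguments the layer is eligible for the fused cuDNN kernel on a GPU, while changing the activation forces the generic kernel and produces the "will not use cuDNN kernel" warning quoted earlier.

```python
from tensorflow.keras.layers import LSTM

# Eligible for the cuDNN kernel on a GPU: keep the defaults
# (activation='tanh', recurrent_activation='sigmoid', unroll=False, ...).
fast_lstm = LSTM(128, input_shape=(28, 28))

# Not eligible: a non-default activation makes TensorFlow fall back to
# the generic kernel and print the warning quoted above.
slow_lstm = LSTM(128, input_shape=(28, 28), activation='relu')
```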
I have a small doubt: can we use recurrent neural networks on tabular data, such as a defect prediction classification task?
So the piece of code that classifies this as an RNN is return_sequences=True, am I right?
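For what it's worth, the recurrent layer itself (LSTM) is what makes the model an RNN; return_sequences only controls whether a recurrent layer outputs its hidden state at every timestep (needed when another recurrent layer follows) or only the final state (typical before a Dense head). A small sketch with illustrative sizes:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    # return_sequences=True: output shape (batch, 28, 128), one hidden
    # state per timestep, so another recurrent layer can follow.
    LSTM(128, input_shape=(28, 28), return_sequences=True),
    # Default return_sequences=False: output shape (batch, 128), only
    # the final hidden state, suitable for a Dense classifier.
    LSTM(128),
    Dense(10, activation='softmax'),
])
model.summary()
```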
THANK YOU SO MUCHHH FOR THIS VIDEO!
I want to know more about tensorflow.callbacks.
Hello, thanks for helping us to understand RNNs.
Please tell us which versions of TensorFlow and Keras you used,
because I tried your code and I have a problem,
like:
TypeError: add() got unexpected argument 'activation'
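That TypeError usually appears when the activation keyword is given to model.add() itself instead of to the layer constructor; a hedged sketch of the likely cause and fix (the layer size is just an example):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

model = Sequential()

# Wrong: passes activation to Sequential.add(), which raises
#   TypeError: add() got an unexpected keyword argument 'activation'
# model.add(LSTM(128, input_shape=(28, 28)), activation='relu')

# Right: activation belongs to the layer constructor.
model.add(LSTM(128, input_shape=(28, 28), activation='relu'))
```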
This is really helpful. I was looking for a simple intro to RNNs and LSTMs, but couldn't find one anywhere for TensorFlow 2.1. This one is simple and uses an up-to-date version. Many thanks.
You look and sound like Edward Snowden.
Sorry, I don't understand what the purpose of the RNN is here. You're just feeding images of numbers into the RNN and then training on the labels. There isn't any order or anything; you can do the same with an NN or a CNN. It would have been cool if the network could fill in (predict) blank spots in the image. Like, if the image is damaged and some pixels are missing, you could use some model to fill in the gaps. I thought RNNs could do that.
Hey, when you realized it was learning slowly, you "normalized" the data by dividing both x_train and x_test by 255.0. Why was that number chosen?
Which version of TF is this?
It would be great if you told us beforehand what we are going to do with MNIST and maybe drew a rough diagram of the structure of the RNN with MNIST as input.
If CuDNNLSTM uses tanh, which goes from -1 to 1, shouldn't you renormalize the input by dividing by 127 and then subtracting 1?
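For reference, mapping 8-bit pixels into [-1, 1] is usually written as x / 127.5 - 1 rather than dividing by 127; a small sketch (whether it actually helps the tanh-based CuDNNLSTM here is the open question above):

```python
import tensorflow as tf

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()

# Map [0, 255] -> [-1, 1]: divide by 127.5, then subtract 1.
x_train = x_train / 127.5 - 1.0
x_test = x_test / 127.5 - 1.0

print(x_train.min(), x_train.max())  # -1.0 1.0
```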
One question I have is about using decay: in theory it should stop at a minimum, but in practice I find that after it reaches the minimum it bounces out wildly. Is this just due to Python's rounding errors at around 1e-17?
My notebook froze :c
How many cups do you have?
He looks like Edward Snowden.
Thank you for the tutorial.
I have a question about the input dimensions. My input data are videos; is it true that the sequence length corresponds to the total number of frames in the video and the elements correspond to the number of features per frame?
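If it helps, that is the usual convention for a Keras RNN: each sample has shape (timesteps, features), where timesteps would be the number of frames and features the per-frame feature vector. A sketch with assumed sizes (NUM_FRAMES and FEATURES_PER_FRAME are made-up values):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

NUM_FRAMES = 30           # timesteps: frames per clip (assumed)
FEATURES_PER_FRAME = 512  # e.g. a CNN embedding per frame (assumed)

model = Sequential([
    LSTM(64, input_shape=(NUM_FRAMES, FEATURES_PER_FRAME)),
    Dense(1, activation='sigmoid'),
])

# Batches are then (samples, timesteps, features).
batch = np.random.rand(4, NUM_FRAMES, FEATURES_PER_FRAME)
print(model.predict(batch).shape)  # (4, 1)
```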
Why don't you try VS Code?
Why is return_sequences set to True?
Where can I get that cool coffee cup?
How does one learn to write code like this without watching tutorials or videos like sentdex's?