Recurrent Neural Networks (RNN / LSTM) with Keras – Python
In this tutorial, we learn about recurrent neural networks (RNN and LSTM). Recurrent neural networks, or RNNs, have been very successful and popular for time-series prediction. RNNs have several applications: stock market prediction, weather prediction, word suggestions, etc.
SimpleRNN, LSTM, and GRU are some of the Keras classes that can be used to implement these RNNs. The backend can be Theano as well as TensorFlow.
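A minimal sketch of what such a model looks like, assuming TensorFlow's bundled Keras (the layer sizes and dummy data here are placeholders, not the tutorial's exact values):

```python
import numpy as np
from tensorflow import keras

# Sequence-to-one regression: 10 timesteps with 1 feature each -> 1 value.
model = keras.Sequential([
    keras.Input(shape=(10, 1)),   # (timesteps, features)
    keras.layers.LSTM(32),        # 32 units; SimpleRNN or GRU would drop in here
    keras.layers.Dense(1),        # one output per input sequence
])
model.compile(optimizer="adam", loss="mse")

# Dummy data: 8 sequences, each 10 steps long, 1 feature per step.
x = np.random.rand(8, 10, 1)
y = np.random.rand(8, 1)
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x, verbose=0).shape)  # (8, 1)
```

Swapping `LSTM` for `SimpleRNN` or `GRU` leaves the rest of the code unchanged, which is what makes Keras convenient for comparing these layer types.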
Find the code here:
GitHub : https://github.com/shreyans29/thesemicolon
Facebook : https://www.facebook.com/thesemicolon.code
Support us on Patreon : https://www.patreon.com/thesemicolon
Good Reads : http://karpathy.github.io/
Check out the machine learning, deep learning and developer products
USA: https://www.amazon.com/shop/thesemicolon
India: https://www.amazon.in/shop/thesemicolon
Hey everyone! I created a video about the concepts of RNN and LSTM.
https://www.youtube.com/watch?v=S0XFd0VMFss
I want to do anomaly detection classification using RNNs in Keras/TF, but I have a problem where the training accuracy increases while val_accuracy does not change and just stays constant at 50%. My complete code is available on Google Colab: https://colab.research.google.com/drive/1saoNuCxj08JCxZ_7taIjhp8sJEIV-T5U?usp=sharing
You should have explained a little more about the concept of input shape and reshaping.
At 2:16 you mentioned the one-to-many RNN model. How can we create it? I want a model that gives two outputs for a single input. Can you suggest an idea?
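One way to get two outputs from a single input is the Keras functional API with two output heads. A hedged sketch, with placeholder layer sizes and names (`out_a`, `out_b` are made up here):

```python
from tensorflow import keras

inp = keras.Input(shape=(10, 1))                # one input sequence
h = keras.layers.LSTM(16)(inp)                  # shared encoder
out_a = keras.layers.Dense(1, name="out_a")(h)  # first output head
out_b = keras.layers.Dense(1, name="out_b")(h)  # second output head
model = keras.Model(inputs=inp, outputs=[out_a, out_b])
model.compile(optimizer="adam", loss="mse")
print(len(model.outputs))  # 2
```

`model.fit` then takes a list of two target arrays, one per head, and each head can even get its own loss.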
Where do you specify the number of input and hidden-layer neurons?
How come there are almost 20% downvotes? Is something explained wrong?
My CSV had 2427 rows, and I took the first 100 values, but during the epochs the accuracy is displayed in e+000 notation. What should I do?
Hi, did you make the chatbot video?
Thank you for the very useful video. One suggestion: prepare a small, meaningful dataset for your example (rather than creating it with a loop) to help with better understanding.
TL;DW: the model in this example is utterly useless and is only accurate when tested on the training data.
Please do make a video on Image Captioning implementation in Keras!
How do we train such a model? The architecture (CNN + RNN) is clear, but how does training happen on a sample image?
Great video. The most interesting part is where you mentioned that determining the input shape is tricky. I'm looking for an additional lecture on reshaping multivariate time-series data for LSTM forecasting. Any suggestions?
My reading list is:
01 – François Chollet, Deep Learning with Python, chapter 6.3.1
02 – Jason Brownlee, LSTM with Python, chapter 3 (How to Prepare Data for LSTM)
03 – Jason Brownlee's Machine Learning Mastery tutorial on reshaping data for LSTM
04 – Keras documentation
After all these lectures, I still have questions about reshaping data for LSTM input layers.
Is there a Semicolon detailed explanation of this topic?
Thanks for the great tutorial. Could you shed light on when we use model.add(Embedding()) in the context of text classification with word2vec models?
The funny thing is that the channel's name is SemiColon and it uses Python.
Great videos though. Respect!
What is epoch ?
Talking about reshaping data for text classification, what would you recommend if the data shape is as below:
input_shape (12000, 3) [here, 12000 rows with 3 corresponding labels]
Should I reshape to (1, 12000, 3) or (12000, 3, 1)?
Here, I am padding the sequences to length 1000.
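For what it's worth, a sketch of the (samples, timesteps, features) convention Keras recurrent layers expect, assuming 12000 samples each padded to 1000 timesteps with 1 feature per step; the 3 labels stay a separate target array and are not part of the input shape:

```python
import numpy as np

num_samples, timesteps, features = 12000, 1000, 1
x = np.zeros((num_samples, timesteps))             # 12000 padded sequences
x = x.reshape((num_samples, timesteps, features))  # -> (12000, 1000, 1)
y = np.zeros((num_samples, 3))                     # 3 labels per row, kept separate
print(x.shape, y.shape)  # (12000, 1000, 1) (12000, 3)
```

So neither (1, 12000, 3) nor (12000, 3, 1) fits this layout: the first axis is always the number of samples.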
How can I use this to predict output based on past data, without specifically providing new inputs?
Great video. A couple of questions: first, what does the target refer to? What is that list supposed to represent?
And is there any way to view the actual array, like you did around the 11:00 mark, using Jupyter Notebook?
Thanks for the explanation. I don't believe that the 100 in LSTM(100, input_shape=(…)) is the number of outputs. I haven't found out what it means yet, but I have seen several examples where there was no relation between this number and the number of outputs.
Hi, good video. I had a question regarding LSTM(100, input_shape=…). Doesn't the 100 represent the number of neurons in the first hidden layer?
Nice video !!! Keep posting !
You seem more of a coder; you need to understand the concepts deeply before implementing. All the best!
val_loss == validation set loss
Thanks for posting this. Very clear explanation. Contrary to what someone else posted, I found your English pretty easy to understand. I watched the video at 1.75x, too.
IDE ?
Hey, can you mail me all the steps for how to develop automatic caption generator software, along with the code and architecture? Please!
Why would you use the test set as the validation set?
Hi, thanks for the great video, but where is the 101st predicted value?
In data.reshape((1, 1, 100)), the first 1 represents the number of sequences and 100 is the number of elements in a sequence. But what is the second 1? You said it is the height. Is it possible to put a vector in place of each element in the sequence?
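A small sketch of what each axis means in that reshape, assuming the standard Keras convention: the three numbers are (samples, timesteps, features), so the middle 1 is the number of timesteps, and yes, each timestep can hold a whole vector of features:

```python
import numpy as np

data = np.arange(100)
as_one_step = data.reshape((1, 1, 100))    # 1 sample, 1 timestep, 100-dim feature vector
as_many_steps = data.reshape((1, 100, 1))  # 1 sample, 100 timesteps, scalar each
print(as_one_step.shape, as_many_steps.shape)  # (1, 1, 100) (1, 100, 1)
```

The two layouts contain the same numbers but mean very different things to an LSTM: the first is a single 100-feature observation, the second a 100-step sequence of scalars.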
Hello, it is great work, thank you a lot. Is it possible to create an application with Keras to deblur an image with deep learning? Can you help me please? Is it possible or not? Thank you.
I followed the tutorial, but I get this error:
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.
Hi, I liked the video. I was wondering how you use an LSTM with Keras when you have multiple input variables.
Don't take this the wrong way… maybe write the scripts and find someone with a better accent to read them?
I am new to LSTM and Keras.
I have written some code based on LSTM and Keras. Basically, I want to stack one LSTM on top of another, but I am still confused about what is actually going on internally.
Here is part of my code:
model_PLSTM = Sequential()
model_PLSTM.add(LSTM(5, input_shape=(3, 6), implementation=2, return_sequences=True))
model_PLSTM.add(LSTM(7, input_shape=(3, 6), implementation=2, return_sequences=True))
model_PLSTM.add(LSTM(8, input_shape=(3, 6), implementation=2))
Can you please explain this part of the code? I am really struggling to draw a diagram of it.
Thanks in advance.
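A sketch of how the shapes flow through a stack like that one (the input_shape on the later layers is ignored; only the first layer needs it, and the implementation flag is omitted here since it only affects speed, not the math):

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(3, 6)),                         # 3 timesteps, 6 features
    keras.layers.LSTM(5, return_sequences=True),       # emits (batch, 3, 5)
    keras.layers.LSTM(7, return_sequences=True),       # emits (batch, 3, 7)
    keras.layers.LSTM(8),                              # last step only: (batch, 8)
])
x = np.zeros((2, 3, 6))
print(model.predict(x, verbose=0).shape)  # (2, 8)
```

Because the first two layers set return_sequences=True, they output one vector per timestep, so the next LSTM still receives a full sequence; the final layer drops that flag and keeps only its last hidden state.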
This is very helpful, thank you. I was wondering how to tune the code to take inputs of the following form:
time, x, and y. That is, at time t (datenum), the user had two identifications, x and y. What I would like to predict is: at a future time t_new, what are the user's x_new and y_new?
short n sweet 🙂
Thanks for sharing! It helped me a lot!
You mentioned that determining the input shape is tricky. If my dataset is colored images, how can I use an LSTM with the appropriate input shape?
Thank you !
This was really useful thanks
WOW!! The Semicolon and Python 😛 !!! Great videos BTW!!
Why do you use accuracy for a regression model?
Hi there, I am new to deep learning. Thanks for such a nice explanation. Actually, I want to train a language model (bigram, trigram) with the help of an LSTM neural network. Could you please guide me on that?
Could you create a video where the LSTM predicts several outputs? So far I have only managed to predict one timestep.
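One common pattern for predicting several timesteps is an encoder-decoder sketch using RepeatVector and TimeDistributed; this is an illustrative layout (all sizes here are placeholders), not the tutorial's code:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_steps_in, n_steps_out = 10, 5
model = keras.Sequential([
    keras.Input(shape=(n_steps_in, 1)),
    layers.LSTM(32),                          # encode input sequence into one vector
    layers.RepeatVector(n_steps_out),         # repeat it once per output timestep
    layers.LSTM(32, return_sequences=True),   # decode into an output sequence
    layers.TimeDistributed(layers.Dense(1)),  # one predicted value per output step
])
x = np.zeros((2, n_steps_in, 1))
print(model.predict(x, verbose=0).shape)  # (2, 5, 1)
```

The model then emits n_steps_out values per input sequence in a single forward pass, instead of feeding each prediction back in one step at a time.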