Optimizing with TensorBoard – Deep Learning w/ Python, TensorFlow & Keras p.5
Welcome to part 5 of the Deep Learning with Python, TensorFlow and Keras tutorial series. In the previous tutorial, we introduced TensorBoard, an application we can use to visualize our model's training stats over time. In this tutorial, we're going to build on that and show how you might construct a workflow for optimizing your model's architecture.
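As a rough sketch of that workflow: loop over the architecture choices you want to compare, and give every run its own log directory so TensorBoard plots them as separate runs. The layer counts and sizes below are illustrative, and build_model stands in for the model-construction code from the series:

import time
from tensorflow.keras.callbacks import TensorBoard

dense_layers = [0, 1, 2]      # how many Dense layers to try
layer_sizes = [32, 64, 128]   # nodes per layer
conv_layers = [1, 2, 3]       # how many Conv layers to try

for dense in dense_layers:
    for size in layer_sizes:
        for conv in conv_layers:
            # a unique name per combination keeps runs separate in TensorBoard
            name = "{}-conv-{}-nodes-{}-dense-{}".format(conv, size, dense, int(time.time()))
            tensorboard = TensorBoard(log_dir="logs/{}".format(name))
            # model = build_model(conv, size, dense)  # hypothetical builder
            # model.fit(X, y, batch_size=32, epochs=10,
            #           validation_split=0.3, callbacks=[tensorboard])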
Text tutorials and sample code: https://pythonprogramming.net/tensorboard-optimizing-models-deep-learning-python-tensorflow-keras/
Discord: https://discord.gg/sentdex
Support the content: https://pythonprogramming.net/support-donate/
Twitter: https://twitter.com/sentdex
Facebook: https://www.facebook.com/pythonprogramming.net/
Twitch: https://www.twitch.tv/sentdex
G+: https://plus.google.com/+sentdex
Why do you look like Edward Snowden's love child? :p
I enjoy your enthusiasm, buddy. 🙂
Ok mister! Do you still mean what you said at 25:22? Hahah 😀
God this is amazing!
Can you use grid search to do the same?
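For anyone curious, one hedged way to do that is scikit-learn's GridSearchCV over the Keras scikit-learn wrapper. The wrapper's import path has moved between TensorFlow versions, and the input shape and grid values below are placeholders:

from sklearn.model_selection import GridSearchCV
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier

def build_model(layer_size=64):
    # minimal stand-in model; real code would mirror the tutorial's CNN
    model = Sequential([Dense(layer_size, activation="relu", input_shape=(784,)),
                        Dense(1, activation="sigmoid")])
    model.compile(loss="binary_crossentropy", optimizer="adam",
                  metrics=["accuracy"])
    return model

clf = KerasClassifier(build_fn=build_model, epochs=5, verbose=0)
param_grid = {"layer_size": [32, 64, 128], "batch_size": [16, 32]}
search = GridSearchCV(clf, param_grid, cv=3)
# search.fit(X, y)              # X, y assumed to exist
# print(search.best_params_)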
Nice video!!!
I have a question, since I am running a similar for loop. Is it necessary to reset weights or metrics?
My results look pretty strange: the models seem to learn very fast and show good accuracies after one epoch, and the metrics do not change significantly for the next 20 epochs.
There is definitely more than enough data and no danger of overfitting… Maybe someone knows a solution.
You have made me sin with this absolute monstrosity:
dense_layers = (tuple(), (32,), (64,),
                (32, 32), (64, 64),
                (32, 64), (64, 32))
conv_layers = (
    (64, 64),
    (128, 128),
    (256, 256),
    (64, 128),
    (128, 256),
    (64, 64, 64),
    (128, 128, 128),
    (256, 256, 256),
    (64, 64, 128),
    (128, 128, 256),
    (64, 64, 128),
    (128, 128, 256),
    (64, 128, 128),
    (128, 256, 256),
    (64, 128, 64),
    (128, 256, 128),
)
window_sizes = (2, 3, 4)
batch_sizes = (8, 16, 32, 64)
splits = (0.1, 0.15, 0.2, 0.25, 0.3)

for dense_layer_tuple in dense_layers:
    for conv_layer_tuple in conv_layers:
        for window_size in window_sizes:
            for batch_size in batch_sizes:
                for val_split in splits:
                    ...  # build and train one model per combination
Where each layer tuple represents the number and size of the conv/dense layers, since I build the layers iteratively from the tuples.
This is over 6000 iterations of the model. All I wish for is that it is done by the time I'm awake again 😀
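For what it's worth, a grid like that flattens nicely with itertools.product, which keeps the total run count obvious and the nesting shallow; a minimal sketch over the same tuples, assuming they are defined as above:

from itertools import product

combos = list(product(dense_layers, conv_layers,
                      window_sizes, batch_sizes, splits))
print(len(combos))  # 7 * 16 * 3 * 4 * 5 = 6720 runs

for dense_t, conv_t, window, batch, split in combos:
    ...  # build and train one model per combination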
Hello there! I have been practicing DL for a year to enhance my toolkit for my job. And I would like to thank you again, Harrison; I had never heard of TensorBoard before. This thing is awesome! Any ideas out there whether this kind of dynamic optimization tool exists, but for classic machine learning? Like with logistic regression or that kind of thing. That would be so great, and so comprehensible for someone who doesn't know Data Science that well.
If anyone knows whether this kind of thing exists anywhere, that would be nice.
Again, thank you, you're explaining so well. 🙂
You only looked at validation loss… Is that alone sufficient to select a layer?
How can I get those colored graphs?
I only have 2 graphs. Train and validation both show up in the same graph. This is fine when only one network has been trained, but with 27 networks it becomes overwhelming…
How can I fix this?
Does anybody know how to export the per-epoch acc and loss to Excel files, or read the logs in Excel?
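One hedged option: TensorBoard's event files can be read back in Python and dumped to CSV, which Excel opens directly. A sketch using the event_accumulator module that ships with TensorBoard; the log path and tag name are assumptions:

import csv
from tensorboard.backend.event_processing import event_accumulator

ea = event_accumulator.EventAccumulator("logs/my-run")  # assumed log dir
ea.Reload()
print(ea.Tags()["scalars"])  # list the scalar tags that exist, e.g. "loss"

with open("loss.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["step", "value"])
    for event in ea.Scalars("loss"):  # tag name is an assumption
        writer.writerow([event.step, event.value])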
Train and validation data are plotting on the same scalar chart for me. What is the solution to get 4 different scalars?
I guess a better option is Bayesian optimization. Some links below:
https://www.youtube.com/watch?v=u6MG_UTwiIQ
https://www.youtube.com/watch?v=C5nqEHpdyoE&t=797s
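For reference, a hedged sketch of what Bayesian optimization over hyperparameters can look like with scikit-optimize; the objective here is a stand-in, where a real one would train a model and return its validation loss:

from skopt import gp_minimize
from skopt.space import Integer, Real

space = [Integer(32, 256, name="layer_size"),
         Real(1e-4, 1e-2, prior="log-uniform", name="learning_rate")]

def objective(params):
    layer_size, learning_rate = params
    # train a model with these values and return validation loss;
    # this stand-in just pretends bigger layers help
    return 1.0 / layer_size + learning_rate

result = gp_minimize(objective, space, n_calls=20, random_state=0)
print(result.x, result.fun)  # best params and best (lowest) objective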
Can someone tell me where he mentioned the learning rate for this model, please?
Hello, I have a question
I followed your tutorial from part 1, and I have a problem with the validation accuracy and validation loss.
My validation accuracy fluctuates a lot, and my validation loss tends to increase, hence it gives me wrong predictions most of the time.
Could you please help me with it?
Thank you
For people watching this later: installing tensorflow-gpu just got a hell of a lot easier!!! Simply do conda install tensorflow-gpu; it will automatically install the CUDA Toolkit and cuDNN libraries inside the environment you are installing into. That's all you need to do to use the GPU!
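To confirm the GPU is actually picked up afterwards, a quick check (the exact call depends on your TensorFlow version; tf.config.list_physical_devices exists in TF 2.x):

import tensorflow as tf

# lists GPUs TensorFlow can see; an empty list means CPU only
print(tf.config.list_physical_devices("GPU"))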
I believe there is a more systematic way for tuning.
https://www.tensorflow.org/tensorboard/hyperparameter_tuning_with_hparams
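A hedged sketch of the HParams API from that page; the hyperparameter names, ranges, and input shape here are illustrative, and hp.KerasCallback logs each run so the HParams dashboard can compare them:

import tensorflow as tf
from tensorboard.plugins.hparams import api as hp

HP_UNITS = hp.HParam("num_units", hp.Discrete([32, 64, 128]))

for units in HP_UNITS.domain.values:
    hparams = {HP_UNITS: units}
    run_dir = "logs/hparam_tuning/units-{}".format(units)
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(units, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(X, y, epochs=5, callbacks=[
    #     tf.keras.callbacks.TensorBoard(run_dir),
    #     hp.KerasCallback(run_dir, hparams),  # logs hparams for this run
    # ])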
I'm getting ResourceExhaustedError after running through a few models
[ASKING – HELP]
Where can I find your testing of all of those models? And how?
I mean:
score = model.evaluate(x, y, verbose=2)  # run for each trained model
print("Model Accuracy: %.2f%%" % (score[1]*100))
Thank you for this video, it was really helpful, but how did you manage to show the val_acc, loss, etc.? When I run my model with model.fit(metrics=['accuracy']), only the accuracy shows up on the TensorBoard scalars tab. How can I get it to show val_acc, loss, etc.?
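For reference, metrics actually belong in compile rather than fit, and the val_ series only get logged when fit receives validation data. A minimal sketch, where model, X, y and the tensorboard callback are assumed to exist as in the tutorial's code:

model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])      # metrics go here, not in fit
model.fit(X, y, batch_size=32, epochs=10,
          validation_split=0.3,          # this produces val_loss / val_acc
          callbacks=[tensorboard])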
Why does removing the last Dense layer improve the performance?
Do we have to use range on len, or the list itself? Please clarify.
Can we have a tutorial for neon? I mean, it would be a short one since it is quite similar to this one, but we could get an insight into both.
When I'm using it, I get train and validation lines on the same plot; is there any way to get separate plots for both? BTW, I'm using TensorBoard only for analysing the performance of a single run.
Hi, I already got high acc and val_acc without trying this. But I got confused, because the model only predicts cats (in my case broccoli) since I trained on my own dataset, and the other object is not detected. How can this happen?
Please give me some solution 🙁
Totally bounced over my head 🤦‍♂️
Great video! I was wondering what kind of machine & GPU you have? It seems very fast!!
How do we even remember all of the syntax?
fastai for PyTorch – any idea if it's worth checking out?
#sentdex First off, thanks for your great work 🙂 … When I open TensorBoard I get epoch_accuracy and epoch_loss instead of acc, loss, val_acc and val_loss… After googling for hours I still can't figure out what's wrong 🙁 Can anyone help me with this?
Can we make another model that can predict the best layer configuration to produce the minimum validation loss, an optimal solution?
Dense(2)  # does not mean we will have 2 outputs in the model
https://github.com/tensorflow/tensorboard/issues/2023#issuecomment-474165235
Does anybody know what kind of file the "*.H-PC" is?
Hi, while using the for loop for log_dir with multiple outputs, where the dense output is 4,
I am getting this error: "Attempting to use uninitialized value RMSprop 5lr"
When I run the model after setting dense_layers, conv_layers, and layer_size, it only runs the 3-conv, 128-node, 2-dense possibility and skips all the other possibilities. Can anybody help me figure out what went wrong, because the code is pretty much the same? Thank you
Can anybody tell me how to calculate the sensitivity of the model?
In my experiment, changing kernel sizes of Conv2D to (4, 4) improves validation accuracy a lot.
I've heard that PyTorch is a nice middle ground between Keras and raw TensorFlow: gives you more flexibility without being quite as low level. Also, I've heard that it has some nice debugging features.
Is there any way to calculate precision and recall in Keras, not TF?
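For reference, recent Keras/TensorFlow versions ship Precision and Recall as built-in metrics (and recall is the same quantity as sensitivity). A hedged sketch, assuming a binary-classification model like the tutorial's:

from tensorflow.keras.metrics import Precision, Recall

model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy", Precision(), Recall()])
# evaluate() and the History object then report precision and recall
# alongside loss and accuracy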
How can I get it to record more information than just scalars and graphs? Using just Keras, ideally.
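The Keras TensorBoard callback exposes flags for logging more than scalars; a hedged sketch, noting that exact flag availability varies a bit by version:

from tensorflow.keras.callbacks import TensorBoard

tensorboard = TensorBoard(
    log_dir="logs/my-run",   # assumed log directory
    histogram_freq=1,        # weight/activation histograms every epoch
    write_graph=True,        # log the model graph
    write_images=True,       # visualize layer weights as images
)
# pass it to model.fit(..., callbacks=[tensorboard])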
Awesome tutorials!
I think you added the Dropout layer inside the dense-layers for loop when it was set to 0, so in the end no Dropout layer was added 🙁