Data Augmentation and Handling Huge Datasets with Keras: A Simple Way

by Lucas Robinet, November 2020



Here we are going to talk about Keras' ImageDataGenerator class: the modifications we can apply to an image, how to train a model using data augmentation, and all the code that goes with it. For our first experiments, we are going to use this very nice photo by Robert Woeger.

Photo: Robert Woeger on Unsplash

The ImageDataGenerator class generates batches of tensor image data with real-time data augmentation. The data are looped over, in batches.

Since this class behaves like a Python generator, if you are not familiar with the concept I invite you to look at the Real Python tutorial on the subject. Now let's take a closer look at how it works [2]:
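The constructor exposes many arguments; here is a sketch restricted to the ones discussed in this article, with their default values as given in the Keras documentation:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# A subset of the constructor arguments covered in this article
# (defaults shown; the full signature has many more options).
datagen = ImageDataGenerator(
    featurewise_center=False,             # set the input mean to 0 over the dataset
    featurewise_std_normalization=False,  # divide inputs by the dataset std
    zca_whitening=False,                  # apply ZCA whitening
    rotation_range=0,                     # random rotations in [-x, x] degrees
    width_shift_range=0.0,                # random horizontal shifts (fraction of width)
    height_shift_range=0.0,               # random vertical shifts (fraction of height)
    brightness_range=None,                # range of floats to pick a brightness factor from
    shear_range=0.0,                      # shear intensity
    zoom_range=0.0,                       # random zoom in [1 - z, 1 + z]
    horizontal_flip=False,                # randomly flip inputs horizontally
    vertical_flip=False,                  # randomly flip inputs vertically
)
```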

Some arguments are self-explanatory, but for others it is worth looking at their effects, and that is exactly what we will do now. Let's start doing Data Augmentation!

Vertical and Horizontal Shift Augmentation

To perform shifts, set the width_shift_range and height_shift_range arguments. When given as a float smaller than 1, the value is the fraction of the corresponding dimension by which the image may be shifted. Let's look at the code to perform a 20% shift augmentation.
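The original snippet is not embedded in this page, so here is a minimal sketch of what it looks like; the file name photo.jpg and the output folder augmented are placeholders:

```python
import os
import numpy as np
from tensorflow.keras.preprocessing.image import (ImageDataGenerator,
                                                  load_img, img_to_array)

# Load the image and convert it to a numpy array
img = load_img('photo.jpg')              # placeholder file name
data = img_to_array(img)

# Add a batch dimension: the generator expects (batch, height, width, channels)
samples = np.expand_dims(data, 0)

# 20% random shifts along both dimensions
dataGen = ImageDataGenerator(width_shift_range=0.2, height_shift_range=0.2)

# flow() returns an iterator that yields augmented batches and saves them to disk
os.makedirs('augmented', exist_ok=True)  # placeholder output folder
imGen = dataGen.flow(samples, batch_size=1,
                     save_to_dir='augmented',
                     save_prefix='shifted',
                     save_format='jpg')

# Generate 6 new images
for _ in range(6):
    imGen.next()
```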

Let’s go through the code together.

We start by importing the necessary packages, here the Keras functions to load an image and convert it into a numpy array format.

We then add a batch dimension to our image: the ImageDataGenerator class expects data of shape (batch, height, width, channels).

The flow() method is then used to generate the augmented data in real time.
We also indicate the folder where the images are stored with the save_to_dir argument and the format of our new training instances with save_format (here jpg). The save_prefix argument is used to tag the generated images by adding this term before the generic name created by the Keras class.

Since imGen behaves like a Python generator, you can produce a new image by calling its next() method. Here we generate 6 new images.

You can easily visualize these new images with the following code.
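For instance, assuming the augmented images were saved in the augmented folder used above:

```python
import os
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import load_img

# Plot the 6 augmented images saved in the 'augmented' folder (placeholder path)
files = sorted(os.listdir('augmented'))[:6]
fig, axes = plt.subplots(2, 3, figsize=(10, 7))
for ax, fname in zip(axes.flat, files):
    ax.imshow(load_img(os.path.join('augmented', fname)))
    ax.axis('off')
plt.show()
```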

And here is the result!

Rotation Augmentation

A rotation augmentation randomly rotates the image by an angle between plus and minus the value in degrees given to the rotation_range parameter. For instance, for a ±30 degree rotation, the code looks quite similar.
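A sketch, reusing the samples array and the augmented folder from the shift example:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# 'samples' is the (1, height, width, channels) array built in the shift example
dataGen = ImageDataGenerator(rotation_range=30)   # random rotations in [-30, 30] degrees
imGen = dataGen.flow(samples, batch_size=1,
                     save_to_dir='augmented', save_prefix='rotated',
                     save_format='jpg')
for _ in range(6):
    imGen.next()
```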

Brightness Augmentation

This method allows an image to be augmented by randomly darkening or brightening it. It is all controlled by the brightness_range argument, a range of two floats from which the brightness factor is randomly picked. Values less than 1.0 darken the image, while values larger than 1.0 brighten it.

You know what happens next.
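A sketch with an arbitrary range of (0.4, 1.6), again reusing the samples array from the shift example:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Brightness factors are drawn uniformly from [0.4, 1.6]:
# below 1.0 darkens the image, above 1.0 brightens it
dataGen = ImageDataGenerator(brightness_range=(0.4, 1.6))
imGen = dataGen.flow(samples, batch_size=1,
                     save_to_dir='augmented', save_prefix='brightness',
                     save_format='jpg')
for _ in range(6):
    imGen.next()
```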

And the result!

Other parameters

It would take too long to go through all the possibilities of this class, but here are a few more examples [3], illustrated in the sketch after this list:

  • Vertical and horizontal image flipping: quite self-explanatory. Simply set the horizontal_flip and vertical_flip arguments to True.
  • Image zooming: also quite self-explanatory. The zoom_range argument allows our image to be zoomed by a factor drawn randomly in [1 - zoom_range, 1 + zoom_range].
  • Image shearing: the shear_range argument controls the maximum shear angle (in degrees, counterclockwise) by which our image will be randomly sheared.
  • Feature standardization: this is an important notion that we have not studied yet because it made little sense while we were working on a single image. However, it is possible to standardize pixel values feature-wise across an entire dataset. To do this, you just have to set the featurewise_center and featurewise_std_normalization arguments to True.
  • ZCA whitening: a whitening transform allows better visualization and understanding of the structure and features of our data. Explaining the ZCA operation in detail is outside the scope of this tutorial. However, if you are interested, feel free to read Learning Multiple Layers of Features from Tiny Images by Alex Krizhevsky.
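As a quick illustration, here is a sketch combining these arguments; the values are arbitrary and only meant to show the syntax:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

dataGen = ImageDataGenerator(
    horizontal_flip=True,                # random horizontal flips
    vertical_flip=True,                  # random vertical flips
    zoom_range=0.2,                      # zoom factor drawn in [0.8, 1.2]
    shear_range=20,                      # shear angle of up to 20 degrees
    featurewise_center=True,             # dataset-wise mean subtraction
    featurewise_std_normalization=True,  # dataset-wise std division
    zca_whitening=False,                 # ZCA whitening (also needs dataset statistics)
)

# featurewise_* and zca_whitening require the dataset statistics,
# computed with the generator's fit() method on the whole training set:
# dataGen.fit(x_train)   # x_train: 4D array of training images
```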

Recommended parameters

You may well wonder whether there is a magic set of arguments that works in every case. Unfortunately, the answer is no: as you might expect, data augmentation must be tailored to the dataset. For example, to recognize cars, allowing vertical flips would make no sense since the algorithm will never see upside-down cars (unless you are starring in Inception right now, but that's unlikely). However, if the objective is to detect whether jellyfish are present in an image, then vertical flip augmentation remains a useful tool.

However, here is a simple example that you can set up to start your Data Augmentation.
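The original snippet is not reproduced here, so the following is only a plausible starting point; the values are a common, conservative choice rather than a universal recipe, and horizontal_flip should be dropped whenever mirroring changes the label:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

dataGen = ImageDataGenerator(
    rotation_range=15,        # small random rotations
    width_shift_range=0.1,    # horizontal shifts of up to 10% of the width
    height_shift_range=0.1,   # vertical shifts of up to 10% of the height
    zoom_range=0.1,           # mild random zoom
    horizontal_flip=True,     # only if mirroring keeps the label unchanged
)
```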

Train a model with real-time Data Augmentation

Now that we know how to manipulate Keras' ImageDataGenerator class and are familiar with most of the Data Augmentation possibilities, let's look at how to train a network with real-time Data Augmentation, with training instances that change at every epoch. For this part, I took the code from the Keras documentation [2], which uses the CIFAR-10 dataset.

First, we load the dataset into memory and encode our labels. Then we define our ImageDataGenerator object with the Data Augmentation we want to achieve.
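Here is a sketch of those two steps, in the spirit of the CIFAR-10 example from the Keras documentation; the exact augmentation values may differ from the original snippet:

```python
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import to_categorical

num_classes = 10

# Load CIFAR-10 and one-hot encode the labels
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)

# Real-time data augmentation
datagen = ImageDataGenerator(
    featurewise_center=False,
    featurewise_std_normalization=False,
    zca_whitening=False,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    vertical_flip=False,
)
```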

If you want to perform feature standardization or ZCA whitening, you also have to call the fit() method of your generator on the training data beforehand, so it can compute the required statistics.

Finally, we use the fit() method of our model with the standard syntax in order to apply image augmentation. The augmented images are not all produced at once; they are generated batch by batch for flexibility (32 at a time in our case).
So at each step, 32 images are randomly generated. We also have to set the steps_per_epoch argument, which determines the number of training iterations per epoch. We naturally set it to dataset size / batch size so that, at each epoch, the network is trained on the same number of images as the dataset initially contains.
The remaining parameters are the same as for classic training.
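Putting it together, here is a sketch of the training call; model stands for any compiled Keras classifier (not defined here), and the batch size and number of epochs are arbitrary:

```python
batch_size = 32
epochs = 50

# Only needed for feature standardization or ZCA whitening:
# datagen.fit(x_train)

# 'model' is assumed to be a compiled Keras model built elsewhere
model.fit(
    datagen.flow(x_train, y_train, batch_size=batch_size),  # augmented batches
    steps_per_epoch=len(x_train) // batch_size,             # one pass over the dataset per epoch
    epochs=epochs,
    validation_data=(x_test, y_test),
)
```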
