Early Stopping: In this method, we track the loss on the validation set during the training phase and stop training once the validation loss stops improving, yielding a model that is accurate but not overfitting.
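As a minimal sketch of the idea (the class name, `patience`, and `min_delta` parameters here are illustrative choices, not from the original text), early stopping can be implemented by tracking the best validation loss seen so far and halting after it fails to improve for a set number of epochs:

```python
class EarlyStopper:
    """Signal when to stop training based on validation loss."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait without improvement
        self.min_delta = min_delta    # minimum change that counts as improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True if training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


stopper = EarlyStopper(patience=3)
for epoch, loss in enumerate([1.0, 0.8, 0.7, 0.71, 0.72, 0.73]):
    if stopper.step(loss):
        break  # validation loss has not improved for 3 epochs
```

In practice, one would also restore the model weights from the epoch with the best validation loss before stopping.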
Image Augmentation: Artificially boosting the number of images in our training set by applying random image transformations to the existing images in the training set.
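A minimal NumPy sketch of this idea, assuming grayscale images stored as 2-D arrays (the function name and the specific transformations chosen — a random horizontal flip and a small horizontal shift — are illustrative, not prescribed by the original text):

```python
import numpy as np


def augment(image, rng):
    """Return a randomly transformed copy of a 2-D image array."""
    if rng.random() < 0.5:
        image = image[:, ::-1]           # random horizontal flip
    shift = int(rng.integers(-2, 3))     # random horizontal shift of -2..2 pixels
    image = np.roll(image, shift, axis=1)
    return image


rng = np.random.default_rng(0)
image = np.arange(25).reshape(5, 5)
augmented = augment(image, rng)          # same shape, randomly transformed
```

Real pipelines typically apply such transformations on the fly each epoch, so the network rarely sees the exact same image twice.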
Dropout: Randomly deactivating (setting to zero) a fraction of the neurons in a neural network at each training step, which prevents the network from relying too heavily on any single neuron and encourages more robust representations.
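A minimal NumPy sketch of the technique (the function name and the use of "inverted" dropout scaling are illustrative assumptions; deep learning frameworks provide this as a built-in layer):

```python
import numpy as np


def dropout(activations, rate, rng):
    """Zero out each activation with probability `rate` during training.

    Survivors are scaled by 1 / (1 - rate) (inverted dropout) so the
    expected value of the activations is unchanged, meaning no rescaling
    is needed at inference time.
    """
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)


rng = np.random.default_rng(0)
hidden = np.ones(10_000)
dropped = dropout(hidden, rate=0.5, rng=rng)  # ~half zeros, survivors scaled to 2.0
```

At inference time, dropout is disabled and all neurons are used.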
However, these are not the only techniques available to prevent overfitting. You can read more about these and other techniques at the link below:
Memorizing is not learning! — 6 tricks to prevent overfitting in machine learning