Chapter 7 of the book “Deep Learning for Coders” is about Training a State-of-the-Art Model. This chapter discusses methods to further improve the accuracy of our models.
While the improvement from each method is small (roughly 1% per method), applying them all together could make your model about 5% more accurate.
However, you should apply them only after picking the right model, algorithm, batch size, and learning rate, because those choices affect your model’s accuracy far more.
I’m using fastai to create this model. The final accuracy of the model is ~95.74%.
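As a point of reference, here is a minimal fastai training sketch in the spirit of this chapter. It is an illustration rather than the exact notebook: it assumes the Imagenette dataset (the small ImageNet subset the book trains on), a ResNet-50 trained from scratch, and two of the chapter’s accuracy boosters, Mixup and label smoothing, with illustrative hyperparameters.

```python
from fastai.vision.all import *

# A minimal sketch, not the exact notebook. Imagenette ships with
# 'train' and 'val' folders, so we point the loader at 'val'.
path = untar_data(URLs.IMAGENETTE_160)
dls = ImageDataLoaders.from_folder(path, valid='val',
                                   item_tfms=Resize(160),
                                   batch_tfms=aug_transforms())

# Train from scratch (Imagenette is a subset of ImageNet, so an
# ImageNet-pretrained model would be cheating), with Mixup and label
# smoothing enabled. Epoch count and learning rate are illustrative.
learn = vision_learner(dls, resnet50, pretrained=False,
                       loss_func=LabelSmoothingCrossEntropy(),
                       metrics=accuracy, cbs=MixUp())
learn.fit_one_cycle(5, 3e-3)
```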
Self-supervised learning: Training a model using labels that are naturally part of the input data, rather than requiring separate external labels.
Self-supervised learning is often used to pre-train models. By fine-tuning a pre-trained model, we can very quickly get state-of-the-art results with very little data.
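To make that concrete, here is a minimal fine-tuning sketch with fastai. The dataset and labeling function are borrowed from the library’s standard Pets example, not from this chapter:

```python
from fastai.vision.all import *

# A minimal transfer-learning sketch using the Oxford-IIIT Pets dataset:
# start from an ImageNet-pretrained ResNet and fine-tune on a small task.
path = untar_data(URLs.PETS) / 'images'

def is_cat(f): return f[0].isupper()  # in Pets, cat filenames start uppercase

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=accuracy)  # pretrained by default
learn.fine_tune(1)  # even one epoch of fine-tuning gives strong results
```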
Supervised learning requires human-annotated data, which is expensive, labor-intensive, and time-consuming to produce. Transfer learning lowers that barrier to entry for Machine Learning, making it accessible to everyone.
Let’s take a look at some examples of self-supervised learning!
Context Encoders: a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings.
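Here is a toy PyTorch sketch of that idea, well short of the paper’s actual implementation (which uses a deeper encoder-decoder plus an adversarial loss): mask the center of each image and train a small network to reconstruct the missing pixels with an L2 loss. The labels are just the original pixels, so no external annotation is needed.

```python
import torch
import torch.nn as nn

# Toy context encoder: reconstruct a masked central region from its
# surroundings. Architecture and sizes are illustrative.
class ContextEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),               # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

imgs = torch.randn(8, 3, 64, 64)   # stand-in batch of images
masked = imgs.clone()
masked[:, :, 16:48, 16:48] = 0.0   # zero out the central region

model = ContextEncoder()
recon = model(masked)

# Reconstruction loss only on the masked region: the "label" is the
# image itself, which is what makes this self-supervised.
loss = nn.functional.mse_loss(recon[:, :, 16:48, 16:48],
                              imgs[:, :, 16:48, 16:48])
loss.backward()
```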
…
Ensemble learning is the process by which multiple models, such as classifiers or experts, are strategically generated and combined to solve a particular computational intelligence problem.
Ensembling works because it averages the errors of many models. The kinds of errors each model makes differ from model to model, so the average of their predictions should be better than any single model’s predictions.
Ensembling works better when the models are diverse, so many ensemble methods deliberately encourage diversity among the models they combine.
For example, in Chapter 9 of the fastai course, we learned to create a random forest model to predict tabular…
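Below is a toy sketch of the averaging idea with scikit-learn; the synthetic dataset and the model pair (a random forest plus a logistic regression) are illustrative stand-ins, not the course’s tabular example:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Two different model families trained on the same data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Average the predicted class probabilities; errors that are uncorrelated
# across the two models tend to cancel out in the mean.
avg_probs = (rf.predict_proba(X_te) + lr.predict_proba(X_te)) / 2
ensemble_preds = avg_probs.argmax(axis=1)

for name, preds in [('random forest', rf.predict(X_te)),
                    ('logistic regression', lr.predict(X_te)),
                    ('ensemble', ensemble_preds)]:
    print(f'{name}: {accuracy_score(y_te, preds):.3f}')
```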