Learning Approaches – As technology advances, so do the ways we teach machines to learn and adapt. In this article, we’ll explore various learning methods, such as supervised, weakly supervised, weakly semi-supervised, and semi-supervised learning.
Each approach has its own benefits and applications, and we’ll break down the basics to help you understand how they work and when to use them. Whether you’re new to machine learning or looking to expand your knowledge, you’re in the right place! Let’s dive in.
Supervised Learning
In practical machine learning, supervised learning is the go-to method. In this type of learning, we have a set of input variables (x) and an output variable (Y), and our goal is to find the mapping function (f) that maps the inputs to the outputs.
How Does it Work?
Here’s what the function looks like:
Y = f(x)
The aim is to learn this mapping function so well that when we feed new data into the model, it can accurately predict the corresponding output variable (Y).
Why “Supervised”?
Imagine you’re in a classroom with a teacher guiding you through a lesson. In this case, the teacher is the algorithm and you’re the student. The algorithm learns from a set of training data where the correct answer is known. It makes predictions on this data and the teacher corrects any mistakes. This process repeats until the algorithm achieves the desired performance.
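To make this concrete, here is a minimal sketch of supervised learning on made-up data: we fit a 1-D linear model Y = f(x) = w·x + b by least squares on labeled pairs, then use the learned mapping on an unseen input. The data and function names are illustrative, not from any particular library.

```python
# Minimal supervised-learning sketch (hypothetical data): learn the
# mapping f from labeled (x, Y) pairs with closed-form least squares.

def fit_linear(xs, ys):
    """Learn w and b that minimise squared error over the labeled pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

# Training data: every input x comes with its correct answer Y (the "teacher").
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]          # here Y = 2x + 1 exactly

w, b = fit_linear(xs, ys)
predict = lambda x: w * x + b       # the learned mapping f

print(predict(5.0))                 # 11.0 on the unseen input x = 5.0
```

Because the labels here are exact and the relationship is linear, the model recovers it perfectly; real supervised problems differ only in the complexity of f and the amount of noise in the labels.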
Weakly Supervised Learning
In fully supervised learning, for each input (say, an image), you provide the exact output that you want to see from your model, and the algorithm maps the inputs to the outputs.
In weakly supervised learning, you have indirect supervision. Annotations are provided, but they do not tell the full story of the desired output. An example of this is a partial annotation of only a few points in an image rather than a complete label.
How Does it Work?
You have your input variables (x) and your output variable (Y), but the Y labels are not as precise or complete as in supervised learning. The algorithm needs to learn from this imperfect data to find the mapping function (f) between inputs and outputs.
Y = f(x)
In weakly supervised learning, the algorithm may have access to data with incomplete or noisy labels, or it might only know the labels for a subset of the data. Despite this limitation, the goal remains the same: to approximate the mapping function well enough that it can make accurate predictions on new data.
Why “Weakly Supervised”?
Think of it like a less hands-on teaching approach. The algorithm is still learning, but it’s doing so with less direct guidance from the teacher. It needs to infer patterns and relationships from the imperfect labels it has access to.
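As a toy illustration of learning from noisy labels (one common form of weak supervision), the sketch below corrupts a few labels in a simple 1-D classification problem and shows that a one-parameter threshold classifier can still recover the true decision boundary by minimising disagreement with the imperfect labels. The data and boundary are invented for the example.

```python
# Weak-supervision sketch (hypothetical data): a few training labels are
# simply wrong, yet minimising disagreement with the noisy labels still
# recovers the true decision boundary.

xs = [i / 10 for i in range(100)]              # inputs 0.0, 0.1, ..., 9.9
ys = [1 if x >= 5.0 else 0 for x in xs]        # true boundary at x = 5.0
for i in (3, 20, 70, 90):                      # corrupt four labels (noise)
    ys[i] = 1 - ys[i]

def fit_threshold(xs, ys):
    """Pick the threshold t (classify x >= t as 1) that disagrees with
    the fewest of the (noisy) labels."""
    best_t, best_err = xs[0], len(xs) + 1
    for t in xs:
        err = sum((1 if x >= t else 0) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

t = fit_threshold(xs, ys)
print(t)                                       # 5.0, despite the four bad labels
```

The four corrupted labels cost the best threshold four disagreements, but any other threshold disagrees even more, so the true boundary still wins.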
Semi-Supervised Learning
In semi-supervised learning, the algorithm learns from a combination of labeled data, which has known outputs, and unlabeled data, without any labels.
How Does it Work?
The goal in semi-supervised learning remains the same as in supervised learning: to find the mapping function (f) between input variables (x) and output variables (Y). However, by incorporating unlabeled data alongside labeled data, the algorithm can improve its understanding of the underlying patterns in the data:
Y = f(x)
Why “Semi-Supervised”?
Imagine a teacher handing out a mix of graded and ungraded assignments to a student. The student learns from both types of assignments to gain a deeper understanding of the subject matter.
Semi-supervised learning is particularly useful in situations where obtaining fully labeled data is expensive and challenging. By leveraging both labeled and unlabeled data, machine learning models can be trained more efficiently and effectively.
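One simple way to exploit unlabeled data is self-training: repeatedly give the unlabeled point closest to the labeled set its neighbour’s label, so labels spread outward through each cluster. The sketch below applies this idea to made-up 1-D data where only two points start out labeled; the values are purely illustrative.

```python
# Semi-supervised sketch (hypothetical data): only two inputs have labels;
# self-training propagates labels to the nearest unlabeled point until
# every point in each cluster is labeled.

labeled = {0.0: 0, 9.0: 1}                     # x -> Y for the labeled data
unlabeled = [0.5, 1.0, 1.5, 8.0, 8.5, 9.5]     # inputs with no labels

while unlabeled:
    # find the (unlabeled point, labeled point) pair with smallest distance
    x, nearest = min(
        ((u, min(labeled, key=lambda l, u=u: abs(l - u))) for u in unlabeled),
        key=lambda pair: abs(pair[0] - pair[1]),
    )
    labeled[x] = labeled[nearest]              # adopt the neighbour's label
    unlabeled.remove(x)

print(labeled)                                 # every point now carries a label
```

After the loop, the cluster near 0 inherits label 0 and the cluster near 9 inherits label 1, even though only one point per cluster was labeled to begin with.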
Weakly Semi-Supervised Learning
In this type of learning, we combine elements of weakly supervised learning with semi-supervised learning techniques. This means that in addition to dealing with partially labeled or noisy data, we also have access to a larger pool of unlabeled data.
How Does it Work?
In terms of data, the labeled portion may carry incomplete or noisy labels, and it is accompanied by a larger pool of unlabeled data that the algorithm can also use for learning. The goal remains the same: to find the mapping function (f) between inputs and outputs. However, in weakly semi-supervised learning, the algorithm can leverage the unlabeled data to improve its understanding of the underlying patterns in the data.
Y = f(x)
Why “Weakly Semi-Supervised”?
It’s a hybrid approach that combines the advantages of both weakly supervised and semi-supervised learning. By using a combination of labeled and unlabeled data, the algorithm can potentially achieve better performance.
Weakly semi-supervised learning is particularly useful in scenarios where obtaining fully labeled data is difficult and expensive, but unlabeled data is plentiful. By making the most of the available data, we can train more robust machine learning models with improved performance.
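As a toy illustration of combining the two ideas, the sketch below uses the cluster structure revealed by a larger unlabeled pool to out-vote a single wrong label in a small, noisily labeled set: all points are clustered by splitting at the largest gap, and each cluster takes the majority vote of the noisy labels it contains. The data and the one-gap clustering heuristic are invented for this example.

```python
# Weakly semi-supervised sketch (hypothetical data): a small labeled set
# containing one wrong label, plus a larger unlabeled pool whose cluster
# structure lets a majority vote correct the noise.

labeled = [(1.0, 0), (2.0, 0), (4.0, 1), (7.0, 1), (8.0, 1)]   # (4.0, 1) is noise
unlabeled = [0.5, 1.5, 2.5, 3.0, 3.5, 6.5, 7.5, 9.0]           # no labels at all

# 1. Cluster all points (labeled + unlabeled) by splitting at the largest gap.
points = sorted([x for x, _ in labeled] + unlabeled)
gaps = [(points[i + 1] - points[i], i) for i in range(len(points) - 1)]
_, split = max(gaps)
left = set(points[: split + 1])

# 2. Each cluster takes the majority vote of the noisy labels it contains.
left_votes = [y for x, y in labeled if x in left]
right_votes = [y for x, y in labeled if x not in left]
left_label = round(sum(left_votes) / len(left_votes))
right_label = round(sum(right_votes) / len(right_votes))

# 3. Every point inherits its cluster's label; (4.0, 1) is out-voted to 0.
label_of = {x: left_label if x in left else right_label for x in points}
print(label_of[4.0])                           # 0: the noisy label was corrected
```

The unlabeled points are what make the two clusters (and the large gap between them) visible; without them, the noisy label at 4.0 would be much harder to detect.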
Once you’re comfortable with these learning approaches, the next step is to explore the different image tasks we encounter in computer vision, including classification, detection, and segmentation.
Further Reading
- wsl-eccv20.github.io – ECCV 2020 Tutorial on Weakly-Supervised Learning in Computer Vision
- Exploring Different Image Tasks For Your Next Project – MachineMindscape