5 Types of Machine Learning Algorithms

Machine learning is no small subject. It’s an intricate and diverse practice with many facets. And at its core sits the ability to learn. But learning itself is quite complex.

Learning is the ability to acquire knowledge or skill from experience. Since there are a variety of different ways to learn from the information around us, it’s important to explore the existing approaches that help machine learning models learn from and utilise information.

Of the many techniques that exist, we’ll be looking at five more that stand out.


It All Works With Statistical Inference

Machine learning models reach decisions through the data that you give them or that they have access to.

Reaching a decision from evidence is an inference.

In machine learning, an inference is also the making of a prediction or the fitting of a model.

Here are a number of ways that ML approaches learning:

Inductive Learning

Similar to inductive reasoning, where we use evidence or labelled information to make categorisations or determine an outcome, inductive learning is an artificial recreation of that process.

You can think of it as the reasoning pattern that traditional supervised machine learning follows.

This requires building and training a machine learning model on an existing labelled training dataset. Once trained, you use the model to predict the labels of unlabelled test data.

Take, for example, a collection of 10,000 images of cats and dogs. You want to differentiate between them, but labelling each one by hand would take too much time. So you train a model on 40 labelled pictures: 20 of dogs and 20 of cats.

Through inductive learning, the model is then able to identify and categorise every remaining image.

Just as you look at an image of a dog or a cat and conclude that it is, in fact, a dog or a cat, the model you build follows a similar pattern of reasoning: from specific examples to a general rule. That is induction.
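Here’s a minimal sketch of what that looks like in practice, using scikit-learn. The random feature vectors stand in for real image features, and the dataset sizes mirror the example above; none of this comes from a specific production pipeline.

```python
# A minimal sketch of inductive learning with scikit-learn.
# The feature vectors stand in for image features; in practice you would
# extract them from the 10,000 cat/dog images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 40 labelled examples: 20 dogs (label 0) and 20 cats (label 1).
X_labelled = rng.normal(size=(40, 16))
y_labelled = np.array([0] * 20 + [1] * 20)

# The remaining 9,960 images are unlabelled.
X_unlabelled = rng.normal(size=(9960, 16))

# Induce a general rule from the labelled examples...
model = LogisticRegression().fit(X_labelled, y_labelled)

# ...then apply it to images the model has never seen.
predicted_labels = model.predict(X_unlabelled)
```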

Deductive Inference

Working in the opposite direction to induction, deductive inference uses the general rules that inductive learning produces to make predictions.

Making those predictions is essentially an act of deduction. 

Deduction follows the reasoning that if all of the premises hold, the conclusion must follow.

Take the 10,000 images of cats and dogs. Once inductive learning has occurred and those images have been categorised, you’re able to make predictions, or deduce insights, based on those findings.

Maybe 8,000 of the images were of cats and 2,000 of them were of dogs. You might make the prediction (or deduction) that you will receive more cat images than dog images in the future. Or perhaps that cats are the preferred pet, given the influx of cat images.
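As a rough illustration (reusing the `predicted_labels` variable from the inductive sketch above), the deductive step is simply reading conclusions off the model’s outputs:

```python
# Deduce insights from the categorised images (continues the sketch above).
import numpy as np

n_cats = int(np.sum(predicted_labels == 1))
n_dogs = int(np.sum(predicted_labels == 0))
print(f"cats: {n_cats}, dogs: {n_dogs}")

# If cats dominate the collection, deduce that future submissions
# will likely follow suit.
if n_cats > n_dogs:
    print("Prediction: expect more cat images than dog images in future.")
```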

Transductive Learning

Transductive learning, or transduction, is based on identifying patterns or structures within unlabelled datasets in order to improve the accuracy of an existing model.

This approach requires both the training and the testing data to be available up front, so it can improve on the correlations or clustering of the unlabelled points. So in the absence of the testing data, an inductive learning approach would need to be taken instead.

Simply put, transduction focuses on finding specific correlations in data for predictions. 

Going back to the 10,000 images and the 40 labelled pictures that you have, transduction would look at specifics like the similarities between the images themselves, then output predictions specific to those features and to that exact collection.
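For a concrete, if simplified, picture, scikit-learn’s LabelSpreading is a transductive method: it takes the labelled and unlabelled points together and propagates labels across the structure of the whole dataset. The data below is synthetic and purely illustrative.

```python
# A minimal sketch of transduction with scikit-learn's LabelSpreading.
# Unlabelled points are marked with -1; the algorithm uses the geometry of
# the *entire* dataset (labelled + unlabelled) to assign labels.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)

X = rng.normal(size=(200, 16))   # all 200 points are known up front
y = np.full(200, -1)             # -1 means "unlabelled"
y[:20], y[20:40] = 0, 1          # only 40 points carry labels

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y)

# transduction_ holds labels for every point in this specific dataset;
# unlike induction, there is no general rule to apply to future data.
print(model.transduction_[:10])
```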

Hybrid Learning Problems

Because supervised and unsupervised learning are separate concepts and handle data differently, they are limited in the problems they can solve. That said, there are a number of hybrid approaches that take influence from each of these concepts:

5) Semi-Supervised Learning

Based on supervised learning, semi-supervised learning draws on elements from unsupervised learning models to solve problems.

While both supervised and semi-supervised models can be given training data with only a few labelled examples and a large amount of unlabelled ones, supervised learning tends to make good use of the labelled data only.

Semi-supervised learning models, on the other hand, make better use of all of the available data, including the unlabelled examples.

To make effective use of unlabeled data, engineers require unsupervised machine learning methods. Take clustering as an example, where a model will group related data that could be of potential use.

With supervised learning as a foundation, labels can then be applied to this previously unlabelled data, which in turn can be used for predictions, insights, etc.
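Here’s a hedged sketch using scikit-learn’s SelfTrainingClassifier, which wraps a supervised base model and iteratively pseudo-labels the unlabelled examples it’s most confident about. The numbers are made up for illustration.

```python
# A minimal semi-supervised sketch: a supervised base classifier is
# retrained on its own most confident predictions over unlabelled data.
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

X = rng.normal(size=(500, 16))
y = np.full(500, -1)             # most examples are unlabelled (-1)
y[:25], y[25:50] = 0, 1          # a small labelled core

base = SVC(probability=True)     # base model must expose predict_proba
model = SelfTrainingClassifier(base, threshold=0.8).fit(X, y)

# Unlike plain supervised learning, the unlabelled rows influenced training.
print(model.predict(X[:5]))
```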

4) Self-Supervised Learning

Self-supervised learning is rooted in unsupervised learning, but it frames problems as pretext tasks that can be solved with supervised learning models. The big difference is that it requires only unlabelled data to come up with solutions.

A pretext task deliberately creates issues with your data for the algorithm to fix, e.g. miscolouring or removing pixels in images, rotating text or images, adjusting language for translation, etc.

Because the correct answer is already known (it’s the original, uncorrupted data), supervised learning algorithms can solve these pretext tasks and output a model that can then be used to solve the original problem.

One of the most common applications of self-supervised learning is in computer vision, where a collection of unlabelled images aids in training models that solve particular image-based problems.

For example, turning images to grayscale and having the model predict the colours that best represent them. Or removing pieces from an image and having the model fill in the missing parts.
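Here’s a minimal sketch of one such pretext task, rotation prediction, where the “labels” are manufactured from the data itself. The arrays below are stand-ins for real images.

```python
# A minimal self-supervised sketch: a rotation-prediction pretext task.
# Labels are generated from the data itself (the rotation we applied),
# so no human annotation is needed.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
images = rng.random(size=(400, 8, 8))    # stand-ins for unlabelled images

X, y = [], []
for img in images:
    k = rng.integers(4)                  # rotate by 0, 90, 180 or 270 degrees
    X.append(np.rot90(img, k).ravel())   # the transformed input
    y.append(k)                          # the pretext label: which rotation?

# Solving the pretext task forces the model to learn useful image features.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
model.fit(np.array(X), np.array(y))
```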

3) Multi-Task Learning

Another type of supervised learning, multi-task learning, involves using one model to address multiple related problems. 

Typically, engineers train models to do a single task. But this can be inefficient and time-consuming when the same set of data could handle other related tasks too.

By creating and training one model on multiple related tasks, you’re able to improve that model’s performance and efficiency, resulting in better predictions.

Say you want to not only categorise those cat and dog images, but also do object detection and maybe background scene classification. Instead of creating three separate models for those outcomes, you can create one.
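Here’s a hedged PyTorch sketch of that idea: one shared trunk feeds three task-specific heads, and a combined loss trains them together. Layer sizes and head outputs are illustrative assumptions, not a prescribed architecture.

```python
# A minimal multi-task sketch (PyTorch): one shared encoder, three heads.
# Sizes and task definitions are illustrative.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
        self.classify = nn.Linear(128, 2)  # cat vs. dog
        self.detect = nn.Linear(128, 4)    # object bounding box (x, y, w, h)
        self.scene = nn.Linear(128, 5)     # background scene class

    def forward(self, x):
        h = self.shared(x)                 # features shared by all tasks
        return self.classify(h), self.detect(h), self.scene(h)

net = MultiTaskNet()
x = torch.randn(8, 256)                    # a batch of image features

logits_cls, boxes, logits_scene = net(x)

# One combined loss trains the shared trunk on all three tasks at once
# (targets are random placeholders here).
loss = (nn.functional.cross_entropy(logits_cls, torch.randint(2, (8,)))
        + nn.functional.mse_loss(boxes, torch.randn(8, 4))
        + nn.functional.cross_entropy(logits_scene, torch.randint(5, (8,))))
loss.backward()
```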

2) Active Learning

Active learning is where things start getting crazy. This is a technique where the model has the ability to ask, or query, a human operator to resolve any ambiguity it runs into while learning.

An active learning model autonomously collects training examples and learns by itself. It does this by asking its human operator for labels for new data points.

“The key idea behind active learning is that a machine learning algorithm can achieve greater accuracy with fewer training labels if it is allowed to choose the data from which it learns.” – Burr Settles, Active Learning Literature Survey.

This learning process is highly useful for cases where data is either scarce or expensive to collect and label. It tends to work very well in fields like computational biology.
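Here’s a minimal sketch of one common strategy, uncertainty sampling: the model repeatedly picks the unlabelled points it’s least sure about and asks for their labels. The human operator is faked with random labels here, purely for illustration.

```python
# A minimal active learning sketch using uncertainty sampling.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(1000, 16))     # a large pool of unlabelled data
X_lab = rng.normal(size=(20, 16))        # a tiny labelled seed set
y_lab = rng.integers(2, size=20)

for round_ in range(5):
    model = LogisticRegression().fit(X_lab, y_lab)

    # Query: find the pool points the model is least confident about.
    proba = model.predict_proba(X_pool)
    uncertainty = 1 - proba.max(axis=1)
    ask = np.argsort(uncertainty)[-10:]  # the 10 most ambiguous points

    # In practice a human operator labels these; here we fake the oracle.
    new_labels = rng.integers(2, size=10)

    X_lab = np.vstack([X_lab, X_pool[ask]])
    y_lab = np.concatenate([y_lab, new_labels])
    X_pool = np.delete(X_pool, ask, axis=0)
```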

1) Online Learning

Online learning uses data as it becomes available, directly updating the model before making new predictions or observations.

This type of approach solves the problem of outputting predictions based on dynamic observations. While constantly being fed new, streaming data, the model adjusts its behaviour over time according to the frequent changes in information.

Having fluid access to new data is critical in industries that see constant behavioural changes in customers.

Take e-commerce for instance, where almost every product has the potential to change in popularity based on public perception. Because this can affect demand and thus profits, it’s important that the data being fed into a model is always recent.
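As a rough sketch, scikit-learn’s SGDClassifier supports this via partial_fit, which updates the model incrementally as each new batch streams in. The stream below is simulated with random data.

```python
# A minimal online learning sketch: the model is updated incrementally
# with partial_fit as each new batch of streaming data arrives.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()
classes = np.array([0, 1])               # e.g. "won't buy" vs. "will buy"

# Simulate a stream: each batch reflects the latest customer behaviour.
for t in range(100):
    X_batch = rng.normal(size=(32, 16))
    y_batch = rng.integers(2, size=32)

    # Update the existing model in place; no retraining from scratch.
    model.partial_fit(X_batch, y_batch, classes=classes)

    # The model can make predictions at any point mid-stream.
    next_customer = rng.normal(size=(1, 16))
    _ = model.predict(next_customer)
```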
