Fact-checked by Rafael Tabasca, Full Stack Developer @ Capicua.
Machine Learning can play chess better than grandmasters like Garry Kasparov and Magnus Carlsen. It recognizes human faces and identifies suspicious patterns. Moreover, it can analyze preferences to give personalized recommendations. Unsurprisingly, knowing how to code isn't enough to create such powerful tools. As you may know, ML entails complex math operations, and which ones you need depends on the complexity of the program you want to build. This article focuses on the leading math concepts ML Engineers use to build stunning apps, including well-known names like ChatGPT and OpenArtAI. Are you ready to talk about math in Machine Learning?
Linear functions are fundamental in Machine Learning and appear in a wide variety of popular algorithms. A linear function denotes a linear combination of inputs in the coordinate plane, with the inputs' coefficients serving as the function's parameters. In other words, a linear function responds to the equation y = wx + b, where y is the output, x is the input, w is the weight, and b is an offset (bias).
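As a quick illustration, the equation y = wx + b can be written as a tiny Python function; the weight and bias values below are arbitrary choices for the example:

```python
def linear(x, w=2.0, b=1.0):
    """Linear function y = w*x + b, with weight w and bias (offset) b."""
    return w * x + b

print(linear(3.0))  # 2.0 * 3.0 + 1.0 = 7.0
```

In a trained model, w and b would be learned from data rather than picked by hand.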
Despite their simplicity, linear functions perform well on many real-world problems, and they are helpful in Machine Learning for several reasons. The first is Linear Regression, where supervised-learning algorithms predict a continuous outcome (y) based on input variables (x). There's also Linear Classification, whose algorithms let you classify inputs into one of several predefined classes. Its most popular algorithm is the perceptron, which tags input data into one of two classes by using a linear function to separate the data into two regions.
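To make the perceptron idea concrete, here is a minimal sketch with NumPy; the toy dataset (a logical AND) and the learning rate are illustrative choices, not a production implementation:

```python
import numpy as np

def perceptron_predict(x, w, b):
    """Assign x to class 1 or 0 by which side of the line w.x + b = 0 it falls on."""
    return 1 if np.dot(w, x) + b >= 0 else 0

def perceptron_train(X, y, epochs=10, lr=0.1):
    """Perceptron learning rule: nudge w and b on every misclassified example."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            error = yi - perceptron_predict(xi, w, b)
            w += lr * error * xi
            b += lr * error
    return w, b

# Linearly separable toy data: class 1 only when both inputs are 1 (logical AND)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
print([perceptron_predict(xi, w, b) for xi in X])  # [0, 0, 0, 1]
```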
Further, linear functions underpin Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The first one, PCA, has many uses in Machine Learning, like dimensionality reduction: it finds the directions in the data that contain the majority of the variance and projects the data into a lower-dimensional space. Likewise, LDA is a powerful reduction technique critical in supervised classification problems. Some of its uses include separating two or more classes and modeling the differences between them.
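A bare-bones PCA sketch shows the idea: center the data, find the directions of largest variance (eigenvectors of the covariance matrix), and project onto them. The sample values here are made up for illustration:

```python
import numpy as np

def pca(X, n_components):
    """Project X onto the n_components directions of largest variance."""
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: the covariance matrix is symmetric
    order = np.argsort(eigvals)[::-1]       # sort directions by variance, largest first
    components = eigvecs[:, order[:n_components]]
    return X_centered @ components

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])
reduced = pca(X, n_components=1)
print(reduced.shape)  # (5, 1): five samples reduced to one dimension
```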
"Algebra is the intellectual instrument that has been created for rendering clear the quantitative aspects of the world" — Alfred North Whitehead.
Algebra is the branch of mathematics that deals with symbols and the rules that govern them. Scientists use it to investigate mathematical structures like equations, polynomials, and functions. Further, it's the foundation of several abstract concepts, like equations and variable operations, and the root of number properties like associativity, commutativity, and distributivity. Algebraic manipulation aids in the understanding of data patterns. Beyond Machine Learning, it's a necessary field for decision-making.
It's impossible to overstate the relevance of linear algebra in Machine Learning. It plays a foundational role in a wide range of ML algorithms. These go from simple regression analysis to more complex Deep Learning techniques. Many advanced and powerful Machine Learning models wouldn't exist without linear algebra!
Linear algebra is the cornerstone for analyzing relationships between variables in datasets. Take a simple linear regression model as an example: the model's coefficients, expressed in the language of linear algebra, define how each predictor variable contributes to the response variable. Let's talk about the aspects of linear algebra used in Machine Learning!
These are single numbers defining a quantity without direction. For example, a scalar of "3" can stand for three apples, three years, or three miles. In Machine Learning, scalars help to define relationships between variables. Python offers scalar types such as int, float, and bool, while None represents the absence of a value. NumPy's documentation holds a lot of info about its own scalar types.
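A quick look at these types in Python (the variable names and values are just examples):

```python
count = 3        # int: a whole number
rate = 0.5       # float: a real number
flag = True      # bool: a truth value
missing = None   # NoneType: the absence of a value

print(type(count).__name__, type(rate).__name__, type(flag).__name__)  # int float bool
```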
A vector is a mathematical structure that has both size and direction. It often represents physical quantities such as velocity, force, or acceleration. You can think of it as a list of numbers; in other words, a single-dimensional array of numbers. That would be a horizontal vector. Here's how you can create one using Python and NumPy.
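A minimal sketch with NumPy (the values are arbitrary):

```python
import numpy as np

# A horizontal vector: a single-dimensional array of numbers
my_vector = np.array([1, 2, 3])
print(my_vector.shape)  # (3,) — one dimension with three elements
```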
One quick way to create a vertical (column) vector is to combine the reshape(-1, 1) method with the shape attribute:

import numpy as np

my_vector = np.array([1, 2, 3])       # horizontal vector, shape (3,)
vr_vector = my_vector.reshape(-1, 1)  # vertical vector, shape (3, 1)
These are rectangular arrangements of numbers in rows and columns. Matrices can be represented as 2-dimensional arrays to manage large datasets. In turn, they support various operations, like multiplication, inversion, and decomposition.
In supervised learning, matrices hold the training data's features and labels. Likewise, they help depict similarities or dissimilarities between different examples in unsupervised learning. Matrix operations also apply to the optimization and regularization of Machine Learning models. With Python, you can create a matrix as a 2-dimensional list (a list of lists).
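For instance, a 3x3 matrix as a list of lists (the values are illustrative):

```python
matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]
print(len(matrix), len(matrix[0]))  # 3 3 — three rows of three elements each
```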
Built this way, three inner lists of three integers each form a 3x3 matrix: each inner list represents a row, and the integers within it are the row's elements. We can also create a matrix in Python by using the NumPy library, which has additional functionality for working with matrices:
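A sketch of the NumPy version (same illustrative values):

```python
import numpy as np

np_matrix = np.array([[1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9]])
print(np_matrix.shape)  # (3, 3)
print(np_matrix.T[0])   # first column, obtained via the transpose
```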
A tensor is a multidimensional array of numbers. In ML, tensors represent various data types, such as images, videos, and audio. Tensors can have any number of dimensions and represent data in multiple formats and types, including scalars, vectors, matrices, and higher-dimensional arrays. A one-dimensional tensor is a vector, while a two-dimensional tensor is a matrix.
In Deep Learning, tensors depict the data flowing through the network: input data, intermediate representations, and output predictions. They also appear in network computations, such as matrix multiplication and non-linear activation functions.
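With NumPy arrays as stand-in tensors, the progression from scalar to higher-dimensional data looks like this (the "image batch" shape is an invented example):

```python
import numpy as np

scalar = np.array(5)                 # 0-D tensor
vector = np.array([1, 2, 3])         # 1-D tensor (a vector)
matrix = np.array([[1, 2], [3, 4]])  # 2-D tensor (a matrix)
images = np.zeros((2, 4, 4, 3))      # 4-D tensor: 2 images, 4x4 pixels, 3 channels

print(scalar.ndim, vector.ndim, matrix.ndim, images.ndim)  # 0 1 2 4
```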
Statistics explores data collection, organization, analysis, interpretation, and presentation. It draws on probability theory, mathematical analysis, and algorithms to reach conclusions about large datasets. Further, it provides fundamental concepts and methods for comprehending and analyzing data.
The science of statistics is often used in ML to model or predict outcomes from given inputs. Statistical models allow predictions based on previous observations, opening the door to new insights. Machine Learning algorithms identify patterns in input data and use them to predict events; to do so, they apply regression analysis, clustering, and classification. As a result, they offer greater accuracy than traditional methods. Statistics' main types are descriptive and inferential.
The descriptive type presents summaries of datasets' properties without drawing conclusions or making predictions. Its techniques include histograms, boxplots, scatter plots, and bar charts. Further, these enable exploratory data analysis and the narrowing of large datasets.
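Python's standard statistics module covers the basic descriptive summaries; the dataset here is made up:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(statistics.mean(data))    # 5: the average value
print(statistics.median(data))  # 4.5: the middle value
print(statistics.mode(data))    # 4: the most frequent value
```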
The inferential type focuses on predictions or generalizations based on a sample's data. This kind of analysis helps researchers draw conclusions about a population. In ML, these methods help forecast with incomplete data or extrapolate trends from small datasets.
This statistical value describes a dataset's variation or diversity. The most common examples are range, interquartile range, variance, and standard deviation. The measure of spread is essential in ML because it provides insight into the data distribution. With it, specialists can make informed decisions on feature scaling and model selection. Furthermore, they can identify outliers and unusual values that may need special treatment.
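The standard library computes these measures of spread directly; the sample data is invented:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
spread_range = max(data) - min(data)   # range: 9 - 2 = 7
variance = statistics.pvariance(data)  # population variance: 4
std_dev = statistics.pstdev(data)      # population standard deviation: 2.0
print(spread_range, variance, std_dev)
```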
This method focuses on making inferences about a population based on a sample of data. It entails developing a null and an alternative hypothesis. Then, it applies statistics to determine the chance of obtaining the observed data if the null hypothesis were true. The test results determine whether researchers should reject the null hypothesis.
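A one-sample t-test illustrates the flow; the sample values and the null-hypothesis mean (5.0) are invented, and the critical value comes from a standard t-table:

```python
import math
import statistics

def one_sample_t(data, mu0):
    """t statistic for H0: the population mean equals mu0."""
    n = len(data)
    sd = statistics.stdev(data)  # sample standard deviation
    return (statistics.mean(data) - mu0) / (sd / math.sqrt(n))

sample = [5.1, 4.9, 5.3, 5.2, 5.0, 5.4, 5.1, 5.2]
t = one_sample_t(sample, mu0=5.0)
# For a two-sided test at the 5% level with 7 degrees of freedom, the
# critical value is about 2.365; |t| above it means rejecting H0.
print(round(t, 3))  # about 2.646
```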
This statistic measures the likelihood of an event happening. Each event is assigned a number between 0 and 1, where 0 denotes an impossible event and 1 a certain one. Probability Theory is a fundamental concept used in a wide range of fields, including decision-making, risk management, and, of course, Machine Learning. In ML, it drives predictions and performance estimates. Many ML algorithms, like Bayesian, Markov, and Gaussian models, rely on it.
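Bayes' theorem, the backbone of Bayesian methods, is a small but telling example; the test-accuracy and prevalence numbers below are invented for illustration:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Scenario: a test with 99% sensitivity and 95% specificity
# for a condition affecting 1% of the population.
p_condition = 0.01
p_pos_given_condition = 0.99
p_pos_given_healthy = 0.05  # false positive rate = 1 - specificity

p_positive = (p_pos_given_condition * p_condition
              + p_pos_given_healthy * (1 - p_condition))
p_condition_given_pos = p_pos_given_condition * p_condition / p_positive
print(round(p_condition_given_pos, 3))  # 0.167: a positive test is far from certain
```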
There are some statistics concepts to know before diving into Machine Learning algorithms.
● Data. It's the info collected and analyzed to draw conclusions or make inferences. Data can be numerical or categorical; researchers gather it with several techniques.
● Population. It refers to all objects or measurements whose properties are under study. A sample is drawn from this entire set of observations or data points.
● Sampling. The concept refers to selecting a subset from a larger population. The most common sampling types include Stratified, Cluster, and Multistage.
● Parameters & Variables. A parameter is a metric used to represent a population trait. Meanwhile, variables are metrics of interest for each entity in a population.
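The sampling step above can be sketched with Python's random module; the population size, sample size, and seed are arbitrary:

```python
import random

random.seed(42)                         # fixed seed so the example is reproducible
population = list(range(1, 101))        # a population of 100 observations
sample = random.sample(population, 10)  # simple random sample of size 10
print(len(sample))  # 10
```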
Math is an essential subject in Machine Learning. You need to handle a few math concepts to understand how ML algorithms work and how to create models. These topics include linear functions, linear algebra, and multivariate calculus. Statistics is also a pillar of Machine Learning: it allows ML to understand and make sense of the underlying structure of the data. All of the above is vital to building accurate, robust models that forecast success.