Every person has to make decisions in life, and these decisions are situation-dependent. Making the right decision helps you face a situation in the best manner and solve the problem in the most straightforward way. In childhood, most of your decisions revolve around what you eat and things related to school.
As you grow up, your decisions start having more serious implications, not only for your own life but also for the lives of others. At some point, you will be making decisions concerning your career or business. This analogy introduces the concept of a decision tree in machine learning.
What is a decision tree?
To start with, a decision tree is a predictive model, a tool that supports decisions. It delivers inferences using a representation that follows a tree-like structure. The primary objective of this machine learning model is to consider certain attributes of a target and then make decisions on the basis of those attributes.
Most of the decisions in a decision tree follow conditional statements: if and else. The deeper the tree, the more complex the rules governing it, and the more finely it fits the data. It is one of the most preferred supervised learning models in machine learning and is used in a number of areas. It can be pictured as a flowchart, designed with algorithmic techniques so that the splitting is done according to conditions.
The structure of this flowchart is quite simple. It has a root node that serves as the foundation of the model. Internal nodes and branches then represent feature tests and the outcomes of those tests, respectively. Each leaf node represents a group of similar values, the values reached once decisions on the related attributes have been made.
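To make the flowchart analogy concrete, here is a minimal sketch of how such a tree reduces to nested if/else statements. The attributes and thresholds below are invented purely for illustration:

```python
def classify_candidate(experience_years: float, has_degree: bool) -> str:
    """A tiny hand-written decision tree expressed as nested conditionals.

    The first test is the root node, the nested test is an internal node,
    and each returned string is a leaf (a final decision).
    """
    if experience_years >= 5:       # root node test
        return "shortlist"          # leaf
    if has_degree:                  # internal node test
        return "interview"          # leaf
    return "reject"                 # leaf

print(classify_candidate(7, False))  # -> shortlist
print(classify_candidate(2, True))   # -> interview
```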
Decision trees primarily find their uses in classification and regression problems. They are used to create automated predictive models that serve many applications, not only in machine learning but also in statistics, data science, and data mining, among other areas. These tree-based structures deliver some of the most accurate predictive models, ones that are both easily interpretable and more stable than many other predictive models.
Unlike linear models, which suit only a certain class of problems, models based on decision trees can also map non-linear relationships. No wonder decision trees are so popular. One very important reason for this is how easy the final decision tree model is to understand: it can quite clearly describe what was behind a prediction. Decision trees are also the basis of more advanced ensemble methods, including gradient boosting, bagging, and random forests.
How do you define a decision tree?
Now that we have developed a basic understanding of the concept, let us define it. A decision tree is a supervised machine learning algorithm that can be used to solve both classification-based and regression-based problems. Let us see how it is used for classification.
Let us assume there is a data set that we are currently working on. We picture it as a 2D plane that can be divided into different regions such that the points in each region are assigned to the same class. Each division or split is denoted by a unique character. The tree we are working with here is a binary tree.
Now, several aspects of this decision tree have no prior representation; they are learned from the training data provided to us. These include the number of nodes the tree will have, the positioning of its edges, and its overall structure. We won't be creating the tree from scratch here. We will move forward assuming that our tree already exists.
Now, how can we classify new input points? We just have to move down the tree. While traversing, we ask a question about the data point at every node we reach. For instance, the answer to the question at the root node tells us whether to branch right or left. The general rule is: if the question asked is true, that is, if the condition at the node is met, we branch left; if it isn't, we branch right. When the conditions finally take us to a leaf node, we know which class the input point has to be assigned.
When it comes to how a decision tree is represented, there are a few things that should never be forgotten. There is no rule that says we must alternate between the two coordinates of the data while traversing the tree; we can keep splitting on a single feature or dimension. Also keep in mind that decision trees can be used with a data set of any dimension. We have taken 2D data in our example, but that doesn't mean decision trees are only for two-dimensional data sets.
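As a rough sketch of the traversal just described (the node layout and thresholds below are invented for illustration), classifying a new 2D point simply means walking from the root to a leaf, branching left when a node's condition is met:

```python
class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None, label=None):
        self.feature = feature      # index of the coordinate tested at this node
        self.threshold = threshold  # split value for that coordinate
        self.left = left            # branch taken when the condition is true
        self.right = right          # branch taken when the condition is false
        self.label = label          # class label if this is a leaf

def predict(node, point):
    # Walk down the tree, asking one question per node, until a leaf is reached.
    while node.label is None:
        node = node.left if point[node.feature] <= node.threshold else node.right
    return node.label

# A hand-built tree over 2D points: split on x first, then on y.
tree = Node(feature=0, threshold=3.0,
            left=Node(label="A"),
            right=Node(feature=1, threshold=1.5,
                       left=Node(label="B"),
                       right=Node(label="C")))
print(predict(tree, (2.0, 4.0)))  # -> "A"
print(predict(tree, (5.0, 2.0)))  # -> "C"
```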
Have you ever played a game of Twenty Questions? It is quite similar to how decision trees work. The ultimate objective of Twenty Questions is to figure out the object the answerer is thinking of, and the questions can only be answered with a yes or a no.
As you move ahead in the game, the previous answers tell you which specific questions to ask in order to get to the right answer before the game ends. A decision tree is your series of questions: it guides you to ask ever more relevant questions until you reach the final answer.
Do you remember how a company's voicemail directs you to the person you want to speak to? You first speak to the computerized assistant, then press a series of buttons on your phone and enter a few details about your account before you finally reach the right person. This can be a troublesome experience, but it is how many companies use decision trees to help their customers reach the right department or talk to the right person.
How does a decision tree work?
Thinking about how to create a perfect decision tree? As we alluded to earlier, decision trees are a class of algorithms used to solve classification and regression problems in machine learning. They can handle both categorical and continuous variables.
The algorithm moves forward in a simple way: it partitions the sample data into different subsets, each grouping together records that share the same attribute values. Decision trees employ a number of algorithms for different purposes: identifying the best split, finding the most important variables, and choosing the value at which to produce further subdivisions.
Typically, the workflow of a decision tree involves dividing the data into a training and a test set, applying the algorithm, and evaluating the model's performance (a runnable sketch follows below). Let's understand how it works with a very simple example. Suppose we want to check whether a person is right for a job or not. This question will be the root of the tree.
Now we move on to the features or attributes, which will constitute the internal nodes. Based on those attributes, decisions will be made, forming the branches of the tree. Let us make an assumption here: a person is considered right for the job if they have 5 or more years of experience. The first division will take place on this parameter that we have just set.
We need more parameters for further splitting. These could concern whether the candidate belongs to a certain age group, carries a certain degree, and so on. The results are depicted by the leaves of the tree. Leaves never split; they depict the final decisions. This tree will help you decide whether a candidate is right for the job or not.
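Here is a minimal sketch of this workflow using scikit-learn. The tiny candidate data set below, and the features chosen for it, are made up for illustration:

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Invented features: [years_of_experience, age, has_degree (0/1)]
X = [[6, 32, 1], [2, 24, 1], [8, 41, 0], [1, 22, 0],
     [5, 29, 1], [3, 27, 0], [10, 45, 1], [4, 31, 1]]
y = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = right for the job, 0 = not

# 1. Divide the data into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# 2. Apply the algorithm: fit a decision tree on the training set.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# 3. Evaluate model performance on the held-out test set.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```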
As already mentioned, a decision tree has its own peculiar representation that enables it to solve a problem for us. It has a root, internal nodes, branches, and leaves, each serving a specific purpose. These steps will help you build the tree representation (a runnable sketch follows the list):
- Place the best attribute of the data at the root of the tree
- Split the sample data into subsets using the appropriate attributes, ensuring that each new subset carries the same value for the attribute it was split on
- Repeat the above two steps until you have reached the leaves for every branch of your decision tree
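These steps can be sketched as a recursive procedure. The version below scores attributes with entropy, in the style of ID3; that scoring rule is our assumption for illustration, since several scoring rules exist:

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of class labels.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attributes):
    # Pick the attribute whose split gives the lowest weighted child entropy.
    def split_entropy(attr):
        total = 0.0
        for value in set(r[attr] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[attr] == value]
            total += len(sub) / len(labels) * entropy(sub)
        return total
    return min(attributes, key=split_entropy)

def build_tree(rows, labels, attributes):
    # Leaf: all labels agree, or nothing is left to split on.
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]
    attr = best_attribute(rows, labels, attributes)
    tree = {attr: {}}
    for value in set(r[attr] for r in rows):
        sub = [(r, l) for r, l in zip(rows, labels) if r[attr] == value]
        sub_rows, sub_labels = map(list, zip(*sub))
        tree[attr][value] = build_tree(sub_rows, sub_labels,
                                       [a for a in attributes if a != attr])
    return tree

# Invented toy records for illustration.
rows = [{"experience": "high", "degree": "yes"},
        {"experience": "low", "degree": "yes"},
        {"experience": "high", "degree": "no"},
        {"experience": "low", "degree": "no"}]
labels = ["hire", "hold", "hire", "reject"]
print(build_tree(rows, labels, ["experience", "degree"]))
# -> {'experience': {'high': 'hire', 'low': {'degree': {'yes': 'hold', 'no': 'reject'}}}}
```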
Classification and regression trees (CART)
Let us take an example. Imagine we are given the task of classifying job candidates on the basis of some pre-defined attributes, to ensure that only deserving candidates are selected at the end of the process. The decision to select a candidate depends on real or possible events. All we need is a decision tree to apply the right classification criteria; the results will depend on how the classification is done.
Classification, as we all know, involves two steps. The first step is building the model on the sample (training) data set. The second step is prediction: the model trained in the first step is used to predict the response for given data.
Now, there are certain situations in which the target variable is a real number, or decisions are made on continuous data. You may be asked to make a prediction regarding the price of an item based on the cost of labour. Or you may be asked to decide the salary of a candidate based on their previous salary, skill set, experience, and other relevant information.
The value of the target variable in these situations will either be a real number or a value associated with a continuous data set. We will use the regression version of a decision tree to solve these problems. This tree considers the observations made on an object's features and trains a model to make predictions that provide a sensible continuous output.
Let us now talk about a few similarities and differences between classification and regression decision trees. Decision trees are used as classification models in situations where the target variables are categorical in nature. The value assigned at a terminal node is the mode of the training observations that fall in that section of the tree. When a new observation lands in that section, we assign it this mode value and thereby make the prediction.
On the other hand, decision trees are used as regression models when the target variables come from a continuous data set. The value received at the same point that we discussed for classification trees is, for regression trees, the mean of the observations in that section.
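In code, this difference at a terminal node is just mode versus mean. The observation values below are invented for illustration:

```python
from statistics import mode, mean

# Training observations that ended up in the same terminal-node region.
leaf_classes = ["yes", "yes", "no", "yes"]   # classification tree
leaf_targets = [52000, 48000, 50500, 49000]  # regression tree (e.g., salaries)

print(mode(leaf_classes))  # classification leaf predicts the mode -> "yes"
print(mean(leaf_targets))  # regression leaf predicts the mean -> 49875
```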
There are a few similarities too. Both decision tree models use a recursive binary approach and divide the independent variables into definite, non-overlapping regions. In both trees, division starts at the top, where all the observations lie in a single region. The first split divides this region into two branches, and the division continues recursively, giving way to a fully grown tree.
Importance of Decision Trees in Machine Learning
If you need an efficient method for making decisions, decision trees in machine learning can help. They make decisions easy by laying out the problem and describing the potential outcomes, which helps developers evaluate the different consequences of a decision.
Once trained, a decision tree can be applied to new data. As a result, it can help with predicting results for future data. Decision trees also simplify the process of arriving at outcomes through visual representation.
Key Terminologies Associated with Decision Trees
A few key terminologies associated with decision trees in machine learning are as follows:
- Decision node: Also known as an internal node, it is a node within the decision tree that segments into two or more sub-nodes based on the variables.
- Root node: It refers to the topmost node of a decision tree, from which the first decision flows.
- Leaf node: The leaf node denotes a terminal or external node. Since it comes last in the decision tree, it is the farthest from the root node and has no children.
- Pruning: It involves reducing a decision tree so that it contains only the crucial nodes and outcomes.
- Splitting: It involves dividing one node into multiple sub-nodes. This is where the decision tree gets divided on the basis of the variables.
Types of Decision Trees in Machine Learning
Decision trees can be classified into classification trees and regression trees. Together, they are referred to as CART (classification and regression trees). The functions of these decision trees revolve around classifying and predicting. The two types of decision trees in machine learning are as follows:
Classification Trees
A decision tree classifier figures out whether some event has happened or not. It usually provides a “yes” or “no” outcome. An example of using a classification tree follows:
Example: Homeownership Based on Income and Age
A decision tree classifier splits the data set according to the variables. In this example, we have age and income as the two variables for determining whether an individual will purchase a home. Suppose you find out that 60% of people over the age of 30 have bought a house.
Here, the data will get split on age, which becomes the first node in the decision tree. The split makes the data pure to a certain extent. The second node then addresses income from where the first node leaves off.
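A small scikit-learn sketch of this example follows. The age/income records are fabricated, and the actual splits the library learns depend on the data it is given:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 40000], [34, 65000], [45, 80000], [23, 30000],
     [38, 72000], [29, 48000], [52, 90000], [27, 35000]]  # [age, income]
y = [0, 1, 1, 0, 1, 0, 1, 0]  # 1 = bought a house, 0 = did not

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# Print the learned split structure as text.
print(export_text(clf, feature_names=["age", "income"]))
```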
Regression Trees
Regression trees are useful for predicting continuous values based on information from the past. They help algorithms predict what may happen next by considering past trends or behavior.
Example: Predicting Housing Prices
Regression trees are extremely useful for predicting values such as the price of a house. The regression model helps predict housing prices in the upcoming years with the help of prices from previous years.
The underlying trend here is roughly linear, since house prices tend to keep increasing over time. Machine learning helps to predict specific prices according to the true series of values from the past; note that a regression tree approximates such a trend with piecewise-constant predictions rather than a straight line.
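A minimal regression-tree sketch, with invented year/price figures:

```python
from sklearn.tree import DecisionTreeRegressor

# Illustrative only: the year/price figures below are made up.
X = [[2015], [2016], [2017], [2018], [2019], [2020], [2021], [2022]]
y = [300, 315, 330, 350, 375, 405, 430, 460]  # price in thousands

reg = DecisionTreeRegressor(max_depth=2).fit(X, y)
# Each leaf predicts the mean price of the training years that fall in it.
print(reg.predict([[2023]]))
```

One caveat of this design: a tree cannot extrapolate beyond the training range, so a query past the last training year simply returns the mean of the most recent leaf.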
How to learn a CART model?
There are a few important things you are required to do to create a CART model. These include choosing the input variables and the points of division in a way that the tree is properly constructed. A greedy algorithm that minimizes a cost function is used to choose both the input variables and the points of division.
The construction of the tree is terminated with the help of a stopping criterion, which is defined in advance. The stopping criterion could mention anything, such as the minimum number of training instances assigned to each of the tree’s leaf nodes.
1. Greedy algorithm: The input space has to be split correctly to build a binary tree. Recursive binary splitting is the greedy algorithm used for this purpose. It is a numerical method in which different values are lined up, and a cost function is used to try and test several candidate points of division. The division point with the minimum cost is chosen. All input variables and all possible division points are evaluated this way (a runnable sketch follows this list).
2. Tree pruning: The stopping criterion improves the performance of your decision tree. To make it even better, you can try pruning the tree after learning. The number of divisions a decision tree has tells a lot about how complex it is. Simpler trees are generally preferred: they don’t overfit the data, and they are easier to decipher.
The best way to prune a tree is to look at every leaf node and find out how removing it will impact the tree. A leaf node is removed when its removal warrants a drop in the cost function. When you find you can no longer improve performance, you can stop this removal process. The pruning methods you can use include reduced error pruning and cost-complexity (weakest link) pruning.
3. Stopping criterion: The greedy splitting method that we talked about earlier has to have a stopping condition to know when to stop. A common criterion is the number of instances assigned to every leaf node. If that number is reached, the division won’t happen, and that node will be considered final.
For example, let’s say that the predefined stopping criterion is set at five instances. This number also says a lot about how exactly the tree fits the training data. If it’s too precise or exact, it will result in overfitting, which means poor performance on new data.
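Here is the runnable sketch promised above: recursive binary splitting with a Gini cost function (point 1), stopped once a node holds too few instances (point 3). It is a bare-bones, single-variable illustration, not a full CART implementation:

```python
def gini(labels):
    # Gini impurity of a list of class labels.
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    # Point 1: line up every observed value as a candidate division point
    # and keep the one with the minimum weighted Gini cost.
    best_t, best_cost = xs[0], float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        cost = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

def grow(xs, ys, min_size=5):
    # Point 3: a node with min_size instances or fewer, or an already-pure
    # node, becomes a leaf predicting the mode class.
    if len(ys) <= min_size or gini(ys) == 0.0:
        return max(set(ys), key=ys.count)
    t = best_split(xs, ys)
    left = [(x, y) for x, y in zip(xs, ys) if x <= t]
    right = [(x, y) for x, y in zip(xs, ys) if x > t]
    if not left or not right:  # degenerate split: stop here
        return max(set(ys), key=ys.count)
    lx, ly = map(list, zip(*left))
    rx, ry = map(list, zip(*right))
    return {"split": t, "left": grow(lx, ly, min_size), "right": grow(rx, ry, min_size)}

xs = [1, 2, 3, 6, 7, 8, 9, 10]
ys = ["no", "no", "no", "yes", "yes", "yes", "yes", "yes"]
print(grow(xs, ys, min_size=3))  # -> {'split': 3, 'left': 'no', 'right': 'yes'}
```

Extending this to several input variables just means running the same threshold search over each variable and keeping the cheapest split overall.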
How to avoid overfitting in a decision tree?
Most decision trees are prone to overfitting. If we keep growing a tree until it classifies the training data in an ideal manner, or until we run out of attributes for division, the result suits the training data set but won’t work well with the testing data set. You can follow either of the two approaches we are going to mention to avoid this situation.
You can either prune the tree if it has grown too large or stop its growth before it reaches the state of overfitting. In most cases, a limit is defined to control the growth of the tree, specifying the depth, the number of layers, and other properties it can have. The data set is divided into a training data set and a test data set. Trees with different maximum depths are grown on the training data set and tested against the test data set. You can also use cross-validation along with this approach.
When you choose to prune the tree, you test the pruned editions of the tree against the original version. If a pruned tree does better than the original when tested against the test data set, you keep removing leaves for as long as this situation persists.
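A sketch of both approaches in scikit-learn: cross-validation to pick a depth limit (stopping growth early) and the `ccp_alpha` parameter for cost-complexity pruning. The data here is synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic data, for illustration only.
X, y = make_classification(n_samples=200, random_state=0)

# Cross-validate over depth limits (pre-stopping) and pruning strengths
# (cost-complexity pruning) instead of growing one unrestricted tree.
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 3, 4, None],
                "ccp_alpha": [0.0, 0.005, 0.01, 0.02]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```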
Advantages of the decision tree approach
- It can be used with continuous as well as categorical data.
- It can deliver multiple outputs.
- Its results are precise and easy to interpret, and the reliability of a tree can be quantified and trusted.
- With this method, you can explore data, find important variables, uncover relationships between variables that strengthen the target variable, and build new features, all in a lot less time.
- It is easy to understand and explain to others.
- It is helpful in cleaning data. In comparison to other methods, it doesn’t take too much time, as missing values and outliers have little impact on it beyond a certain point.
- The efficiency and performance of decision trees are not affected by non-linear relationships between features.
- It doesn’t take much time to prepare the data, as there is no need for missing-value replacement, data normalization, and the like.
- It is a non-parametric approach: it makes no assumptions about the design of the classifier or the spatial distribution of the data.
Disadvantages of decision trees
- Some users build decision trees that are too complex, even for their own liking. Such trees don’t generalize the data as well as simpler trees do.
- Biased trees are often created when certain classes dominate. This is why it is very important to balance the sample data before it is used.
- Sometimes these trees are not very stable. Small variations in the data can result in the creation of a completely different tree. This anomaly is referred to as variance, and it can be dealt with by using boosting and bagging.
- You can’t expect a greedy algorithm to return the globally best decision tree. To mitigate this problem, you can train multiple trees.
Conclusion
This blog discusses all the important things that a learner needs to know about decision trees. After reading this blog, you will have a better understanding of the concept, and you will be in a better position to implement it in real life.
If you’re interested in learning more about machine learning & AI, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI, which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.
What is the decision tree algorithm used for?
A part of the family of supervised learning algorithms, decision trees are among the most widely used classification algorithms. They are very easy to understand and interpret, which accounts for their popularity. Decision trees can be employed to develop training models that predict the values of target variables based on simple decision rules derived from historical training data. A notable strength of the decision tree algorithm is that it can be used efficiently for both classification and regression problems, whereas many other supervised learning algorithms handle only one of the two. Different kinds of decision trees can be used based on the type of target variable.
What are the applications of Decision Tree Algorithm?
In AI, the decision tree algorithm comes with a wide array of applications. Some of the most interesting applications include evaluating potential growth opportunities for companies on the basis of historical data: historical sales data can help decision trees indicate possible routes for business expansion and growth. Decision trees can also be used to find potential clients using demographic information. Financial institutions can likewise apply decision trees to create predictive models for assessing the creditworthiness of customers and the likelihood of loan default.
What other algorithms are used in Artificial Intelligence?
Algorithms used in Artificial Intelligence can be broadly categorized into three groups: regression algorithms, classification algorithms, and clustering algorithms. Classification algorithms are used to classify data sets in a particular way. Clustering algorithms are applied to entire sets of data to find differences and similarities between specific data points; for instance, they can be used to pick out people of the same age within a large group of customers. Regression algorithms are helpful in forecasting future outcomes based on the input data. For instance, regression algorithms can be used to design models for predicting the weather.