The Decision Tree algorithm is one of the most widely used supervised machine learning algorithms today. Although it can be used for both classification and regression problems, it is most often applied to classification tasks.

The structure of a decision tree is quite simple: internal nodes represent the features of the dataset, branches represent decision rules, and each leaf node describes an outcome.

Every decision tree has two kinds of nodes: decision nodes and leaf nodes. Decision nodes have multiple branches, while leaf nodes hold the outcome of the decisions taken to reach that point. These decisions are made based on the features of the given dataset.

The algorithm is called a decision tree because it starts with a root node and expands into many branches, forming a structure like that of a tree. It simply asks a question and, based on the answer (Yes/No), splits into subtrees.

To build the tree, we use the CART algorithm, which stands for Classification And Regression Tree.
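As a minimal sketch of CART in practice, scikit-learn's DecisionTreeClassifier implements an optimized version of the algorithm (this assumes scikit-learn is installed; the iris dataset is used purely for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a toy dataset and hold out a test split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# criterion="gini" is CART's default attribute-selection measure
clf = DecisionTreeClassifier(criterion="gini", random_state=42)
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")
```

The fitted tree can then classify new records by following its branches from the root to a leaf.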

The diagram below is a graphical representation of the Decision Tree algorithm.

Why use the Decision Tree algorithm?

Data scientists have many machine learning algorithms, supervised and unsupervised, to choose from. The challenge is deciding which algorithm to use in which scenario. Below are two reasons for choosing the decision tree algorithm:

  • Decision trees mimic the way humans think when making a decision, so they are easy to understand.
  • The logic behind a decision tree is easy to follow because it is shown as a tree-like structure.

Decision Tree Terminologies

  • Root Node: The root node is where the decision tree starts. It represents the entire dataset, which is then divided into two or more homogeneous sets.
  • Leaf Node: Leaf nodes are the final output nodes; the tree cannot be split further after reaching a leaf node.
  • Splitting: Splitting is the process of dividing a decision node (or the root node) into sub-nodes according to the given conditions.
  • Branch/Sub-Tree: A subsection of the tree formed by splitting a node.
  • Pruning: Pruning is the process of removing unwanted branches from the tree.
  • Parent/Child Node: A node that splits into sub-nodes is called the parent of those sub-nodes, and the sub-nodes are its children.
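To make the terminology concrete, a tree can be sketched as a small node structure (the class and field names here are purely illustrative, not from any library):

```python
class Node:
    """A node in a binary decision tree: either a decision node or a leaf."""
    def __init__(self, feature=None, threshold=None, left=None, right=None, prediction=None):
        self.feature = feature        # feature this decision node tests
        self.threshold = threshold    # split rule: go left if value <= threshold
        self.left = left              # left child sub-tree
        self.right = right            # right child sub-tree
        self.prediction = prediction  # set only on leaf nodes (the outcome)

    def is_leaf(self):
        return self.prediction is not None

# The root node represents the whole dataset; splitting produces child
# sub-trees; leaves hold final outcomes and cannot be split further.
root = Node(feature="salary", threshold=50000,
            left=Node(prediction="decline"),
            right=Node(prediction="accept"))
```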

How does this Algorithm work?

Before implementing any algorithm, it is highly advisable for a data professional to understand its internal workings, or at least to have a basic grasp of its mechanism.

The algorithm starts at the root node of the tree. It compares the value of the root attribute with the corresponding attribute of the record in the dataset and, based on the comparison, follows a branch and jumps to the next node.

At the next node, the algorithm again compares the record's attribute value with the node's condition and moves further down. The process continues until it reaches a leaf node of the tree.

The working of the algorithm can be summarized in the following steps:

  • Step 1: Begin at the root node, say S, which contains the complete dataset.
  • Step 2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM), such as information gain or the Gini index.
  • Step 3: Divide S into subsets containing the possible values of the best attribute.
  • Step 4: Generate a decision node for the best attribute.
  • Step 5: Recursively repeat the process on the subsets created in Step 3. Continue until a stage is reached where the nodes cannot be classified further; these final nodes are the leaf nodes.
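The steps above can be sketched as a toy recursive builder that uses Gini impurity as its attribute-selection measure. This is a simplified illustration under assumed inputs (numeric features, a list of label strings), not a production implementation:

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels, n_features):
    """Step 2: pick the (feature, threshold) that minimizes weighted Gini."""
    best = (None, None, float("inf"))
    for f in range(n_features):
        for t in set(row[f] for row in rows):
            left = [lbl for row, lbl in zip(rows, labels) if row[f] <= t]
            right = [lbl for row, lbl in zip(rows, labels) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if score < best[2]:
                best = (f, t, score)
    return best

def build_tree(rows, labels):
    """Steps 1-5: recursively split until nodes are pure (leaf nodes)."""
    if len(set(labels)) == 1:                       # pure node -> leaf (Step 5 stop)
        return {"leaf": labels[0]}
    f, t, _ = best_split(rows, labels, len(rows[0]))
    if f is None:                                   # no useful split -> majority leaf
        return {"leaf": Counter(labels).most_common(1)[0][0]}
    left = [(r, l) for r, l in zip(rows, labels) if r[f] <= t]   # Step 3: subsets
    right = [(r, l) for r, l in zip(rows, labels) if r[f] > t]
    return {"feature": f, "threshold": t,           # Step 4: decision node
            "left": build_tree(*map(list, zip(*left))),
            "right": build_tree(*map(list, zip(*right)))}

# Tiny made-up dataset: (feature0, feature1) -> label
rows = [(2.0, 3.0), (1.0, 1.0), (3.0, 2.5), (0.5, 0.5)]
labels = ["yes", "no", "yes", "no"]
tree = build_tree(rows, labels)
```

For this toy data, one split on the first feature already separates the classes, so both children are leaf nodes.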


Suppose a candidate has been offered a job at a company and wants to decide whether to accept it. Starting at the root node, he answers a series of questions about the factors and attributes that matter to him, and each answer leads to a further decision node. The decision nodes keep splitting until a leaf node yields the final decision.
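The job-offer scenario can also be written directly as nested conditions, where each `if` plays the role of a decision node and each return value is a leaf. The factors and thresholds below are made up purely for illustration:

```python
def accept_offer(salary, commute_minutes, offers_growth):
    """A hand-built decision tree for the job-offer example.
    Factors and thresholds are hypothetical."""
    if salary < 50000:              # root node: is the salary acceptable?
        return "Decline"
    if commute_minutes > 60:        # decision node: is the commute too long?
        return "Decline"
    if offers_growth:               # decision node: does the role offer growth?
        return "Accept"
    return "Think it over"

print(accept_offer(60000, 30, True))   # -> Accept
```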

The following diagram shows how this decision tree can be represented graphically:

Advantages of the Decision Tree

  • It is simple to understand because it mirrors the way a person makes decisions in everyday life.
  • It is useful for solving decision-related problems.
  • It helps us consider all the possible outcomes of a problem.
  • It requires less data preparation and wrangling than many other algorithms.

Disadvantages of the Decision Tree

  • A decision tree can contain many layers, which makes it complex for a non-technical person to comprehend.
  • It may suffer from overfitting, which can be mitigated using the Random Forest algorithm.
  • With more class labels, the computational complexity of the tree may increase.
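The overfitting issue mentioned above is commonly reduced either by constraining the tree (a simple form of pre-pruning) or by averaging many randomized trees, which is the Random Forest approach. A minimal sketch with scikit-learn (assuming it is installed; the iris dataset is illustrative only):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Limit tree depth so the model cannot memorize the training data.
pruned = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Or average many randomized trees: the Random Forest remedy the text mentions.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print(pruned.score(X_test, y_test), forest.score(X_test, y_test))
```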



