Companies are increasingly relying on data to learn more about their customers. Thus, data analysts have a bigger responsibility to explore and analyze large blocks of raw data and glean meaningful customer trends and patterns out of it. This is known as data mining. Data analysts use data mining techniques, advanced statistical analysis, and data visualization technologies to gain new insights.
These insights can help a business develop effective marketing strategies, improve business performance, scale up sales, and reduce overhead costs. Although there are tools and algorithms for data mining, it is not a cakewalk, as real-world data is heterogeneous. Thus, there are quite a few challenges when it comes to data mining.
One common challenge is that databases usually contain attributes with different units, ranges, and scales. Applying algorithms to such widely varying data may not deliver accurate results. This calls for data normalization in data mining.
Normalization rescales heterogeneous data into a smaller, common range, such as 0.0 to 1.0 or -1.0 to 1.0. In simple words, data normalization makes data easier to classify and understand.
Why is Normalization in Data Mining Needed?
Data normalization is mainly needed to minimize or eliminate duplicate data. Duplication is a critical issue because keeping identical data in more than one place makes relational databases increasingly problematic to maintain. Normalization in data mining is a beneficial procedure, as it offers the advantages mentioned below:
- It is a lot easier to apply data mining algorithms on a set of normalized data.
- The results of data mining algorithms applied to a set of normalized data are more accurate and effective.
- Once the data is normalized, the extraction of data from databases becomes a lot faster.
- More specific data analysis methods can be applied to normalized data.
3 Popular Techniques for Data Normalization in Data Mining
There are three popular methods to carry out normalization in data mining. They include:
Min-Max Normalization
Which is easier to understand – the difference between 200 and 1,000,000, or the difference between 0.2 and 1? Indeed, the smaller the gap between the minimum and maximum values, the more readable the data becomes. Min-max normalization works by converting a range of data into a scale that runs from 0 to 1.
Min-Max Normalization Formula
The min-max normalization formula is:

V' = (V - min) / (max - min) * (new_max - new_min) + new_min

where min and max are the smallest and largest values of the attribute, and new_min and new_max define the target range. To understand the formula, here is an example. Suppose a company wants to decide on a promotion based on the years of work experience of its employees. So, it needs to analyze a database that looks like this:
The table lists each employee's name alongside their years of experience: 8, 10, 15, and 20.
- The minimum value is 8
- The maximum value is 20
As this formula scales the data between 0 and 1,
- The new min is 0
- The new max is 1
Here, V stands for the respective value of the attribute (8, 10, 15, or 20), and V' is the normalized value.
After applying the min-max normalization formula, the following are the V’ values for the attributes:
- For 8 years of experience: V' = 0
- For 10 years of experience: V' ≈ 0.17
- For 15 years of experience: V' ≈ 0.58
- For 20 years of experience: V' = 1
So, min-max normalization can reduce big numbers to much smaller values, making the differences across the range far easier to read.
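The calculation above can be sketched in a few lines of Python (the function name is illustrative; the years-of-experience values come from the example):

```python
def min_max_normalize(values, new_min=0.0, new_max=1.0):
    """Scale values into [new_min, new_max] using min-max normalization."""
    v_min, v_max = min(values), max(values)
    return [
        (v - v_min) / (v_max - v_min) * (new_max - new_min) + new_min
        for v in values
    ]

# Years of experience from the example above
years = [8, 10, 15, 20]
print([round(v, 2) for v in min_max_normalize(years)])  # [0.0, 0.17, 0.58, 1.0]
```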
Decimal Scaling Normalization
Decimal scaling is another technique for normalization in data mining. It works by moving the decimal point of the attribute's values. How far the decimal point moves depends on the maximum absolute value among all values of the attribute.
Decimal Scaling Formula
V' = V / 10^j

- V' is the new value after applying decimal scaling
- V is the respective value of the attribute

Here, the integer j determines the movement of the decimal point. How is it defined? It is equal to the number of digits in the maximum value in the data table. Here is an example:
Suppose a company wants to compare the salaries of the new joiners. Here are the data values:
Now, look for the maximum value in the data. In this case, it is 25,000, which has five digits, so j = 5 and 10^j = 100,000. This means each value V needs to be divided by 100,000.
After applying the decimal scaling formula, each salary shrinks accordingly; the maximum salary of 25,000, for instance, becomes 0.25.
Thus, decimal scaling can tone down big numbers into easy-to-understand decimal values. Data recorded in different units also becomes easier to read and compare once it is converted into smaller decimal values.
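A minimal Python sketch of decimal scaling follows; apart from the 25,000 maximum mentioned above, the salary figures are hypothetical:

```python
def decimal_scale(values):
    """Divide each value by 10**j, where j is the number of digits
    in the largest absolute value, so every result falls below 1."""
    j = len(str(int(max(abs(v) for v in values))))
    return [v / 10 ** j for v in values]

# Hypothetical salaries; the maximum of 25,000 matches the example above
salaries = [10000, 25000, 8000, 15000]
print(decimal_scale(salaries))  # [0.1, 0.25, 0.08, 0.15]
```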
Z-Score Normalization

The Z-score indicates how far a data point lies from the mean. Technically, it measures the number of standard deviations below or above the mean, and most values fall between -3 and +3 standard deviations. Z-score normalization in data mining is useful for analyses that compare a value against a mean (average), such as results from tests or surveys. Thus, Z-score normalization is also popularly known as standardization.
The following formula is used for z-score normalization of every single value of the dataset.
New value = (x – μ) / σ
- x: Original value
- μ: Mean of data
- σ: Standard deviation of data
Below is an example of how to perform z-score normalization on a given dataset.
Suppose we have a dataset whose mean is 21.2 and whose standard deviation is 29.8.
If we perform z-score normalization on the first value of the dataset, 3, then according to the formula:
New value = (x – μ) / σ
New value = (3-21.2)/ 29.8
∴ New value = -0.61
By performing z-score normalization on each value of the dataset, we get the corresponding normalized values.
|Data||Z score normalized value|
The mean of this normalized dataset is 0 and the standard deviation is 1.
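The worked calculation for the first value can be checked with a short Python snippet (the function name is illustrative; the mean and standard deviation are the ones stated above):

```python
def z_score(x, mean, std):
    """Return how many standard deviations x lies from the mean."""
    return (x - mean) / std

# Mean (21.2) and standard deviation (29.8) from the example dataset above
print(round(z_score(3, 21.2, 29.8), 2))  # -0.61
```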
For example, suppose a person's weight is 150 pounds. To compare that value against the average weight of a population listed in a vast table of data, z-score normalization is needed, especially if some weights are recorded in different units such as kilograms.
Difference between Min-Max Normalization and Z-Score Normalization:
|Min Max normalization||Z Score Normalization|
Drawbacks of Data Normalization
Even though normalization has quite a few benefits, there are also some downsides to doing it.
- Because normalization compartmentalizes the data, more tables must be joined, which lengthens the task and makes it more mundane and slower. The database also becomes harder to comprehend.
- The generated tables contain codes rather than real data, since repeated data is stored as references instead of actual values. Queries must therefore go through a lookup table, which again slows the whole process.
- Writing queries becomes more difficult once normalization is applied, because the SQL is often built dynamically by desktop query tools. It is therefore hard to propose a database model without knowing the customer's needs.
- Analysis and design become more detailed and strenuous. Normalizing data is already complex, and having to know the purpose of the database and adjust everything accordingly makes it even harder. If an expert poorly normalizes a database, it will perform inadequately and might not be able to store the required data.
What is Denormalization?
In simple words, denormalization is quite literally the opposite of normalization and is also used in databases, for varied reasons. As the name suggests, denormalization means reversing normalization, so the process is typically carried out after normalization has been applied.
In denormalization, data is combined so that queries execute quickly. Redundancy is deliberately added, which plays a major part in speeding up query execution. The pros of denormalization include fast retrieval of data, as fewer joins need to be done, and quicker query resolution, making queries less likely to contain bugs.
However, unlike normalization, data integrity is not maintained in this process, as a large variety of data is clubbed together. The number of tables generated also reduces significantly, quite the opposite of normalization. In addition, updates and inserts become comparatively expensive and harder to write.
As data comes from different sources, it is very common for attributes to differ across batches of data. Thus, normalization in data mining acts as a pre-processing step that prepares the data for analysis.
What is meant by normalization in data mining?
Normalization is the process of scaling an attribute's data so that it falls within a narrower range, like -1.0 to 1.0 or 0.0 to 1.0. It is beneficial for classification algorithms in general. Normalization is typically necessary when dealing with attributes on various scales; otherwise, an attribute whose values lie on a greater scale may dilute the efficacy of an equally significant attribute on a smaller scale. In other words, when numerous attributes exist but their values are on various scales, data mining activities may produce inadequate data models. As a result, the attributes are normalized to put all of them on the same scale.
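The "dilution" effect can be seen in a distance-based comparison of two customers described by age and income. All numbers below, including the assumed feature ranges, are made up for illustration:

```python
import math

# Two hypothetical customers: (age in years, income in dollars)
a = (25, 50_000)
b = (45, 52_000)

# Without normalization, the income axis dominates the distance entirely
raw = math.dist(a, b)

def scale(v, lo, hi):
    """Min-max scale a single value into [0, 1] given assumed bounds."""
    return (v - lo) / (hi - lo)

# Assumed ranges: age 18-70, income 0-100,000
a_n = (scale(25, 18, 70), scale(50_000, 0, 100_000))
b_n = (scale(45, 18, 70), scale(52_000, 0, 100_000))
norm = math.dist(a_n, b_n)

print(round(raw, 1), round(norm, 3))  # 2000.1 0.385
```

Before scaling, the 20-year age gap is invisible next to the $2,000 income gap; after scaling, both features contribute comparably.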
What are the different types of Normalization?
Normalization is a procedure that should be followed for each database you create. The act of taking a database design and applying a set of formal criteria and rules to it produces what are called Normal Forms. The normalization process is classified as follows: First Normal Form (1NF), Second Normal Form (2NF), Third Normal Form (3NF), Boyce–Codd Normal Form (BCNF, sometimes called 3.5NF), Fourth Normal Form (4NF), Fifth Normal Form (5NF), and Sixth Normal Form (6NF).
What is Min-Max Normalization?
One of the most prevalent methods for normalizing data is min-max normalization. For each feature, the minimum value is converted to 0, the maximum value is converted to 1, and every other value becomes a decimal between 0 and 1. For example, if the minimum value of a feature is 20 and the maximum is 40, then 30 converts to about 0.5, since it is halfway between 20 and 40. One significant drawback of min-max normalization is that it does not handle outliers well. For example, if you have 99 values between 0 and 40 and one value of 100, the 99 values will all be squeezed into the range 0 to 0.4.
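The outlier effect described above is easy to reproduce; the four-point dataset here is a simplified stand-in for the 99 values plus one outlier:

```python
def min_max_normalize(values):
    """Scale values into [0, 1]; the minimum maps to 0 and the maximum to 1."""
    v_min, v_max = min(values), max(values)
    return [(v - v_min) / (v_max - v_min) for v in values]

# A single outlier (100) squeezes the rest of the data into [0, 0.4]
print(min_max_normalize([0, 20, 40, 100]))  # [0.0, 0.2, 0.4, 1.0]
```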