Top 20 HDFS Commands You Should Know About [2023]

Hadoop is an Apache open-source framework that enables the distributed processing of large-scale data sets across clusters of computers using simple programming models. It provides distributed storage across numerous machines with excellent scalability. Read more about HDFS and its architecture.

Goals of HDFS

1. It Provides a Large-Scale Distributed File System

It scales to roughly 10,000 nodes, 100 million files, and 10 PB of storage

2. Optimized for Batch Processing

It provides very high aggregate bandwidth across the cluster

3. Assumes Commodity Hardware

It detects hardware failures and automatically recovers from them

Files remain available even when individual machines fail, thanks to replication

4. Smart Client Intelligence

The client can find the location of each file's blocks

The client reads data directly from the DataNodes

5. Data Consistency

HDFS uses a write-once-read-many access model

A client can only append to existing files, not modify them in place

6. File Replication in Blocks

Files are broken into blocks (128 MB by default), and each block is replicated across multiple nodes

7. Metadata in Memory

The entire metadata is kept in the NameNode's main memory

Metadata consists of the list of files, the list of blocks, and the list of DataNodes

A transaction log records file creations and file deletions


8. Data Correctness

It uses checksums to validate data.

The client computes a checksum for every 512 bytes it writes. On a read, the client retrieves both the data and its checksum from the DataNode.

If validation fails, the client reads the block from a replica instead.

9. Data Pipelining

The client begins writing the first block to the first DataNode

The first DataNode forwards the data to the next DataNode in the pipeline

When all replicas are written, the client moves on to the next block of the file

HDFS Architecture

Hadoop Distributed File System (HDFS) stores files as blocks. Its architecture follows a master/slave pattern: a NameNode and a set of DataNodes make up an HDFS cluster.

  1. NameNode: It functions as the master server, managing the file system namespace and regulating client access to files.
  • It knows which DataNodes hold the blocks of every file. When the system starts, it rebuilds this mapping from the block reports the DataNodes send in.
  • The NameNode executes namespace operations such as opening, closing, and renaming files and directories.
  2. DataNode: It is the second component in the HDFS cluster. Usually, one DataNode runs per node in the cluster.
  • DataNodes act as slaves: they live on each machine in the cluster, provide the actual block storage, and serve read and write requests from clients.
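The master/slave split described above can be inspected on a running cluster with the admin report command, which prints overall capacity followed by a per-DataNode breakdown:

```shell
# Print cluster-wide capacity plus a per-DataNode breakdown
# (requires HDFS superuser privileges).
hdfs dfsadmin -report

# Restrict the report to DataNodes that are currently live.
hdfs dfsadmin -report -live
```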


HDFS Top 20 Commands

Here is a list of all the HDFS commands:

1. To get the list of all the files in the HDFS root directory

  • Command: hdfs dfs [generic options] -ls [-c] [-h] [-q] [-R] [-t] [-S] [-u] [<path>…]
  • Note: Here, specify the path from the root, just like in a general Linux file system. The -h option prints sizes in human-readable form, which is recommended. The -R option recurses into subdirectories.
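For example, to walk an HDFS directory tree recursively with human-readable sizes (the path /user/data below is illustrative — substitute your own):

```shell
# -R recurses into subdirectories, -h prints sizes such as 128.0 M.
hdfs dfs -ls -R -h /user/data

# -t sorts the listing by modification time, newest first.
hdfs dfs -ls -t /user/data
```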

2. Help

  • Command: hdfs dfs -help
  • Note: It prints detailed usage information for all the commands

3. Concatenate all the files in a directory into a single file

  • Command: hdfs dfs [generic options] -getmerge [-nl] <src> <localdst>
  • Note: This generates a new file on the local file system that contains all the files from the source directory concatenated together. The -nl option inserts a newline between the files. With this command, you can combine a collection of small files into one file for further processing.
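A typical use is collapsing the part files a job writes into one local file; the paths below are illustrative:

```shell
# Merge every file under the HDFS output directory into one local
# file, inserting a newline between the source files (-nl).
hdfs dfs -getmerge -nl /user/data/output part-files-merged.txt

# Inspect the merged result on the local file system.
head part-files-merged.txt
```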

4. Show Disk Usage in Megabytes for the Directory: /dir

  •   Command: hdfs dfs [generic options] -du [-s] [-h] <path> …
  • Note: The -h option gives you a human-readable output of sizes, e.g., in megabytes or gigabytes; -s prints an aggregate summary instead of per-file figures.

5. Modifying the replication factor for a file

  • Command: hadoop fs -setrep -w 1 /root/journaldev_bigdata/derby.log
  • Note: The replication factor is the number of copies of each block kept in the Hadoop cluster; -w waits until the re-replication has completed.
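After changing the factor, you can confirm it took effect with the stat format specifier %r (replication factor), reusing the article's example path:

```shell
# Reduce derby.log to a single replica and wait (-w) until the
# change has actually propagated through the cluster.
hadoop fs -setrep -w 1 /root/journaldev_bigdata/derby.log

# Print the file's current replication factor.
hadoop fs -stat %r /root/journaldev_bigdata/derby.log
```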

6. copyFromLocal

  • Command: hadoop fs -copyFromLocal derby.log /root/journaldev_bigdata
  • Note: This command copies a file from the local file system to HDFS
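A quick copy-and-verify sequence, using the article's example path:

```shell
# Copy derby.log from the local disk into the HDFS directory;
# -f would overwrite an existing copy of the same name.
hadoop fs -copyFromLocal derby.log /root/journaldev_bigdata

# Verify that the file arrived.
hadoop fs -ls /root/journaldev_bigdata
```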

7. -rm -r

  • Command: hadoop fs -rm -r /root/journaldev_bigdata
  • Note: With the rm -r command, we can remove an entire HDFS directory and its contents

8. Expunge

  • Command: hadoop fs -expunge
  • Note: This command empties the trash, permanently deleting the files that were moved there.
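Expunge only matters when the trash feature is enabled (fs.trash.interval > 0), in which case -rm moves files to trash rather than deleting them. A typical delete-then-purge sequence, with illustrative paths:

```shell
# With trash enabled, -rm moves the file into the user's .Trash
# directory instead of deleting it outright.
hadoop fs -rm /root/journaldev_bigdata/old.log

# Permanently remove expired files from the trash.
hadoop fs -expunge

# -skipTrash bypasses the trash entirely for an immediate delete.
hadoop fs -rm -skipTrash /root/journaldev_bigdata/huge.tmp
```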

9. fs -du

  • Command:  hadoop fs -du /root/journaldev_bigdata/
  • Note: This command shows the disk usage of each file under an HDFS directory.


10. mkdir

  • Command: hadoop fs -mkdir /root/journaldev_bigdata
  • Note: This command creates a new directory in HDFS.


11. text

  • Command: hadoop fs -text <src>
  • Note: This command outputs the given source file (for example, a compressed file) in text format.

12. Stat 

  • Command: hadoop fs -stat [format] <path>
  • Note: This stat command prints statistics about the file or directory at the given path, using optional format specifiers such as %b (size in bytes), %n (name), and %r (replication factor).
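For example, assuming a hypothetical file named test in the article's example directory:

```shell
# Print the file's name, replication factor, size in bytes, and
# modification time in one line.
hadoop fs -stat "%n %r %b %y" /root/journaldev_bigdata/test
```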

13. chmod (Hadoop chmod Command Usage)

  • Command: hadoop fs -chmod [-R] <mode> <path>
  • Note: This command changes the permissions of a file or directory; -R applies the change recursively.
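Both octal and symbolic modes are accepted; the paths below are illustrative:

```shell
# Give the owner full access and everyone else read/execute,
# applying the change recursively (-R) to the whole tree.
hadoop fs -chmod -R 755 /root/journaldev_bigdata

# Symbolic modes also work, e.g. remove write access for others.
hadoop fs -chmod o-w /root/journaldev_bigdata/testfile
```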

14. appendToFile

  • Command: hadoop fs -appendToFile <localsrc> <dest>
  • Note: This command appends one or more local source files, e.g., localfile1 and localfile2, from the local file system to the specified destination file in HDFS.

15. checksum

  • Command: hadoop fs -checksum <src>
  • Note: This shell command returns the checksum information for the given file.

16. count

  • Command: hadoop fs -count [options] <path>
  • Note: This command counts the number of directories, files, and bytes under the specified path.
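Using the article's example directory, the plain form prints one summary line per path:

```shell
# Output columns: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME.
hadoop fs -count /root/journaldev_bigdata

# -q adds quota columns; -h makes the sizes human-readable.
hadoop fs -count -q -h /root/journaldev_bigdata
```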


17. find

  • Command: hadoop fs -find <path> … <expression>
  • Note: This command finds all files that match the specified expression.
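For instance, to locate log files anywhere under the article's example directory:

```shell
# Match *.log case-insensitively (-iname) and print every
# matching path (-print is the default action).
hadoop fs -find /root/journaldev_bigdata -iname "*.log" -print
```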
18. getmerge

  • Command: hadoop fs -getmerge <src> <localdest>
  • Note: This command merges the files under <src> into a single file on the local file system.

19. touchz

  • Command: hadoop fs -touchz /directory/filename
  • Note: This command creates an empty file (0 bytes) in HDFS.
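Empty files are often used as markers; the _SUCCESS name below is a common convention and is illustrative here:

```shell
# Create an empty marker file, e.g. to signal that a job finished.
hadoop fs -touchz /root/journaldev_bigdata/_SUCCESS

# %b prints the size in bytes, which should be 0.
hadoop fs -stat %b /root/journaldev_bigdata/_SUCCESS
```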
20. fs -ls

  • Command: hadoop fs -ls
  • Note: This command lists the files and subdirectories under the user's default (home) directory.

Read: Hadoop Ecosystem & Components


Hopefully, this article helped you understand how to use HDFS commands to execute operations on the Hadoop filesystem. The article has covered all the fundamental HDFS commands.

If you are interested in knowing more about Big Data, check out our PG Diploma in Software Development Specialization in Big Data program, which is designed for working professionals and provides 7+ case studies & projects, covers 14 programming languages & tools, includes practical hands-on workshops, and offers more than 400 hours of rigorous learning & job placement assistance with top firms.

Learn Software Development Courses online from the World’s top Universities. Earn Executive PG Programs, Advanced Certificate Programs or Masters Programs to fast-track your career.

What is HDFS and how does it work?

The Hadoop Distributed File System (HDFS) is the storage system for Hadoop, spread out over multiple machines to reduce cost and increase reliability. HDFS exposes a file system namespace and allows user data to be stored in files. Hadoop works on multiple machines simultaneously, processing huge data sets across a cluster of commodity servers. HDFS stores the data, MapReduce processes the data, and YARN divides up the tasks. A NameNode and DataNodes make up the HDFS architecture. In short: the client submits data, HDFS stores it, and MapReduce processes it.

How are files stored in HDFS?

In HDFS, data is stored in blocks, the smallest unit of data that the file system stores. Files are divided into blocks, and each block is stored on a DataNode. Multiple DataNodes are linked to the master node of the cluster, called the NameNode. HDFS is efficient at storing very large files across the machines of a large cluster, with each file stored as a sequence of replicated blocks.

What is Apache Pig and its applications?

Pig is a high-level platform used to process large datasets. To process data stored in HDFS, programmers write scripts in the Pig Latin language. Apache Pig's multi-query approach reduces development time, and Pig is also beneficial for programmers who do not come from a Java background. It is easy to learn, read, and write, and can handle the analysis of both structured and unstructured data. Pig is used for exploring large datasets, such as collections of search logs, and wherever analytical insights are needed via sampling.
