Hadoop Cluster Interview Questions & Answers

  1. Question 1. Explain About The Hadoop-core Configuration Files?

    Answer :

    Hadoop core is configured by two well-structured XML files, which are loaded from the classpath:

    1. hadoop-default.xml – Read-only defaults for Hadoop, suitable for a single-machine instance.
    2. hadoop-site.xml – Specifies the site configuration for the Hadoop installation; cluster-specific information is provided here by the Hadoop administrator.
  2. Question 2. Explain In Brief The Three Modes In Which Hadoop Can Be Run?

    Answer :

    The three modes in which Hadoop can be run are:

    1. Standalone (local) mode – No Hadoop daemons are running; everything runs in a single Java Virtual Machine (JVM).
    2. Pseudo-distributed mode – All daemons run on the local machine, thereby simulating a cluster on a smaller scale.
    3. Fully distributed mode – Hadoop runs on a cluster of machines.
  4. Question 3. Explain What Are The Features Of Standalone (local) Mode?

    Answer :

    In stand-alone or local mode there are no Hadoop daemons running, and everything runs in a single Java process. Hence, we don’t get the benefit of distributing the code across a cluster of machines. Since it has no DFS, it uses the local file system. This mode is suitable only for running MapReduce programs by developers during the various stages of development. It’s the best environment for learning and is good for debugging purposes.

  5. Question 4. What Are The Features Of Fully Distributed Mode?

    Answer :

    In Fully Distributed mode, clusters range from a few nodes to ‘n’ number of nodes. It is used in production environments, where we have thousands of machines in the Hadoop cluster. The Hadoop daemons run on these clusters. We have to configure separate masters and separate slaves in this distribution, the implementation of which is quite complex. In this configuration, the Namenode and Datanode run on different hosts, and there are separate nodes on which the task tracker runs. The root of the distribution is referred to as HADOOP_HOME.

  7. Question 5. Explain What Are The Main Features Of Pseudo Mode?

    Answer :

    In Pseudo-distributed mode, each Hadoop daemon runs in a separate Java process, so it simulates a cluster, though on a small scale. This mode is used for both development and QA environments. Here, we need to make the relevant configuration changes.

  9. Question 6. What Are The Hadoop Configuration Files At Present?

    Answer :

    There are 3 configuration files in Hadoop:

    1. conf/core-site.xml – sets fs.default.name to hdfs://localhost:9000
    2. conf/hdfs-site.xml – sets dfs.replication to 1
    3. conf/mapred-site.xml – sets mapred.job.tracker to localhost:9001
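    In full XML form, the three files above set these properties roughly as follows (a minimal sketch of a pseudo-distributed setup; fs.default.name is the standard property name for this era of Hadoop, and localhost with these port numbers are the conventional defaults):

    ```xml
    <!-- conf/core-site.xml -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>

    <!-- conf/hdfs-site.xml -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>

    <!-- conf/mapred-site.xml -->
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
      </property>
    </configuration>
    ```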

  10. Question 7. Can You Name Some Companies That Are Using Hadoop?

    Answer :

    Numerous companies are using Hadoop, from large software companies and MNCs to small organizations. Yahoo is the top contributor, with many open-source Hadoop projects and frameworks. Social media companies like Facebook and Twitter have been using it for a long time now for storing their mammoth data. Apart from that, Netflix, IBM, Adobe and e-commerce websites like Amazon and eBay also use multiple Hadoop technologies.

  12. Question 8. Which Is The Directory Where Hadoop Is Installed?

    Answer :

    Cloudera and Apache have the same directory structure. Hadoop is installed in /usr/lib/hadoop-0.20/.

  13. Question 9. What Are The Port Numbers Of Name Node, Job Tracker And Task Tracker?

    Answer :

    The default web UI port number for the Namenode is 50070, for the job tracker it is 50030, and for the task tracker it is 50060.

  15. Question 10. Tell Us What Is A Spill Factor With Respect To The Ram?

    Answer :

    Spill factor is the threshold of in-memory buffer usage after which the contents are spilled to temporary files on disk; the Hadoop temp directory is used for this. The default value for io.sort.spill.percent is 0.80. A value less than 0.5 is not recommended.
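    As a sketch, this threshold is set in the job configuration like so (0.80 is the default mentioned above):

    ```xml
    <!-- conf/mapred-site.xml: start spilling the in-memory sort buffer
         to disk once it is 80% full -->
    <property>
      <name>io.sort.spill.percent</name>
      <value>0.80</value>
    </property>
    ```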

  17. Question 11. Is Fs.mapr.working.dir A Single Directory?

    Answer :

    Yes, fs.mapr.working.dir is just one directory.

  19. Question 12. Which Are The Three Main Hdfs-site.xml Properties?

    Answer :

    The three main hdfs-site.xml properties are:

    1. dfs.name.dir, which gives you the location where the Namenode metadata will be stored – on local disk or on a remote machine.
    2. dfs.data.dir, which gives you the location where the data is going to be stored.
    3. fs.checkpoint.dir, which is for the secondary Namenode.
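    For illustration, the three properties could be set as follows (the paths are made-up examples, not defaults):

    ```xml
    <!-- conf/hdfs-site.xml; all paths below are example values -->
    <configuration>
      <property>
        <name>dfs.name.dir</name>
        <value>/var/lib/hadoop/dfs/name</value>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/var/lib/hadoop/dfs/data</value>
      </property>
      <property>
        <name>fs.checkpoint.dir</name>
        <value>/var/lib/hadoop/dfs/namesecondary</value>
      </property>
    </configuration>
    ```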

  21. Question 13. How To Come Out Of The Insert Mode?

    Answer :

    To come out of insert mode, press ESC, then

    type :q (if you have not written anything) OR

    type :wq (if you have written anything to the file), and then press ENTER.

  23. Question 14. Tell Us What Cloudera Is And Why It Is Used In Big Data?

    Answer :

    Cloudera is the leading Hadoop distribution vendor in the Big Data market. It is termed the next-generation data management software required for business-critical data challenges, which include access, storage, management, business analytics, systems security, and search.

  24. Question 15. We Are Using Ubuntu Operating System With Cloudera, But From Where We Can Download Hadoop Or Does It Come By Default With Ubuntu?

    Answer :

    This is the default configuration of Hadoop that you have to download from Cloudera or from Eureka’s Dropbox and then run on your systems. You can also proceed with your own configuration, but you need a Linux box, be it Ubuntu or Red Hat. The installation steps are present at the Cloudera location or in Eureka’s Dropbox. You can go either way.

  26. Question 16. What Is The Main Function Of The ‘jps’ Command?

    Answer :

    The ‘jps’ command checks whether the Datanode, Namenode, task tracker, job tracker, and other components are working or not in Hadoop. One thing to remember is that if you have started the Hadoop services with sudo, then you need to run jps with sudo privileges as well, else the status will not be shown.
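    A sketch of the check described above (the daemon names in the comment are what a running Hadoop 0.20/1.x cluster would typically list):

    ```shell
    # jps ships with the JDK; on a live cluster it lists NameNode, DataNode,
    # JobTracker, TaskTracker and SecondaryNameNode alongside their PIDs.
    if command -v jps >/dev/null 2>&1; then
      daemons=$(jps)
    else
      daemons="jps not found; install a JDK (use 'sudo jps' if Hadoop was started with sudo)"
    fi
    echo "$daemons"
    ```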

  28. Question 17. How Can I Restart Namenode?

    Answer :

    1. Run stop-all.sh and then run start-all.sh, OR
    2. Write sudo hdfs (press enter), su-hdfs (press enter), /etc/init.d/ha (press enter) and then /etc/init.d/hadoop-0.20-namenode start (press enter).

  30. Question 18. How Can We Check Whether Namenode Is Working Or Not?

    Answer :

    To check whether the Namenode is working or not, use the command /etc/init.d/hadoop-0.20-namenode status, or simply use ‘jps’.

  32. Question 19. What Is “fsck” And What Is Its Use?

    Answer :

    “fsck” is File System Check. FSCK is used to check the health of a Hadoop Filesystem. It generates a summarized report of the overall health of the filesystem. 

    Usage:  hadoop fsck /
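    Beyond the bare invocation, fsck accepts flags to drill down into files and blocks (a sketch; it needs a running HDFS to produce a real report, so the hadoop CLI is guarded here):

    ```shell
    # -files, -blocks and -locations make fsck print each file, its blocks,
    # and the datanodes holding them. 2>&1 keeps any error text in the report.
    if command -v hadoop >/dev/null 2>&1; then
      report=$(hadoop fsck / -files -blocks -locations 2>&1)
    else
      report="hadoop CLI not found; run this on a cluster node"
    fi
    echo "$report"
    ```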

  34. Question 20. At Times You Get A ‘connection Refused Java Exception’ When You Run The File System Check Command Hadoop Fsck /?

    Answer :

    The most probable reason is that the Namenode is not running on your VM.

  36. Question 21. What Is The Use Of The Command Mapred.job.tracker?

    Answer :

    mapred.job.tracker is a configuration property (not a command); it specifies the host and port at which the MapReduce job tracker runs. If it is set to “local”, then jobs are run in-process as a single map and reduce task.

  37. Question 22. What Does /etc/init.d Do?

    Answer :

    /etc/init.d is where daemons (services) are placed, and where you can check the status of these daemons. It is Linux-specific and has nothing to do with Hadoop.

  38. Question 23. How Can We Look For The Namenode In The Browser?

    Answer :

    If you have to look for the Namenode in the browser, you don’t use localhost:8021; the port number to look for the Namenode in the browser is 50070 (i.e., http://localhost:50070).

  40. Question 24. How To Change From Su To Cloudera?

    Answer :

    To change from SU to Cloudera just type exit.

  42. Question 25. Which Files Are Used By The Startup And Shutdown Commands?

    Answer :

    The ‘masters’ and ‘slaves’ files are used by the startup and the shutdown commands.

  43. Question 26. What Do Masters And Slaves Consist Of?

    Answer :

    The masters file contains a list of hosts, one per line, that are to host secondary namenode servers. The slaves file contains a list of hosts, one per line, that host datanode and task tracker servers.
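    For illustration, on a small cluster the two files might look like this (all host names are made-up examples):

    ```
    # conf/masters — one host per line; runs the secondary namenode
    master2.example.com

    # conf/slaves — one host per line; each runs a datanode and a task tracker
    slave1.example.com
    slave2.example.com
    slave3.example.com
    ```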

  45. Question 27. What Is The Function Of Hadoop-env.sh? Where Is It Present?

    Answer :

    This file contains some environment variable settings used by Hadoop; it provides the environment for Hadoop to run. The path of JAVA_HOME is set here for Hadoop to run properly. The hadoop-env.sh file is present at conf/hadoop-env.sh. You can also create your own custom configuration file conf/hadoop-user-env.sh, which will allow you to override the default Hadoop settings.
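    A minimal sketch of such settings (the JAVA_HOME path is an example for one particular system, not a universal default):

    ```shell
    # In the style of conf/hadoop-env.sh.
    # JAVA_HOME must point at the JDK installation on your machine.
    export JAVA_HOME=/usr/lib/jvm/java-6-sun   # example path
    export HADOOP_HEAPSIZE=1000                # daemon heap size in MB
    ```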

  47. Question 28. Can We Have Multiple Entries In The Master Files?

    Answer :

    Yes, we can have multiple entries in the Master files.

  48. Question 29. In Hadoop_pid_dir, What Does Pid Stands For?

    Answer :

    PID stands for ‘Process ID’.

  49. Question 30. What Does The Hadoop-metrics.properties File Do?

    Answer :

    The hadoop-metrics.properties file is used for reporting purposes. It controls the metrics reporting for Hadoop. The default status is ‘not to report’.

  50. Question 31. What Are The Network Requirements For Hadoop?

    Answer :

    The Hadoop core uses Secure Shell (SSH) to launch the server processes on the slave nodes. It requires a password-less SSH connection between the master and all the slaves and the secondary machines.
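    The usual way to set this up is to generate a key pair on the master and install the public key on every slave (a sketch; the user and host names are placeholders, and a scratch directory is used here instead of a real ~/.ssh):

    ```shell
    # Empty passphrase (-N "") means logins will not prompt for a password.
    keydir=$(mktemp -d)
    ssh-keygen -t rsa -N "" -q -f "$keydir/id_rsa"

    # On a real cluster, copy the public key to each slave, e.g.:
    #   ssh-copy-id -i "$keydir/id_rsa.pub" hadoop@slave1
    ls "$keydir"
    ```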

  51. Question 32. Why Do We Need A Password-less Ssh In Fully Distributed Environment?

    Answer :

    We need password-less SSH in a Fully-Distributed environment because when the cluster is live and running in Fully Distributed mode, the communication is very frequent. The job tracker should be able to send a task to a task tracker quickly.

  52. Question 33. What Will Happen If A Namenode Has No Data?

    Answer :

    If a Namenode has no data, it cannot be considered a Namenode. In practical terms, a Namenode needs to have some data.

  54. Question 34. What Happens To Job Tracker When Namenode Is Down?

    Answer :

    The Namenode is the main point that keeps all the metadata and keeps track of datanode failures with the help of heartbeats. When the Namenode is down, your cluster will be completely down, because the Namenode is the single point of failure in a Hadoop installation.

  55. Question 35. Explain What Do You Mean By Formatting Of The Dfs?

    Answer :

    Like formatting a drive in Windows, the DFS is formatted for proper structuring of data. It is not usually recommended, as it formats the Namenode too in the process, which is not desired.

  56. Question 36. We Use Unix Variants For Hadoop. Can We Use Microsoft Windows For The Same?

    Answer :

    In practice, Ubuntu and Red Hat Linux are the best operating systems for Hadoop. Windows can be used, but it is not frequently used for installing Hadoop, as there are many support problems related to it. The frequency of crashes and the subsequent restarts make it unattractive. As such, Windows is not recommended as a preferred environment for a Hadoop installation, though users can give it a try for learning purposes in the initial stages.

  58. Question 37. Which One Decides The Input Split – Hdfs Client Or Namenode?

    Answer :

    The HDFS client does not decide. The input split is already specified in one of the configuration files through which it is configured.

  59. Question 38. Let’s Take A Scenario: Say We Already Have Cloudera On A Cluster. Now If We Want To Form A Cluster On Ubuntu, Can We Do It? Explain In Brief.

    Answer :

    Yes, we can definitely do it. We have all the useful installation steps for creating a new cluster. The only thing that needs to be done is to uninstall the present cluster and install the new cluster in the targeted environment.

  60. Question 39. Can You Tell Me If We Can Create A Hadoop Cluster From Scratch?

    Answer :

    Yes, we can definitely do that.  Once we become familiar with the Apache Hadoop environment, we can create a cluster from scratch.

  61. Question 40. Explain The Significance Of Ssh? What Is The Port On Which Port Does Ssh Work? Why Do We Need Password In Ssh Local Host?

    Answer :

    SSH (Secure Shell) is a secure protocol and the most common way of administering remote servers safely; it is relatively simple and inexpensive to implement. A single SSH connection can host multiple channels and hence can transfer data in both directions. SSH works on port number 22, which is the default; it can be configured to point to a new port number, but this is not recommended. On localhost, a password is required in SSH for security, and in situations where password-less communication is not set up.

  63. Question 41. What Is Ssh? Explain In Detail About Ssh Communication Between Masters And The Slaves?

    Answer :

    Secure Shell (SSH) is a network protocol that provides administrators with a secure way to access a remote computer; data packets are sent across this channel to the slaves, and the protocol defines the format in which data is sent. SSH communication is not only between masters and slaves but can be between any two hosts in a network. SSH appeared in 1995 with the introduction of SSH-1. SSH-2 is now in use, with vulnerabilities coming to the fore when Edward Snowden leaked information indicating that some SSH traffic had been decrypted.

  64. Question 42. Can You Tell Us What Will Happen To A Namenode When The Job Tracker Is Not Up And Running?

    Answer :

    When the job tracker is down, it is not in functional mode and all running jobs are halted, because it is a single point of failure for MapReduce. The Namenode will still be present, so the cluster will still be accessible if the Namenode is working, even though the job tracker is not up and running. But you cannot run your Hadoop jobs.
