HBase Interview Questions & Answers

  1. Question 1. Explain What Is Hbase?

    Answer :

    HBase is a column-oriented database management system that runs on top of HDFS (Hadoop Distributed File System). HBase is not a relational data store, and it does not support a structured query language like SQL.

    In HBase, a master node regulates the cluster, and region servers store portions of the tables and perform the work on the data.

  2. Question 2. What Are The Different Commands Used In Hbase Operations?

    Answer :

    There are five atomic commands which carry out different operations in HBase: Get, Put, Delete, Scan and Increment.
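
    The five operations can be pictured with a toy in-memory model. This is only an illustrative Python sketch, not the real HBase client API; all names here are made up for the example.

```python
# Toy in-memory model of HBase's five atomic operations (illustrative only).
table = {}

def put(row, col, value):
    # Put: write a value into a row/column
    table.setdefault(row, {})[col] = value

def get(row):
    # Get: read back a single row
    return table.get(row, {})

def delete(row, col=None):
    # Delete: remove one column, or the whole row
    if col is None:
        table.pop(row, None)
    else:
        table.get(row, {}).pop(col, None)

def scan():
    # Scan: iterate rows in sorted row-key order, as HBase does
    return sorted(table.items())

def increment(row, col, amount=1):
    # Increment: read-modify-write on a counter column
    new = table.get(row, {}).get(col, 0) + amount
    put(row, col, new)
    return new

put("row1", "cf:a", "x")
increment("row1", "cf:n", 5)
```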

  4. Question 3. Explain Why To Use Hbase?

    Answer :

    • High-capacity storage system
    • Distributed design to cater to large tables
    • Column-oriented store
    • Horizontally scalable
    • High performance and availability
    • The base goal of HBase is billions of rows, millions of columns and thousands of versions
    • Unlike HDFS (Hadoop Distributed File System), it supports random real-time CRUD operations
  5. Question 4. How To Connect To Hbase?

    Answer :

    A connection to HBase is established through the HBase shell, which is a (J)Ruby wrapper around the Java client API; the Java API can also be used directly.

  7. Question 5. Mention What Are The Key Components Of Hbase?

    Answer :

    1. ZooKeeper: It does the coordination work between the client and the HBase Master.
    2. HBase Master: The HBase Master monitors the Region Server.
    3. RegionServer: The RegionServer monitors the Region.
    4. Region: It contains the in-memory data store (MemStore) and the HFile.
    5. Catalog Tables: The catalog tables consist of ROOT and META.
  9. Question 6. What Is The Role Of Master Server In Hbase?

    Answer :

    The Master server assigns regions to region servers and handles load balancing in the cluster.

  10. Question 7. Explain What Does Hbase Consists Of?

    Answer :

    • HBase consists of a set of tables
    • Each table contains rows and columns, like a traditional database
    • Each table must contain an element defined as a primary key (the row key)
    • An HBase column denotes an attribute of an object
  12. Question 8. What Is The Role Of Zookeeper In Hbase?

    Answer :

    ZooKeeper maintains configuration information, provides distributed synchronization, and also maintains the communication between clients and region servers.

  13. Question 9. Mention How Many Operational Commands In Hbase?

    Answer :

    There are five types of operational commands in HBase:

    • Get
    • Put
    • Delete
    • Scan
    • Increment
  15. Question 10. When Do We Need To Disable A Table In Hbase?

    Answer :

    In HBase, a table is disabled to allow it to be modified or to have its settings changed. When a table is disabled, it cannot be accessed through the scan command.

  17. Question 11. Explain What Is Wal And Hlog In Hbase?

    Answer :

    The WAL (Write Ahead Log) is similar to the MySQL binlog; it records all the changes that occur in the data. It is a standard Hadoop sequence file and it stores HLogKeys. These keys consist of a sequence number as well as the actual data, and are used to replay not-yet-persisted data after a server crash. So, in case of a server failure, the WAL works as a lifeline and recovers the lost data.
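
    The replay mechanism can be sketched as follows. This is an illustrative simplification: the "HLogKey" here carries only a sequence number, and all names are made up.

```python
# Sketch: how a WAL lets a server recover un-persisted edits after a crash.
wal = []               # durable append-only log (survives the crash)
memstore = {}          # in-memory store, lost on crash
last_flushed_seq = 0   # highest sequence id already persisted to disk

def write(seq, row, value):
    wal.append((seq, row, value))  # 1. append to the WAL first
    memstore[row] = value          # 2. then update the MemStore

write(1, "r1", "a")
write(2, "r2", "b")

# Simulate a crash: the MemStore contents are lost.
memstore = {}

# Recovery: replay every WAL entry newer than the last flushed sequence id.
for seq, row, value in wal:
    if seq > last_flushed_seq:
        memstore[row] = value
```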

  19. Question 12. What Are The Different Types Of Filters Used In Hbase?

    Answer :

    Filters are used to get specific data from an HBase table rather than all the records.

    They are of the following types.

    • Column Value Filters
    • Column Value Comparators
    • KeyValue Metadata Filters
    • RowKey Filters
  21. Question 13. In Hbase What Is Column Families?

    Answer :

    Column families comprise the basic unit of physical storage in HBase, to which features like compression are applied.

  23. Question 14. Name Three Disadvantages Hbase Has As Compared To Rdbms?

    Answer :

    1. HBase does not have a built-in authentication/permission mechanism.
    2. Indexes can be created only on the key column, whereas in an RDBMS they can be created on any column.
    3. With one HMaster node, there is a single point of failure.
  24. Question 15. Explain What Is The Row Key?

    Answer :

    The row key is defined by the application. As the combined key is prefixed by the row key, it enables the application to define the desired sort order. It also allows logical grouping of cells and makes sure that all cells with the same row key are co-located on the same server.
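
    Because the row key defines the sort order and HBase sorts row keys lexicographically as bytes, numeric keys are usually zero-padded to a fixed width. A small sketch:

```python
# Lexicographic sorting surprises with unpadded numeric row keys:
# "row10" sorts before "row2" because '1' < '2' byte-wise.
unpadded = sorted(["row1", "row10", "row2"])

# Zero-padding the numeric part restores the expected numeric order.
padded = sorted(["row001", "row010", "row002"])
```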

  26. Question 16. Is Hbase A Scale Out Or Scale Up Process?

    Answer :

    HBase runs on top of Hadoop, which is a distributed system. Hadoop can scale out as and when required by adding more machines on the fly. So HBase is a scale-out process.

  28. Question 17. Explain Deletion In Hbase? Mention What Are The Three Types Of Tombstone Markers In Hbase?

    Answer :

    When you delete a cell in HBase, the data is not actually deleted; instead, a tombstone marker is set, making the deleted cells invisible. Deleted cells are actually removed during compactions.

    Three types of tombstone markers are there:

    • Version delete marker: marks a single version of a column for deletion
    • Column delete marker: marks all the versions of a column for deletion
    • Family delete marker: marks all the columns of a column family for deletion
  30. Question 18. What Are The Step In Writing Something Into Hbase By A Client?

    Answer :

    In HBase, the client does not write directly into the HFile. The client first writes to the WAL (Write Ahead Log), and the same change is then applied to the MemStore. The MemStore flushes the data to permanent storage (HFiles) from time to time.
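
    The write path can be sketched as follows; the flush threshold and all names are illustrative assumptions, not real HBase configuration.

```python
# Sketch of the write path: WAL first, then MemStore; when the MemStore
# grows past a threshold it is flushed to an immutable, sorted HFile.
FLUSH_THRESHOLD = 2  # illustrative; real HBase flushes by memory size

wal, memstore, hfiles = [], {}, []

def flush():
    global memstore
    # HFiles are written sorted and are immutable once flushed
    hfiles.append(dict(sorted(memstore.items())))
    memstore = {}

def write(row, value):
    wal.append((row, value))   # durability first
    memstore[row] = value      # then the in-memory store
    if len(memstore) >= FLUSH_THRESHOLD:
        flush()

write("r1", "a")
write("r2", "b")   # triggers a flush to disk
write("r3", "c")   # starts filling a fresh MemStore
```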

  32. Question 19. Explain How Does Hbase Actually Delete A Row?

    Answer :

    In HBase, whatever you write is stored from RAM to disk, and these disk writes are immutable, barring compaction. During the deletion process in HBase, major compactions process delete markers while minor compactions do not. In normal deletes, a delete tombstone marker is written; the deleted data it represents is removed during compaction.

    Also, if you delete data and then add more data, but with an earlier timestamp than the tombstone timestamp, subsequent Gets may be masked by the delete/tombstone marker, and hence you will not receive the inserted value until after the major compaction.
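
    The masking behaviour can be modelled with a toy sketch (illustrative only; a tombstone here masks every cell whose timestamp is at or below the marker's timestamp):

```python
# Toy model of tombstone masking and major compaction.
cells = []        # (timestamp, value) puts
tombstones = []   # delete-marker timestamps

def put(ts, value):
    cells.append((ts, value))

def delete(ts):
    tombstones.append(ts)

def get():
    # Newest version not masked by any tombstone, or None
    visible = [(ts, v) for ts, v in cells
               if not any(ts <= dts for dts in tombstones)]
    return max(visible)[1] if visible else None

def major_compact():
    # Physically drop masked cells and the tombstones themselves
    global cells, tombstones
    cells = [(ts, v) for ts, v in cells
             if not any(ts <= dts for dts in tombstones)]
    tombstones = []

put(100, "old")
delete(200)            # tombstone at ts=200
put(150, "newer-put")  # written later, but with an earlier timestamp: masked!
```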

  34. Question 20. What Is Compaction In Hbase?

    Answer :

    As more and more data is written to HBase, many HFiles get created. Compaction is the process of merging these HFiles into one file; after the merged file is created successfully, the old files are discarded.

  36. Question 21. Explain What Happens If You Alter The Block Size Of A Column Family On An Already Occupied Database?

    Answer :

    When you alter the block size of a column family, the new data occupies the new block size while the old data remains within the old block size. During data compaction, old data will take on the new block size. New files, as they are flushed, have the new block size, whereas existing data will continue to be read correctly. After the next major compaction, all data will have been transformed to the new block size.

  37. Question 22. What Are The Different Compaction Types In Hbase?

    Answer :

    There are two types of compaction: major and minor. In minor compaction, adjacent small HFiles are merged to create a single HFile, without removing deleted cells. The files to be merged are chosen automatically.

    In major compaction, all the HFiles of a store (column family) are merged and a single HFile is created. Deleted cells are discarded, and major compaction is generally triggered manually.
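
    A toy sketch of the two compaction types, under the simplifying assumption that each HFile is a plain dict and the tombstones are just a set of deleted keys:

```python
# Toy minor vs. major compaction: merge sorted HFiles into one;
# only a major compaction drops tombstoned cells.
def compact(hfiles, tombstones=None, major=False):
    merged = {}
    for hf in hfiles:                 # later files win on key collisions
        merged.update(hf)
    if major and tombstones:
        for key in tombstones:
            merged.pop(key, None)     # physically drop deleted cells
    return dict(sorted(merged.items()))

hfiles = [{"r1": "a", "r2": "b"}, {"r2": "b2", "r3": "c"}]
minor = compact(hfiles)                                 # keeps everything
major = compact(hfiles, tombstones={"r3"}, major=True)  # drops r3
```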

  39. Question 23. What Is A Cell In Hbase?

    Answer :

    A cell in HBase is the smallest unit of an HBase table; it holds a piece of data in the form of the tuple {row, column, version}.
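
    The {row, column, version} addressing can be modelled as a nested map. This is an illustrative sketch, not HBase's actual storage layout:

```python
# Nested-map model of cell addressing: row key -> column -> timestamp -> value.
store = {}

def put(row, column, ts, value):
    store.setdefault(row, {}).setdefault(column, {})[ts] = value

def get_cell(row, column, ts=None):
    versions = store.get(row, {}).get(column, {})
    if not versions:
        return None
    ts = ts if ts is not None else max(versions)  # latest version by default
    return versions.get(ts)

put("r1", "cf:q", 100, "v1")
put("r1", "cf:q", 200, "v2")  # a newer version of the same cell
```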

  40. Question 24. What Is The Scope Of A Rowkey In Hbase?

    Answer :

    Rowkeys are scoped to ColumnFamilies. The same rowkey could exist in each ColumnFamily that exists in a table without collision.

  42. Question 25. What Is The Role Of The Class Hcolumndescriptor In Hbase?

    Answer :

    This class is used to store information about a column family, such as the number of versions and compression settings. It is used as input when creating a table or adding a column family.

  43. Question 26. What Is A Namespace In Hbase?

    Answer :

    A namespace is a logical grouping of tables. It is similar to a database object in a relational database system.

  44. Question 27. What Is The Lower Bound Of Versions In Hbase?

    Answer :

    The lower bound of versions indicates the minimum number of versions to be kept for a column in HBase, even when older versions expire via TTL. For example, if the value is set to 3, the three latest versions will always be maintained while older, expired ones are removed.
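
    A sketch of how a minimum-versions floor interacts with TTL-based pruning; the function and parameter names are illustrative, not HBase configuration keys:

```python
# Prune expired versions, but always keep at least `min_versions` newest ones.
def prune(versions, ttl, min_versions, now):
    # versions: list of (timestamp, value) pairs, any order
    ordered = sorted(versions, reverse=True)  # newest first
    kept = []
    for i, (ts, value) in enumerate(ordered):
        if i < min_versions or now - ts <= ttl:
            kept.append((ts, value))
    return kept

versions = [(100, "a"), (200, "b"), (300, "c"), (400, "d")]
# With now=500 and ttl=150, only ts=400 is unexpired, but the floor of
# three versions keeps ts=300 and ts=200 alive as well.
kept = prune(versions, ttl=150, min_versions=3, now=500)
```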

  46. Question 28. What Is Hotspotting In Hbase?

    Answer :

    Hotspotting is a situation when a large amount of client traffic is directed at one node, or only a few nodes, of a cluster. This traffic may represent reads, writes, or other operations. This traffic overwhelms the single machine responsible for hosting that region, causing performance degradation and potentially leading to region unavailability.
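
    A common mitigation is to salt the row key with a hash-derived prefix so that otherwise-sequential keys spread across regions. A sketch (the bucket count and helper name are illustrative):

```python
from zlib import crc32

# Prefix each row key with a deterministic salt bucket so sequential
# keys no longer land on the same region.
def salted_key(key: str, buckets: int = 4) -> str:
    salt = crc32(key.encode()) % buckets
    return f"{salt}-{key}"
```

    Note that scans over the original key range must then fan out across all salt buckets, which is the usual trade-off of this design.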

  47. Question 29. What Is Ttl (time To Live) In Hbase?

    Answer :

    TTL is a data retention technique by which the versions of a cell can be preserved until a specific time period elapses. Once that timestamp is reached, the specific version will be removed.

  48. Question 30. Why Do We Pre-create Empty Regions?

    Answer :

    Tables in HBase are initially created with one region by default. During bulk imports, all clients then write to that same region until it is large enough to split and become distributed across the cluster. So empty regions are pre-created to make this process faster.
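
    Pre-splitting requires choosing split keys up front. A sketch that slices a hex-prefixed key space into evenly sized regions (illustrative; real deployments derive split points from the actual key distribution):

```python
# Compute num_regions - 1 boundary keys for two-digit hex-prefixed row keys.
def split_keys(num_regions):
    step = 256 // num_regions
    return [format(i * step, "02x") for i in range(1, num_regions)]

keys = split_keys(4)  # three boundaries carve the key space into 4 regions
```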

  49. Question 31. Does Hbase Support Table Joins?

    Answer :

    HBase does not support table joins. However, using a MapReduce job, we can implement join queries to retrieve data from multiple HBase tables.

  50. Question 32. Which File In Hbase Is Designed After The Sstable File Of Bigtable?

    Answer :

    The HFile in HBase, which stores the actual data (not metadata), is designed after the SSTable file of BigTable.

  51. Question 33. What Is A Hbase Store?

    Answer :

    An HBase Store hosts a MemStore and zero or more StoreFiles (HFiles). A Store corresponds to a column family for a table for a given region.

  53. Question 34. What Are The Two Types Of Table Design Approach In Hbase?

    Answer :

    They are:

    1. Short and Wide
    2. Tall and Thin
  54. Question 35. When Do We Do Manual Region Splitting?

    Answer :

    Manual region splitting is done when there is an unexpected hotspot in a table because many clients are querying the same key range.

  55. Question 36. In Which Scenario Should We Consider Creating A Short And Wide Hbase Table?

    Answer :

    The short and wide table design is considered when there is:

    • A large number of columns
    • A small number of rows
  57. Question 37. In Hbase What Is Log Splitting?

    Answer :

    When a region is re-opened (for example, after a RegionServer crash), the edits in the WAL file which belong to that region need to be replayed. Therefore, edits in the WAL file must be grouped by region so that particular sets can be replayed to regenerate the data in a particular region. The process of grouping the WAL edits by region is called log splitting.
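
    The grouping step can be sketched as follows (illustrative; a WAL entry is simplified here to a (region, sequence, edit) tuple):

```python
# Group WAL entries by region so each region's edits can be replayed
# independently when the region is reassigned.
def split_log(wal_entries):
    per_region = {}
    for region, seq, edit in wal_entries:
        per_region.setdefault(region, []).append((seq, edit))
    return per_region

wal = [("regionA", 1, "put r1"),
       ("regionB", 2, "put r9"),
       ("regionA", 3, "del r1")]
groups = split_log(wal)
```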

  58. Question 38. How Does Hbase Support Bulk Data Loading?

    Answer :

    There are two main steps to do a data bulk load in Hbase:

    1. Generate the HBase data files (StoreFiles) from the data source using a custom MapReduce job. The StoreFiles are created in HBase's internal format, which can be loaded efficiently.
    2. The prepared files are then imported into a running cluster using a tool like completebulkload. Each file gets loaded to one specific region.
  59. Question 39. Why Multiwal Is Needed?

    Answer :

    With a single WAL per RegionServer, the RegionServer must write to the WAL serially, because HDFS files must be written sequentially. This causes the WAL to be a performance bottleneck. MultiWAL allows a RegionServer to write multiple WAL streams in parallel, by using multiple pipelines in the underlying HDFS instance, which increases total throughput during writes.

  60. Question 40. How Does Hbase Provide High Availability?

    Answer :

    HBase uses a feature called region replication. With this feature, for each region of a table there are multiple replicas that are opened in different RegionServers. The load balancer ensures that the region replicas are not co-hosted on the same RegionServers.

  62. Question 41. How Does Wal Help When A Regionserver Crashes?

    Answer :

    The Write Ahead Log (WAL) records all changes to data in HBase to file-based storage. If a RegionServer crashes or becomes unavailable before the MemStore is flushed, the WAL ensures that the changes to the data can be replayed.

  63. Question 42. What Is Hregionserver In Hbase?

    Answer :

    HRegionServer is the RegionServer implementation. It is responsible for serving and managing regions. In a distributed cluster, a RegionServer runs on a DataNode.

  65. Question 43. What Are The Different Block Caches In Hbase?

    Answer :

    HBase provides two different BlockCache implementations: the default on-heap LruBlockCache and the BucketCache, which is (usually) off-heap.