Cassandra is an open-source data storage system originally developed at Facebook for inbox search and designed for storing and managing large amounts of data across commodity servers. It can serve both as a real-time data store for online applications and as a read-intensive database for business intelligence systems.
The other components of Cassandra are:
- Data Center
- Commit log
- Bloom Filter
A column family in Cassandra refers to a collection of rows.
Cassandra Data Model consists of four main components:
- Cluster: Made up of multiple nodes and keyspaces
- Keyspace: a namespace that groups multiple column families, typically one per application
- Column: consists of a column name, value and timestamp
- ColumnFamily: multiple columns referenced by a row key.
Similar to a table, a memtable is an in-memory write-back cache holding content in key/column format. Data in a memtable is sorted by key, and each ColumnFamily has its own distinct memtable from which column data is retrieved by key. A memtable stores writes until it is full, and is then flushed out.
The SOURCE command is used to execute a file containing CQL statements.
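For example, in cqlsh (the file path here is purely illustrative):

```sql
-- Executes every CQL statement in the given file:
SOURCE '/home/user/statements.cql';
```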
Unlike relational databases, Cassandra does not support ACID transactions.
- ALL: Highly consistent. A write must be written to the commit log and memtable on all replica nodes in the cluster.
- EACH_QUORUM: A write must be written to the commit log and memtable on a quorum of replica nodes in all data centers.
- LOCAL_QUORUM: A write must be written to the commit log and memtable on a quorum of replica nodes in the same data center.
- ONE: A write must be written to the commit log and memtable of at least one replica node.
- TWO, THREE: Same as ONE, but for at least two and three replica nodes, respectively.
- LOCAL_ONE: A write must be written to at least one replica node in the local data center.
- ANY: A write must be written to at least one node; even if all replica nodes are down, the write can succeed after a hinted handoff has been written.
- SERIAL: Achieves linearizable consistency by preventing unconditional updates.
- LOCAL_SERIAL: Same as SERIAL, but restricted to the local data center.
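In cqlsh, the session-level consistency can be inspected or changed with the CONSISTENCY command, for example:

```sql
CONSISTENCY;               -- show the current consistency level
CONSISTENCY LOCAL_QUORUM;  -- set a new level for this session
```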
The main components of the Cassandra data model are:
- Cluster
- Keyspace
- Column
- ColumnFamily
Unlike traditional databases, Apache Cassandra delivers near real-time performance, simplifying the work of developers, administrators, data analysts, and software engineers.
- Instead of a master-slave architecture, Cassandra is built on a peer-to-peer architecture, ensuring no single point of failure.
- It also offers phenomenal flexibility: nodes can be added to any Cassandra cluster in any data center, and any client can send its request to any server.
- Cassandra is highly scalable and can easily be scaled up or down as requirements change. With high throughput for read and write operations, this NoSQL database does not need to be restarted while scaling.
- Cassandra is also valued for its strong data replication capability: data can be stored at multiple locations, so users can retrieve it from another location if one node fails. Users can configure the number of replicas they want to create.
- It shows excellent performance on massive datasets, and is therefore the NoSQL database of choice for many organizations.
- It operates on a column-oriented structure, which speeds up and simplifies slicing; data access and retrieval are also more efficient with a column-based data model.
- Further, Apache Cassandra supports a schema-free/schema-optional data model, removing the need to define up front all the columns required by your application.
- Cassandra writes data to an in-memory structure known as a memtable
- It is an in-memory cache with content stored as key/column
- Memtable data is sorted by key
- There is a separate memtable for each ColumnFamily, and column data is retrieved from it by key
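A simplified Python model of this behavior (this is a conceptual sketch, not Cassandra's actual implementation; the class name and flush threshold are illustrative):

```python
# Sketch of a memtable-like structure: an in-memory write-back cache,
# sorted by key when flushed to a (simulated) SSTable once full.
class Memtable:
    def __init__(self, flush_threshold=2):
        self.data = {}              # row key -> {column_name: value}
        self.flush_threshold = flush_threshold
        self.flushed_sstables = []  # stands in for immutable on-disk SSTables

    def write(self, key, column, value):
        self.data.setdefault(key, {})[column] = value
        if len(self.data) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # SSTable = "Sorted String Table": entries are written sorted by key
        self.flushed_sstables.append(sorted(self.data.items()))
        self.data = {}

mt = Memtable(flush_threshold=2)
mt.write("user2", "name", "Bob")
mt.write("user1", "name", "Alice")          # threshold reached -> flush
assert mt.flushed_sstables[0][0][0] == "user1"  # flushed data is key-sorted
assert mt.data == {}                             # memtable emptied after flush
```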
When adding a column, you need to make sure that the
- Column name is not conflicting with the existing column names
- Table is not defined with compact storage option
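For example, in CQL (table and column names here are illustrative):

```sql
ALTER TABLE users ADD middle_name text;
```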
ALTER KEYSPACE can be used to change properties such as the number of replicas and the durable_writes setting of a keyspace.
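For example (the keyspace name is illustrative):

```sql
ALTER KEYSPACE demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
  AND durable_writes = true;
```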
In Cassandra, a composite type allows you to define a key or a column name as a concatenation of data of different types. You can use two types of composite:
- Row Key
- Column Name
There are various cqlsh shell commands in Cassandra. The CAPTURE command captures the output of a command and appends it to a file, while the CONSISTENCY command displays the current consistency level or sets a new one.
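A short cqlsh example of CAPTURE (the output path is illustrative):

```sql
CAPTURE '/tmp/query_output.txt';  -- start appending query output to the file
SELECT * FROM system.local;       -- this result goes to the file
CAPTURE OFF;                      -- stop capturing
```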
Cassandra performs the write function by applying two commits: first it writes to a commit log on disk, and then commits to an in-memory structure known as a memtable. Once the two commits are successful, the write is achieved. Writes are then persisted in a table structure called an SSTable (sorted string table). Cassandra offers speedy write performance.
- All data is stored as bytes
- When you specify a validator, Cassandra ensures those bytes are encoded as required
- A comparator then orders the columns based on the ordering specific to the encoding
- Composites are just byte arrays with a specific encoding: for each component, the array stores a two-byte length, followed by the byte-encoded component, followed by a termination byte
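The encoding in the last point can be sketched in a few lines of Python (the function name is my own; this assumes a two-byte big-endian length and a zero termination byte per component, matching the legacy CompositeType layout):

```python
import struct

def encode_composite(*components: bytes) -> bytes:
    # Each component: 2-byte big-endian length, the raw bytes,
    # then a zero end-of-component (termination) byte.
    out = b""
    for c in components:
        out += struct.pack(">H", len(c)) + c + b"\x00"
    return out

encoded = encode_composite(b"us", b"CA")
# length 0x0002 + b"us" + 0x00, then the same pattern for b"CA"
assert encoded == b"\x00\x02us\x00\x00\x02CA\x00"
```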
Using CQL (Cassandra Query Language). Cqlsh is the shell used for interacting with the database.
Yes, but keep the following steps in mind:
- Do not forget to clear the commitlog with ‘nodetool drain’
- Turn off Cassandra and check that there is no data left in the commitlog
- Delete the sstable files for the removed CFs
Compaction refers to a maintenance process in Cassandra in which the SSTables that accumulate as memtables are flushed are reorganized to optimize the data structures on disk. There are two types of compaction in Cassandra:
- Minor compaction: started automatically when a new SSTable is created. Here, Cassandra condenses all similarly sized SSTables into one.
- Major compaction: triggered manually using nodetool; compacts all SSTables of a ColumnFamily into one.
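A major compaction can be triggered from the command line; a sketch (keyspace and table names are illustrative):

```shell
# Compact all SSTables of the given table into one
nodetool compact my_keyspace my_table
```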
Cqlsh is Cassandra's shell for CQL, the query language that enables users to communicate with the database. Using cqlsh, you can do the following things:
- Define a schema
- Insert data, and
- Execute a query
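All three steps in one illustrative cqlsh session (keyspace, table, and column names are examples):

```sql
-- Define a schema
CREATE KEYSPACE demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE demo.users (
  user_id uuid PRIMARY KEY,
  name text,
  email text
);

-- Insert data
INSERT INTO demo.users (user_id, name, email)
VALUES (uuid(), 'Alice', 'alice@example.com');

-- Execute a query
SELECT name, email FROM demo.users;
```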
Tunable Consistency is a phenomenal characteristic that makes Cassandra a favored database choice of developers, analysts, and big data architects. Consistency refers to the up-to-date and synchronized data rows on all their replicas. Cassandra's Tunable Consistency allows users to select the consistency level best suited for their use cases. It supports two consistency models: eventual consistency and strong consistency.
The former guarantees that, when no new updates are made on a given data item, all accesses eventually return the last updated value. Systems achieving eventual consistency are said to have achieved replica convergence.
For Strong consistency, Cassandra supports the following condition:
R + W > N, where
- N – Number of replicas
- W – Number of nodes that need to agree for a successful write
- R – Number of nodes that need to agree for a successful read
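The condition can be checked directly; a small Python sketch (the function name is illustrative):

```python
def is_strongly_consistent(n_replicas: int, write_nodes: int, read_nodes: int) -> bool:
    # R + W > N guarantees that read and write quorums overlap in at least
    # one replica, so a read always sees the latest acknowledged write.
    return read_nodes + write_nodes > n_replicas

# QUORUM reads and writes with replication factor 3: W=2, R=2 -> 2+2 > 3
assert is_strongly_consistent(3, 2, 2)
# ONE/ONE with replication factor 3: 1+1 <= 3, only eventually consistent
assert not is_strongly_consistent(3, 1, 1)
```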
While creating a table, a primary key is mandatory; it is made up of one or more columns of the table.
Replication factor is the measure of the number of copies of the data that exist in the cluster; increasing it increases the number of nodes holding each row.
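Replication factor is set per keyspace; for example (keyspace and data center names are illustrative):

```sql
CREATE KEYSPACE shop
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
```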
Windows and Linux.
While a node is a single machine running Cassandra, a cluster is a collection of nodes holding similar types of data, grouped together. Data centers are useful components when serving customers in different geographical areas; you can group different nodes of a cluster into different data centers.
Since Cassandra is a Java application, it can successfully run on any Java-driven platform or Java Runtime Environment (JRE) or Java Virtual Machine (JVM). Cassandra also runs on RedHat, CentOS, Debian and Ubuntu Linux platforms.
Cassandra appends changed data to the commitlog.
- The commitlog acts as a crash-recovery log for the data
- A write operation is never considered successful until the changed data is appended to the commitlog
Data will not be lost once the commitlog is flushed out to file.
With a strong requirement to scale systems when additional resources are needed, the CAP theorem plays a major role in shaping the scaling strategy of distributed systems. The Consistency, Availability, and Partition tolerance (CAP) theorem states that in distributed systems like Cassandra, users can enjoy only two out of these three characteristics.
One of them needs to be sacrificed. Consistency guarantees the return of the most recent write to the client, availability guarantees a reasonable response within minimum time, and partition tolerance means the system continues operating when network partitions occur. The two practical options are AP and CP.
Cassandra writes data in three components
- Commitlog write
- Memtable write
- SStable write
Thrift is a legacy RPC protocol, or API, unified with a code-generation tool. The purpose of using Thrift in Cassandra is to facilitate access to the database across programming languages.
A Cassandra super column is a unique element consisting of similar collections of data. Super columns are actually key-value pairs whose values are columns: a sorted array of columns following the hierarchy keyspace > column family > super column > column data structure in JSON.
Similar to row keys, super column data entries contain no independent values but are used to collect other columns. It is interesting to note that super column keys appearing in different rows do not necessarily have to match.
Associated with SSTable, Bloom filter is an off-heap (off the Java heap to native memory) data structure to check whether there is any data available in the SSTable before performing any I/O disk operation.
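A toy Python sketch of the idea (Cassandra's real bloom filters live off the Java heap and use murmur hashing; this only demonstrates the "definitely absent / maybe present" property that lets reads skip needless disk I/O — the class and parameters are illustrative):

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, key: str):
        # Derive several bit positions per key from independent hashes
        for i in range(self.hashes):
            digest = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key: str):
        for p in self._positions(key):
            self.bits[p] = True

    def might_contain(self, key: str) -> bool:
        # False -> key is definitely not in the SSTable (skip the disk read);
        # True  -> key *might* be present (false positives are possible).
        return all(self.bits[p] for p in self._positions(key))

bf = BloomFilter()
bf.add("row-1")
assert bf.might_contain("row-1")  # no false negatives, ever
```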
Cassandra was designed to handle big data workloads across multiple nodes without any single point of failure. The various factors responsible for using Cassandra are
- It is fault tolerant and consistent
- Gigabytes to petabytes scalabilities
- It is a column-oriented database
- No single point of failure
- No need for separate caching layer
- Flexible schema design
- It has flexible data storage, easy data distribution, and fast writes
- It supports ACID (Atomicity, Consistency, Isolation, and Durability)properties
- Multi-data center and cloud capable
- Data compression
As the name suggests, a ColumnFamily is a structure holding an unbounded number of rows, each referenced by key-value pairs where the key is the name of the column and the value is the column data. It is much like a hashmap in Java or a dictionary in Python. Remember, the rows are not limited to a predefined list of columns here. Also, the ColumnFamily is absolutely flexible: one row may have 100 columns while another has only 2.
A Cassandra column basically holds three values:
- Column Name
- Value
- Time Stamp
Using get_range_slices. You can start iteration with the empty string, and after each iteration the last key read serves as the start key for the next iteration.
SSTable expands to 'Sorted String Table', an important data file in Cassandra that receives regularly flushed memtables. SSTables are stored on disk and exist for each Cassandra table. Being immutable, SSTables do not allow any further addition or removal of data items once written. For each SSTable, Cassandra creates three separate structures: a partition index, a partition summary, and a bloom filter.
Yes, but it will require running repair to alter the replica count of existing data.
A tombstone is a marker indicating a column deletion. These marked columns are deleted during compaction. Tombstones are of great significance because Cassandra supports eventual consistency: the deletion must be recorded and replicated so that replicas that missed the delete do not resurrect the data.
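The delete-as-write behavior can be sketched in Python (a simplified model, not Cassandra's implementation; the class and names are illustrative):

```python
TOMBSTONE = object()  # marker meaning "this column was deleted"

class Row:
    def __init__(self):
        self.columns = {}

    def delete(self, name):
        # A delete is itself a write: immutable SSTables can't be edited,
        # so a tombstone marker is written instead of removing the data.
        self.columns[name] = TOMBSTONE

    def read(self, name):
        value = self.columns.get(name)
        return None if value is TOMBSTONE else value

    def compact(self):
        # Compaction is where tombstoned columns are actually dropped
        self.columns = {k: v for k, v in self.columns.items()
                        if v is not TOMBSTONE}

row = Row()
row.columns["email"] = "a@example.com"
row.delete("email")
assert row.read("email") is None   # reads treat the tombstone as deleted
assert "email" in row.columns      # but the marker still occupies space...
row.compact()
assert "email" not in row.columns  # ...until compaction removes it
```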
DataStax OpsCenter: a web-based management and monitoring solution for Cassandra clusters and DataStax. It is free to download, with an additional enterprise edition of OpsCenter available.
SPM primarily administers Cassandra metrics and various OS and JVM metrics. Besides Cassandra, SPM also monitors Hadoop, Spark, Solr, Storm, zookeeper and other Big Data platforms. The main features of SPM include correlation of events and metrics, distributed transaction tracing, creating real-time graphs with zooming, anomaly detection and heartbeat alerting.
CQL is Cassandra Query language to access and query the Apache distributed database. It consists of a CQL parser that incites all the implementation details to the server. The syntax of CQL is similar to SQL but it does not alter the Cassandra data model.
A cluster is a container for keyspaces. The Cassandra database is segmented over several machines that operate together. The cluster is the outermost container, arranging the nodes in a ring format and assigning data to them. Each node holds replicas, which take charge in case of a data-handling failure.
In Cassandra, a keyspace is a namespace that determines data replication on nodes. A cluster may contain multiple keyspaces.
Both elements work on the principle of a tuple having a name and a value. However, the former's value is a string, while the value in the latter is a map of columns with different data types.
Unlike Columns, Super Columns do not contain the third component of timestamp.
By default, Cassandra uses port 7000 for cluster management, 9160 for Thrift clients, and 8080 for JMX. These are all TCP ports and can be edited in the configuration file: bin/Cassandra.in.sh
SSTables are immutable, so a row cannot be removed from them directly. When a row needs to be deleted, Cassandra assigns the column a special value called a tombstone. When the data is read, the tombstone value is treated as deleted.
Apache Cassandra Questions and Answers for Interview
Whether you're experienced or a fresher preparing for an Apache Cassandra job interview and don't know what kind of questions will be asked, go through the above Apache Cassandra interview questions and answers to help crack your interview.