Introduction to Apache Cassandra


Apache Cassandra was developed at Facebook and open-sourced in July 2008. It is regarded as an excellent choice when users demand scalability and high availability without compromising performance. Apache Cassandra is a highly scalable, high-performance distributed database designed to handle large volumes of data across many commodity servers with no single point of failure. Compared with other popular distributed databases such as Riak, HBase and Voldemort, Cassandra offers a robust and expressive interface for modeling and querying data. Cassandra is a fully NoSQL database engine and, unlike traditional relational databases, it can store and access largely unstructured data.

Some of the unique points surrounding Apache Cassandra are:

  • A scalable, highly available and fault-tolerant database with tunable consistency.
  • Column-oriented, with a distributed design based on Amazon’s Dynamo and a data model based on Google’s Bigtable.
  • Implements a Dynamo-style replication model with no single point of failure, and adds a more powerful “column family” data model.
  • Provides high write and read throughput; a Cassandra cluster has no special nodes, i.e. no masters, no slaves and no elected leaders.


The following are the top features of Apache Cassandra:

  1. Elastic Scalability: One of Cassandra's primary features; a cluster can easily be scaled up or down. Any number of nodes can be added or removed without downtime and without restarting the server, and throughput grows with the number of nodes.
  2. High Availability and Fault Tolerance: Cassandra achieves high availability and fault tolerance through data replication: if any one node fails, the data remains available on other nodes, depending on the replication factor. It also provides advanced backup and recovery options.
  3. Transaction Support: Cassandra provides atomicity, isolation and durability at the partition level, plus lightweight (compare-and-set) transactions built on Paxos; it does not offer the full multi-row ACID transactions of a relational database.
  4. Column-Oriented: Its data model is column-oriented, and columns are stored sorted by column name, so a row can contain any number of columns.
  5. Tunable Consistency: Cassandra provides tunable consistency, i.e. users can choose the consistency level per read and write operation. Eventual consistency often conjures up fear and doubt in the minds of application developers, but it is important to note that reaching a consistent state typically takes only milliseconds.
  6. Gossip Protocol: Cassandra uses a gossip protocol to discover the state of all nodes in a cluster. Nodes exchange state information about themselves and about other nodes they know of, gossiping with at most three other nodes per round. Nodes do not exchange information with every other node in the cluster, which keeps network load low, yet state information about every node still propagates throughout the cluster within a few rounds. The gossip protocol also facilitates failure detection.
  7. Linear Scaling and Design-Time Schema: Thanks to its multi-master architecture, Cassandra scales linearly: doubling the number of nodes in a cluster roughly doubles the write throughput it can handle. Cassandra requires schemas and data types to be defined at design time, i.e. the schema is defined before data is written.
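
The tunable-consistency rule above has a simple arithmetic core: with a replication factor of N, reads touching R replicas and writes touching W replicas are strongly consistent whenever R + W > N, because the read set must then overlap the write set. A minimal sketch of that check (the helper names are invented for illustration, not driver API):

```python
def is_strongly_consistent(rf: int, r: int, w: int) -> bool:
    """Read and write replica sets must overlap: R + W > N."""
    return r + w > rf

def quorum(rf: int) -> int:
    """QUORUM is a majority of replicas: floor(RF / 2) + 1."""
    return rf // 2 + 1

print(is_strongly_consistent(3, quorum(3), quorum(3)))  # True: 2 + 2 > 3
print(is_strongly_consistent(3, 1, 1))                  # False: ONE/ONE may read stale data
```

This is why QUORUM reads combined with QUORUM writes are a popular default: they guarantee overlap while still tolerating the loss of a replica.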


Apache Cassandra vs. Traditional Relational Database Management Systems

The following table highlights the differences between Apache Cassandra and a traditional RDBMS:

  • Data Types: Cassandra deals with unstructured data and can handle data including sound, video and images; being a NoSQL database, it supports huge volumes of data. A traditional RDBMS deals with structured data — text, characters and numbers — in moderate amounts.
  • Schema: Cassandra is highly scalable and flexible, and is often described as schema-less. An RDBMS has a fixed schema and generally many limitations on data storage.
  • Table Dimension: In Cassandra, the table dimension is Row x Column Key x Column Value; the row is the unit of replication, the column is the unit of storage, and relationships are represented using collections. In an RDBMS, the table dimension is Row x Column; a row is an individual record, columns represent the attributes of a relation, and there are concepts such as foreign keys and joins.
  • Data Handling: Cassandra handles large data; the keyspace is the outermost storage unit, data transfer is extremely fast, and data distribution is automatic. An RDBMS handles moderate data; the database is the outermost storage unit, data transfer is slower, and data distribution is manual.
  • Misc. Features: Cassandra offers decentralized deployments, transactions written in many locations, and horizontal (scale-out) deployment. An RDBMS has centralized deployments, transactions written in one location, and vertical (scale-up) deployment.


Cassandra Architecture

The primary objective of Cassandra is to handle large data workloads across multiple nodes without any single point of failure. Cassandra uses a peer-to-peer distributed architecture, with data distributed among all the nodes in a cluster.

  • Every node in the cluster plays the same role; each node functions independently while staying interconnected with the other nodes.
  • Every node can accept read and write requests, regardless of where the data is physically located in the cluster.
  • If a node fails, its data can be read from other nodes in the network.
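
The gossip-driven way nodes learn about one another can be illustrated with a toy simulation (this is only an illustration of the propagation idea, not Cassandra's actual Gossiper implementation): each round, every node shares everything it knows with a few random peers, and knowledge of the whole cluster spreads in a handful of rounds.

```python
import random

def gossip_round(state, peers_per_round=3):
    """Each node pushes the node states it knows to a few random peers."""
    for node, known in list(state.items()):
        peers = random.sample([n for n in state if n != node],
                              min(peers_per_round, len(state) - 1))
        for peer in peers:
            state[peer] |= known  # peer merges everything this node knows

# 8-node cluster; initially each node only knows about itself
cluster = {n: {n} for n in range(8)}
rounds = 0
while any(len(known) < len(cluster) for known in cluster.values()):
    gossip_round(cluster)
    rounds += 1
print(f"full cluster state propagated after {rounds} rounds")
```

Because each node contacts only a few peers per round, network load stays bounded while information still reaches every node quickly.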

Writing and Reading Data

Data written to a Cassandra node is first recorded in an on-disk commit log and then written to a memory-based structure called a memtable. When the memtable's size exceeds a configurable threshold, the data is written to an immutable file on disk called an SSTable. Buffering writes in memory in this way allows writes to always be a fully sequential operation, with many megabytes of disk I/O happening at the same time, rather than one write at a time over a long period.
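
The write path above can be sketched in a few lines (a toy model with invented names, standing in for Cassandra's actual commit log, memtable and SSTable machinery):

```python
class ToyNode:
    """Illustrative write path: commit log first, then memtable, flush to SSTables."""
    def __init__(self, memtable_limit=3):
        self.commit_log = []        # sequential on-disk log (a list stands in for a file)
        self.memtable = {}          # in-memory structure, sorted on flush
        self.sstables = []          # immutable on-disk tables
        self.memtable_limit = memtable_limit

    def write(self, key, value):
        self.commit_log.append((key, value))   # durability first
        self.memtable[key] = value             # then the in-memory write
        if len(self.memtable) >= self.memtable_limit:
            self.flush()

    def flush(self):
        # the memtable is written out as one sorted, immutable SSTable
        self.sstables.append(dict(sorted(self.memtable.items())))
        self.memtable = {}

node = ToyNode()
for i in range(4):
    node.write(f"k{i}", i)
print(len(node.sstables), node.memtable)  # one flush happened; k3 still in the memtable
```

Note how every write appends to the log and updates memory; only the periodic flush touches the data files, which is what keeps disk I/O sequential.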

[Figure: Cassandra write procedure]

Reading data from Cassandra involves a number of processes, including various memory caches and other mechanisms designed to produce fast read response times. For a read request, Cassandra consults an in-memory data structure called a Bloom filter, which estimates the probability that an SSTable holds the needed data. The Bloom filter can tell very quickly whether the file probably has the needed data, or certainly does not have it.
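
A Bloom filter's one-sided error ("probably yes" or "certainly no") is easy to demonstrate with a toy implementation (illustrative only; Cassandra sizes its real filters from a configurable false-positive target):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: may report false positives, never false negatives."""
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, key):
        # derive several bit positions from independent hashes of the key
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        return all(self.bits >> pos & 1 for pos in self._positions(key))

bf = BloomFilter()
bf.add("partition-42")
print(bf.might_contain("partition-42"))   # True: the SSTable must be checked
print(bf.might_contain("partition-999"))  # almost certainly False: skip the SSTable
```

A "no" answer lets Cassandra skip an SSTable entirely, so reads avoid touching files that cannot hold the requested partition.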

[Figure: Cassandra read procedure]


Data Distribution and Replication

Data Distribution

Cassandra automatically distributes and maintains data across a cluster, freeing developers and architects to direct their energies into value-creating application features.

Cassandra has an internal component called a partitioner, which determines how data is distributed across the nodes that make up a database cluster.

Cassandra also automatically maintains the balance of data across a cluster even when existing nodes are removed or new nodes are added to a system.
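
The partitioner's job can be sketched as a consistent-hash ring (an illustrative model: MD5 stands in for Cassandra's default Murmur3Partitioner, and the node names are invented):

```python
import bisect
import hashlib

def token(key: str) -> int:
    """Hash a partition key to a position (token) on the ring."""
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

# hypothetical 4-node cluster: each node owns one token on the ring
ring = sorted((token(f"node-{i}"), f"node-{i}") for i in range(4))
tokens = [t for t, _ in ring]

def owner(key: str) -> str:
    """A partition key belongs to the first node clockwise from its token."""
    idx = bisect.bisect(tokens, token(key)) % len(ring)
    return ring[idx][1]

print(owner("user:alice"), owner("user:bob"))
```

Because placement depends only on the hash, every node can compute the same answer locally, with no coordinator holding a master copy of the mapping.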

Data Replication

Cassandra features a replication mechanism that is very easy to configure and administer. A Cassandra cluster can have one or more keyspaces. Replication is configured at the keyspace level, allowing different keyspaces to have different replication models. Cassandra is able to replicate data to multiple nodes in a cluster, which helps ensure reliability, continuous availability, and fast I/O operations. Cassandra automatically maintains that replication even when nodes are removed, added, or fail.
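
Keyspace-level replication with the simple (single-data-center) strategy can be sketched on the same ring idea: the replicas for a row are the next RF distinct nodes clockwise from the row's token (an illustrative model; node names and hashing are invented stand-ins):

```python
import bisect
import hashlib

def token(key: str) -> int:
    """Hash a partition key or node name to a ring position."""
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

# hypothetical 6-node cluster, one token per node
ring = sorted((token(f"node-{i}"), f"node-{i}") for i in range(6))
tokens = [t for t, _ in ring]

def replicas(key: str, rf: int):
    """SimpleStrategy-style placement: the RF nodes clockwise from the key's token."""
    start = bisect.bisect(tokens, token(key)) % len(ring)
    return [ring[(start + i) % len(ring)][1] for i in range(rf)]

print(replicas("sensor:17", rf=3))  # three distinct nodes hold this row
```

With RF = 3, any single node can be lost and the row is still served from two other replicas, which is what underlies the availability guarantees described above.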

Multi-Data Center and Cloud Support

Cassandra’s replication supports multiple data centers and cloud availability zones. Users can easily set up replication so that data is replicated across geographically diverse data centers, with users being able to read and write to any data center they choose and the data being automatically synchronized across all locations.


Cassandra Query Language (CQL)

The Cassandra Query Language (CQL) is the primary language for communicating with the Cassandra database. CQL is purposefully similar to Structured Query Language (SQL) used in relational databases like MySQL and Postgres. 

The most basic way to interact with Cassandra is the CQL shell, cqlsh, a command-line environment for issuing CQL statements. Using cqlsh, the user can perform many operations, including defining a schema, inserting and altering data, and executing queries. CQL adds an abstraction layer that hides the implementation details of the underlying storage structure and provides native syntaxes for collections and other common encodings.

Common ways to access CQL are:

  • Start cqlsh, the Python-based command-line client, on the command line of a Cassandra node.
  • For developing applications, use one of the C#, Java, or Python open-source drivers.

CQL Schema:

Creating Table:

  CREATE (TABLE | COLUMNFAMILY) <tablename>
      (<column-definition>, <column-definition>, ...)
      WITH <option> AND <option>;


Inserting Data into Table

   INSERT INTO KeyspaceName.TableName (ColumnName1, ColumnName2, ColumnName3, ...)
   VALUES (Column1Value, Column2Value, Column3Value, ...);


Updating Data into Table

   UPDATE KeyspaceName.TableName
   SET ColumnName1 = NewColumn1Value,
       ColumnName2 = NewColumn2Value,
       ColumnName3 = NewColumn3Value
   WHERE ColumnName = ColumnValue;


Deleting Data from Table

   DELETE FROM KeyspaceName.TableName WHERE ColumnName1 = Column1Value;


Selecting Data from Table

    SELECT ColumnNames FROM KeyspaceName.TableName
    WHERE ColumnName1 = Column1Value AND ColumnName2 = Column2Value;


CQL imposes the following restrictions:

  • No arbitrary WHERE clause – Apache Cassandra does not allow arbitrary predicates in a WHERE clause; the columns in a WHERE clause must be part of the primary key.
  • No JOINs – you cannot join data from two Apache Cassandra tables.
  • No arbitrary GROUP BY – GROUP BY can only be applied to partition or clustering columns (GROUP BY support in SELECT statements was added in Apache Cassandra 3.10).
  • No arbitrary ORDER BY clauses – ORDER BY can only be applied to clustering columns.
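
The WHERE-clause restriction can be approximated as a prefix rule over the primary key: restrict the partition key first, then clustering columns in declaration order. The sketch below is a simplification (real CQL also has ALLOW FILTERING and secondary indexes), and the schema is hypothetical:

```python
# hypothetical schema: PRIMARY KEY (user_id, ts) -> partition key, then clustering column
primary_key = ["user_id", "ts"]

def where_is_allowed(where_columns):
    """WHERE columns must form a prefix of the primary key, in order."""
    return (len(where_columns) <= len(primary_key)
            and all(c == k for c, k in zip(where_columns, primary_key)))

print(where_is_allowed(["user_id"]))        # True: partition key restricted
print(where_is_allowed(["user_id", "ts"]))  # True: full primary key
print(where_is_allowed(["ts"]))             # False: partition key not restricted
```

The rule exists because the partition key determines which nodes hold the data; without it, a query would have to scan the whole cluster.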



Cassandra is a fully distributed, replicated database. There is no master and no slave; it is always on and performant. These features and characteristics make Cassandra a fantastic solution to the big data challenge.







Dr. Anand Nayyar is an academician, researcher, author, inventor, consultant and orator. He is currently working as a Professor, Researcher and Scientist in the Graduate School at Duy Tan University, Vietnam. He can be reached on YouTube: Gyaan with Anand Nayyar.
