Wednesday, July 10, 2013

Hadoop MapReduce with Cassandra CQL through Pig

One of the main disadvantages of using Pig with Cassandra is that Pig always reads all the data from Cassandra storage and only filters it afterwards. It is easy to imagine the workload when you have hundreds of millions of rows in a column family. For example, in our production environment we usually have more than 300 million rows, of which only 20-25 million are unprocessed. When we execute a Pig script, we get more than 5000 map tasks covering all 300 million rows. It is a time-consuming, heavy batch process that we always tried to avoid, in vain. It would be very nice if we could use a CQL query with a WHERE clause in Pig scripts to select and filter our data on the server side. The benefit is clear: less data transferred, fewer map tasks and a lighter workload.


This feature is not yet available in the latest released version of Cassandra (1.2.6); it is planned for the next version, Cassandra 1.2.7. However, a patch is already available, so with a little effort we can try it out.
First we have to download the Cassandra source code from the 1.2 branch. We also need a configured Hadoop cluster with Pig.
1) Download the Cassandra source code from branch 1.2
git clone -b cassandra-1.2 http://git-wip-us.apache.org/repos/asf/cassandra.git
(I assume we are already familiar with git.)
Then apply the patch fix_where_clause.patch inside the cloned repository:
git apply fix_where_clause.patch

Now compile the source code (Cassandra builds with ant) and set up the cluster. For testing purposes I am using my single-node Hadoop 1.1.2 + Cassandra 1.2.7 + Pig 0.11.1 cluster.
2) To set up a single-node cluster, see A single node Hadoop + Cassandra + Pig setup
3) Create a CF as follows:
CREATE TABLE test (
  id text PRIMARY KEY,
  title text,
  age int
);
and insert some dummy data:
insert into test (id, title, age) values('1', 'child', 21);
insert into test (id, title, age) values('2', 'support', 21);
insert into test (id, title, age) values('3', 'manager', 31);
insert into test (id, title, age) values('4', 'QA', 41); 
insert into test (id, title, age) values('5', 'QA', 30); 
insert into test (id, title, age) values('6', 'QA', 30); 
4) Execute the following Pig script:
rows = LOAD 'cql://keyspace1/test?page_size=1&columns=title,age&split_size=4&where_clause=age%3D30' USING CqlStorage();
dump rows;
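Note that the where_clause value in the load URL must be URL-encoded: the %3D above is the encoded '=' in age=30. As a minimal sketch, a load URL like the one above could be assembled with Python's standard library (the keyspace, table and parameter values simply mirror the example):

```python
from urllib.parse import quote

# The raw CQL predicate we want to push down to Cassandra.
where_clause = "age=30"

# CqlStorage expects the where_clause to be URL-encoded,
# so '=' becomes %3D, '>' becomes %3E, and so on.
encoded = quote(where_clause, safe="")

url = (
    "cql://keyspace1/test"
    "?page_size=1"
    "&columns=title,age"
    "&split_size=4"
    f"&where_clause={encoded}"
)
print(url)
# cql://keyspace1/test?page_size=1&columns=title,age&split_size=4&where_clause=age%3D30
```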
You should get the following result on the Pig console:
((id,5),(age,30),(title,QA))
((id,6),(age,30),(title,QA))
Let's check the Hadoop job history page:

Map input records equals 2.
With this new feature we can use a WHERE clause to select the desired data from Cassandra storage. You can also check the JIRA issue tracker to drill down further.
All the credit goes to Alex Liu, who implemented this feature.

4 comments:

BHUPENDRA KUMAR said...

hi shamim,

i did the same as you instructed but when i go to the address localhost:50070 it shows 0 live node

working on VMware workstation
on RHEL_5x64

thank you


Lennart said...

Hi,

Does that where_clause patch still apply to version 2.0.9 of Cassandra?

I'm having trouble querying Cassandra through Pig with CQL and a where_clause.

Passing part of a partition key results in an exception, and so does passing part of a clustering key.

please look at http://stackoverflow.com/questions/24912852/composite-key-in-cassandra-with-pig-and-where-clause-for-part-of-the-key-in-the for more information about my problem.

Really hope you can help me out in some way, or at least point me in a direction.

Regards,
Lennart Weijl

Shamim Bhuiyan said...

Hello,
As far as I know, this patch is not required any more; it was added to the source last year.
Regards
Shamim ahmed