Reusing Results in Big Data Frameworks


Big Data analysis has been a very hot and active research area during the past few years. It is becoming hard to execute data analysis tasks efficiently with traditional data warehouse solutions.

Parallel processing platforms, and the parallel dataflow systems that run on top of them, are increasingly popular. They have greatly improved the throughput of data analysis tasks. The trade-off is that they consume more computation resources: tens or hundreds of nodes run together to execute a single task.

Even so, it might still take hours or even days to complete a task, so it is very important to improve resource utilization and computation efficiency. According to research conducted by Microsoft, around 30% of the sub-computations in typical workloads are common across jobs. This computation redundancy is a waste of time and resources.

Apache Pig is a parallel dataflow system that runs on top of Apache Hadoop, a parallel processing platform. Pig/Hadoop is one of the most popular combinations used for large-scale data processing.

This project proposed a framework on top of Pig/Hadoop that materializes and reuses previous computation results to avoid redundant computation. The idea came from the materialized view technique in relational databases. Selected computation outputs were stored in the Hadoop Distributed File System (HDFS) because of their large size.
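The reuse decision can be pictured as a simple check-then-materialize step around each sub-computation. The following is a minimal Python sketch of that idea, not the thesis implementation; the signature_of hashing, the /materialized directory, and the run_subplan and hdfs helpers are hypothetical names chosen only for illustration.

# Minimal sketch (not the thesis code) of reusing a materialized sub-computation.
# All names below (signature_of, /materialized, run_subplan, hdfs) are hypothetical.
import hashlib
import os

MATERIALIZED_ROOT = "/materialized"   # assumed HDFS directory for stored outputs

def signature_of(subplan_text):
    """Derive a stable key for a sub-computation from its normalized plan text."""
    return hashlib.sha1(subplan_text.encode("utf-8")).hexdigest()

def execute_with_reuse(subplan_text, run_subplan, hdfs):
    """Return the HDFS path of the sub-computation's output, reusing a
    previously materialized copy when one already exists."""
    path = os.path.join(MATERIALIZED_ROOT, signature_of(subplan_text))
    if hdfs.exists(path):                      # reuse: skip the redundant jobs
        return path
    output_path = run_subplan(subplan_text)    # compute once ...
    hdfs.copy(output_path, path)               # ... and materialize for later queries
    return path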

The execution statistics of those outputs were stored in MySQL Cluster. A plan matcher and rewriter component queried MySQL Cluster to find the maximal sub-computation shared between the stored outputs and the incoming query, and rewrote the query to read the materialized outputs instead of recomputing them. The framework was evaluated with the TPC-H benchmark.
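To make the matching-and-rewriting step concrete, here is a hedged Python sketch under simplified assumptions: a query plan is modeled as a list of operator strings, the catalog dictionary stands in for the metadata kept in MySQL Cluster, and the prefix-based matching is only an illustrative stand-in for the thesis's actual plan comparison.

# Illustrative sketch of a plan matcher/rewriter: find the longest prefix of the
# new plan that was already materialized and replace it with a load of the
# stored output. The plan representation and names are assumptions, not the
# thesis's actual data structures.
from typing import Dict, List, Optional, Tuple

def enumerate_prefixes(plan: List[str]) -> List[Tuple[str, int]]:
    """Yield (signature, length) for every prefix of the operator list,
    longest first, so the maximal shared sub-computation is tried first."""
    return [("|".join(plan[:k]), k) for k in range(len(plan), 0, -1)]

def rewrite_with_reuse(plan: List[str],
                       catalog: Dict[str, str]) -> Tuple[List[str], Optional[str]]:
    """Replace the longest materialized prefix with a load of its stored output.
    catalog maps a sub-plan signature to the HDFS path of its materialized result."""
    for signature, k in enumerate_prefixes(plan):
        path = catalog.get(signature)
        if path is not None:
            return ["LOAD " + path] + plan[k:], signature
    return plan, None  # no shared sub-computation found; run the plan unchanged

# Example: the first two operators were materialized by an earlier query.
catalog = {"LOAD lineitem|FILTER quantity > 30": "/materialized/a1b2c3"}
plan = ["LOAD lineitem", "FILTER quantity > 30", "GROUP BY orderkey", "STORE out"]
print(rewrite_with_reuse(plan, catalog))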

The results showed that execution time was significantly reduced by avoiding redundant computation. By reusing sub-computations, query execution time was reduced by 65% on average, while queries took only around 30 to 45 seconds when whole computations were reused. In addition, the results showed that the materialization overhead was only around 25% on average.
Source: KTH
Author: Shang, Hui

