My course on 'Distributed Computing' (MGS 655 at the University at Buffalo) requires students to do a hands-on project. They are allowed to propose a topic of their choice (pertaining to distributed computing, of course!) and work in groups of at most two students. Several interesting projects were designed this semester.
This post is the first of a series of blog posts that will showcase the systems they built. Your comments are very welcome, both to motivate the students and to suggest ways in which they can improve and develop their work.
Abhinash Behera and Vijay Lakshmikanthan designed a large scale image/video search system. Here's a description of their project.
Description:
We built a search system that can retrieve relevant video files based on user input, where the input is assumed to be an image. It is a Query By Image Content (QBIC) information retrieval system. Such a complex system can only be realized with distributed computing, since storage, analysis, and retrieval are all highly resource-intensive processes. Since this is a search-based system, query processing time has to be minimal while still achieving high 'precision', i.e., returning highly relevant videos.
We have divided the project into three parts: (1) the presentation tier, (2) the application/processing tier, and (3) the database tier. To handle the demand for compute power, we have used four virtual machines, including our own systems.
We developed the database tier on MongoDB and were able to successfully store and retrieve the video files. The logic for processing the image and video chunks is handled by the OpenIMAJ library, a Java-based open-source multimedia content analysis toolkit. We developed the application layer in two parts: a business layer written in Java and an analysis layer built on Apache Hadoop.
A design diagram representing the above description is presented below.
Architecture Overview: We have used a decentralized architecture for our application. The business tier is divided into two parts: a Java implementation of the image comparison algorithm, and the Apache Hadoop framework for frame processing. The primary advantage of this architecture is the separation of the analytical and processing tasks. The Hadoop analysis layer processes the individual frames of each video in parallel, providing a boost to the available processing power.
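To make the per-frame parallelism concrete, here is a minimal single-machine sketch, not the students' actual Hadoop job: since each frame can be described independently, the per-frame work fans out across a worker pool. The `describe` function is a hypothetical stand-in for the OpenIMAJ histogram extraction that the real map tasks would run.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFrames {
    // Hypothetical stand-in for the per-frame analysis; in the real system
    // this would be OpenIMAJ histogram extraction inside a Hadoop map task.
    static String describe(int frameIndex) {
        return "frame-" + frameIndex + ":histogram";
    }

    // Submit one task per frame to a fixed pool, then collect the results
    // in frame order. Frames are independent, so they run concurrently.
    public static List<String> processFrames(int frameCount, int workers) {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (int i = 0; i < frameCount; i++) {
                final int idx = i;
                futures.add(pool.submit(() -> describe(idx)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // blocks until that frame is done
            }
            return results;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

Hadoop applies the same idea across machines rather than threads, with the added benefit of moving computation to where the video chunks are stored.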
Algorithm Overview: We ingest the videos into MongoDB's GridFS, which enables fast retrieval of the videos when requested by the client. Next, the Hadoop system processes these videos and generates color histograms, which are stored in the MongoDB database. The business tier uses this information to compare images and produce intermediate image matches. The videos containing these intermediate images are then fetched from the database and displayed on the UI.
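As an illustration of the histogram step, here is a hedged, standalone sketch of a coarse RGB color histogram with a histogram-intersection similarity. OpenIMAJ ships its own histogram models, so this is not the project's code; the bin count (4 per channel) is an arbitrary choice for the example.

```java
import java.awt.image.BufferedImage;

public class ColorHistogram {
    static final int BINS = 4; // 4 bins per channel -> 64-dimensional histogram

    // Bucket each pixel's R, G, B values into coarse bins and normalize
    // so the histogram sums to 1 (making images of different sizes comparable).
    public static double[] histogram(BufferedImage img) {
        double[] h = new double[BINS * BINS * BINS];
        int w = img.getWidth(), ht = img.getHeight();
        for (int y = 0; y < ht; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);
                int r = ((rgb >> 16) & 0xFF) * BINS / 256;
                int g = ((rgb >> 8) & 0xFF) * BINS / 256;
                int b = (rgb & 0xFF) * BINS / 256;
                h[(r * BINS + g) * BINS + b]++;
            }
        }
        double total = (double) w * ht;
        for (int i = 0; i < h.length; i++) h[i] /= total;
        return h;
    }

    // Histogram intersection: sum of per-bin minima.
    // 1.0 means identical color distributions, 0.0 means no overlap.
    public static double similarity(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += Math.min(a[i], b[i]);
        return s;
    }
}
```

In the described pipeline, a histogram like this would be computed per frame by the Hadoop layer and stored in MongoDB; at query time, the business tier compares the query image's histogram against the stored ones to rank candidate videos.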