Big Data Hadoop Project Titles

Below is a list of projects on Big Data Hadoop.

1) Twitter data sentiment analysis using Flume and Hive

2) Business insights from user usage records of data cards

3) Wiki page ranking with Hadoop

4) Health care data management using the Apache Hadoop ecosystem

5) Sensex log data processing using Big Data tools

6) Retail data analysis using Big Data

7) Facebook data analysis using Hadoop and Hive

8) Archiving LFS (Local File System) & CIFS data to Hadoop

9) Aadhaar-based analysis using Hadoop

10) Web-based data management of Apache Hive

11) Automated RDBMS data archiving and de-archiving using Hadoop and Sqoop

12) Big Data PDF printer

13) Airline on-time performance

14) Climatic data analysis using Hadoop (NCDC)

15) MovieLens data processing and analysis

16) Two-phase approach for data anonymization using MapReduce

17) Migrating different sources to Big Data and its performance

18) Flight history analysis

19) Pseudo-distributed Hadoop cluster setup script

HARDWARE REQUIREMENTS FOR THE CLUSTER

  • 12-24 hard disks of 1-4 TB each, in a JBOD (Just a Bunch Of Disks) configuration
  • 2 quad-, hex-, or octo-core CPUs, running at least 2-2.5 GHz
  • 64-512 GB of RAM
  • Bonded Gigabit Ethernet or 10 Gigabit Ethernet (the greater the storage density, the higher the network throughput needed)

SOFTWARE REQUIREMENTS

  • FRONT END : Jetty server, web UI in JSP
  • BACK END  : Apache Hadoop, Apache Flume, Apache Hive, Apache Pig, JDK 1.6
  • OS        : Linux (Ubuntu)
  • IDE       : Eclipse

A Signature-Based Indexing Method for Efficient Content-Based Retrieval of Relative Temporal Patterns: Project Report

Rule discovery algorithms in data mining generate an array of rules and patterns. Sometimes the output even exceeds the size of the underlying database, and only a fraction of it proves useful to users. In the process of knowledge discovery, it is important to interpret the discovered rules and patterns. When there is a huge number of rules and patterns, it becomes very difficult to choose and analyze the most interesting among them.

For example, it might not be a good idea to present the user with a flat list of association rules ranked by their support and confidence. This is not a good way of organizing the set of rules, and it can easily overwhelm the user. Moreover, not all rules are interesting; what counts as interesting depends on a variety of factors.

A useful data mining system must be able to assess the usefulness of the rules it generates, and thus provide flexible tools for rule selection. In association rule mining, various approaches for post-processing the discovered rules have been discussed before. Another approach groups similar rules, which works well for a moderate quantity of rules; when there are too many rules, they can be organized into clusters.

A flexible approach allows users to identify the rules that are of special value to them. This is done through data queries or templates. Moreover, this approach is a good complement to the rule grouping approach. The concept of an inductive database highlights the importance of data mining: it allows clients to query the patterns and rules, as well as the data and the models extracted from it.
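
As a loose illustration of template-based rule selection, the sketch below filters a hypothetical list of association rules by support and confidence thresholds and by items of interest; all rule records, names, and thresholds are invented for the example:

    # Hypothetical association rules: (antecedent, consequent, support, confidence).
    rules = [
        ({"bread"}, {"butter"}, 0.12, 0.80),
        ({"beer"}, {"chips"}, 0.05, 0.65),
        ({"milk", "bread"}, {"eggs"}, 0.03, 0.90),
    ]

    def select(rules, min_support=0.0, min_confidence=0.0, items_of_interest=None):
        """Template-style selection: keep rules above the thresholds whose
        antecedent or consequent mentions at least one item of interest."""
        chosen = []
        for antecedent, consequent, support, confidence in rules:
            if support < min_support or confidence < min_confidence:
                continue
            if items_of_interest and not items_of_interest & (antecedent | consequent):
                continue
            chosen.append((antecedent, consequent, support, confidence))
        return chosen

    print(select(rules, min_support=0.04, min_confidence=0.7))  # quality thresholds
    print(select(rules, items_of_interest={"eggs"}))            # template: rules about eggs

Such a selection tool complements grouping: the user first narrows the rule set with a template, then inspects the surviving groups.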

Port Knocking C++ Computer Science Project with Report

Introduction to Port Knocking C++ Project:

Port knocking is a host-to-host communication system in which data is transferred from one closed port to another. The port knocking process has different variants: the information can be encoded as a port sequence or as packet payload. Normally, the data is sent to closed ports on the target host, where a monitoring daemon cross-checks the information without sending any acknowledgement back to the sender.

Port knocking is thus a process of communication between two or more computers, a client and a server, in which information is encoded, and possibly encrypted, in a sequence of port numbers. This encoded sequence is called the knock. The server's job is to monitor the client's connection requests; initially, it exposes no open ports. The client makes connection attempts to the server, sending SYN packets to the ports listed in the knock, which is why the technique is called port knocking. During this knocking stage the server does not respond to the client. Instead, it silently processes the port sequence by monitoring the SYN packets. Once the server has decoded the knock, it grants the client access.
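
Though the project itself is written in C++, a minimal Python sketch of the client side may make the knocking stage concrete. The server address and knock sequence below are hypothetical, and the timeout simply bounds how long the client waits for replies that, by design, never come:

    import socket

    TARGET_HOST = "192.0.2.10"           # hypothetical server address
    KNOCK_SEQUENCE = [7000, 8000, 9000]  # hypothetical secret port sequence

    def knock(host, ports, timeout=0.5):
        """Send one TCP SYN to each port in order.

        Every connect() emits a SYN; the server never answers, so each
        attempt fails quickly. The knock is carried entirely by which
        ports were tried and in what order.
        """
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                try:
                    s.connect((host, port))
                except OSError:
                    pass  # refused or timed out: expected for a closed port

    if __name__ == "__main__":
        knock(TARGET_HOST, KNOCK_SEQUENCE)
        # After a valid knock the server would briefly open the real
        # service port (e.g. SSH) for this client's address.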

While authorized users are allowed through the firewall, the ports remain closed to all other users. Our project surveys ways of providing an authentication service and settles on port knocking. It examines the existing system, highlights its demerits, and implements a modification of it using a novel port knocking architecture that provides strong authentication.

The Proposed System 

  1. Port knocking provides highly secure authentication and information transmission to the host without requiring any open ports.
  2. The client is unaware that the server is decoding the knocking sequence (see the server sketch after this list).
  3. The server monitors the client's requests.
  4. The port is opened to requests only for a specific period of time.
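
On the server side, the monitoring daemon never replies; it only watches SYN packets and tracks each client's position in the sequence. A rough sketch, assuming the third-party scapy library (pip install scapy, not part of the project's actual C++ code) and root privileges for sniffing:

    from scapy.all import sniff, IP, TCP  # third-party; needs root to sniff

    KNOCK_SEQUENCE = [7000, 8000, 9000]   # must match the client's knock
    progress = {}                          # client address -> position in sequence

    def handle(pkt):
        """Advance a client through the knock on each correct SYN."""
        if not (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags == "S"):
            return                        # only inbound SYN packets matter
        src, dport = pkt[IP].src, pkt[TCP].dport
        expected = KNOCK_SEQUENCE[progress.get(src, 0)]
        if dport == expected:
            progress[src] = progress.get(src, 0) + 1
            if progress[src] == len(KNOCK_SEQUENCE):
                print(f"valid knock from {src}: open the service port for a while")
                progress[src] = 0         # e.g. insert a temporary firewall rule here
        else:
            progress[src] = 0             # a wrong port resets the sequence

    sniff(filter="tcp", prn=handle, store=False)

Note that the daemon sends nothing at all during the knock; the only observable effect of a valid knock is the temporary firewall change.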

Leader Election in Mobile Ad Hoc Networks B.Tech Final Year CSE Project Report for NIT Students

Introduction to Leader Election in Mobile Ad Hoc Networks Project:

We present leader election algorithms for mobile ad hoc networks. The algorithms guarantee that eventually every connected component of the topology graph has exactly one leader. The algorithms are based on a routing algorithm called TORA, and they require nodes to communicate only with their current neighbors. The first algorithm handles a single topology change, achieving leader election in mobile ad hoc networks under the assumption that only one topology change occurs in the system during a given time frame.

An ad hoc network is commonly defined as an infrastructure-less network, meaning a network without the usual routing infrastructure such as fixed routers and routing backbones. Typically the ad hoc nodes are mobile and the underlying communication medium is wireless. Each ad hoc node may be capable of acting as a router. Such ad hoc networks may arise in personal area networking, meeting rooms and conferences, disaster relief and rescue operations, battlefield operations, and so forth.

Leader election is a useful building block in distributed systems, whether wired or wireless, especially when failures can occur. It can also be used in communication protocols, for example to choose a new coordinator when the group membership changes. Designing distributed algorithms for ad hoc networks is a particularly challenging task, since the topology may change quite frequently and unpredictably.

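As a rough illustration of the property these algorithms guarantee, exactly one leader per connected component, the following centralized Python simulation elects the highest node id in each component of a topology graph. The distributed, TORA-based mechanics of the actual algorithms are not modeled here; the example graph and the highest-id rule are assumptions for illustration:

    from collections import deque

    def elect_leaders(adjacency):
        """Flood-fill each connected component and elect its highest node id.

        `adjacency` maps node id -> iterable of neighbor ids. Returns a dict
        mapping every node to the leader of its component, so the invariant
        "exactly one leader per connected component" holds by construction.
        """
        leader = {}
        for start in adjacency:
            if start in leader:
                continue
            # BFS to collect the component containing `start`.
            component, queue, seen = [], deque([start]), {start}
            while queue:
                node = queue.popleft()
                component.append(node)
                for nbr in adjacency[node]:
                    if nbr not in seen:
                        seen.add(nbr)
                        queue.append(nbr)
            chosen = max(component)  # deterministic rule: highest id wins
            for node in component:
                leader[node] = chosen
        return leader

    # Two components: {1, 2, 3} and {4, 5}; the leaders are 3 and 5.
    graph = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
    print(elect_leaders(graph))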

Cache Compression in the Linux Memory Module NIT Computer Science Project Report

Introduction to Cache Compression in the Linux Memory Module Project:

Cache compression works by configuring part of the RAM to cache pages and files in compressed form, which adds a genuinely new capability to the existing system. Each level of cache compression yields a large performance gain over random access to the disk. The current version operates through the virtual memory subsystem. The scheme has attracted considerable attention. The performance of the system depends on the speed of the computer, which is largely determined by the total amount of RAM.

Various documents and files are continuously present in the memory modules, databases, and disks. The files and documents held in this managed memory are compressed and then made available to the user on demand.
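
As a loose illustration of the idea, not of the Linux implementation itself, the following Python sketch keeps pages compressed in a RAM-resident cache using the standard zlib module and decompresses them only on access; the page size and sample data are illustrative:

    import zlib

    PAGE_SIZE = 4096  # a typical page size; illustrative

    class CompressedPageCache:
        """Keep pages compressed in RAM; decompress only on access."""

        def __init__(self):
            self._store = {}  # page number -> compressed bytes

        def put(self, page_no: int, data: bytes) -> None:
            assert len(data) == PAGE_SIZE
            self._store[page_no] = zlib.compress(data)

        def get(self, page_no: int) -> bytes:
            return zlib.decompress(self._store[page_no])

        def compressed_bytes(self) -> int:
            return sum(len(c) for c in self._store.values())

    cache = CompressedPageCache()
    page = (b"hello world, this page is quite repetitive. " * 100)[:PAGE_SIZE]
    cache.put(0, page)
    print("raw:", PAGE_SIZE, "compressed:", cache.compressed_bytes())
    assert cache.get(0) == page

The trade-off is the one the project exploits: spending CPU cycles on compression is far cheaper than a random disk access, so more of the working set fits in RAM.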

Several operating systems, Linux among them, support this approach. Linux is capable of handling large amounts of memory across many hardware configurations. Watermarks are standard per-zone thresholds that the kernel uses to track the state of free memory in the existing system. Detailed information about the system is given in the reference books.

The system is developed on the Linux operating system because it works properly and gives the most successful results there. There are various memory zones relevant to the existing system. The DMA zone is the low physical range of memory that is required when an ISA device, which can only address a limited range of memory, must be supported. The normal zone is the region of memory that is directly mapped by the kernel and used for most other purposes.
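
Since the zone watermarks govern when the kernel starts reclaiming memory, they can be inspected directly on a running system. A small sketch, assuming the /proc/zoneinfo layout of recent Linux kernels (the exact field layout can vary between versions):

    # Print the min/low/high watermarks of each memory zone (Linux only).
    zone = None
    with open("/proc/zoneinfo") as f:
        for line in f:
            parts = line.split()
            if line.startswith("Node") and "zone" in parts:
                zone = f"node {parts[1].rstrip(',')} / zone {parts[-1]}"
            elif zone and parts and parts[0] in ("min", "low", "high"):
                print(f"{zone}: {parts[0]} = {parts[1]} pages")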
