Mesh Algorithm in Parallel Processing PPT

Parallel processing: when several processes run simultaneously on many processors working together, it is called parallel processing. It is used in powerful computers where different jobs run at the same time. In this approach, large problems are broken into small subproblems, and those subproblems are solved simultaneously on different processors, which reduces the total work time. Pipelines are also used to increase the throughput of computers/processors.
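As a minimal sketch of this divide-and-work-in-parallel idea (not from the PPT itself; the array contents and threshold are illustrative), the following Java snippet splits an array sum into subproblems that run on different processors via Fork/Join:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer parallel sum: the big problem (summing the whole
// array) is split into subproblems that run on different processors.
class ParallelSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // solve small pieces sequentially
    private final long[] data;
    private final int lo, hi;

    ParallelSum(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {            // small subproblem: do it directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) / 2;               // split into two subproblems
        ParallelSum left = new ParallelSum(data, lo, mid);
        ParallelSum right = new ParallelSum(data, mid, hi);
        left.fork();                           // run the left half on another worker
        return right.compute() + left.join();  // combine the partial results
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        java.util.Arrays.fill(data, 1L);
        long total = ForkJoinPool.commonPool()
                                 .invoke(new ParallelSum(data, 0, data.length));
        System.out.println(total); // 1000000
    }
}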

Computational model: the model that describes how a computer handles its instruction and data streams, i.e. which steps are to be performed. Flynn's classification gives four types:

- single instruction stream, single data stream (SISD)

- multiple instruction stream, single data stream (MISD)

- single instruction stream, multiple data stream (SIMD)

- multiple instruction stream, multiple data stream (MIMD)

Mesh: a grid of x × y points with a processor at each grid point; each processor has some local memory (RAM) that it uses for arithmetic, logic, and other operations.

Packet routing: in a fixed network with single inter-processor connections, the task of sending each processor's information to the other processors is known as packet routing. Channel bandwidth is limited and access is restricted, so only one packet can use a channel at a time. The efficiency of a packet-routing algorithm depends on the time it takes to run to completion and on how many packets each processor has to store.
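To make the mesh and packet-routing ideas concrete, here is a minimal Java sketch (not from the PPT; the Point record and coordinates are invented for illustration) of greedy XY routing on a mesh: a packet first moves along the X direction to the destination column, then along the Y direction, taking one channel hop per step.

import java.util.ArrayList;
import java.util.List;

// Toy XY (dimension-order) routing on a mesh: the packet corrects its
// X coordinate first, then its Y coordinate. One hop per step models
// the "one packet per channel at a time" restriction.
public class MeshRouting {
    record Point(int x, int y) {}

    // Returns the sequence of grid points a packet visits.
    static List<Point> xyRoute(Point src, Point dst) {
        List<Point> path = new ArrayList<>();
        int x = src.x(), y = src.y();
        path.add(new Point(x, y));
        while (x != dst.x()) {            // move along the row first
            x += Integer.signum(dst.x() - x);
            path.add(new Point(x, y));
        }
        while (y != dst.y()) {            // then move along the column
            y += Integer.signum(dst.y() - y);
            path.add(new Point(x, y));
        }
        return path;
    }

    public static void main(String[] args) {
        // Route a packet from processor (0,0) to processor (2,3).
        System.out.println(xyRoute(new Point(0, 0), new Point(2, 3)));
    }
}

On an x × y mesh such a path takes at most (x − 1) + (y − 1) hops, so the number of packets queued at each processor, rather than path length, tends to dominate the routing cost.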

CONCLUSION: parallel processing reduces a computer's work time by dividing a task into small subtasks that also communicate with each other. The mesh algorithm is a good example of this.

Download Mesh Algorithm in Parallel Processing PPT and seminar topic.

External Sorting PPT

External sorting, also called tape sorting, is the sorting of a file that is too large to fit in main memory and therefore resides on external storage. It is done in two phases: a distribution phase and a merge phase. When strings (sorted runs) of convenient length are generated one at a time by internal sorting under some rules, that is the distribution phase; when these strings are merged to form longer sorted strings, that is the merge phase.

At the beginning, input and output string blocks are brought into memory; when a particular block is exhausted, the next block is read into the same memory area, and completed blocks are written to the output tape. The total number of strings should equal the required number; otherwise, dummy strings are added to make up the total.
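As a minimal sketch of the distribution phase (assuming the input is a plain integer array; real external sorts read blocks from tape or disk, which is modelled here with in-memory lists), the following Java snippet cuts the input into sorted runs of a convenient length:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Distribution phase: generate sorted runs ("strings") of a convenient
// length by internal sorting. Tape/disk I/O is modelled with lists here.
public class RunGeneration {
    static List<int[]> makeRuns(int[] input, int runLength) {
        List<int[]> runs = new ArrayList<>();
        for (int start = 0; start < input.length; start += runLength) {
            int end = Math.min(start + runLength, input.length);
            int[] run = Arrays.copyOfRange(input, start, end);
            Arrays.sort(run);   // internal sort of one memory-sized block
            runs.add(run);      // in a real external sort this is written to tape
        }
        return runs;
    }

    public static void main(String[] args) {
        int[] input = {9, 4, 7, 1, 8, 2, 6, 3, 5};
        for (int[] run : makeRuns(input, 3)) {
            System.out.println(Arrays.toString(run));
        }
        // [4, 7, 9]  [1, 2, 8]  [3, 5, 6]  -- three sorted strings
    }
}

The merge phase that combines these runs is sketched after the list of merge techniques below.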

Characteristics of external sorting algorithms: some data must be kept on tape or disk, where the access cost is higher and access may be restricted (for example, sequential-only access on tape).

Advantages of external sorting: records are accessed sequentially, the sort is stable, and fewer block accesses are required.

External sorting can be carried out by various techniques:

Two-way merge sort: the strings are taken two at a time and merged, comparing and exchanging elements as required, and so on until one sorted string remains. This is the easiest technique to program.

K-way balanced merge sort: this extends two-way merging to k strings at a time; each pass shrinks the number of strings by a factor of k, so n strings become about n/k after the first pass (see the k-way merge sketch after this list).

Cascade merge sort: with t tape units, each pass commences with a (t − 1)-way merge, and the merge order then decreases step by step.

Polyphase merge sort: similar to cascade merge, but polyphase merge is always restricted to (t − 1)-way merges.

Note: cascade merge begins with a (t − 1)-way merge, decreases the order, and finally completes with a copy operation, whereas polyphase merge performs only a (t − 1)-way merge each time.
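Continuing the sketch begun in the distribution phase above (again a hedged illustration, not the PPT's code), the merge phase can be implemented as a k-way merge with a min-heap; two-way merge sort is simply the k = 2 case:

import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

// Merge phase: k-way merge of sorted runs using a min-heap.
public class KWayMerge {
    // One cursor into a sorted run.
    record Cursor(int[] run, int pos) {}

    static int[] merge(List<int[]> runs) {
        PriorityQueue<Cursor> heap = new PriorityQueue<>(
            (a, b) -> Integer.compare(a.run()[a.pos()], b.run()[b.pos()]));
        int total = 0;
        for (int[] run : runs) {
            total += run.length;
            if (run.length > 0) heap.add(new Cursor(run, 0));
        }
        int[] out = new int[total];
        int i = 0;
        while (!heap.isEmpty()) {
            Cursor c = heap.poll();               // smallest head among all runs
            out[i++] = c.run()[c.pos()];
            if (c.pos() + 1 < c.run().length) {   // advance within that run
                heap.add(new Cursor(c.run(), c.pos() + 1));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] merged = merge(List.of(
            new int[]{4, 7, 9}, new int[]{1, 2, 8}, new int[]{3, 5, 6}));
        System.out.println(Arrays.toString(merged)); // [1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}

Each k-way pass reduces n strings to about n/k, which is the balanced k-way behaviour described above.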

Conclusion: for large data sets internal sorting is of no use; external sorting is used instead, reducing input/output cost and requiring less time.

Download External Sorting PPT and seminar topic.

Distributed Cache Updating For Dynamic Source Routing Protocol Project Report

Securing data and preventing the loss of data sent via routers is very important. The main aim of the project Distributed Cache Updating for Dynamic Source Routing Protocol is to eliminate the loss of data packets while they are transmitted between routers, by using the protocol. In the existing system, TCP/IP performance is lowered by data loss, router failures, and network congestion (for example, due to bad weather), and because no information about failures is tracked, recovering the data becomes difficult. When a failure occurs, the data packets have to be sent again, which increases the time for sending and receiving data.

Let us look at how the proposed system eliminates the drawbacks stated above. The proposed system is designed so that each router maintains a cache table containing information about the routing path, i.e. all the routers through which a packet travels while information is being sent and received. Every router maintains its own cache table and keeps information about its neighbouring routers. If any router fails during data transmission, the cache algorithm removes that router and routes the information to the destination through other routers in the network, which reduces the time needed to send and receive data. If an important data packet is lost at any stage, the cache algorithm helps to track the router where the packet was lost and resend that particular packet to the destination. The algorithm consists of the following modules (a sketch of the cache table follows the list):

a.)    Route Request

b.)    Message Transfer.

c.)    Route Maintenance.

d.)    Cache Updating.
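As a hedged illustration of the cache-table idea (the RouteCache class and its method names are invented here, not taken from the report), each router can keep full source routes and, on a link failure, evict every cached route that crosses the broken link; this corresponds to the route-maintenance and cache-updating modules:

import java.util.ArrayList;
import java.util.List;

// Toy route cache for a DSR-style router: stores full source routes and
// evicts any route that crosses a failed link (route maintenance).
public class RouteCache {
    private final List<List<String>> routes = new ArrayList<>();

    void addRoute(List<String> hops) {            // e.g. [A, B, C, D]
        routes.add(new ArrayList<>(hops));
    }

    // Called when the link from -> to is reported broken (cache updating).
    void onLinkFailure(String from, String to) {
        routes.removeIf(route -> {
            int i = route.indexOf(from);
            return i >= 0 && i + 1 < route.size() && route.get(i + 1).equals(to);
        });
    }

    List<String> findRoute(String src, String dst) {
        for (List<String> r : routes) {
            if (r.get(0).equals(src) && r.get(r.size() - 1).equals(dst)) return r;
        }
        return null;                              // fall back to a new Route Request
    }

    public static void main(String[] args) {
        RouteCache cache = new RouteCache();
        cache.addRoute(List.of("A", "B", "C", "D"));
        cache.addRoute(List.of("A", "E", "D"));
        cache.onLinkFailure("B", "C");            // router C unreachable via B
        System.out.println(cache.findRoute("A", "D")); // [A, E, D]
    }
}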

Download Distributed Cache Updating For Dynamic Source Routing Protocol Project Report.

Efficient And Robust Detection Of Duplicate Videos In A Large Database

Project Objective:

The main objective of the project Efficient and Robust Detection of Duplicate Videos is to check for duplicate videos, which helps the user to maintain the security of his data or videos. The proposed system checks whether one video is the same as another. The method used to verify similarity compares frames taken from both videos; if the frames are found to be similar, the videos are reported as similar. The approach focuses mainly on finding duplicate videos by calculating the distance between frames, but it also considers the order and the alignment of the frames. Edit distance is another factor used to check the similarity between two videos: it counts how many frame insertions, deletions, and substitutions are needed to turn one video's frame sequence into the other's, so a small edit distance indicates similar videos (a sketch of this computation follows the step descriptions below). A number of algorithms exist for finding video similarity, but here a query algorithm is used to evaluate it. The method follows these steps.

a.)    Extraction.

b.)    Mapping.

c.)    Refine search.

a.)    EXTRACTION:

In this first step, the algorithm extracts frames from the sample video and the main video according to the respective frame size and stores them in a location selected by the user.

b.)    MAPPING:

Mapping is the second step of the algorithm. After the video frames are extracted, the algorithm compares the stored frames to evaluate similarity. In this step, the first frame of the sample video is compared with the first frame of the main video to check whether they are equal.

c.)    REFINE SEARCH:

Refine search is the third step of the algorithm. After extraction and mapping, the frames from the previous steps are filtered and then compared again to refine the similarity result.
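The report gives no code, but the edit-distance measure it mentions can be sketched in Java as follows (the integer frame "signatures" stand in for real frame features and are purely illustrative):

// Edit distance between two videos' frame-signature sequences: the
// minimum number of frame insertions, deletions, and substitutions
// needed to turn one sequence into the other (classic dynamic program).
public class FrameEditDistance {
    static int editDistance(int[] a, int[] b) {
        int[][] dp = new int[a.length + 1][b.length + 1];
        for (int i = 0; i <= a.length; i++) dp[i][0] = i; // delete all of a
        for (int j = 0; j <= b.length; j++) dp[0][j] = j; // insert all of b
        for (int i = 1; i <= a.length; i++) {
            for (int j = 1; j <= b.length; j++) {
                int subst = dp[i - 1][j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1);
                dp[i][j] = Math.min(subst,
                           Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1));
            }
        }
        return dp[a.length][b.length];
    }

    public static void main(String[] args) {
        int[] video1 = {1, 2, 3, 4, 5};   // hypothetical frame signatures
        int[] video2 = {1, 2, 9, 4, 5};   // one frame replaced
        System.out.println(editDistance(video1, video2)); // 1 -> near-duplicate
    }
}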

SOFTWARE AND HARDWARE REQUIREMENTS:

Software requirements: the IDE used for the project is Visual Studio .NET 2008, the language is C#.NET, and the framework is the .NET Framework.

Hardware requirements: a Pentium IV 2.4 GHz processor, 1 GB of RAM, and a 160 GB hard disk.

Download Efficient And Robust Detection Of Duplicate Videos In A Large Database.

A Signature-free Buffer Overflow Attack Blocker Project Report

To protect against buffer overflow attacks, a serious threat in cyber security, this paper suggests and recommends SigFree: a real-time, out-of-the-box, signature-free, application-layer blocker. SigFree filters out code-injection buffer overflow attack messages targeting Internet services. Our experiments show that SigFree can block several types of code-injection attacks.

Existing System

There are many existing systems for detecting different types of data-flow anomalies, using both static and dynamic methods to find anomalies within software. Static methods are often not applicable because they are slow, and dynamic methods are not performed on the original program execution. Without these two methods, the system cannot detect the anomalies.

Proposed System

In the proposed system, SigFree is used because it is a generic method that needs no pre-defined patterns (signatures). A specific technique known as program slicing is used to evaluate the payload packets against four rules.
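SigFree's real analysis distills instruction sequences from the payload and prunes them with program slicing, which is too involved to reproduce here; the Java sketch below is only a loose stand-in, a toy filter that flags payloads containing a long run of 0x90 (x86 NOP) bytes, one classic hint of injected shellcode. The class name and threshold are invented, not from the paper.

// NOT SigFree's algorithm -- a toy stand-in: flag request payloads that
// contain a long run of 0x90 (x86 NOP) bytes, a classic shellcode hint.
public class ToyPayloadFilter {
    static final int SLED_THRESHOLD = 16; // illustrative cutoff, not from the paper

    static boolean looksLikeCodeInjection(byte[] payload) {
        int run = 0;
        for (byte b : payload) {
            run = (b == (byte) 0x90) ? run + 1 : 0;
            if (run >= SLED_THRESHOLD) return true; // block this message
        }
        return false;
    }

    public static void main(String[] args) {
        byte[] benign = "GET /index.html HTTP/1.1".getBytes();
        byte[] suspicious = new byte[64];            // 64 NOPs in a row
        java.util.Arrays.fill(suspicious, (byte) 0x90);
        System.out.println(looksLikeCodeInjection(benign));     // false
        System.out.println(looksLikeCodeInjection(suspicious)); // true
    }
}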

Hardware Requirements

The hardware requirements of this system: a Pentium IV 2.4 GHz processor, a 40 GB hard disk, a 1.44 MB floppy drive, a 15-inch VGA colour monitor, a Logitech mouse, and 512 MB of RAM.

Software Requirements

The software requirements of this system: either Java or .NET, running on the Windows XP operating system.

Download A Signature-free Buffer Overflow Attack Blocker Project Report.