Intrusion Detection and Secure Routing CSE M.tech Project Report

Introduction to Intrusion Detection and Secure Routing Project:

This paper discusses a secure routing protocol based on AODV over IPv6, together with an intrusion detection and response system for ad-hoc networks. It also describes how the new mechanisms help detect and thwart malicious attacks.

Overview:

A secure routing protocol helps prevent or minimize attacks against nodes in a MANET. Possible attacks in a MANET include routing disruption attacks, resource consumption attacks, and attacks on data traffic. SecAODV combines concepts from both BASAR and SBRP. Its implementation follows Tuominen's design and uses two kernel modules, ip6_queue and ip6_nf_aodv.

Route discovery and maintenance of local connectivity are the two mechanisms implemented by the AODV protocol. An IDS strengthens the defense of a MANET. The basic design goals of an IDS are scalability, providing a platform for collaborative intrusion detection, and enabling protocol-specific detection. In such an IDS, intrusions are detected from the anomalous behavior of neighboring nodes: every node monitors traffic activity within its radio range.
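The neighbor-monitoring idea above can be sketched in a few lines: a node counts the packets it overhears each neighbor receive versus forward, and flags a neighbor whose drop ratio is high (a possible grey hole). This is only an illustrative watchdog-style sketch; the class name, the 0.5 threshold, and the minimum-sample rule are assumptions, not details from the paper.

```python
from collections import defaultdict

class NeighborMonitor:
    """Watchdog-style anomaly detector run independently by each node."""

    def __init__(self, drop_threshold=0.5, min_packets=10):
        self.drop_threshold = drop_threshold
        self.min_packets = min_packets        # avoid flagging on tiny samples
        self.received = defaultdict(int)      # packets a neighbor should forward
        self.forwarded = defaultdict(int)     # packets it was overheard forwarding

    def overheard_receive(self, neighbor):
        self.received[neighbor] += 1

    def overheard_forward(self, neighbor):
        self.forwarded[neighbor] += 1

    def suspicious_neighbors(self):
        flagged = []
        for n, rx in self.received.items():
            if rx < self.min_packets:
                continue
            drop_ratio = 1 - self.forwarded[n] / rx
            if drop_ratio > self.drop_threshold:
                flagged.append(n)          # behaves like a grey hole
        return flagged
```

A real detector would also age out old counts and exchange observations with neighbors for collaborative detection, which the monitoring-only sketch omits.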

Conclusions:

Mobile devices in MANETs have significant inherent vulnerabilities, and such devices are highly exposed to attack. This paper discussed a secure routing protocol, SecAODV, and the design and implementation of an IDS. Creating and maintaining routes is the major role of a routing protocol.

Even when the network is protected from routing disruption attacks, packet-mangling attacks, denial-of-service attacks exploiting MAC vulnerabilities, and grey holes remain possible. An IDS is needed to protect the network from such attacks. An IDS deployed on a mobile device is always constrained to its radio range. An IDS can monitor selected nodes and in turn increase scalability and detection accuracy.

Download  Intrusion Detection and Secure Routing CSE M.tech Project Report .

Robust And Secure Authentication Protocol In Group Oriented Distributed Applications Report

Introduction to Robust And Secure Authentication Protocol In Group Oriented Distributed Applications Project:

In group-oriented distributed applications, security services are needed to provide communication privacy and data integrity. Group communication data can be encrypted with a common secret key known to all group members. In a peer-to-peer network, communication among group members is vulnerable because there is no prior agreement on the common secret key. So, to maintain secure and private communication among group members, a distributed group key agreement and authentication protocol must be established.

The protocol used for such dynamic communication groups in this paper is the Tree-based Group Diffie-Hellman (TGDH) protocol. Among the Rebuild, Batch, and Queue-batch algorithms, Queue-batch is the best interval-based algorithm. It is proposed to perform rekeying operations and can reduce the communication workload in a highly dynamic environment. An instant messaging application is used to demonstrate the system's strength.
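The core TGDH computation can be sketched briefly: every node in a binary key tree has a secret key and a public "blinded" key g^k mod p, and a parent's key is the Diffie-Hellman key of its two children. The tiny Mersenne prime, the generator, and the left-deep tree fold below are illustrative assumptions; real deployments use much larger parameters and balanced trees.

```python
# Toy TGDH parameters -- for illustration only, not secure sizes.
P = 2**127 - 1   # a Mersenne prime standing in for a large safe prime
G = 3

def blind(k):
    """Blinded key BK = g^k mod p, safe to broadcast to the group."""
    return pow(G, k, P)

def parent_key(my_key, sibling_blinded):
    """K_parent = BK_sibling ^ K_mine mod p = g^(K_left * K_right) mod p."""
    return pow(sibling_blinded, my_key, P)

def group_key(leaf_secrets):
    """Fold leaf secrets up a left-deep key tree to the root (group) key."""
    key = leaf_secrets[0]
    for s in leaf_secrets[1:]:
        key = parent_key(key, blind(s))
    return key
```

The point of the blinding is that each member can derive the same root key knowing only its own secret and the blinded keys on its co-path, which is what makes interval-based rekeying (Queue-batch) cheap: only the keys on the changed path need recomputation.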

Overview: 

To secure group communications in distributed applications, protocol efficiency and group dynamics must both be considered. Two-party communication has clear starting and ending points, but group communication has its own complexity: there is no single end point, because members may join and leave the group at any time. Dynamic groups tend to be small compared with multicast groups, so the focus here is on dynamic groups.

Such a group assumes a many-to-many communication pattern, with members located in a distributed fashion. The critical requirement for secure group communication is key management that is secure, robust, and efficient, which helps demonstrate the system's strength.

Conclusions: 

This paper discussed TGDH protocols and tree management, and also analyzed the performance of the Queue-batch algorithm. The proposed Tree-based Group Diffie-Hellman protocol achieves distributed and collaborative key management, and interval-based rekeying operations reduce the rekeying complexity.

Download  Robust And Secure Authentication Protocol In Group Oriented Distributed Applications Report .

Parallel Virtual Machine a Computer Project Topic for Engineering Students

Introduction to Parallel Virtual Machine Project:

This paper discusses the Parallel Virtual Machine (PVM), which lets users create and access a parallel computing system and treat the resulting system as a single virtual machine. PVM is designed around the message-passing model of parallel programming.

Overview:

Process-based computation, an explicit message-passing model, a user-configured host pool, and translucent access to hardware are the principles upon which PVM is built. On a single machine, the PVM system is started with the pvmd console command, which then starts pvmd daemons on the other machines listed in a host file. Communication, process control, and the user programming interface are a few of the PVM components. The PVM libraries can initiate and terminate processes, broadcast messages, synchronize, and change the parallel virtual machine's configuration. In PVM, point-to-point communication involves both the sending and the receiving process. All PVM program executables are located in an architecture-specific folder named by PVM_ARCH.

So this architecture-specific subdirectory must be created on every machine where PVM program components are to execute. The steps involved in running a program under PVM are: install and configure PVM, design the application and prepare for the PVM session, compile the application components, create the PVM host file in the home directory, start the master PVM daemon, execute the application, and finally exit PVM.
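PVM itself exposes a C API (pvm_spawn, pvm_send, pvm_recv, and so on). As a language-neutral illustration of the explicit message-passing model those steps follow, here is a sketch in which a master process spawns a worker and exchanges tagged messages with it; the message tags, the toy computation, and the use of Python multiprocessing queues are stand-ins for illustration, not part of PVM.

```python
from multiprocessing import Process, Queue

def worker(task_q, result_q):
    """Receive (tag, payload) messages until a 'stop' tag arrives."""
    while True:
        tag, payload = task_q.get()
        if tag == "stop":
            break
        result_q.put(("done", sum(payload)))      # toy computation

def run_master(chunks):
    task_q, result_q = Queue(), Queue()
    p = Process(target=worker, args=(task_q, result_q))
    p.start()                                     # analogue of pvm_spawn
    for chunk in chunks:
        task_q.put(("work", chunk))               # analogue of pvm_send
    task_q.put(("stop", None))
    results = [result_q.get()[1] for _ in chunks] # analogue of pvm_recv
    p.join()
    return results
```

As in PVM, the master never shares memory with the worker; all coordination happens through explicit, tagged messages.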

Because the PVM libraries pass messages in a portable format, one of PVM's benefits is portability. PVM is flexible and easy to install and use, and it supports scalable parallelism. PVM is public-domain software available from NETLIB. PVM is somewhat slower than other message-passing systems because of its architecture and implementation, and it also lacks a few message-passing features.

Conclusions:

PVM is an easy message-passing system to implement. At low cost, PVM lets users exploit existing hardware to solve larger problems. For parallel processing of data, PVM provides a distributed computing environment on top of existing computers, so computationally intensive problems can be solved very quickly.

Download  Parallel Virtual Machine a Computer Project Topic for Engineering Students .

Privacy Preserving Data Mining CSE Project Report

The amount of data stored in databases and other applications is growing greatly, driven by the technologies used on a large scale in this generation. Data processing and data binding are among the simple approaches in the privacy sector. In recent years, data mining has also come within the scope of privacy concerns. PPDM, privacy-preserving data mining, provides the standard features used in executing these programs.

The work of the existing system targets techniques for the data, its updates, and new constructions on top of current systems. A comparison of the fuzzy approach, the experimental results, and the features of the fuzzy approach are some of the related attributes of the system.

Sharing data from one system to another has become very useful and in demand in today's generation, for users and other people alike. The system addresses not only sensitive data but also user privacy and protection. It was tested and configured on an Apache server, which successfully generated the expected results.

In the current work, the developers use the FCM (fuzzy c-means) algorithm, whose parameters are supplied to the source code at generation time. In the future, the system can be extended to handle more complex categories of data.
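Since the report names FCM but gives no code, a minimal fuzzy c-means sketch on 1-D data may help: each point gets a degree of membership in every cluster, and centers move to membership-weighted means. The parameter names (c clusters, fuzzifier m) are the textbook ones; the deterministic initialization and iteration count are illustrative choices, not the report's.

```python
def fcm(points, c=2, m=2.0, iters=50):
    lo, hi = min(points), max(points)
    # Deterministic initialization: spread centers across the data range.
    centers = [lo + i * (hi - lo) / (c - 1) for i in range(c)]
    for _ in range(iters):
        # Membership u[i][j]: degree to which point i belongs to cluster j.
        u = []
        for x in points:
            d = [abs(x - cj) or 1e-12 for cj in centers]  # avoid div-by-zero
            u.append([
                1 / sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(c))
                for j in range(c)
            ])
        # Move each center to the membership-weighted mean of the data.
        centers = [
            sum(u[i][j] ** m * points[i] for i in range(len(points)))
            / sum(u[i][j] ** m for i in range(len(points)))
            for j in range(c)
        ]
    return centers, u
```

Unlike hard k-means, the soft membership matrix u is exactly the kind of fuzzified output a privacy-preserving variant would perturb or share instead of raw records.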

 Download  Privacy Preserving Data Mining CSE Project Report .

NIT CSE Project Report for Packet Sniffer Project

Introduction to Packet Sniffer Project:

The software permits the end user to capture data and related information from the network interfaces. The user can also specify an IP address so that packets from that machine are captured. The captured data is stored on the device, where the user can easily view it. Some configuration also needs to be done and settings applied: changing the mode of the Ethernet card and gaining access to the Ethernet card are the few configurations done here.

Sniffing packets then proceeds in three steps. The first is obtaining the network number and network mask, which are stored for the network card. The second is capturing packets, that is, grabbing the individual packets.

The third is dumping data via the pcap dumper, which saves the captured packets to a named file. Filters can also be attached to the capture: pcap_compile() compiles a filter expression, and pcap_setfilter() attaches the compiled filter to the capture session.
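Whatever the capture library, the per-packet work of a sniffer is decoding raw bytes. A small sketch of that decoding step: parse the Ethernet II header and, for IPv4 frames, pull out the source and destination addresses. The function name and the hand-made sample frame are illustrative; real capture would come from libpcap rather than a byte literal.

```python
import struct
import socket

def parse_frame(frame: bytes):
    """Decode an Ethernet II header and, if IPv4, the src/dst addresses."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    info = {"ethertype": hex(ethertype)}
    if ethertype == 0x0800:                      # IPv4 payload
        ip = frame[14:34]                        # fixed 20-byte IPv4 header
        info["src_ip"] = socket.inet_ntoa(ip[12:16])
        info["dst_ip"] = socket.inet_ntoa(ip[16:20])
    return info
```

Displaying these decoded fields per packet is essentially what the report's viewing facility does with the dumped capture file.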

For the first version of the system, the developers selected an appropriate language: it was written in C when first launched. In the future, the system could also be developed in Java, which is in great demand. Building on the existing system, the designers are currently developing a graphical user interface for it.

  Download  NIT CSE Project Report for Packet Sniffer Project .

Study on Peer-to-Peer Systems CSE Mini Project Report

Introduction to Study on Peer-to-Peer Systems Mini Project:

Peer-to-peer networks, known as P2P networks, rose to prominence with the launch of Napster in early 1999. They quickly became very attractive to users and to the networking community alike. This gave new life to the world of networking and its technology; from this, Web 2.0 applications later emerged for accessing internet services. From the year 2000, people everywhere in the world started making use of peer-to-peer technology, and it later became a backbone of the networking sector.

The survey draws on several categories of sources: conference papers, journal articles, books, and technical reports. It relies on standard search tools such as IEEE Xplore, Google search, Google Scholar, the ACM Digital Library, and the indexes related to these search services.

This is a unique and genuine project, capable of addressing most of the problems, issues, and queries described. After testing, the system produced the successful output expected of it. It also proved very useful to the networking community, and it provides many valuable and important resources and services.

The system introduces a new area of the networking field. A peer works like a local client-server, doing the job of both client and server. Demand for peer-to-peer keeps growing in today's generation, as do the network applications related to it. It also helps users share the resources and services held on the participating machines.

 Download  Study on Peer-to-Peer Systems CSE Mini Project Report .

National Institute of Technology CSE Project Report on Intranet Caching Protocol

Introduction to Intranet Caching Protocol Project:

The existing system is used to download files and important documents from the internet through an intranet caching protocol, which makes downloads easy. The user sends a download request to the server; this server forwards the request to the LAN (Local Area Network) cache, and the LAN forwards it to the main server, after which the file is completely downloaded. Motivation and a literature survey are two further parts of the existing development system.
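The request path above is cache-first: the LAN copy is checked before any traffic goes to the main server. A minimal sketch of that lookup, where the origin-fetch function and the keep-forever cache policy are illustrative stand-ins for the report's actual protocol:

```python
class IntranetCache:
    """Serve downloads from the LAN cache, falling back to the main server."""

    def __init__(self, fetch_from_origin):
        self.store = {}                    # filename -> file contents
        self.fetch = fetch_from_origin     # slow path to the main server

    def download(self, filename):
        if filename in self.store:         # LAN hit: no WAN traffic
            return self.store[filename], "cache"
        data = self.fetch(filename)        # miss: ask the main server
        self.store[filename] = data        # keep a copy for the next user
        return data, "origin"
```

Every hit after the first request is served locally, which is exactly why the report observes reduced download times and lower intranet bandwidth use.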

The system's search function has three phases. In the first phase, clicking a button in the graphical user interface executes the code behind that button and builds the program. In the second phase, an FTP client-server is created and the basic socket programs start running on the other system as well.

In the third phase, the main MDI form prompts the user to enter the required data in the graphical user interface form, which is then either stored in the database or used to search for the requested data.

The system developed over the network really reduced the time required to download files. For successful downloads and a quick connection, intranet bandwidth use must be reduced at the required times. The results show that the system's performance is very good. The protocol must offer the same features and facilities as well. The search facility provided over the Local Area Network is also a great advantage to users. The system can accept 20 requests at the same time.

 Download  National Institute of Technology CSE Project Report on Intranet Caching Protocol  .

NIT Computer Science Project Report on Implementation of Code Optimization

Introduction to Implementation of Code Optimization Project:

The main goal of the existing system is to generate target code comparable to hand-written code. This is not possible with straightforward compilation alone, so this tool was built to achieve that target while also reducing compile time and the space required. The standard format used in developing the software is SUIF, the Stanford University Intermediate Format; compilation is done within SUIF itself.

SUIF has a standard architecture consisting of a kernel with two sub-layers, the I/O kernel and the SUIF kernel. The I/O kernel handles objects independently of the rest of the system by combining the data with metadata.

The SUIF kernel defines the standard compiler representation the system uses to process programs properly. Modules are the separate components of the existing system, and each class is implemented in the C++ language. Modules are of two kinds: intermediate representation and program analysis passes.

Code optimization is an important part of the existing system and is divided into three levels: the source code, the intermediate code, and the target code. All three are used to produce optimizations, which are always applied starting from the source code.

Here the developers built a constant optimizer using SUIF as the intermediate format. The SUIF scheme and architecture are used to generate the CFG (Control Flow Graph), and DFA (Data Flow Analysis) is used to solve the data-flow problems defined over the CFG.
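To make the constant-optimizer idea concrete, here is a toy constant-propagation-and-folding pass over straight-line three-address code: it remembers which variables hold known constants and folds operations on them. The tuple IR format (dest, op, a, b) is an invented stand-in for SUIF, and a real pass would run over the whole CFG via data-flow analysis rather than one basic block.

```python
def fold_constants(code):
    """Propagate and fold constants through straight-line 3-address code."""
    env, out = {}, []                     # env: variable -> known constant
    for dest, op, a, b in code:
        a = env.get(a, a)                 # substitute known constants
        b = env.get(b, b)
        if op == "const":
            env[dest] = a
            out.append((dest, "const", a, None))
        elif isinstance(a, int) and isinstance(b, int):
            val = {"add": a + b, "mul": a * b}[op]
            env[dest] = val               # folded: dest is now a constant
            out.append((dest, "const", val, None))
        else:
            env.pop(dest, None)           # result unknown at compile time
            out.append((dest, op, a, b))
    return out
```

This is exactly the transformation that makes the generated target code resemble hand-written code: computations whose inputs are known disappear into literals.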

 Download  NIT Computer Science Project Report on Implementation of Code Optimization .

NIT CSE Project Report on Implementing a Temporal Database on Top of Conventional Database

The existing system is a simple database with some additional features: it stores time-changing data, in other words time-varying information, along with a collection of temporal events, recovery data, and so on. To add these features to a normal database, timestamps are attached to the data. The resulting database is known as a bitemporal database, meaning two time dimensions are maintained together in the same database.

The system is designed with visual interfaces that support the .NET Framework. Its development has two important parts: first, analyzing the information in the database, and second, executing queries and retrieving the data that has been added to the database.

For the better working of the system, several operations were added to the source code: the Insert operation inserts new data into the database whenever a request for a new entry is accepted; the Update operation updates the existing data in the database; the Delete operation deletes data that is no longer required; and the Select operation selects the specific data requested by the user.
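The four operations above can be sketched on top of a conventional store by attaching timestamps the way the report describes. This illustrative version tracks only a logical transaction time (when each fact was stored and superseded); a full bitemporal table would track valid time as a second dimension.

```python
import itertools

class TemporalTable:
    """A conventional key-value table with transaction-time versioning."""

    def __init__(self):
        self.rows = []                    # [key, value, t_start, t_end]
        self.clock = itertools.count(1)   # logical transaction timestamps

    def insert(self, key, value):
        self.rows.append([key, value, next(self.clock), None])

    def update(self, key, value):
        now = next(self.clock)
        for row in self.rows:
            if row[0] == key and row[3] is None:
                row[3] = now              # close out the old version
        self.rows.append([key, value, now, None])

    def delete(self, key):
        now = next(self.clock)
        for row in self.rows:
            if row[0] == key and row[3] is None:
                row[3] = now              # logically deleted, history kept

    def select(self, key):
        """Current value, or None if deleted or never inserted."""
        for row in self.rows:
            if row[0] == key and row[3] is None:
                return row[1]
        return None

    def history(self, key):
        return [(v, s, e) for k, v, s, e in self.rows if k == key]
```

Note that delete and update never discard rows; they only close their time interval, which is what lets the temporal database answer questions about past states.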

The existing system builds a common database that fully supports storing temporal data. For the better progress of the system, Oracle, the most advanced database management application, is used. Oracle can also interoperate with other relational database management applications, which gives an extra advantage to the temporal database system. Other related facilities and syntax, such as DML and user-defined schemas, are also used as part of the developed system.

 Download  NIT CSE Project Report on Implementing a Temporal Database on Top of Conventional Database .

NIT Computer Science Project Report on Implementing a Linux Cluster

The existing system is a set of loosely coupled computers that work and process together, so that to an observer they appear to function as a single computer. Linux clusters are open-source software products based on the Linux and GNU operating systems. The system aims for results that remain completely open source, supports scientific discoveries, and can work faster. The report also covers the problem definition, system history, the high performance of such systems, their technologies, and so on.

The cluster is basically designed by following standard steps: first, the whole plan for the cluster's design is made; second, a specific plan is selected to build the cluster; third, the operating system and the necessary hardware are selected; and last, suitable software is selected for the proper working of the system.

Each step of the system's development depends on the others. The budget is also taken into account during development. The main steps observed here are the cluster design and its planning.

The operating system mainly used here is Red Hat Linux, because it supports almost everything related to the system. The standard application used for the cluster design is OSCAR, Open Source Cluster Application Resources. The hardware used is a Pentium 4 with 2 GB of RAM.

The main achievement of the developed system is high-speed performance for computing clusters using a cluster kit such as OSCAR; without OSCAR, the system's message passing would not be complete. Various other libraries and classes are also part of the system's development.

 Download  NIT Computer Science Project Report on Implementing a Linux Cluster.