Secure Key Exchange and Encryption For Group Communication In Wireless Adhoc Networks

Any network whose nodes are not interconnected with wires or cables is known as a wireless network. A wireless ad-hoc network is a decentralized type of wireless network that does not depend on any pre-existing setup or infrastructure. Data routing and forwarding happen between the nodes themselves, and network connectivity forms the basis of this routing.

The communication channels, however, pass through open air, which makes them vulnerable to threats and attacks. A wide range of threats, such as eavesdropping and deliberate interference, occur across ad-hoc networks. As noted earlier, the nodes play a double role as both hosts and routers. To forward and receive data packets, the nodes are connected internally through multi-hop network paths. Because wireless systems face greater threats than wired ones, proper security measures are extremely important.

The proposed system deals with an encryption mechanism along with a key exchange during data transfer among nodes. This is done by using an additional parameter in the MAC address in the form of a specific key message while the nodes forward data. Only authenticated neighbor nodes are able to exchange the key, and any type of circular formation is avoided. The biggest advantage is that only a momentary dynamic connection exists between two neighboring nodes.
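To make the idea concrete, the following Java sketch (not the exact scheme from the report; the key size, algorithms and message contents are assumptions) shows how two neighboring nodes could agree on a shared key with Diffie-Hellman and then encrypt a data packet with AES for their momentary link.

  import javax.crypto.Cipher;
  import javax.crypto.KeyAgreement;
  import javax.crypto.spec.SecretKeySpec;
  import java.security.KeyPair;
  import java.security.KeyPairGenerator;
  import java.security.MessageDigest;
  import java.util.Arrays;

  // Hypothetical sketch: two neighbor nodes agree on a shared key with
  // Diffie-Hellman and then encrypt a data packet with AES.
  public class NeighborKeyExchange {
      public static void main(String[] args) throws Exception {
          // Each node generates its own Diffie-Hellman key pair.
          KeyPairGenerator kpg = KeyPairGenerator.getInstance("DH");
          kpg.initialize(2048);
          KeyPair nodeA = kpg.generateKeyPair();
          KeyPair nodeB = kpg.generateKeyPair();

          // Node A combines its private key with node B's public key
          // (carried in the key message exchanged between authenticated neighbors).
          KeyAgreement ka = KeyAgreement.getInstance("DH");
          ka.init(nodeA.getPrivate());
          ka.doPhase(nodeB.getPublic(), true);
          byte[] shared = ka.generateSecret();

          // Derive a 128-bit AES key from the shared secret.
          byte[] aesKeyBytes = Arrays.copyOf(
                  MessageDigest.getInstance("SHA-256").digest(shared), 16);
          SecretKeySpec aesKey = new SecretKeySpec(aesKeyBytes, "AES");

          // Encrypt a data packet for the momentary neighbor-to-neighbor link.
          Cipher cipher = Cipher.getInstance("AES");
          cipher.init(Cipher.ENCRYPT_MODE, aesKey);
          byte[] packet = cipher.doFinal("hello neighbor".getBytes("UTF-8"));
          System.out.println("Encrypted packet length: " + packet.length);
      }
  }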

An ad-hoc network exploits multi-hop radio relaying, and its nodes can operate without any fixed or steady infrastructure. Security is the prime concern in these networks, and with this key exchange and encryption scheme, communication that spreads in groups is well secured. Each node has to be smart enough when forwarding data to its neighboring nodes.

Download  Secure Key Exchange and Encryption  For Group Communication In  Wireless Adhoc Networks .

Designing Reduced Instruction Set Computer (RISC) Processor Using VHDL Project Report

Introduction to Designing Reduced Instruction Set computer (RISC) Processor Using VHDL Project: 

The design of super-efficient computer processors and hardware accelerators relies mainly on keeping the hardware control logic simple and shifting complexity to the compiler. Reduced instruction set computing (RISC) is a CPU design that provides higher performance through a simplified instruction set, which leads to faster execution.

RISC makes the most of a highly optimized instruction set and uses a load/store architecture. Just as conventional programming languages are used to describe sequential programs, standard languages are used to define digital circuits. A hardware description language (HDL) models hardware elements as concurrent processes. The cost effectiveness of the RISC model has impressed many designers of compact hardware.

The first step in the design is to write a test program that checks the inputs and outputs of the design module under various conditions. A Verilog simulator verifies the functioning of the design. The algorithmic and logical units are then synthesized to generate a netlist, which in turn is transformed into a programmable logic device (PLD) image file. After the PLD files are programmed into a CPLD device, the test jig is wired up and verified.

Verilog HDL has many advantages compared to other HDLs. No particular target technology has to be selected while designing, and the circuit rarely needs to be redesigned. The design is entered into the tool and a gate-level netlist is created; verification is done at the design stage itself, eliminating errors and shortcomings then and there.

The functional units of the machine:

Processor

Controller

Memory

Its functions include data operations on the ALU, storage, changing the contents of the instruction register, address registers and program counter, altering memory contents, data retrieval, bus movement control, and so on.
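The actual project is written in VHDL/Verilog, so the short Java model below is only an assumed behavioral illustration of one of these functional units: a tiny ALU performing a few made-up operations.

  // Hypothetical behavioral model of a tiny ALU; the opcodes are assumptions,
  // not the instruction set used in the report.
  public class SimpleAlu {
      public static final int ADD = 0, SUB = 1, AND = 2, OR = 3;

      // Performs the operation selected by the opcode on two operands.
      public static int execute(int opcode, int a, int b) {
          switch (opcode) {
              case ADD: return a + b;
              case SUB: return a - b;
              case AND: return a & b;
              case OR:  return a | b;
              default:  throw new IllegalArgumentException("Unknown opcode " + opcode);
          }
      }

      public static void main(String[] args) {
          System.out.println(execute(ADD, 5, 3));            // prints 8
          System.out.println(execute(AND, 0b1100, 0b1010));  // prints 8 (0b1000)
      }
  }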

Download  Designing Reduced Instruction Set computer (RISC) Processor Using VHDL Project Report.

Balanced Ant Colony Optimization BACO in Grid Computing Abstract

Introduction to Balanced Ant Colony Optimization BACO in Grid Computing:

Enormous computing power is required to solve complex and difficult scientific problems. The space required for storing data is also very large, since the solutions take up a lot of memory. Grid computing is an innovative computation technique through which a large number of files can be managed by distributing interactive workloads. It mainly focuses on harnessing unused processing cycles to solve these problems.

 Two types of grids:

  • Computing grid
  • Data grid

Processing, solving and storing a large amount of data would normally consume a lot of time, and grid computing helps us do the same with less storage space and time. The status of resources and networks is closely monitored; if a resource is found to be unstable, the submitted job may fail and result in a long computation time. To make jobs more effective, a scheduling algorithm is proposed to schedule them in the most efficient manner.

Such a scheduling algorithm is extremely important because hundreds of computers are used as resources, and the task is impossible to do manually. Balanced Ant Colony Optimization (BACO) is one such scheduling algorithm used in the grid environment to schedule jobs effectively. Although other scheduling algorithms such as FCFS and SJF exist, BACO excels in the dynamic grid environment. Local searches are extremely quick and efficient, and the scheduling strategy depends on the job types and the present environment.
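The following Java sketch illustrates the general ant colony idea behind such a scheduler; the resource names, pheromone values and reinforcement rule are assumptions for illustration and not the full BACO algorithm.

  import java.util.Random;

  // Simplified ant-colony job scheduling sketch: pick a resource with
  // probability proportional to pheromone * heuristic, then update trails.
  public class AntScheduler {
      static final int RESOURCES = 3;
      static double[] pheromone = {1.0, 1.0, 1.0};   // trail strength per resource
      static double[] speed = {2.0, 1.0, 4.0};       // heuristic: relative resource speed
      static final double EVAPORATION = 0.1;
      static final Random rng = new Random();

      // Pick a resource with probability proportional to pheromone * heuristic.
      static int chooseResource() {
          double[] weight = new double[RESOURCES];
          double total = 0;
          for (int r = 0; r < RESOURCES; r++) {
              weight[r] = pheromone[r] * speed[r];
              total += weight[r];
          }
          double pick = rng.nextDouble() * total;
          for (int r = 0; r < RESOURCES; r++) {
              pick -= weight[r];
              if (pick <= 0) return r;
          }
          return RESOURCES - 1;
      }

      // Evaporate all trails, then reinforce the trail of the chosen resource.
      static void updatePheromone(int chosen, double quality) {
          for (int r = 0; r < RESOURCES; r++) pheromone[r] *= (1 - EVAPORATION);
          pheromone[chosen] += quality;
      }

      public static void main(String[] args) {
          for (int job = 0; job < 10; job++) {
              int r = chooseResource();
              updatePheromone(r, speed[r]);   // faster resource => stronger reinforcement
              System.out.println("Job " + job + " scheduled on resource " + r);
          }
      }
  }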

Some of the problems solved with the help of BACO are:

  • Traveling salesman problem
  • Vehicle routing problem
  • Graph coloring problem

The BACO algorithm builds on different variants of the ant system:

  • Ant colony system
  • Max min ant system
  • Fast ant system
  • Elitist ant system
  • Rank Based ant system

Download  Balanced Ant Colony Optimization BACO in Grid Computing Abstract .

Multiple Routing Configurations for Fast IP Network Recovery Documentation

Introduction to Multiple Routing Configurations for Fast IP Network Recovery Project:

We can't imagine a world without the internet these days; the very fact that we use it for even small things such as navigation or finding a favorite recipe shows how much it has influenced all of our lives. Moreover, its abundance and easy access, unlike in earlier days, have made the internet massively popular.

We often face problems while accessing the internet, and the most recurring one is a drop in browsing, uploading and downloading speeds due to heavy network traffic; this happens mainly because of node failures and slow link recovery in the network protocol. Multiple Routing Configurations (MRC) is a new technique through which recovery after a node failure can be performed quickly and effectively.

The performance of MRC is analyzed by considering:

  • Scalability
  • Lengths of the backup path
  • Distribution of loads 

In cases of heavy traffic, MRC reduces congestion by recovering the affected traffic and improving its distribution. In present networks, traffic congestion is not managed properly, since the workload in many cases relies mainly on link weights, and most configurations consider only the failure-free case.

For load balancing, better optimization can be obtained with the proposed MRC system because of its simple and effective approach. Recovery is almost guaranteed in single-failure scenarios, which MRC achieves by handling node and link failures under a single mechanism. The technique assumes connectionless, destination-based hop-by-hop forwarding. When MRC detects a failure, additional routing information maintained in advance enables packets to be forwarded over an alternate link without delay.
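The sketch below illustrates the core MRC idea in Java under assumed names and topology: each backup configuration carries its own next-hop table, and a packet whose default next hop has failed is forwarded using a configuration that avoids the failed neighbor.

  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;
  import java.util.Set;

  // Illustrative MRC-style forwarder: try each routing configuration in order
  // until one offers a next hop that has not failed.
  public class MrcForwarder {
      // next-hop table per configuration: destination -> next hop
      private final List<Map<String, String>> configurations;
      private final Set<String> failedNeighbors;

      public MrcForwarder(List<Map<String, String>> configurations, Set<String> failedNeighbors) {
          this.configurations = configurations;
          this.failedNeighbors = failedNeighbors;
      }

      // Returns the next hop for a destination, falling back to a backup
      // configuration if the preferred next hop is unreachable.
      public String nextHop(String destination) {
          for (Map<String, String> config : configurations) {
              String hop = config.get(destination);
              if (hop != null && !failedNeighbors.contains(hop)) {
                  return hop;
              }
          }
          throw new IllegalStateException("No usable configuration for " + destination);
      }

      public static void main(String[] args) {
          Map<String, String> normal = new HashMap<>();
          normal.put("D", "B");                       // default path via neighbor B
          Map<String, String> backup = new HashMap<>();
          backup.put("D", "C");                       // backup path avoids B
          MrcForwarder node = new MrcForwarder(List.of(normal, backup), Set.of("B"));
          System.out.println("Forward packets for D via " + node.nextHop("D")); // prints C
      }
  }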

Download  Multiple Routing Configurations for Fast IP Network Recovery Documentation  .

A Secure Communication Protocol For Ad Hoc Networks Java Project Abstract

Introduction to A Secure Communication Protocol For Ad Hoc Networks Java Project:

The biggest problem everyone faces while sending or sharing data is the security of the transfer. The chance of infiltration is higher in a less secure network, and as a result important confidential data may be lost. The ad-hoc network is independent of any fixed infrastructure, and each individual node in it works as a router, managing and transferring data among the others.

There should be a highly secure and safe protocol for communication between these nodes. This topic deals with one such security measure, clustering, adapted for ad-hoc networks. Data packets are shared between nodes both inside and across these clusters. A head node is selected to execute all important functions, and it uses techniques such as encryption and cryptography to make the whole system more authentic, secure and scalable.

Data packets are usually vulnerable to hacker attacks, and it is extremely important to protect them from such malicious attacks. Symmetric key cryptography and authentication techniques such as Kerberos are used to maintain the integrity and transparency of these packets. A randomized algorithm for controlling access to the broadcast channel is another technique used to secure the communication protocol of these ad-hoc networks.

A distributed clustering algorithm is also used to group the nodes into clusters. In the cluster-leader-based scheme, a leader is selected within each cluster and the other nodes stay in close proximity to it, which reduces the clustering overhead considerably. The two algorithms, randomized channel access control and distributed clustering, together increase the security of the protocol. Using these methods, the overall efficiency of the ad-hoc network increases and the overhead decreases considerably.
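A minimal Java sketch of the clustering idea is given below; the grouping rule (grid cells standing in for radio range) and the lowest-ID leader election are assumptions for illustration, not the report's exact algorithm.

  import java.util.Comparator;
  import java.util.List;
  import java.util.Map;
  import java.util.stream.Collectors;

  // Simplified clustering sketch: nearby nodes form a cluster and the node
  // with the lowest ID in each cluster is elected cluster head.
  public class ClusterLeaderElection {
      record Node(int id, double x, double y) {}

      // Group nodes by a coarse grid cell as a stand-in for radio proximity,
      // then pick the lowest-ID node in each cell as the cluster head.
      static Map<String, Node> electLeaders(List<Node> nodes, double range) {
          return nodes.stream().collect(Collectors.groupingBy(
                  n -> (int) (n.x() / range) + "," + (int) (n.y() / range),
                  Collectors.collectingAndThen(
                          Collectors.minBy(Comparator.comparingInt(Node::id)),
                          opt -> opt.orElseThrow())));
      }

      public static void main(String[] args) {
          List<Node> nodes = List.of(
                  new Node(3, 1.0, 1.0), new Node(1, 2.0, 1.5),   // same cell -> node 1 leads
                  new Node(2, 9.0, 9.0));                          // alone in its cell -> node 2 leads
          electLeaders(nodes, 5.0).forEach(
                  (cell, leader) -> System.out.println("Cluster " + cell + " head: node " + leader.id()));
      }
  }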

Download  A Secure Communication Protocol For Ad Hoc Networks Java Project Abstract  .

Design and Implementation of Sip for Multistreaming Applications Project Report

Introduction to Design and Implementation of Sip for Multistreaming Applications Project:

The Session Initiation Protocol (SIP) is a signaling protocol used for setting up and tearing down multimedia communications such as voice and video calls over the Internet. Possible applications include video conferencing, streaming multimedia distribution, online games, e-mail and so on. SIP handles the creation, modification and termination of two-party (unicast) and multiparty (multicast) sessions consisting of one or more media streams.

The proposed topic consists of developing applications for text and file transfer and voice and video conferencing using SIP along with a proxy server. SIP signaling follows a client-server pattern similar to protocols such as HTTP and SMTP. The project requires an internet phone or softphone. The internet phone uses the Session Initiation Protocol or the Media Gateway Control Protocol (MeGaCo).

The SIP phone is a VoIP phone. The softphone runs on a personal computer and does not need any special device; it offers quality similar to a normal phone and can be used with a headset or a USB device. Applications of the SIP phone include voice conferencing, IP contact, IP phone systems, fax over IP, video conferencing and call monitoring.

The modules to be developed are:

  1. User Agent Client (UAC), which creates requests and sends them to the servers (a rough sketch follows this list).
  2. User Agent Server (UAS), which receives requests and processes them to create responses.
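The UAC-side sketch below is written in Java purely for illustration (the project itself targets ANSI C on Linux); the addresses and header values are made up, and it only builds a minimal SIP INVITE request and sends it over UDP toward a proxy.

  import java.net.DatagramPacket;
  import java.net.DatagramSocket;
  import java.net.InetAddress;
  import java.nio.charset.StandardCharsets;

  // Rough UAC-side sketch: build a minimal SIP INVITE request and send it over UDP.
  public class SipUacSketch {
      public static void main(String[] args) throws Exception {
          String invite = String.join("\r\n",
                  "INVITE sip:bob@example.com SIP/2.0",
                  "Via: SIP/2.0/UDP 192.0.2.10:5060",
                  "From: <sip:alice@example.com>;tag=1234",
                  "To: <sip:bob@example.com>",
                  "Call-ID: abc123@192.0.2.10",
                  "CSeq: 1 INVITE",
                  "Content-Length: 0",
                  "", "");

          byte[] data = invite.getBytes(StandardCharsets.UTF_8);
          try (DatagramSocket socket = new DatagramSocket()) {
              // 5060 is the conventional SIP port; loopback stands in for the proxy address.
              socket.send(new DatagramPacket(data, data.length,
                      InetAddress.getByName("127.0.0.1"), 5060));
          }
          System.out.println("INVITE sent (" + data.length + " bytes)");
      }
  }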

The system is to be developed on a Linux 2.6 kernel operating system using ANSI C, the GCC compiler and the GDB debugging tool. The hardware requirements are two personal computers, 512 MB RAM, a CPU of 2.2 GHz or above, and a LAN connection.

Download  Design and Implementation of Sip for Multistreaming Applications Project Report .

Practical Training Report on Basic Networking and Microsoft Windows Server

Introduction to Basic Networking and Microsoft Windows Server Project:

Basic networking is a concept based on the internet and on the use of computer technology. It also covers local network development, which is one of the main parts of the networking concept. In basic networking, users are granted various levels of authority under which they can work properly, and each level provides information about the authority granted and the authentication modes.

Basically, networks are collections of individual personal computers joined by networking devices. These collections then work together as larger networks. The computers in a network are connected through a device called a hub, which normally links them to the servers. Internetworking introduces the router, a device used to join networks together so that data can be shared from one computer to another. Packet switching, packet filtering, internetwork communication and path selection are the functions a router can perform.

Another topic is topology, which describes the physical layout of the systems. There are various types of topology: single-node topology, where a node is joined directly to the server; bus topology, where all devices are joined to a common cable called the trunk; ring topology, where all networks and devices are connected in a closed loop; star topology, where every device is connected to a hub; and mesh topology, where every device is connected to every other device.

Networks play an important role in these concepts, and LAN, WAN, VPN and SAN are some of the types of networks. The layered approach is part of the OSI reference model, whose layers are the application, presentation, session, transport, network, data link and physical layers. There are various other topics related to basic networking, which are covered in detail in various reference books.

Download  Practical Training Report on Basic Networking and Microsoft Windows Server .

A Spy Based Approach for Intrusion Detection Project Report

Introduction to A Spy Based Approach for Intrusion Detection Project:

Current intrusion detection systems (IDS) can protect either a single host or a group of interlinked, networked systems. The single-host IDS is known as a host-based intrusion detection system, while the networked IDS is known as a network-based intrusion detection system.

Both kinds of intrusion detection systems have drawbacks. The host-based IDS cannot detect new kinds of threats to the system, whereas the network-based IDS is cumbersome to handle and cannot inspect encrypted data packets. The network-based IDS also requires time-consuming transfer of log information, which produces a huge volume of data and additional traffic. The result is incorrect performance of the system.

The proposed spy-based IDS improves on both by combining the single-host and network IDS, enhancing efficiency and allowing information transfer without these problems.

Anomaly Intrusion Detection 

This kind of intrusion detection system keeps information about how the system is used and prepares statistical data from it. It then checks for unusual actions that may be intrusions.
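A toy Java sketch of this idea is shown below; the usage metric, threshold and sample data are assumptions, and it simply flags an observation that deviates strongly from the statistical profile built from past activity.

  // Toy anomaly-detection sketch: flag an observation as suspicious when it
  // deviates strongly from the statistical profile of past system usage.
  public class AnomalyDetector {
      private final double mean;
      private final double stdDev;

      public AnomalyDetector(double[] history) {
          double sum = 0;
          for (double v : history) sum += v;
          mean = sum / history.length;
          double sq = 0;
          for (double v : history) sq += (v - mean) * (v - mean);
          stdDev = Math.sqrt(sq / history.length);
      }

      // An observation more than three standard deviations from the mean is flagged.
      public boolean isAnomalous(double observation) {
          return Math.abs(observation - mean) > 3 * stdDev;
      }

      public static void main(String[] args) {
          double[] loginsPerHour = {4, 5, 6, 5, 4, 6, 5};     // normal usage profile
          AnomalyDetector detector = new AnomalyDetector(loginsPerHour);
          System.out.println(detector.isAnomalous(5));   // false: ordinary activity
          System.out.println(detector.isAnomalous(40));  // true: possible intrusion
      }
  }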

Misuse Intrusion Detection 

This IDS can identify only known intrusion types; it cannot detect new kinds of intrusion.

Features of the Spy Based Intrusion Detection System 

  1. Regulator of the system
  2. Honeypots
  3. Possesses network sensor
  4. Spy type
  5. Log
  6. Tracer

Elastic Site Using Clouds to Elastically Extend Site Resources Abstract

Introduction to Elastic Site Using Clouds to Elastically Extend Site Resources:

The main objective of this paper is to develop an elastic site using cloud computing. As computer usage increases day by day, users want sites to be accessible quickly and to make full use of the available resources; cloud computing was developed as a result. Cloud computing is generally defined as the use of multiple server computers via a digital network as if they were one computer.

The term cloud refers to the virtualization of resources such as networks, servers, applications and data storage. IaaS (Infrastructure as a Service) is one type of cloud computing, and using it, an elastic site that can efficiently provide services is developed. Services like batch schedulers and storage archives can then be used efficiently.

Brief into Elastic site:

A resource manager has been developed to respond to demand while maintaining security, privacy and logistical considerations. The resource manager is built on Nimbus to extend physical clusters, and the extended portion of each cluster is placed in the cloud. The elastic site manager communicates directly with the local resource managers.

The processing speed depends on the cluster: if the number of cluster nodes placed in the cloud is increased, greater processing speed can be expected. The cluster can be scaled up to 150 EC2 nodes.
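A hypothetical Java sketch of such an elastic policy is given below; the thresholds, node limit and launch/terminate hooks are assumptions, not the Nimbus-based manager itself.

  // Hypothetical elastic-site policy sketch: grow the cloud portion of the
  // cluster when jobs queue up, shrink it when the queue drains.
  public class ElasticSitePolicy {
      private static final int MAX_CLOUD_NODES = 150;   // e.g. EC2 instance cap
      private int cloudNodes = 0;

      // Decide how many cloud nodes to run based on the number of queued jobs.
      public void rebalance(int queuedJobs) {
          int desired = Math.min(MAX_CLOUD_NODES, queuedJobs / 10); // one node per 10 jobs
          while (cloudNodes < desired) { launchNode(); }
          while (cloudNodes > desired) { terminateNode(); }
      }

      private void launchNode()    { cloudNodes++; System.out.println("launch node, total=" + cloudNodes); }
      private void terminateNode() { cloudNodes--; System.out.println("terminate node, total=" + cloudNodes); }

      public static void main(String[] args) {
          ElasticSitePolicy policy = new ElasticSitePolicy();
          policy.rebalance(45);   // queue builds up -> 4 cloud nodes
          policy.rebalance(0);    // queue drains    -> all cloud nodes released
      }
  }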

Advantages:

By using cloud computing we can extend the services in use, access content faster, and use only the content that is required. It is also more secure compared to other techniques.

Download  Elastic Site Using Clouds to Elastically Extend Site Resources Abstract .

Efficient and Secure Content Processing and Distribution by Cooperative Intermediaries Abstract

Introduction to Efficient and Secure Content Processing and Distribution by Cooperative Intermediaries:

The main objective of this paper is to present an approach to efficient and secure content processing. As computer usage increases day by day, cyber fraud such as content theft is also on the rise, and data security has become a serious problem for many users. The systems in use at present focus mainly on data delivery, not on security.

Hence we need to develop efficient and secure content processing using cooperative intermediaries. In e-commerce and e-government, confidentiality is especially important. The software requirements are the JDK, NetBeans and MS Access. The hardware needed is a machine with a dual-core processor, 512 MB of RAM and network cables.

Brief into Efficient and Secure Content Processing and Distribution by Cooperative Intermediaries:

This design uses peer-to-peer systems and a data integrity service model, so that only authorized parties can modify the data. Modification policies are specified in the metadata by the users, and integrity is achieved according to these policies. Data on the server is transferred to the client by passing through intermediaries.

Multiple intermediaries simultaneously process different portions of the data before it is transferred to the client. Since the data passes through many intermediaries, it cannot be modified along the way without detection.
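The Java sketch below illustrates one way such modifications can be detected (the shared key, segment content and use of HMAC-SHA256 are assumptions, not necessarily the paper's integrity scheme): the origin tags each data segment so the client can verify it after it has passed through the intermediaries.

  import javax.crypto.Mac;
  import javax.crypto.spec.SecretKeySpec;
  import java.nio.charset.StandardCharsets;
  import java.util.Arrays;

  // Illustrative integrity check: the origin server tags each data segment with
  // an HMAC so the client can detect any modification made in transit.
  public class SegmentIntegrity {
      private final SecretKeySpec key;

      public SegmentIntegrity(byte[] sharedSecret) {
          this.key = new SecretKeySpec(sharedSecret, "HmacSHA256");
      }

      // Compute the authentication tag the server attaches to a segment.
      public byte[] tag(byte[] segment) throws Exception {
          Mac mac = Mac.getInstance("HmacSHA256");
          mac.init(key);
          return mac.doFinal(segment);
      }

      // The client recomputes the tag and compares it with the one received.
      public boolean verify(byte[] segment, byte[] receivedTag) throws Exception {
          return Arrays.equals(tag(segment), receivedTag);
      }

      public static void main(String[] args) throws Exception {
          SegmentIntegrity integrity = new SegmentIntegrity("demo-shared-secret".getBytes(StandardCharsets.UTF_8));
          byte[] segment = "portion of the content".getBytes(StandardCharsets.UTF_8);
          byte[] tag = integrity.tag(segment);
          System.out.println(integrity.verify(segment, tag));                                            // true
          System.out.println(integrity.verify("tampered portion".getBytes(StandardCharsets.UTF_8), tag)); // false
      }
  }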

Advantages:

With this method, security can be provided across the intermediaries, integrity can be achieved, and confidentiality can be ensured.

Download  Efficient and Secure Content Processing and Distribution by Cooperative Intermediaries Abstract .