Dual Link Failure Resiliency through Backup Path Mutual Exclusion Abstract

Introduction to Dual Link Failure Resiliency through Backup Path Mutual Exclusion Project:

The main objective of this paper is to give an idea of dual link failures. As network usage grows day by day, we expect high transmission speeds without any interruption. Huge amounts of information are now being transferred over networks, so interruptions due to failures must be avoided.

The main reason such interruptions occur is dual link failures. These are important to solve because links share resources: a failure in a shared resource brings down multiple links at once, and since the repair time for a failed link can be a few hours or days, this leaves ample time for a second failure to occur. Dual link failures can be tolerated by providing an automatic backup path in advance. The project requires a 40 GB hard disk, a minimum of 256 MB of RAM, a 3.0 GHz processor, and software such as the JDK, Java Swing, and SQL Server 2000.

Brief into design and working:

Networks can be protected from link failures by path protection or link protection. With path protection, a connection is restored end to end by provisioning a backup path: if the original path fails, the backup path takes over.

Link protection recovers the signal by re-routing the connection around the point where the link failed. When link protection is computed for two links independently, the two links must not use each other in their backup paths if both are to survive failing simultaneously. This constraint is called backup link mutual exclusion.
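To make the constraint concrete, here is a minimal Java sketch, assuming links are identified by string IDs and each protected link maps to the list of links on its backup path (all names here are hypothetical): two links satisfy mutual exclusion when neither backup path contains the other link.

```java
import java.util.List;
import java.util.Map;

public class BlmeCheck {

    /**
     * True if the two protected links satisfy backup link mutual
     * exclusion: neither link's backup path uses the other link.
     */
    static boolean mutuallyExclusive(String linkA, String linkB,
                                     Map<String, List<String>> backupPaths) {
        boolean aAvoidsB = !backupPaths.get(linkA).contains(linkB);
        boolean bAvoidsA = !backupPaths.get(linkB).contains(linkA);
        return aAvoidsB && bAvoidsA;
    }

    public static void main(String[] args) {
        // Hypothetical topology: each backup path avoids the other link.
        Map<String, List<String>> backupPaths = Map.of(
                "L1", List.of("L3", "L4"),
                "L2", List.of("L5", "L6"));
        System.out.println(mutuallyExclusive("L1", "L2", backupPaths)); // true
    }
}
```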

Download Dual Link Failure Resiliency through Backup Path Mutual Exclusion Abstract.

Evacuation of Delayed Packets in Networks Java Project Report

Introduction to Evacuation of Delayed Packets in Networks Java Project:

The main objective of this paper is to give an idea of delayed packets in networks. The networks we use at present support only two classes of traffic and take a long time to restore service after a failure. To solve these problems, the OCGRR scheduler has been developed as an advanced version of OCRR. It requires a processor of 500 MHz or above, 128 MB of RAM, and a 10 GB hard disk, and makes use of JDK 1.5, the Windows 2000 Server family of operating systems, and SQL Server databases.

Brief into Evacuation of Delayed Packets in Networks:

In OCGRR, a frame is divided into small rounds and packet-by-packet transmission is used, so that every stream inside a class can send only one packet in each small round. In this way, packets of the same class are sent to the destination together. Before a frame is scheduled, the data streams of each output port are stored in separate buffers.

Buffers are placed in frames such that each frame holds only one buffer. Once scheduling is done, transmission begins and data is transferred according to its priority. Only one packet per stream is transmitted in a single round.
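As an illustration, here is a minimal Java sketch of one small round, modeling each stream in a class as a queue of packets (the modeling is ours, not the report's): every backlogged stream releases exactly one packet per round.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class SmallRoundScheduler {

    /**
     * One small round over the streams of a single class:
     * each backlogged stream sends at most one packet.
     */
    static List<String> smallRound(List<Queue<String>> streams) {
        List<String> sent = new ArrayList<>();
        for (Queue<String> stream : streams) {
            String packet = stream.poll(); // at most one packet per stream
            if (packet != null) {
                sent.add(packet);
            }
        }
        return sent;
    }

    public static void main(String[] args) {
        Queue<String> s1 = new ArrayDeque<>(List.of("s1-p1", "s1-p2"));
        Queue<String> s2 = new ArrayDeque<>(List.of("s2-p1"));
        List<Queue<String>> classStreams = List.of(s1, s2);
        System.out.println(smallRound(classStreams)); // [s1-p1, s2-p1]
        System.out.println(smallRound(classStreams)); // [s1-p2]
    }
}
```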

Advantages:

It minimizes the delay, latency, and jitter that delayed packets cause in the network. It provides good bandwidth for the transmission of data packets and reduces packet transmission time within the same stream. It also supports multi-class traffic, which increases the performance of the network.

Download Evacuation of Delayed Packets in Networks Java Project Report.

Distributed Cache Updating For the Dynamic Source Routing Protocol Java Project Report

Introduction to Distributed Cache Updating For the Dynamic Source Routing Protocol Java Project:

The main objective of this technique is to find broken links that are carrying information and to divert traffic to other nodes present in the cache. A cache update algorithm maintains a new cache structure, called a cache table, to handle the broken links. In this paper we will see how the cache algorithms are implemented to find broken links and update the caches.

Brief into the working of the distributed cache updating:

To perform cache updates for broken links, each node maintains in its cache table the information required for making the updates. When a link failure is detected, the algorithm notifies all reachable nodes that have cached the link. This proactive cache updating includes protocols for patching up the broken links.
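A simplified sketch of the eviction step in Java, modeling the cache table as a map from node ID to that node's cached routes (the real DSR cache table also records which neighbours learned each route so they can be notified, which this sketch omits):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CacheUpdate {

    // Hypothetical cache tables: node id -> that node's cached routes.
    static Map<String, List<List<String>>> cacheTables = new HashMap<>();

    /** On detecting a broken link u-v, evict every cached route that uses it. */
    static void onLinkFailure(String u, String v) {
        for (List<List<String>> routes : cacheTables.values()) {
            routes.removeIf(route -> usesLink(route, u, v));
        }
    }

    static boolean usesLink(List<String> route, String u, String v) {
        for (int i = 0; i + 1 < route.size(); i++) {
            if (route.get(i).equals(u) && route.get(i + 1).equals(v)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        cacheTables.put("A", new ArrayList<>(List.of(List.of("A", "B", "C"))));
        onLinkFailure("B", "C");
        System.out.println(cacheTables); // {A=[]} - the stale route is gone
    }
}
```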

Two types of routing protocols are available for ad hoc networks. Proactive protocols maintain up-to-date information about the nodes by periodically disseminating topology updates through the network, whereas on-demand protocols discover routes only when they are required.

Limitations of existing system and advantages of distributed cache updating:

In the existing system, a link failure in the MAC layer causes multiple retransmissions, which increases the delivery latency of packets sent over stale routes. Stale routes remain in the caches because a FIFO replacement algorithm is used.

Proactive cache updating prevents stale routes from propagating into other caches. The new cache updating algorithm lets dynamic source routing adapt quickly to changes in the topology.

Download Distributed Cache Updating For the Dynamic Source Routing Protocol Java Project Report.

Discovering Conditional Functional Dependencies Java Project

Introduction to Discovering Conditional Functional Dependencies Java Project:

The main objective of this paper is to give an idea of conditional functional dependencies. As computer usage increases day by day, so does the amount of stored data, and data that is not useful has to be cleaned and removed.

Data cleaning should be done automatically by finding the errors, and this gave rise to conditional functional dependencies (CFDs). CFDs are an extension of functional dependencies: a CFD supports patterns of semantically related constants, so it can be used as a rule for cleaning relational data.

Three methods are used for discovering CFDs. First, CFDMiner discovers CFDs that have constant patterns; these constant CFDs are important for identifying an object to clean. The two algorithms used for discovering general CFDs are CTANE and FastCFD. CTANE is an extension of TANE, which is used in mining FDs.

FastCFD is based on a depth-first approach and reduces the search space. CTANE is said to work well when the relation is large, while FastCFD works better than CTANE when the arity of the relation is large. CFDMiner is faster than both CTANE and FastCFD. Which algorithm to use for cleaning depends on the user's application.
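To illustrate what a constant CFD expresses, here is a minimal Java sketch (the rule and attribute names are hypothetical): a tuple violates the CFD when it matches every constant on the left-hand side but not the constant on the right.

```java
import java.util.Map;

public class ConstantCfd {

    /**
     * A constant CFD: whenever every (attribute, constant) pair on the
     * left-hand side matches a tuple, the right-hand pair must match too.
     */
    static boolean violates(Map<String, String> tuple,
                            Map<String, String> lhs,
                            String rhsAttr, String rhsValue) {
        boolean lhsMatches = lhs.entrySet().stream()
                .allMatch(e -> e.getValue().equals(tuple.get(e.getKey())));
        return lhsMatches && !rhsValue.equals(tuple.get(rhsAttr));
    }

    public static void main(String[] args) {
        // Hypothetical rule: country = "UK" implies dialing code = "44".
        Map<String, String> lhs = Map.of("country", "UK");
        Map<String, String> dirty = Map.of("country", "UK", "code", "01");
        System.out.println(violates(dirty, lhs, "code", "44")); // true: dirty tuple
    }
}
```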

Download Discovering Conditional Functional Dependencies Java Project.

Seminar Report on Demand Based Bluetooth Scheduling

Introduction to Seminar Topic on Demand Based Bluetooth Scheduling:

The main objective of this paper is to give an idea of Bluetooth scheduling. As wireless communication advances day by day, Bluetooth technology has been developed. Bluetooth is a wireless communication technology between Bluetooth-enabled devices; communication is possible within a range of about 10 m. Each cluster of Bluetooth devices is called a piconet.

Brief into Demand-Based Bluetooth Scheduling:

Bluetooth operates in the ISM band and employs Time Division Duplex (TDD) for communication between devices. Since this band is noisy, frequency hopping is used to reduce interference. Each piconet consists of one master and up to 7 slaves, and each master-slave pair communicates using TDD slots.

A piconet is identified by the master's Bluetooth address and clock. If a Bluetooth device participates in two piconets, it forms a bridge between them, and the result is called a scatternet. Demand-based Bluetooth scheduling is achieved by fixing the polling positions of synchronous and shared slaves; the polling period is then updated based on the traffic of the master or slave. The simulation of demand-based scheduling considers only ADS slaves.
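One plausible way to update the polling period is multiplicative adjustment, sketched below in Java; the bounds and the halving/doubling rule are illustrative assumptions, not the report's exact scheme.

```java
public class DemandBasedPolling {

    static final int MIN_PERIOD = 2;  // hypothetical bounds, in TDD slots
    static final int MAX_PERIOD = 64;

    /**
     * Adjusts a slave's polling period from the traffic seen at the
     * last poll: busy slaves are polled more often, idle ones less.
     */
    static int nextPeriod(int current, boolean hadTraffic) {
        int next = hadTraffic ? current / 2 : current * 2;
        return Math.max(MIN_PERIOD, Math.min(MAX_PERIOD, next));
    }

    public static void main(String[] args) {
        int period = 16;
        period = nextPeriod(period, true);   // traffic seen: poll sooner (8)
        period = nextPeriod(period, false);  // idle: back off (16)
        System.out.println(period);
    }
}
```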

Advantages and Disadvantage:

Demand-based Bluetooth scheduling increases the throughput of the piconet and reduces its power consumption. On the other hand, it increases the access latency of the slaves, and if the access latency grows too large, packets may be lost.

Download Seminar Report on Demand Based Bluetooth Scheduling.

Demand Response Scheduling By Stochastic SCUC Project Abstract

Introduction to Demand Response Scheduling By Stochastic SCUC Project:

Many independent system operators have designed programs to utilize the services provided by demand response. Demand response is delivered through demand response providers, which manage the customer responses. Here we will look into a stochastic model that utilizes the reserves provided by demand response in the markets.

Brief on Demand response scheduling:

There are two types of demand response programs: time-based and incentive-based. Time-based demand response programs are designed by independent system operators and include programs like time-of-use tariffs, critical peak pricing, and real-time pricing.

Incentive-based programs, in contrast, have a market-based structure and can be offered in both retail and wholesale markets. Most incentive-based programs have their own operational goals and span the long term, midterm, and short term. A stochastic mixed-integer programming model is suggested in which the first stage solves a network-constrained unit commitment and the second stage evaluates security across system scenarios.
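A minimal sketch of such a two-stage objective, with hypothetical symbols (commitment variables $u$, dispatch $p$, scenario probabilities $\pi_s$, and base-case and scenario cost functions $C_i$ and $C_i^s$), might read:

$$\min_{u,\,p}\; \sum_{t}\sum_{i} C_i\left(u_{it},\, p_{it}\right) \;+\; \sum_{s} \pi_s \sum_{t}\sum_{i} C_i^{s}\left(p_{it}^{s}\right)$$

subject to the network constraints of the first stage and the security constraints of each scenario $s$.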

Advantages and Drawbacks in the existing system:

The demand response providers participate in the electricity markets and act as a medium between the retail consumers and the independent system operators, offering aggregated response to the operators. Many independent system operators require a minimum curtailment level, however, which the demand response programs fail to meet, and the programs do not provide security to the system.

Download Demand Response Scheduling By Stochastic SCUC Project Abstract.

Data Leakage Detection Project Abstract

Introduction to Data Leakage Detection Project:

In this paper we will see how data leakage occurs and the preventive measures that reduce it. Data leakage occurs when sensitive data is handed over to supposedly trusted third parties and the data later turns up with unauthorized persons. Perturbation is the most effective technique used for reducing data leakage. In this paper we will look into unobtrusive techniques for detecting data leakages.

Leakage prevention techniques:

Perturbation is an effective technique in which the data to be handed over is modified and made less sensitive, so that a leak causes less harm. Watermarking is the traditional method, in which a unique code is embedded in the data so that a leak can later be identified.

This model is used for assessing the guilt of agents, and an allocation algorithm is used for distributing the objects to the agents. Agent guilt model analysis estimates, from the objects found at the target, how likely it is that each agent leaked them rather than the target guessing them. Since the distribution of objects across agents is known, the model can point to the agent most likely to be the source of the leak. Guilt model analysis is also used to check how the parameters interact with the generated scenarios.
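A simplified sketch of one such guilt estimate in Java, assuming each leaked object was either guessed by the target (with probability p) or supplied, with equal likelihood, by any one of the agents holding it; these independence assumptions are ours, not the paper's exact model.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GuiltModel {

    /**
     * Probability that at least one leaked object the agent received
     * actually came from that agent, assuming objects leak independently
     * and p is the chance the target guessed an object on its own.
     */
    static double guiltProbability(Set<String> agentObjects,
                                   List<String> leakedObjects,
                                   Map<String, Integer> holders, // object -> #agents holding it
                                   double p) {
        double notGuilty = 1.0;
        for (String obj : leakedObjects) {
            if (agentObjects.contains(obj)) {
                // Probability this particular object did NOT come from the agent.
                notGuilty *= 1.0 - (1.0 - p) / holders.get(obj);
            }
        }
        return 1.0 - notGuilty;
    }

    public static void main(String[] args) {
        Set<String> agent = Set.of("o1", "o2");
        List<String> leaked = List.of("o1", "o2");
        Map<String, Integer> holders = Map.of("o1", 2, "o2", 1);
        System.out.println(guiltProbability(agent, leaked, holders, 0.1)); // 0.945
    }
}
```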

Conclusions:

When data needs to be handed over to other agents, we can apply the watermark technique to the data to find leakages. But when there are many intended users, the watermark technique makes it difficult to know from which agent the data leaked. The proposed algorithm instead employs various data distribution strategies for finding the leaker, which ensures safety for the distributor.

Download Data Leakage Detection Project Abstract.

Wireless Technology Filling Blocks To Enhance Quality Of Image Project Report

Introduction to Wireless Technology Filling Blocks To Enhance Quality Of Image Project:

The project focuses on filling in blocks of corrupted data in wireless image transmission. In compression algorithms like JPEG, images are divided into blocks of 8 x 8 pixels, which are transmitted over the channel; due to channel noise, some of these blocks are corrupted.

Each 8 x 8 pixel block is transformed by a two-dimensional Discrete Cosine Transform (DCT), followed by quantization and Huffman encoding. The image is generally transmitted over the wireless channel block by block, so losing a packet degrades the quality of the image by wiping out a whole block, or at least some consecutive blocks. The packet loss may go up to around 3.6%.

Losing even a single block means the other blocks in the same reset interval may not be received with the correct average value. To make transmission robust, Forward Error Correction (FEC) and Automatic Repeat reQuest (ARQ) protocols are used. FEC, however, requires extra error-correction packets to be transmitted, while ARQ slows down the transmission and further congests the network, which triggered the packet loss in the first place. Our project instead focuses on reconstructing the lost blocks from the information around them.

The bandwidth efficiency of the transmission can then be enhanced. The principle is to detect the lost blocks and replace them using the information present around them. The same approach applies to JPEG compression itself: the encoder can deliberately omit blocks that the decoder then reconstructs, just as over a wireless network. The project thus improves the compression ratio with little quality reduction.
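As a stand-in for the report's classification-based reconstruction, here is a minimal Java sketch that fills a lost interior 8 x 8 block with a bilinear-style weighting of the pixels bordering it in the four neighbouring blocks:

```java
import java.util.Arrays;

public class BlockConcealment {

    static final int B = 8; // JPEG block size

    /**
     * Fills a lost 8 x 8 block (top-left corner at row, col) from the
     * pixel rows and columns that border it in the neighbouring blocks.
     * Assumes the block is interior, i.e. all four neighbours exist.
     */
    static void concealBlock(int[][] image, int row, int col) {
        for (int y = 0; y < B; y++) {
            for (int x = 0; x < B; x++) {
                int top = image[row - 1][col + x];    // pixel above the block
                int bottom = image[row + B][col + x]; // pixel below
                int left = image[row + y][col - 1];   // pixel to the left
                int right = image[row + y][col + B];  // pixel to the right
                // Bilinear-style weights: nearer borders contribute more.
                image[row + y][col + x] =
                        ((B - y) * top + (y + 1) * bottom
                       + (B - x) * left + (x + 1) * right) / (2 * (B + 1));
            }
        }
    }

    public static void main(String[] args) {
        int[][] img = new int[32][32];
        for (int[] r : img) Arrays.fill(r, 128); // flat grey test image
        concealBlock(img, 8, 8);                 // conceal the block at (8, 8)
        System.out.println(img[12][12]);         // stays 128 on a flat image
    }
}
```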

The Existing System 

The existing system is unable to overcome the loss of image blocks. Images are normally in JPEG-compressed format, and the algorithms implemented are the Discrete Cosine Transform, quantization, and Huffman encoding.

The Proposed System 

The lost blocks are reconstructed. The transmitter may transfer one extra bit per block to signal whether reconstruction applies. Image inpainting handles blocks containing an edge between two areas or a characteristic transition of color or gray value. The structure of each block is determined by a block classification algorithm and saved. The technique is applicable to every single block, and an image of good quality is produced.

The system can be developed on the Windows XP/2000 operating system with JAVA (Java Swing) and the Eclipse tool.

The hardware requirements are an Intel Pentium IV processor, a 20 GB hard disk, and 128 MB of RAM.

Download Wireless Technology Filling Blocks To Enhance Quality Of Image Project Report.

Multicast Authentication Based On Batch Signature Report

Introduction to Multicast Authentication Based On Batch Signature Project:

Multicast authentication based on batch signature provides three guarantees for sent and received packet data: data integrity, data origin authentication, and non-repudiation. Data integrity assures that the received packets were not changed in transit. Data origin authentication ensures that the received packets come from the claimed, authenticated sender. Non-repudiation confirms that the sender cannot deny having sent the packets. All of these features are provided by an asymmetric-key technique termed a signature: the sender creates a signature over every packet with a secret key, termed signing, and the receiver checks the signature against the sender's public key, termed verifying. A packet is accepted as authentic only when verification succeeds.
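The sketch below shows only the underlying per-packet signing and verifying steps, using the JDK's standard Signature API; the batch verification that gives the scheme its name is omitted, so treat this as an illustration of sign/verify rather than of the batch algorithm itself.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.util.ArrayList;
import java.util.List;

public class PacketSigning {

    public static void main(String[] args) throws Exception {
        KeyPair keys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // Sender: sign each packet with the private key ("signing").
        List<byte[]> packets = List.of("pkt-1".getBytes(), "pkt-2".getBytes());
        List<byte[]> sigs = new ArrayList<>();
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        for (byte[] pkt : packets) {
            signer.update(pkt);
            sigs.add(signer.sign());
        }

        // Receiver: verify each received packet ("verifying"); a batch
        // scheme would amortize this cost over the whole batch.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        for (int i = 0; i < packets.size(); i++) {
            verifier.initVerify(keys.getPublic());
            verifier.update(packets.get(i));
            System.out.println("pkt " + i + " ok: " + verifier.verify(sigs.get(i)));
        }
    }
}
```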

Designing a multicast authentication protocol is a difficult task, for several reasons. Efficiency must be considered for the sender and especially for the receivers, whose capabilities and resources differ greatly from those of the multicast sender, which may be a powerful server.

This heterogeneity of receivers requires the multicast authentication protocol to work not only on advanced desktop computers but also on resource-constrained cell phones. The computation and communication overheads are therefore important concerns that cannot be neglected.

Packet loss in the Internet is unavoidable due to traffic at routers: a congested router drops buffered packets according to its policy. TCP retransmits lost packets, but multicast packets travel over UDP, which cannot recover the loss. Packet loss is even higher for mobile receivers.

Continually changing wireless channels increase the packet loss further, and the lower data rate makes the wireless channel more congested. Retransmission is not an option for real-time applications such as online streaming or stock-quote delivery: streaming breaks down under packet loss, and stock quotes would arrive only infrequently. So when end users need good service, the multicast authentication protocol should provide it by tolerating packet loss: even when losses occur, the authenticity of the packets that are received must remain verifiable. Traditional multicast schemes are not able to cope with this packet loss.

The Existing System 

The existing system uses unicast authentication signatures, with which posting encrypted and decrypted files is not possible, and file transfers have no proper security as they pass between intermediate nodes.

The Proposed System 

The proposed system sends the encrypted and decrypted files. The batch signature is verified so that the multicast receivers can accept many files at once. Security is built into the system to protect the file transfer and to cope with the loss of packets.

The hardware required for the proposed system is a Pentium IV at 1.1 GHz, a 40 GB hard disk drive, and 512 MB of RAM. The software required is the Windows XP operating system, JAVA (JDK 1.6.0), the TCP/IP protocol, Eclipse, and Java Swing.

Download Multicast Authentication Based On Batch Signature Report.

Localized Sensor Area Coverage with Low Communication Overhead

The project describes several localized sensor area coverage protocols for heterogeneous sensors. Every sensor has its own arbitrary sensing and transmission radii. Each sensor runs a limited timeout during which it can receive messages from specific nodes; messages are accepted only while the timeout has not expired.

After the timeout expires, it no longer hears any messages. A sensor node whose sensing area is not completely covered, or only partially covered, by a group of active sensors stays active for the required time, announcing this with a message before any decision to become inactive. Our project keeps a sensor active as long as its area is not fully covered. Covered nodes go dormant for some time after sending a retreat message informing the other sensors of their situation; depending on the other sensors' responses, an inactive sensor decides whether it remains covered or must stay active.
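A rough Java sketch of the coverage decision, using grid sampling as an approximation (the protocols in the report rely on exact geometric tests; the sampling here is our simplification):

```java
import java.util.List;

public class CoverageCheck {

    record Sensor(double x, double y, double radius) {}

    /**
     * Grid-sampling approximation: the node treats its sensing disk as
     * covered if every sampled point inside it lies within the sensing
     * radius of at least one active neighbour.
     */
    static boolean coveredByNeighbours(Sensor self, List<Sensor> active) {
        double r = self.radius();
        double step = r / 10;
        for (double dx = -r; dx <= r; dx += step) {
            for (double dy = -r; dy <= r; dy += step) {
                if (dx * dx + dy * dy > r * r) continue; // outside own disk
                double px = self.x() + dx, py = self.y() + dy;
                boolean hit = active.stream().anyMatch(n ->
                        (n.x() - px) * (n.x() - px) + (n.y() - py) * (n.y() - py)
                                <= n.radius() * n.radius());
                if (!hit) return false; // an uncovered point: stay active
            }
        }
        return true; // whole disk covered: safe to go dormant
    }

    public static void main(String[] args) {
        Sensor self = new Sensor(0, 0, 1.0);
        List<Sensor> neighbours = List.of(new Sensor(0, 0, 2.0));
        System.out.println(coveredByNeighbours(self, neighbours)); // true
    }
}
```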

The Existing System 

The existing system has the demerit of not maintaining status links with the other sensors. Because of this improper interaction between sensors, contention and data loss occur, and the underlying cause is improper use of resources. The network cannot guarantee that the area is covered by a connected group of active nodes, so the nodes cannot judge which node to communicate with.

The Proposed System 

The presented project overcomes this problem of improper communication between sensors. Each sensor now maintains a picture of the communicating devices around it at any given time, so every sensor is able to communicate with any other sensor or with the whole network. A sensor with little remaining lifetime but required coverage observes which other sensors are available and hands its job over to a specific sensor. The system is effective and robust, as it covers more area using a limited number of active sensors while keeping the network connected for a long time; however, it may lose some messages.

The proposed system can be operated on Windows XP. The software required for application development is Microsoft Visual Studio .NET 2005 for the front end and Visual C#.Net as the coding language. It requires a Pentium III 700 MHz processor, a 40 GB hard disk drive, and 128 MB of RAM.

Download Localized Sensor Area Coverage with Low Communication Overhead.