Palm Vein Technology Btech Final Year Project Seminar

The growing use of today’s technology poses an increasing threat to personal data and national security, and the measures taken to protect information from outside intervention have failed to do so adequately. Fujitsu has developed a technology known as palm vein pattern authentication, which uses vascular patterns as personal identification data.

This Palm Vein Technology Btech Final Year Project Seminar explains how the technology secures authentication data: the vein pattern resides inside the body and is therefore very difficult to forge, and the method is highly accurate. It is used in various fields such as passport issuing, hospitals, banking and government offices. Business growth has been driven by measures that reduce the size of the palm vein sensor and shorten the authentication time.

On the ubiquitous networks in use today, users run the risk of their authentication information being hacked easily, since the same information can be accessed anytime and anywhere. To counter this threat, personal identification technologies such as personal identification numbers, passwords and identification cards have been used. Fujitsu has therefore developed four secure methods to solve these problems: fingerprints, faces, voice prints and palm veins. These methods are highly accurate, and contactless authentication technology is used in various financial products in public places.

Palm vein authentication technology uses a small palm vein scanner that is easy and natural to operate, fast and highly accurate. The person holds a palm a few centimeters over the scanner, and within a second it reads the person’s unique vein pattern. A picture of the veins is taken and the palm pattern is registered.
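To make the enroll-and-verify flow concrete, here is a minimal Python sketch. It is an illustration only: capture_vein_pattern() and similarity() are hypothetical placeholders for the scanner driver and the vendor's matching algorithm, and the 0.9 match threshold is an arbitrary assumption, not Fujitsu's actual method.

    # Minimal sketch of palm vein enrollment and verification.
    # capture_vein_pattern() and similarity() are hypothetical placeholders;
    # the 0.9 match threshold is an arbitrary assumption.
    templates = {}  # user_id -> registered vein pattern template

    def enroll(user_id, capture_vein_pattern):
        """Register a user's palm vein pattern as the stored template."""
        templates[user_id] = capture_vein_pattern()

    def verify(user_id, capture_vein_pattern, similarity, threshold=0.9):
        """Compare a fresh scan against the stored template."""
        stored = templates.get(user_id)
        if stored is None:
            return False
        return similarity(capture_vein_pattern(), stored) >= threshold

In a real deployment the template would typically be an encrypted feature vector rather than a raw image, and matching would run inside the sensor or on a secure server.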

Conclusion:

Palm vein pattern authentication technology was developed by Fujitsu and is already used widely in Japan. This vein technology can therefore bring further development in technology and science in the near future.

Download Palm Vein Technology Btech Final Year Project Technical Seminar.

Cyborg- Human Computer Applications

Description: The research paper Cyborg- Human Computer Applications talks about human-computer applications in which there is an active association between man and the microprocessor. The paper suggests that in the coming years there will be machines able to think more powerfully than human beings; in simple words, thinking activity is going to be overtaken by machines in the near future. Such machines would have a thinking prowess far broader than that of human beings.

The research paper suggests that cybernetics regards the body as a meat machine with connectivity similar to computer networks. Hence, to enhance the performance of human beings, it is suggested to replace an organ or organs with computerized body parts - an artificial heart, an all-seeing bionic eye. Scientists are interested in understanding the functioning and programming of the neuron. They are focusing on a structure located between the spinal cord and the higher brain centers that is believed to integrate information from different origins, such as tactile or visual, to shape the commands that control muscle movement, Mussa-Ivaldi said. The research eventually could help doctors fashion sophisticated artificial limbs for those suffering from nerve damage, he said.

The success of this kind of man-machine integration was demonstrated when Prof. Kevin Warwick had a silicon chip transponder implanted in his forearm; with it he could walk around and operate doors, heaters and computers without lifting a finger. A little later a microelectrode array with 100 electrodes was inserted into Prof. Warwick's left arm, in an attempt to make the human body send input directly to a computer. When Warwick lifted his arm, signals were sent to the computer, and he was able to control an electric wheelchair and an intelligent artificial hand using his newly acquired neural interface.

Conclusion: The research paper concludes by saying that research may unveil wonderful things like thought communication, surgical procedures taken over completely by robots, and much more.

Download Cyborg- Human Computer Applications Technical Presentation.

Honeypots and Network Security

Description: The research paper Honeypots and Network Security presents honeypots as an important breakthrough in the domain of network security. Honeypots, as the name suggests, are an exciting innovation in network security and internet services that turn the tables on black hats. The research paper notes that we live in an age where communication over the internet is the order of the day, and there is hardly any information that cannot be found online.

Honeypots let computer scientists and computer forensics experts create a fake system that looks like a vulnerable one in order to lure the hacker into a trap. Once the cyber criminal is caught, legal proceedings can be brought against him, and at the same time security systems are made more powerful and resistant to attacks of this kind. The research paper notes that while honeypots can trap a cyber criminal, they may also require a lot of additional maintenance and effort. A honeypot in itself holds data that will certainly attract the criminal's interest, so it comes down to the company's willingness to put some information at stake. A honeypot is a fake resource that can be probed, exploited or compromised, and once an attack is in progress analysts can study the pathway the criminal takes.

A high-end honeypot has a deeper underlying operation; it also carries much higher risk, since its complexity grows rapidly. A simple honeypot is easy to install and operate and can run alongside the server. The disadvantage of placing a honeypot in front of the firewall is that it cannot trap an internal culprit.
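As a rough illustration of the concept (not the specific system described in the paper), the following minimal low-interaction honeypot listens on a decoy port, presents a fake login banner, and logs every connection attempt for later analysis. The port number and banner text are arbitrary assumptions.

    # Minimal low-interaction honeypot sketch: it pretends to be a telnet-style
    # service and records who connects and what they send. The decoy port and
    # banner are illustrative assumptions only.
    import socket, datetime

    HOST, PORT = "0.0.0.0", 2323
    BANNER = b"login: "

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.sendall(BANNER)
                data = conn.recv(1024)  # whatever the intruder types
                with open("honeypot.log", "a") as log:
                    log.write(f"{datetime.datetime.now()} {addr[0]} {data!r}\n")

The log file then becomes raw material for the forensic analysis the paper describes: every probe records the attacker's address and behaviour without exposing any real service.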

Conclusion: The research paper concludes by noting that honeypots are effective in understanding the pathway adopted by the cyber criminal.

Download Honeypots and Network Security technical Paper Presentation.

Network Security Honey Traps

Description: The research paper Network Security Honey Traps describes a new innovation in the domain of network security: honey traps. The paper notes that there has been a colossal increase in the use of computers, virtual transactions, internet-based collaboration and much more. It also points out that while the world of computers has made transactions easier and faster than ever before, security risks have grown in step. Against this threatened security background, computer forensics becomes an indispensable tool for detecting and dealing with cyber criminals.

The research paper suggests developing highly effective forensic tools to deal with cyber crimes, rather than first creating an evolved technology and only later raising forensic capabilities to the required level. Computer forensics is the branch of computer science and engineering that deals specifically with techniques for the recovery, authentication and analysis of data.

What is meant by Computer Forensics: Computer forensics has two major roles to play. The primary objective of this branch is to catch cyber criminals and put them through the necessary legal procedures; the second is to come up with countermeasures against these types of crimes. Since the day computers took on an active role, so too have cyber criminals, popularly called black hats. As soon as a new computer technology comes into existence, black hats enhance their skills to suit it and trigger a crime. This perpetual contest is the concern of Computer and Network Forensics.

Honey traps are simulations of potentially vulnerable features in a network. The simulated features are not the real ones, and they trap the black hat by leading him to intrude into a false network. In this way the actions undertaken by the intruder can be carefully observed in order to update the technology with more advanced security measures.

Conclusion: The research paper ends on a note that although honey traps seem to be an effective solution to deal with cyber crime, there are still many hiccups in the domain that need attention.

Download Network Security Honey Traps Technical Student Paper Presentation PPT.

Btech Seminar Report on Holographic Memory

Description: The research paper Btech Seminar Report on Holographic Memory talks about holographic memory. The paper explains that holography is a technique that allows three-dimensional images to be recorded and played back. Unlike other three-dimensional images, a hologram produces parallax, which lets the viewer shift the view up and down as if the object were right in front of him. In holography the aim is to record the complete wave field (both amplitude and phase) as it is intercepted by the recording medium; the recording plane need not even be an image plane. The light scattered or reflected by the object is intercepted by the recording medium and recorded completely, despite the fact that the detector itself is insensitive to the phase differences among the various parts of the optical field.

In the holographic technique the interference between the object wave and the reference wave is recorded on a holographic material. The record captures the whole wave, which can be reconstructed at a later time with an appropriate light beam. To this day holography continues to provide the most accurate depiction of a 3-D image in the world; IBM and Lucent's Bell Labs are engaged in high-end research on the generation of accurate 3-D images. A hologram reproduces an image with the size, shape, brightness and contrast of the object.
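The recording step can be written down with a standard textbook formula (not taken from the paper itself). Writing the object wave as O and the reference wave as R, the recording medium responds only to intensity, so it stores the interference pattern

    I = |O + R|^2 = |O|^2 + |R|^2 + O R^* + O^* R

where the cross terms O R^* and O^* R carry the phase of the object wave. Re-illuminating the developed hologram with the reference wave R yields a term proportional to |R|^2 O, i.e. the original object wave front, which is why the object appears to stand in front of the viewer again.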

To record accurately, a hologram requires light of a single color; hence a laser is used for recording images on a hologram. When we shine light on the hologram, the information stored as the interference pattern takes the incoming light and re-creates the original optical wave front that was reflected off the object, so the eyes and brain perceive the object as being in front of us once again.

Conclusion: The benefits offered by a hologram are that the entire image can be retrieved at once, and that it facilitates the storage of almost terabytes of data in small cubic devices.

Download Btech Engineering Final Year Seminar Report on Holographic Memory

Programming Languages A History

Description: The research paper Programming Languages A History explains the history of programming languages. The first programming languages were created even before the modern computer, and programming came into existence with the idea of transforming language into codes. Herman Hollerith realized that he could encode information on punch cards when he observed that railroad conductors would encode the appearance of ticket holders on train tickets using the position of punched holes. Hollerith then proceeded to encode the 1890 census data on punch cards, which he made the same size as the boxes for holding US currency.

The first computer codes were created for specific applications. During the first decade of the twentieth century, numerical calculations were based on decimal numbers. It was realized that logic could be represented by numbers, as well as with words. Alonzo Church was able to express the lambda calculus in a formulaic way. The Turing machine was an abstraction of the operation of a tape-marking machine, for example, in use at the telephone companies. However, unlike the lambda calculus, Turing’s code does not serve well as a basis for higher-level languages — its principal use is in rigorous analyses of algorithmic complexity.

The research paper suggests that like many “firsts” in history, the first modern programming language is hard to identify. From the start, the restrictions of the hardware defined the language. Punch cards allowed 80 columns, but some of the columns had to be used for a sorting number on each card. Fortran included some keywords which were the same as English words, such as “IF”, “GOTO” (go to) and “CONTINUE”. The use of a magnetic drum for memory meant that computer programs also had to be interleaved with the rotations of the drum. Programs written in that era were therefore far more hardware-dependent than programs are today.

To some people the answer depends on how much power and human-readability is required before the status of “programming language” is granted. Jacquard looms and Charles Babbage’s Difference Engine both had simple, extremely limited languages for describing the actions that these machines should perform. One can even regard the punch holes on a player piano scroll as a limited domain-specific programming language, albeit not designed for human consumption.

Download Programming Languages A History Final Year Project Review Seminar.

History of computers Engineering Seminar Presentation

Description: The History of computers Engineering Seminar Presentation traces the history of computers. Around 1000 BC the first calculating device, the abacus, was created. In the 17th century Pascal devised a mechanical device useful for computing; it could help with additions and subtractions. In the 19th century Charles Babbage, a scientist at Cambridge, with the assistance of Lady Ada Lovelace, created a machine that could store information and perform some logical computing. Howard Aiken and Grace Hopper later developed an electrically operated machine that could calculate, store data, and read characters and special symbols; the machine, gigantic in size, was named the Harvard Mark 1.

In 1945 the first electronic general-purpose calculator, ENIAC (Electronic Numerical Integrator and Calculator), was built in the US; it weighed 33 tons, consumed 150 kW, and averaged 5,000 operations per second. In 1947 the transistor, an essential component of computers, was invented by William Shockley and John Bardeen. In 1948 the first stored-program computer, the Manchester Mark 1, was built in the UK; using valves, it could perform about 500 operations per second, had the first RAM, and filled a room the size of a small office. In 1951 an early computer game, Nim, was played on the Ferranti Nimrod computer at the Festival of Britain. In 1975 Microsoft was founded by American businessmen Bill Gates and Paul Allen; they developed DOS, which later became the dominant operating system for personal computers.

In 1981 the first portable computer, the Osborne 1, was produced. At the size and weight of a sewing machine, it was much less convenient than current portable computers. In 1985 Microsoft launched Windows for the PC; Windows is a GUI similar to the Mac's, making personal computers much easier to use. In 1990 the IBM Pentium PC was produced. It holds up to 4,000 megabytes of RAM and can perform up to 112 million instructions per second. The microprocessor chip at the heart of the computer measures 16 mm by 17 mm and contains 3.1 million transistors. It is designed using a system called VLSI (Very Large Scale Integration).

The presentation concludes by saying that every computer has four basic parts, or units: an input unit, such as the keyboard, that feeds information into the computer; a central processing unit (CPU) that performs the various tasks of the computer; an output unit, such as a monitor, that displays the results; and a memory unit that stores information and instructions.

Download History of computers Engineering Seminar Presentation

Data Warehouse Striping Technique (DWS)

Description: The research paper Data Warehouse Striping Technique (DWS) talks about Handling Big Dimensions in Data Warehouses using the striping technique. The research paper explains that the DWS technique enables the distribution of large data warehouses over a cluster of computers. The data partitioning technique splits the information and distributes it across different nodes or access points, while the dimension tables are replicated. The replication of the dimension tables imposes a constraint on the applicability of the DWS technique to data warehouses with big dimensions. The paper proposes a strategy to handle large dimensions in a distributed DWS system and evaluates the proposed strategy experimentally.

The DWS technique relies upon the typical star schema and typical data warehouse queries to optimize the way data is partitioned among the computers in the DWS system and the way queries are distributed and executed. The typical infrastructure required to implement DWS is a cluster of inexpensive computers connected via Ethernet, with a DBMS installed on each node. All the nodes have the same star schema: the dimension tables are replicated in every node, while the fact data is partitioned so that each node holds its own share of the facts. Uniform partitioning is necessary to assure optimal load balance and to facilitate the computation of confidence intervals in the (in principle rare) cases when one or more nodes in the system are unavailable.
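To make the partitioning idea concrete, here is a small sketch (my own illustration, not code from the paper): dimension rows are replicated on every node, fact rows are striped uniformly round-robin across the nodes, and an aggregate query is answered by merging per-node partial results.

    # Illustrative DWS-style partitioning: dimensions replicated on all nodes,
    # fact rows striped round-robin, aggregate queries merged from partial sums.
    NUM_NODES = 4
    nodes = [{"facts": [], "dim_product": {}} for _ in range(NUM_NODES)]

    def load_dimension(dim_rows):
        # Dimension tables are small, so every node gets a full copy.
        for node in nodes:
            node["dim_product"] = dict(dim_rows)

    def load_facts(fact_rows):
        # Uniform striping: each node ends up with roughly 1/N of the fact rows.
        for i, row in enumerate(fact_rows):
            nodes[i % NUM_NODES]["facts"].append(row)

    def total_sales(product_id):
        # Each node computes a partial sum over its stripe; results are merged.
        partials = [
            sum(r["amount"] for r in node["facts"] if r["product_id"] == product_id)
            for node in nodes
        ]
        return sum(partials)

Because the striping is uniform, the partial results are of comparable size, which is what gives near-linear speed-up and makes it possible to estimate a query answer from the surviving nodes when one node is unavailable.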

Conclusion: The research paper concludes that a new technique called the 'selective load' was developed to overcome the limitations of the DWS technique in handling data warehouses with big dimensions. The proposed technique helps manage data warehouses more effectively and efficiently while maintaining nearly linear speed-up in query execution time. Experiments performed with the TPC-H schema and queries suggest that the selective load dramatically improves the performance of a DWS system when processing queries over a data warehouse schema with potentially big dimensions.

Download Data Warehouse Striping Technique (DWS) Technical White paper Presentation.

Challenges of Grid Computing

Description: The research paper Challenges of Grid Computing talks about grid computing and its emergence in the present world. The research paper suggests that grid computing is slowly becoming the order of the day. With the internet becoming the buzzword and transactions happening over it almost incessantly, there is a strong demand for quality services. Grid computing offers both quality and speed. It also facilitates downloading of multiple files at a time without slowing down or crashing the system, and ensures that the user need not restart the system time and again.

A grid system is a way of connecting idle systems and utilizing their CPU cycles in a pool of servers. In a grid, besides servers, computers are connected via secure networks, all for the purpose of information and resource sharing. The systems connected in a grid are heterogeneous in nature and spread across the globe. Grid computing is all about using many computers to solve a single problem at the same time. It is gaining widespread usage, with users across the globe opting for it to obtain faster solutions and enhance their decision-making processes. Grid computing has a lot of benefits to offer businesses that rely heavily upon virtual interactions and transactions. There are three types of grids: the computational grid, the scavenging grid and the data grid.
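The core idea of using many computers on one problem can be shown with a tiny sketch; here a local process pool stands in for geographically distributed grid nodes, which is purely an assumption for illustration.

    # Toy illustration of the grid idea: split one problem across workers and
    # combine the partial results. A local process pool stands in for grid nodes.
    from multiprocessing import Pool

    def partial_sum(chunk):
        # The work one "node" performs on its share of the problem.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]       # split the problem four ways
        with Pool(processes=4) as pool:
            partials = pool.map(partial_sum, chunks)  # each worker solves a piece
        print(sum(partials))                          # merge the partial results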

Conclusion: The research paper concludes that grid computing offers a new standard for IT infrastructure because it makes distributed computing possible in a heterogeneous environment. The grid is a virtual platform for managing computing and data management systems across a wide network. Grids are useful in business, science and information services, and are currently most active in academic and research settings. The three types of grids discussed in the research paper are the computational grid, the scavenging grid and the data grid. Most users today understand the concept of a Web portal, where their browser provides a single interface to access a wide variety of information sources.

A grid portal provides the interface for a user to launch applications that will use the resources and services provided by the grid. From this perspective, the user sees the grid as a virtual computing resource, just as the consumer of electric power sees the wall receptacle as an interface to a virtual generator.

Download Challenges of Grid Computing Information Technology Student Seminar PPT Presentation.

Grid Construction Distributed Computing

Description: The research paper Grid Construction Distributed Computing talks about grid computing and how it is achieved. The research paper suggests that grid computing is nothing but distributed computing taken to a new evolutionary level, creating the illusion of a large, virtual, self-managing computer. In grid computing several computers are connected and share information and resources via server pools. The aim of grid computing is to provide faster, more secure and more effective internet solutions in a world where demand for the internet grows day by day. The research paper also notes that not all systems can be scaled over a grid. Grid computing enables smooth and successful customer interaction and aims at a more refined collaboration between users.

What is a data grid: Sharing in grid computing starts with sharing data from databases via the data grid, and a data grid can enhance this functioning in many ways. Data files seamlessly span many systems, which enhances usability and can improve data transfer rates. While the Resource layer of the grid architecture focuses on interactions with a single resource, the next layer contains protocols and services (and APIs and SDKs) that are not associated with any one specific resource but are global in nature and capture interactions across collections of resources.

Advantages of grid computing: High-end traditional computing requires the installation of costly hardware components; grid computing, on the other hand, facilitates cost-effective computing.

Conclusion: The research paper concludes that grid computing enables faster sharing of files over the internet. Grid computing also makes downloading possible without degrading the system by slowing it down or crashing it, and it makes it unnecessary to restart the system time and again.

Download Grid Construction Distributed Computing Bachelors of Engineering BE Seminar.