Seminar Report on Real-Time Neuroscience using Open-VIBE for Computer Science Students

Introduction to Real-Time Neuroscience using Open-VIBE Seminar Topic:

This paper discusses Open-ViBE, a 3D platform for real-time neuroscience, along with neurofeedback and brain-computer interfaces.

If the brain’s physiological activity is monitored in real time, feedback can be returned to the subject, which in turn helps him or her exercise some control over that activity. This is the basic idea behind neurofeedback (NF) and brain-computer interfaces (BCI). These methods continue to improve with advancing technology, including faster microprocessors, better graphics cards, and better digital signal processing algorithms. More meaningful features can be extracted from the continuous flow of brain activation, and the feedback in turn becomes more informative. Open-ViBE is an open-source platform based on publicly available frameworks such as OpenMASK and OpenSG.

Conclusions:

This paper reviews recent NF and BCI research, focusing on their similarities, chiefly the interaction between the user and the system. Open-ViBE is a general platform designed to build virtual brain environments from real-time neuro-imaging data. The most appreciable qualities of neurofeedback are that it requires an active role from the patient and that it is non-invasive. Neurofeedback training has become a preferred choice for children and adolescents, whose neurotransmitter balance and brain anatomy are still in formation.

For a human, the maximum reported rate is 20 binary commands per minute. This transfer rate is too low for non-clinical applications, even though it is a great achievement for locked-in syndrome patients. The common characteristic of all these systems is interactive analysis and visualization of brain data, and the notion of interactivity drives the need for computational efficiency. For parallel computation, the Open-ViBE platform takes advantage of OpenMASK, enabling the use of multiprocessor machines and PC clusters. In OpenMASK, every module is responsible for a specific computation whose results can in turn be used by several other modules.
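The reported rate can be put in information-theoretic terms with the widely used Wolpaw information-transfer-rate formula. The sketch below is an illustration added here, not part of the reviewed paper; the function name and example numbers are our own.

```python
import math

def wolpaw_itr(n_choices: int, accuracy: float, selections_per_min: float) -> float:
    """Information transfer rate in bits/minute (Wolpaw formula)."""
    p = accuracy
    bits = math.log2(n_choices)                  # information at perfect accuracy
    if 0 < p < 1:                                # penalty for imperfect accuracy
        bits += p * math.log2(p)
        bits += (1 - p) * math.log2((1 - p) / (n_choices - 1))
    return bits * selections_per_min

# A perfectly accurate binary BCI at the reported 20 selections per minute:
print(wolpaw_itr(2, 1.0, 20))    # 20.0 bits/min
# The same interface at 90% accuracy carries roughly half the information:
print(wolpaw_itr(2, 0.9, 20))
```

As the second call shows, classification errors reduce the usable rate well below the raw command rate.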


Biometrics in Security Seminar Topics for CSE Students 2012

Introduction to Biometrics in Security Seminar Topics:

This paper discusses biometrics, an emerging technology that uses human body characteristics as a permanent password. Improving technology helps computer systems record and recognize patterns, hand shapes, and other physical characteristics of a human. Biometrics gives devices the ability to verify and identify an authenticated and authorized user instantly.

Overview:

Human beings are traditionally identified or authenticated by methods such as smart cards, magnetic stripe cards, and physical keys. But these can be lost, and forgotten passwords and lost smart cards are a severe problem for any user.

To overcome the problems of these traditional methods, biometrics has emerged as a powerful identification and authentication technology.

In biometric authentication, identification is based on a part of the human body: the person himself or herself is the password. Applications that require security, access control, and user verification need to integrate the biometric technique into the corresponding application.

Biometrics secures resources effectively, with a high level of security, while providing convenience to both users and administrators. Based on the physiological or behavioral characteristics of a living person, biometric systems can automatically recognize that person’s identity.

Visible parts of the human body can be considered physiological characteristics of a person; fingerprints, palm geometry, and the retina are a few examples. What a person does constitutes his or her behavioral characteristics, which can be affected by factors such as mood, stress, and fatigue; voice-prints, signatures, and handwriting are a few examples.
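A minimal sketch of how such characteristics can drive verification: a feature vector stored at enrollment is compared with a live sample and accepted only when the two are close enough. The feature values, function names, and threshold below are hypothetical, not taken from any specific biometric system.

```python
import math

def match_score(template: list[float], sample: list[float]) -> float:
    """Euclidean distance between stored and live feature vectors; lower is closer."""
    return math.sqrt(sum((t - s) ** 2 for t, s in zip(template, sample)))

def verify(template: list[float], sample: list[float], threshold: float = 1.0) -> bool:
    """Accept the claimed identity only if the live sample is close enough."""
    return match_score(template, sample) <= threshold

enrolled = [4.2, 7.1, 3.3]     # hypothetical palm-geometry features, stored at enrollment
live     = [4.3, 7.0, 3.4]     # features captured at the access point
print(verify(enrolled, live))  # True: the sample matches within the threshold
```

Real systems differ mainly in how the features are extracted and how the acceptance threshold trades off false accepts against false rejects.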

Conclusions:

Biometrics encounters many challenges, and these need to be addressed in a planned manner. To store and exchange data, most systems use proprietary techniques. Automated fingerprint imaging systems require computer support to perform millions of comparisons.

Governments or industries may use this technology to monitor individual behavior. Even facing all these challenges, biometrics is emerging as one of the most reliable security solutions for the near future.

Seminar Report on Visual Cryptographic Steganography – Images for Computer Engineering Students

In a multimedia steganocryptic system, the message is first encrypted and the encrypted data is then hidden in an image file. Cryptography converts the message into an unreadable cipher. Both technologies protect data against intruder attacks over an insecure communication channel. To minimize the threat of intrusion, a system more secure than steganography or cryptography alone is needed. Visual cryptographic steganography combines the features of cryptography, steganography, and multimedia data hiding, and this combined technology is more secure than either steganography or cryptography on its own.

Overview:

Visual steganography is one of the most secure forms of steganography and is implemented in image files. In this multiple-cryptography approach, the data is first encrypted into a cipher and the cipher data is then hidden in a multimedia image file. The threat of an intruder accessing secret information is the most painful concern in communication media. Traditional technologies such as cryptography and steganography exist, but to address the problems these leave open, visual cryptographic steganography has emerged. Traditional cryptographic techniques are used to encrypt the data, and visual steganography algorithms are used to hide the encrypted data.
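A minimal sketch of the encrypt-then-hide idea, using a toy XOR stream cipher and least-significant-bit (LSB) hiding in raw pixel bytes. The function names and toy cipher are illustrative only; a real system would use a standard cipher such as AES and a proper image format.

```python
def xor_encrypt(message: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR each byte with a repeating key."""
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(message))

def lsb_hide(pixels: bytes, payload: bytes) -> bytearray:
    """Hide payload bits in the least-significant bit of successive pixel bytes."""
    out = bytearray(pixels)
    for i, byte in enumerate(payload):
        for bit in range(8):
            idx = i * 8 + bit
            out[idx] = (out[idx] & 0xFE) | ((byte >> (7 - bit)) & 1)
    return out

def lsb_reveal(pixels: bytes, n_bytes: int) -> bytes:
    """Read the hidden bytes back out of the pixel LSBs."""
    hidden = bytearray()
    for i in range(n_bytes):
        byte = 0
        for bit in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit] & 1)
        hidden.append(byte)
    return bytes(hidden)

cipher = xor_encrypt(b"hi", b"key")     # step 1: encrypt the message
cover = bytes(range(16))                # stand-in for raw image pixel data
stego = lsb_hide(cover, cipher)         # step 2: hide the cipher in the image
assert xor_encrypt(lsb_reveal(stego, 2), b"key") == b"hi"
```

Because only the lowest bit of each pixel byte changes, the stego image is visually indistinguishable from the cover, yet an intruder who finds the hidden bits still faces the encryption layer.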

Visual cryptography uses a special encryption technique to hide information in images; the result can be decrypted by human vision when the correct key image is used. This technique uses two transparent images: one contains random pixels and the other contains the secret information. It is not possible to recover the secret from either image alone; only the two images together reveal it.

To implement visual cryptography, the two layers are printed onto transparent sheets. If visual cryptography is used for secure communication, the sender must distribute one or more random layers to the receiver in advance. The system is unbreakable unless both layers fall into the wrong hands: if only one layer is intercepted, the encrypted information cannot be retrieved by any means.
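The two-layer idea can be sketched as a simplified XOR model: one share is pure random noise and the other is derived so that combining the two reproduces the secret, while either share alone is uniformly random. (The printed-transparency scheme instead stacks expanded pixel patterns, but the underlying secret-sharing principle is the same; this sketch is our illustration, not the report's algorithm.)

```python
import secrets

def make_shares(secret_bits: list[int]) -> tuple[list[int], list[int]]:
    """Split a black/white (1/0) pixel pattern into two shares: the first is
    random noise, the second is chosen so XOR-ing the shares yields the secret."""
    share1 = [secrets.randbelow(2) for _ in secret_bits]
    share2 = [s ^ r for s, r in zip(secret_bits, share1)]
    return share1, share2

secret = [1, 0, 1, 1, 0, 0, 1, 0]            # a tiny "image" of black/white pixels
layer1, layer2 = make_shares(secret)
recovered = [a ^ b for a, b in zip(layer1, layer2)]
assert recovered == secret                   # both layers together reveal the image
```

Since `layer1` is generated independently of the secret, intercepting it (or `layer2`, which is the secret masked by that noise) gives an attacker no information at all.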


Scalable Location-Aware Computing Best Seminar Topic for CSE Students

Introduction to Scalable Location-Aware Computing Seminar Topic:

This paper discusses location-aware computing, an emerging technology, and Rover, a system that enables location-based services alongside traditional time-aware, user-aware, and device-aware services. This kind of computing automatically tailors information and services to the user’s current location.

Overview: 

Much research has been carried out in the past decade on location-sensing technologies, location-aware application support, and location-based applications, as location is a crucial component of context. Rover servers, implemented in an “action-based” concurrent software architecture, help the system scale to very large client sets. The design and implementation of a multi-Rover system is still in its infancy, but the Rover system could have a huge impact on the next generation of applications, devices, and users.

The location service of the Rover system can track a user’s location either through the user manually entering the current position or through automated location-determination technology. Rover tailors application-level information to different wireless links based on the link-layer technology. The Rover system scales to large client populations through fine-resolution, application-specific scheduling of resources in the network and at the servers. A user with a personal device can gain access to Rover by registering the device, after which the Rover system tracks it. Location-aware computing is emerging as an important part of everyday life.
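A minimal sketch of the kind of location-based lookup such a service performs: given the user's tracked coordinates, return the points of interest within a given radius. The coordinates, names, and radius below are made up for illustration; Rover's actual resource scheduling is far more involved.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (latitude, longitude) points."""
    r = 6371000.0                                  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby(user, points_of_interest, radius_m=200):
    """Names of the points of interest within radius_m of the user's position."""
    return [name for name, (lat, lon) in points_of_interest.items()
            if haversine_m(user[0], user[1], lat, lon) <= radius_m]

pois = {"food court": (38.9896, -76.9378), "ticket booth": (38.9930, -76.9450)}
print(nearby((38.9897, -76.9380), pois))   # ['food court']
```

Tailoring then amounts to pushing content for the returned points of interest to the registered device.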

Conclusions: 

The Rover system is deployable both indoors and outdoors. Its main goal is to provide a completely integrated system that gives clients a seamless location-aware computing experience. Research is still being carried out on a wide range of client devices with limited capabilities and on Bluetooth-based LAN technology. Rover technology enhances the user experience in a large number of places: amusement and theme parks, shopping malls, game fields, offices, and business centers. The benefits of the Rover system are greatest in large user-population environments, as it is designed to scale to large populations.


CSE Seminar Report on Study of Viruses and Worms

This paper discusses various computer virus and worm propagation techniques, with the help of the Slammer and Blaster worm case studies. The polymorphic worm is an emerging technique that poses a huge threat to the Internet community. Some computer worms are written so that they affect only a particular region and attack at random times. The big threat these worms create for networks and websites is the Distributed Denial of Service (DDoS) attack.

Overview:

The computer virus is the most high-profile threat to information integrity. Attackers over the network attack computers to exhaust important resources, damage data, and create havoc. Computer viruses have gained visibility with the growth of global computing, and viruses and worms are increasing rapidly day by day, so there is a need to create awareness among people about their attacks. Frequently, viruses require a host, and their goal is to infect other files in order to live longer. A virus might rapidly infect every file, or slowly infect the documents, on a computer. A computer worm is a program designed to copy itself from one computer to another using some network medium such as e-mail or TCP/IP. Unlike a virus, a worm is interested in infecting machines over the network.

The prototypical worm infects a target system only once and then tries to spread to other machines over the network. Mailers and mass-mailer worms, octopuses, and rabbits are a few of the categories of computer worms. A Trojan horse pretends to be a useful program but performs some unwanted action. Apart from viruses, worms, and Trojan horses, other types of malicious programs include logic bombs, germs, and exploits.
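Worm propagation of the Slammer kind can be illustrated with a toy random-scanning simulation: each infected host probes random addresses every time step, producing the characteristic slow start followed by rapid saturation. All parameters below are invented for illustration and are far smaller than real address spaces.

```python
import random

def simulate_worm(n_hosts=1000, scan_rate=5, steps=30, seed=1):
    """Random-scanning worm in a flat address space: every infected host probes
    scan_rate random addresses per time step, and every vulnerable host it hits
    becomes infected. Returns the infection count after each step."""
    random.seed(seed)
    infected = {0}                         # patient zero
    history = [len(infected)]
    for _ in range(steps):
        newly = set()
        for _host in infected:
            for _probe in range(scan_rate):
                target = random.randrange(n_hosts)
                if target not in infected:
                    newly.add(target)
        infected |= newly
        history.append(len(infected))
    return history

curve = simulate_worm()
print(curve[0], "->", curve[-1])   # slow start, then near-total infection
```

The exponential early phase of this curve is why fast-scanning worms like Slammer saturated the vulnerable population in minutes.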

Conclusions:

This paper discussed various computer viruses, worms, and malicious-code environments. Attackers can gain control of Internet hosts and spread their malware across the network within minutes, putting the security of the Internet at risk.


Smart Note Taker CSE Seminar Topic with Report for B.tech Final Year Students

Introduction to Smart Note Taker CSE Seminar Topic:

This paper discusses a most useful application, the Smart Note Taker. The data written with it is stored on the pen’s memory chip, allowing the user to read it in a digital medium after the job is done. The product helps users take notes easily and quickly, and it is useful to instructors in presentations: when the instructor draws figures or text, that data can be processed and sent to a server computer, which then broadcasts it through the network to all the computers in the presentation room. The Smart Note Taker is a very simple but powerful product.

Overview:

The Smart Note Taker can sense 3D shapes and motions as the user draws. The sensed shapes and motions are transferred to a memory chip and then displayed on the corresponding display devices or broadcast to other computers over the network. The product can show notes taken earlier in an application program, such as a word document or an image file, on the computer. Figures drawn in the air are recognized with the help of software, and the desired figure or text is displayed in the word document.

If the application program is a paint program, the most similar shape is drawn on the display screen. The primary market for this product is educational services and schools: the Smart Note Taker is a great solution for students to take complete notes from the teacher without much effort or loss of information, and it helps transfer the teacher’s notes on the board directly into software.

Conclusions:

The Smart Note Taker is a good and helpful product for blind people who think and write freely. It provides a better solution than other existing technologies to the problems encountered by blind students in their classrooms.


3G versus Wi-Fi technologies Easy Seminar Topic for CSE Students

This paper compares two powerful wireless technologies, 3G and Wi-Fi. Wi-Fi is one of the popular WLAN technologies, while 3G was developed especially for mobile providers. As a business and service model, 3G is more developed than Wi-Fi; Wi-Fi is more developed with respect to WLAN equipment and upstream supplier markets.

Overview:

For a mobile provider, 3G and Wi-Fi complement each other, and the interaction between the two technologies favors a heterogeneous future. With respect to service deployment, 3G is still limited, whereas the installed base of Wi-Fi network equipment is growing significantly. The main difference between the technologies lies in their embedded support for voice services: 3G is the upgrade of the wireless voice telephony networks, while Wi-Fi offers a lower-layer data communication service. The main advantage of 3G over Wi-Fi is its better support for secure or private communications.

The formal standards picture for 3G is clearer than for any WLAN technology. 3G consists of WCDMA, a family of internationally sanctioned standards, whereas Wi-Fi belongs to the family of continuously evolving 802.11 wireless Ethernet standards. 3G uses licensed spectrum and Wi-Fi uses shared unlicensed spectrum: licensed spectrum offers protection from interference from other service providers, while unlicensed spectrum forces users to accept such interference.

Conclusions:

The main goal of the comparison between 3G and Wi-Fi is to explore the future of wireless access and the possibility of success through interaction between these technologies. Both have succeeded in the current market. Research is being carried out to integrate 3G mobile providers with Wi-Fi technology and to increase the chance of success in mass-market deployment. The cost of establishing Wi-Fi networks is low compared to that of 3G providers.


I7 Processors Seminar Topic for CSE with Report Free Download

Introduction to I7 processors Seminar Topic:

This paper discusses the latest and most advanced processor, the Intel Core i7. This processor has four cores, each performing tasks simultaneously, so processing is very fast. QuickPath architecture, Turbo Boost technology, and Hyper-Threading are the new technologies included in the i7.

Overview:

Turbo Boost technology increases CPU performance by raising the frequency of the cores; how far the frequency can be raised depends on the number of active cores. Hyper-Threading technology presents each physical core on the processor as two logical cores to the operating system, increasing overall system performance by improving multitasking capability and doubling the execution resources available to the operating system.
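The dependence of Turbo Boost on the number of active cores can be sketched as a simple lookup: fewer active cores allow more frequency "bins" above the base clock. The bin table below is hypothetical, not Intel's actual per-model specification.

```python
def turbo_frequency(base_ghz: float, active_cores: int,
                    bins_by_cores: dict[int, int], bin_ghz: float = 0.133) -> float:
    """Illustrative Turbo Boost model: look up how many frequency bins the
    current number of active cores allows, then add them to the base clock."""
    bins = bins_by_cores.get(active_cores, 0)
    return base_ghz + bins * bin_ghz

# Hypothetical table: 2 bins with one active core, 1 with two, none with 3-4.
table = {1: 2, 2: 1, 3: 0, 4: 0}
print(turbo_frequency(2.66, 1, table))   # single active core boosts to ~2.93 GHz
print(turbo_frequency(2.66, 4, table))   # all cores active: base clock only
```

The real mechanism additionally checks that the package stays within its temperature and power limits before stepping up a bin.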

The i7 series consists of the i7 920, the i7 940, and the i7 Extreme Edition 965. Among these three, the Extreme Edition 965 commands the highest price in the current market. At its launch, the Intel Core i7 was billed as the fastest processor on the planet. The Intel Core i7 family contains many Intel desktop x86-64 processors, and i7 processors were the first released on the Nehalem micro-architecture. Quad-core Intel Core i7 mobile processors are available in notebooks and help save power. Core i7 processors are the high-end line compared to the i3 and i5 series.

Conclusions:

The Intel i7 series has many outstanding features. Turbo Boost improves clock speed significantly within the temperature and power limits, increasing efficiency without requiring any extra effort or power. In the i7 series, each processor has its own dedicated memory. The QuickPath Architecture (QPA) is designed to reduce execution time and latency.


NIT IT Final Year Seminar Report on Semiconductors and Devices for IT Students

Introduction to Semiconductors and Devices Seminar Topic:

This paper discusses the atomic structure of semiconductors and its importance, as well as the concepts of energy bands and energy band gaps.

Overview:

Based on their ability to conduct electricity, materials are classified into conductors, semiconductors, and insulators; semiconductor conductivity lies between that of insulators and conductors. Understanding semiconductors requires applying modern physics to solid materials. Devices made from semiconductors form the foundation of electronics. A semiconductor’s resistivity can be changed by an external electric field, and in the electronic structure of the material, current can be carried either by the flow of positively charged “holes” or by the flow of electrons.

Semiconductors are classified into intrinsic and extrinsic types. A semiconductor in its natural form is termed intrinsic. The process of changing an intrinsic semiconductor into an extrinsic one is known as doping, in which impurity atoms are introduced into the intrinsic semiconductor.

Impurity atoms change the semiconductor’s electron and hole concentrations and act as either donors or acceptors to the intrinsic semiconductor; based on their effect, they are classified as donor or acceptor atoms. Donor impurity atoms have more valence electrons, and acceptor impurity atoms fewer, than the atoms they replace in the intrinsic semiconductor lattice. A semiconductor that has been doped with a doping agent, and thus has electrical properties different from the intrinsic material, is termed an extrinsic semiconductor. N-type and P-type are the two types of extrinsic semiconductors.
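The effect of donor doping on carrier concentrations can be worked out from the mass-action law n·p = n_i², assuming full donor ionization. The doping level below is an example value; n_i ≈ 1.5×10¹⁰ cm⁻³ is a common textbook figure for silicon at room temperature.

```python
def carrier_concentrations(n_i: float, donor_density: float) -> tuple[float, float]:
    """Electron and hole densities (cm^-3) in an n-type semiconductor, assuming
    full donor ionization and the mass-action law n * p = n_i**2."""
    n = donor_density            # majority carriers: roughly one per donor atom
    p = n_i ** 2 / n             # minority carriers from the mass-action law
    return n, p

# Silicon at room temperature (n_i ~ 1.5e10 cm^-3) doped with 1e16 donors/cm^3:
n, p = carrier_concentrations(1.5e10, 1e16)
print(f"n = {n:.2e} cm^-3, p = {p:.2e} cm^-3")   # holes drop to ~2.25e4 cm^-3
```

The calculation shows why doping dominates conduction: a modest donor density raises the electron concentration by six orders of magnitude while suppressing holes far below the intrinsic level.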

Conclusions:

Most electronic devices use semiconductors. Electronic devices such as computers, TVs, radios, and DVD players work on DC power rather than AC, so diodes are needed to convert AC to DC. Nowadays, semiconductors play a vital role in all electronic devices.


Self- Managing Computing Final Year Seminar Topic for IT Students

This paper discusses self-managed systems in IT and their advantages. It also describes autonomic computing technologies: how an autonomic system depends on them, how such systems analyze an error condition, and how they provide corrective actions when an error is encountered.

Overview:

Autonomic computing is the self-managing computing model, named after the autonomic nervous system of the human body. IBM started the autonomic computing initiative; its main goal is to design and develop self-managing systems that overcome the complexity of computing-system management. Autonomic computing identifies unpredictable changes and implements corrective actions when an error occurs: self-managing computing lets computing systems and infrastructure work smarter and take care of managing themselves.

Self-managing systems have the ability to adapt to changes in business policies and the surrounding business environment, and they can perform managing activities in the IT environment. Because most mundane operations can be managed by autonomic computing, IT professionals are free to focus on other high-value tasks. Self-managing computing improves system-management efficiency, can balance the tasks managed by an IT professional against those managed by the system in an e-business infrastructure, and helped the evolution of e-business. The main goal of autonomic computing is to create systems capable of high-level functioning while keeping the underlying complexity invisible to the user. But many challenges exist in developing an autonomic computing system; the complexity of modern networked computer systems is one of the limiting factors in its expansion.
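The control pattern at the heart of such systems is a monitor-analyze-plan-execute loop, sketched below as a generic illustration (not IBM's implementation; the function names and the toy queue metric are invented).

```python
def mape_loop(read_metric, threshold, corrective_action, cycles):
    """Minimal monitor-analyze-plan-execute loop: read a metric, compare it
    against a policy threshold, and apply a corrective action when exceeded."""
    log = []
    for _ in range(cycles):
        value = read_metric()                    # Monitor the managed resource
        if value > threshold:                    # Analyze against the policy
            plan = corrective_action(value)      # Plan and Execute a correction
            log.append(("corrected", value, plan))
        else:
            log.append(("ok", value, None))
    return log

# Toy scenario: keep a work-queue length below 8 by shedding excess load.
samples = iter([3, 9, 12, 4])
log = mape_loop(lambda: next(samples), threshold=8,
                corrective_action=lambda v: f"shed {v - 8} items", cycles=4)
print(log)
```

In a real autonomic system, the policy and the knowledge used for analysis would themselves be configurable, so administrators state goals rather than step-by-step procedures.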

Conclusions:       

The main objective of IBM’s autonomic computing initiative is to make IT systems self-managing. Self-managed systems can easily adapt to changing environments and react efficiently to all conditions. Quick response helps reduce application downtime and in turn prevents catastrophic loss of revenue.
