Volume 1 Issue 3- April 2011


S.No. Title
1. Digital Divide in Democratic India
Sonia, Sandeeop Kumar

Emerging information technologies are used in all departments of government. These technologies partition citizens into two broad categories: skilled citizens, who readily use the latest technology, and rural citizens, who are often unable to use it. This paper discusses how e-government can remove this digital divide between skilled and unskilled citizens, and how information technology can be put to use in a variety of settings. Government projects that aim to keep the digital divide shrinking are also discussed.

Keywords— digital government, e-governance, ICT.
Full Text PDF
2. Haematology Expert System Using Rule-Based Mechanism
Surendra Prasad Babu, L. Sreedhar, K. Rammurthy

Expert systems are popular knowledge-based systems developed using the techniques of Artificial Intelligence. They exhibit the behaviour of a human expert in a specified field, which involves problem-solving and decision-making procedures. Both the knowledge and the logic are collected from the experience of a specialist in that area. Haematology is the study of human blood and its disorders. The present paper deals with the design and development of an 'Expert System on Haematology Using a Rule-Based Mechanism'. The knowledge base consists of information about human blood and its disorders, collected from doctors (the domain experts). The system contains two main modules, an 'Information System for Haematology' and an 'Expert System for Haematology'. The information system helps people learn about the different aspects of haematology. The advisory system helps patients obtain advice about the diseases affecting them due to blood disorders; they submit the symptoms they observe in the form of queries. The system is developed using Java Server Pages (JSP) as the front end and a MySQL database as the back end, in such a way that all activities are carried out in a user-friendly manner.
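
The backward-chaining, rule-based inference the abstract describes can be sketched in a few lines. The rules and symptoms below are invented for illustration; they are not medical knowledge from the paper's actual knowledge base, and the paper's real system is built on JSP and MySQL rather than standalone Python.

```python
# Minimal backward-chaining rule engine, sketching the kind of inference a
# haematology expert system performs. Rules and symptoms are illustrative
# only, not taken from the paper's knowledge base.

# Each rule: (conclusion, list of antecedent conditions).
RULES = [
    ("possible_anaemia", ["low_haemoglobin", "fatigue"]),
    ("low_haemoglobin", ["pale_skin", "dizziness"]),
]

def backward_chain(goal, facts, rules):
    """Return True if `goal` is a known fact or derivable from the rules."""
    if goal in facts:
        return True
    for conclusion, antecedents in rules:
        if conclusion == goal and all(
            backward_chain(a, facts, rules) for a in antecedents
        ):
            return True
    return False

# Patient-reported symptoms act as the fact base.
symptoms = {"pale_skin", "dizziness", "fatigue"}
print(backward_chain("possible_anaemia", symptoms, RULES))  # True
```

Starting from the goal and recursively proving its antecedents is what distinguishes backward chaining (named in the keywords) from forward chaining, which would fire rules from the symptoms upward.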

Keywords: Expert System, Haematology, Blood and its Disorders, Rule-Based Mechanism, Backward Chaining, JSP, MySQL
Full Text PDF
3. Clustering Gene Expression Datasets by Combinatorial Fusion Analysis
Karimulla.SK, Srinivasulu.P, Dr. Srinivasa Rao.V, Mastanvali Shaik

Genomic data are now available in large volumes, creating the need for new data-analysis techniques and tools. Clustering gene-expression datasets is a combinatorial optimization problem. By combining the concept of harmony search from combinatorial optimization theory with the genomic halving technique from game theory, a new algorithm is proposed for clustering gene-expression datasets through a Combinatorial Fusion Analysis mechanism. Metaheuristic algorithms are among the best optimization techniques for finding optimal or near-optimal solutions to combinatorial optimization problems whose search space is very large. The inter-cluster gene-expression datasets are optimized by harmony search to overcome the problem of local optima; dynamic memory improvisation handles the initial cluster assignment and the cluster size; and redundancy in the genomic data is removed by applying whole-genome duplication events in molecular evolution through the genomic halving technique. In this paper, a new algorithm and a framework are proposed to analyze gene expressions in various situations with the help of graph theory and optimization theory.
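
To make the harmony-search component concrete, here is a toy run of harmony search fitting cluster centroids to one-dimensional "expression" values. The data, the parameter values (HMS, HMCR, PAR) and the use of 1-D values are all illustrative choices, not the paper's actual setup.

```python
import random

# Toy harmony-search clustering of 1-D expression values into K clusters.
# All data and parameters below are illustrative, not from the paper.
random.seed(1)

data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]   # two obvious groups
K = 2          # number of clusters
HMS = 6        # harmony memory size
HMCR = 0.9     # harmony memory considering rate
PAR = 0.3      # pitch adjusting rate

def sse(centroids):
    """Within-cluster sum of squared errors for the given centroids."""
    return sum(min((x - c) ** 2 for c in centroids) for x in data)

# Initialise harmony memory with random centroid vectors.
memory = [[random.uniform(min(data), max(data)) for _ in range(K)]
          for _ in range(HMS)]
initial_best = min(sse(m) for m in memory)

for _ in range(500):
    # Improvise a new harmony: reuse from memory, pitch-adjust, or randomise.
    new = []
    for k in range(K):
        if random.random() < HMCR:
            c = random.choice(memory)[k]
            if random.random() < PAR:
                c += random.uniform(-0.5, 0.5)
        else:
            c = random.uniform(min(data), max(data))
        new.append(c)
    worst = max(memory, key=sse)
    if sse(new) < sse(worst):          # replace worst harmony if improved
        memory[memory.index(worst)] = new

best = min(memory, key=sse)
print(sorted(best), sse(best))
```

Because a new harmony only ever replaces the worst member of memory when it improves on it, the best solution found never degrades, which is the property the abstract relies on for escaping local optima.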

Keywords— Data Mining, Combinatorial Optimization Problem, Genomic Halving Technique, Clustering Algorithms, Metaheuristics, Information Retrieval.

Full Text PDF
4. The Queueing Theory in Cloud Computing to Reduce the Waiting Time
T. Sai Sowjanya, D. Praveen, K. Satish, A. Rahiman

Cloud computing is an emerging technology of business computing and a growing development trend. Entry into the cloud is generally in the form of a queue, so each user must wait until the current user has been served. In this system, each Cloud Computing User (CCU) requests resources from a Cloud Computing Service Provider (CCSP); if the CCU finds the server busy, the user must wait until the current user completes the job, which leads to longer queues and increased waiting time. To solve this problem, it is the task of the CCSPs to serve users with less waiting time; otherwise there is a chance that users will leave the queue. CCSPs can use multiple servers to reduce queue length and waiting time. In this paper, we show how the M/M/S queuing model is used with multiple servers to reduce the mean queue length and waiting time.
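
The effect the abstract describes can be checked directly with the standard M/M/s (Erlang C) formulas for mean queue length and mean waiting time. The arrival and service rates below are illustrative, not figures from the paper.

```python
from math import factorial

# Mean queue length (Lq) and mean waiting time (Wq) for an M/M/s queue,
# using the standard Erlang C result. lam = arrival rate, mu = service
# rate per server, s = number of servers; the values used are illustrative.

def mms_metrics(lam, mu, s):
    rho = lam / (s * mu)               # server utilisation
    assert rho < 1, "queue is unstable"
    a = lam / mu                       # offered load in Erlangs
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(s))
                + a**s / (factorial(s) * (1 - rho)))
    lq = p0 * a**s * rho / (factorial(s) * (1 - rho) ** 2)  # mean queue length
    wq = lq / lam                      # mean waiting time, by Little's law
    return lq, wq

# Adding a server sharply cuts the mean queue length and waiting time.
print(mms_metrics(4.0, 3.0, 2))   # two servers
print(mms_metrics(4.0, 3.0, 3))   # three servers
```

With s = 1 the formulas collapse to the familiar M/M/1 results, which provides a quick sanity check of the implementation.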

Keywords: cloud computing, queuing theory, waiting time, queue length
Full Text PDF
5. A Hybrid Approach to Face Recognition under Varying Illumination
Sarala Ramkumar, Silambarasan Kaliamoorthi

Face recognition algorithms are mainly divided into holistic and feature-based approaches. Principal Component Analysis (PCA) and Fisher Discriminant Analysis (FDA) are the main holistic approaches, and they generally outperform feature-based ones. The challenging factors in face recognition are facial expressions, illumination variations and face orientations. Existing face recognition algorithms work only up to a point, and only under well-controlled laboratory setups; they fail under varying illumination. Face recognition algorithms are commonly used for authentication and authorization, and since they serve various security purposes in computer applications, the reliability of the whole system is essential. To avoid the security weaknesses of existing algorithms and to outperform them computationally, a better algorithm is needed. Hence, a hybrid approach to human face recognition is worked out to overcome these limitations. Face recognition consists of a training phase and a testing phase. In the training phase, the most prominent and effective features are extracted from the database images; in the testing phase, a probe image is selected as input and its prominent features are extracted and compared with those of the database images for matches. The security of the algorithm relies purely on the method of extracting the feature vectors from the digital image. The proposed hybrid algorithm classifies the images of individual persons into separate classes. The technique is implemented on the YALE and ORL face databases, whose images were taken at different postures and under varying lighting and facial expressions. The hybrid algorithm attains a 96% recognition rate under varying postures and illumination conditions, and it outperforms the standard holistic algorithms.
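
Whatever the feature extractor (PCA, FDA, or the paper's hybrid), the testing phase reduces to comparing feature vectors, and the L2 norm named in the keywords suggests nearest-neighbour matching. The sketch below shows only that matching step; the three-element "feature vectors" and labels are made up for illustration.

```python
from math import sqrt

# Nearest-neighbour matching under the L2 norm: once training images have
# been reduced to feature vectors (by PCA, FDA, or a hybrid), a probe image
# is assigned to the class of its closest training vector. The vectors and
# labels below are invented for illustration.

def l2(u, v):
    """Euclidean (L2) distance between two feature vectors."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def classify(probe, gallery):
    """gallery: list of (label, feature_vector) pairs."""
    return min(gallery, key=lambda item: l2(probe, item[1]))[0]

gallery = [
    ("person_A", [0.9, 0.1, 0.3]),
    ("person_B", [0.2, 0.8, 0.5]),
]
print(classify([0.85, 0.15, 0.25], gallery))  # person_A
```

In a real system the gallery vectors would be the projections of database images into the learned subspace, and each person would contribute several vectors spanning different poses and lighting conditions.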

Keywords— PCA (Principal Component Analysis), FDA (Fisher Discriminant Analysis), L2 norm, Hybrid approach.
Full Text PDF
6. Advanced Encryption Standard
Nikita Kangude, Priyesh Wani, Sanil Raut

Maintaining privacy in our personal communications is something everyone desires, and encryption is a means to achieve that privacy; indeed, it was invented for this very purpose. As more and more information is stored on or communicated via computers, the need to ensure that this information is invulnerable to snooping and tampering becomes more pressing. Many standards are available to encrypt data, including the Data Encryption Standard (DES), Triple DES (3DES) and the Advanced Encryption Standard (AES), of which AES is recognized as the industry standard. AES provides strong security, a simple design and better performance, and it is used by many companies to build commercially available encryption products. This paper is an overview of the AES algorithm and its necessity in today's world.
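
As a structural illustration of the algorithm the paper surveys, the sketch below implements two of the four AES round steps, ShiftRows and AddRoundKey, on a 4x4 byte state. SubBytes and MixColumns are only indicated in comments, since they require the full Rijndael S-box and GF(2^8) arithmetic; the state and key values are arbitrary, and this is not a usable cipher.

```python
# Structural sketch of one AES round on a 4x4 byte state. ShiftRows and
# AddRoundKey follow the AES definitions; SubBytes and MixColumns are
# omitted for brevity. NOT a usable cipher -- illustration only.

def shift_rows(state):
    """Rotate row r of the 4x4 state left by r positions (AES ShiftRows)."""
    return [row[r:] + row[:r] for r, row in enumerate(state)]

def add_round_key(state, round_key):
    """XOR the state bytes with the round key (AES AddRoundKey)."""
    return [[b ^ k for b, k in zip(srow, krow)]
            for srow, krow in zip(state, round_key)]

state = [[r * 4 + c for c in range(4)] for r in range(4)]  # arbitrary bytes
key = [[0xFF] * 4 for _ in range(4)]                        # arbitrary key

state = shift_rows(state)          # a full round applies SubBytes first
state = add_round_key(state, key)  # ...and MixColumns before this step
print(state)
```

Note that AddRoundKey is its own inverse (XOR twice with the same key restores the state), which is one reason the AES round structure decrypts so cleanly.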

Keywords— AES, Encryption, DES, FIPS, Rijndael.
Full Text PDF
7. Digital Pathology: An Electronic Environment for Performing Pathologic Analyses from Images
Samir Kumar Bandyopadhyay

Digital pathology is slowly gaining acceptance in both the clinical and research markets, due in part to the wide spectrum of whole-slide scanning systems on the market today. However, whole-slide scanning on its own is not sufficient. For digital pathology to be fully embraced by both clinical and research pathologists, image acquisition must be bundled with comprehensive image analysis applications and image management systems to provide a total digital pathology solution. Digital pathology is the product of a series of technologic innovations driven by a number of companies, as well as by investigators who have harnessed the technology to enhance their research and clinical practice. One of the most familiar changes is the introduction of digital cameras to capture still images, replacing film as the preferred medium for photomicroscopy. Hardly noticed now, this change introduced many pathologists to the benefit of capturing fields of interest in a digital format, and this relatively modest shift has afforded us the ability to use imaging information in new and innovative ways in clinical, educational and research endeavours. The use of digital pathology tools is essential to adapt and lead in the rapidly changing environment of 21st-century neuropathology. Our study of the current state of digital pathology suggests that the nature and pace of technologic change within pathology are such that digital pathology technology and applications will take a leading role in diagnostics and research.

Keywords: Tissue microarrays (TMAs), Brain Pathology, Computer Aided Diagnosis

Full Text PDF
8. Performance Enhancement of K-Best MIMO Decoder
R. Ramya, V. Tamizhamuthu

This work focuses on a modified K-Best algorithm for multiple-input multiple-output (MIMO) communication systems. In MIMO systems, the maximum-likelihood (ML) decoder is known as the optimum scheme in the sense of minimizing the bit error rate (BER), but its computational complexity grows exponentially, particularly in high-dimensional MIMO systems. The sphere decoding algorithm and the related K-Best decoding algorithm have been adopted because they approach ML decoding with significantly reduced complexity. This work also uses a special type of code, the gold code, which boosts performance further, and it combines the radius-reduction benefit commonly associated with sphere decoding with the benefits of the K-Best approach. The modified algorithm needs a much smaller value of K; it retains the advantages of the branch-pruning algorithm and adaptively updates the pruning threshold while achieving near-optimum performance. Compared with the standard K-Best decoder, it examines as few points as possible. The decoder efficiently supports a 4×4 64-QAM system.
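
The breadth-first search at the heart of K-Best decoding can be sketched for a small case. The example below detects a 2x2 real-valued system with a BPSK constellation after QR decomposition (y = R s + n, R upper triangular); the channel matrix, constellation and K are illustrative stand-ins for the 4×4 64-QAM setting the paper targets, and the paper's radius-reduction and adaptive-pruning refinements are not shown.

```python
# Minimal K-Best detection for a 2x2 real-valued MIMO system after QR
# decomposition (y = R s + n, R upper triangular). The BPSK constellation,
# channel matrix and K below are illustrative only.

CONSTELLATION = [-1.0, 1.0]   # BPSK for brevity
K = 2                         # number of survivors kept per tree level

def k_best(y, R):
    n = len(y)
    # Each candidate: (partial symbol vector, accumulated metric).
    survivors = [([], 0.0)]
    for level in range(n - 1, -1, -1):   # detect from the last layer upward
        expanded = []
        for tail, metric in survivors:
            for s in CONSTELLATION:
                cand = [s] + tail        # cand[j] is the symbol of layer level+j
                est = sum(R[level][level + j] * cand[j]
                          for j in range(len(cand)))
                expanded.append((cand, metric + (y[level] - est) ** 2))
        survivors = sorted(expanded, key=lambda t: t[1])[:K]  # keep K best
    return survivors[0][0]

R = [[2.0, 0.5],
     [0.0, 1.5]]
tx = [1.0, -1.0]
y = [R[0][0] * tx[0] + R[0][1] * tx[1], R[1][1] * tx[1]]  # noiseless receive
print(k_best(y, R))  # recovers [1.0, -1.0]
```

Keeping only K partial candidates per level is what bounds the complexity; the ML decoder would instead enumerate all |constellation|^n leaves.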

Keywords: MIMO, sphere decoder, K-Best, gold code

Full Text PDF
9. A Comparative Study of Some Ingenious Sorting Algorithms on Modern Architectures
D. Abhyankar, M. Ingle

Heapsort, Quicksort and Mergesort are three ingenious sorting algorithms that deserve special attention. In the past, these algorithms were studied in detail, but the studies were carried out on old machines and relied on cache simulations. Over the years, computer architecture has gone through radical changes, so a study valid on old architectures may not be valid on new ones. The central idea of this paper is a comparative study of Heapsort, Quicksort and Mergesort on modern machines with the help of the latest performance analyzers; such a study is useful for choosing among the three on current hardware. Quicksort has been the usual choice for sorting, but some researchers suggest that accelerated Heapsort, a variant of Heapsort, is competitive enough to become a method of choice for the sorting problem [6]. This paper examines that claim.
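
A small-scale version of such a comparison can be run directly: time the three algorithms on the same random input. Implementing them in Python and the input size chosen are illustrative simplifications; the paper's study uses compiled code and hardware performance analyzers.

```python
import heapq
import random
import time

# Time Heapsort, Quicksort and Mergesort on the same random input.
# Pure-Python implementations; input size is an illustrative choice.

def heapsort(a):
    h = list(a)
    heapq.heapify(h)                 # build a binary min-heap in O(n)
    return [heapq.heappop(h) for _ in range(len(h))]

def quicksort(a):
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]
    return (quicksort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quicksort([x for x in a if x > pivot]))

def mergesort(a):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the sorted halves
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

random.seed(0)
data = [random.random() for _ in range(20000)]
for fn in (heapsort, quicksort, mergesort):
    t0 = time.perf_counter()
    fn(data)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.3f}s")
```

Wall-clock timings like these reflect cache behaviour and branch prediction only coarsely, which is exactly why the paper turns to dedicated performance analyzers for its conclusions.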

Full Text PDF
10. A Method for Face Segmentation, Facial Feature Extraction and Tracking

Samir K. Bandyopadhyay

An important topic in face recognition, as well as in video coding and multi-modal human-machine interfaces, is the automatic localization of faces or head-and-shoulder regions in visual scenes. The algorithms should therefore be computationally efficient and robust against distortions such as varying lighting conditions. This paper describes a method for segmenting frontal head-and-shoulder views of persons from grey-level images. The segmentation is done by oriented template correlation, a matching method that depends only on edge information, especially the orientation of the edges. In the matching stage, we use this model to calculate the likelihood of a face at the current image position. The detection capabilities of the presented algorithm are evaluated on a large database of 1004 images, each containing one or more faces.
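
The template-correlation step can be sketched as sliding a small template over an image and reporting the position with the highest correlation score. The paper correlates edge-orientation maps rather than raw intensities; raw values and a tiny synthetic image are used here only to keep the sketch short.

```python
# Sketch of template matching by correlation: slide a template over a
# grey-level image and return the position with the highest score. The
# paper's method correlates edge-orientation maps; raw intensities are
# used here purely for illustration.

def correlate(image, template):
    """Return (row, col) of the best-matching template position."""
    th, tw = len(template), len(template[0])
    best, best_pos = float("-inf"), None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            score = sum(image[r + i][c + j] * template[i][j]
                        for i in range(th) for j in range(tw))
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
template = [[1, 1],
            [1, 1]]
print(correlate(image, template))  # (1, 1)
```

A production detector would normalise the score at each position (e.g. normalised cross-correlation) so that bright regions do not dominate, and would search over multiple template scales.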

Keywords: Face segmentation, Face position detection, Facial feature extraction, Segmentation

Full Text PDF
11. Receiver Anonymity for Unicast Frames Using an IEEE 802.11-Compliant MAC Protocol
Pooja Sharma, Deepak Tyagi, Vibhu Nagpal

MIX-based systems used for real-time bidirectional traffic actually perform very limited mixing, and hence are vulnerable to powerful adversaries. In this paper, we explore the design of a MIX-net-based anonymity system in mobile ad hoc networks, the objective being to hide the source-destination relationship in end-to-end transmission. We survey existing MIX route determination algorithms that do not account for dynamic network topology changes, which may result in a high packet loss rate and large packet latency, and we introduce adaptive algorithms to overcome this problem. We also focus on providing anonymity support at the MAC layer in wireless networks, exploiting the broadcast property of wireless transmission. We design an IEEE 802.11-compliant MAC protocol that provides receiver anonymity for unicast frames and offers better reliability than a pure broadcast protocol.

Full Text PDF
12. Structural Evaluation of Cell-Filled Pavements Using Finite Element Model

Subrat Roy

This paper presents the performance evaluation of cell-filled pavements using a finite element model. The model of the pavement structure was developed using the finite element software ANSYS [1]. The concept of the cell-filled pavement was developed in South Africa [2], [3]. Cell-filled pavement construction consists of laying a formwork of plastic-sheet cells over a compacted sub-base. The cells are stretched along the carriageway and tensioned using steel pegs; cement-bound materials are then filled into the cells and compacted. During compaction, the cell walls deform and provide interlocking among the blocks. Thus, the load-carrying capacity of the surface layer depends mainly on the interlocking property of the blocks, which in turn is governed by the degree of deformation of the plastic walls during construction. Interlocking among the cells of the cell-filled layer has been modelled using surface contact elements. Using the finite element model, structural performance was evaluated in terms of surface deflections and vertical strains on top of the subgrade.

Keywords— Finite Element Model, Cell-Filled Pavement

Full Text PDF