Volume 2, Issue 9, September 2012
1. An FPGA Implementation of a Memory-Efficient Distributed Arithmetic FIR Filter
Bhagyalakshmi Basuvula, B.N. Srinivasa Rao
Abstract: Finite impulse response (FIR) filters are the most popular type of filter implemented in software. An FIR filter is usually implemented as a series of delays, multipliers, and adders that create the filter's output. When an FIR filter based on the distributed arithmetic (DA) algorithm is implemented directly on a field-programmable gate array (FPGA), it is difficult to achieve the best trade-off among filter coefficients, memory, and computational speed. This paper shows that distributed arithmetic can save considerable hardware resources by using look-up tables (LUTs) in place of MAC units. Another virtue of this method is that it avoids the decrease in system speed as the input data bit width or the filter coefficient bit width grows, which occurs in the traditional direct method and consumes considerable hardware resources. The design, targeting Altera chips, is synthesized in the Quartus II 9.0 integrated environment. Simulation and test results show that the method reduces FPGA hardware resources and memory size while achieving high speed. The design greatly improves memory efficiency in the FPGA realization.
Keywords: FIR filter, DA algorithm, FPGA
Pages 1383-1385 | Full Text PDF
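The LUT-replaces-MAC idea behind distributed arithmetic can be sketched in a few lines. This is an illustrative software model (unsigned samples, pure Python), not the paper's FPGA design: a table of all 2^N partial coefficient sums is precomputed, and one output is formed by shift-accumulating one LUT lookup per input bit plane, with no multipliers.

```python
def build_lut(coeffs):
    """LUT of every partial sum of the coefficients; address bit i selects tap i."""
    n = len(coeffs)
    return [sum(c for i, c in enumerate(coeffs) if (addr >> i) & 1)
            for addr in range(1 << n)]

def da_fir_output(coeffs, samples, bits=8):
    """One output of an N-tap FIR over unsigned `bits`-bit samples."""
    lut = build_lut(coeffs)
    acc = 0
    for j in range(bits):                  # one pass per bit plane
        addr = 0
        for i, x in enumerate(samples):
            addr |= ((x >> j) & 1) << i    # gather bit j of every tap
        acc += lut[addr] << j              # shift-accumulate: no multiplies
    return acc

coeffs, samples = [3, 1, 4, 2], [10, 20, 30, 40]
y = da_fir_output(coeffs, samples)        # equals sum(c*x) for each tap pair
```

Because the accumulation is per bit plane, the number of LUT lookups grows with the data bit width rather than with per-sample multiplications, which is the speed property the abstract refers to.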
2. A Novel Sparse Representation Method for Image Restoration Applications
K.S.K.L. Priyanka, U.V. Ratna Kumari
Abstract: Sparse representation has been widely used in various image restoration applications. The quality of image restoration depends mainly on whether the chosen sparse domain can represent the underlying image well. Since the content of the underlying image can vary significantly across different images, or across different patches within an image, we propose to learn various sets of bases from a pre-collected dataset of example image patches; to process a particular patch, a suitable set of bases is selected adaptively as the local sparse domain. We introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models is learned from the pre-collected dataset of example image patches, and the best-fitting AR model is adaptively selected for regularization. Second, image nonlocal self-similarity is used to regularize local image structures. To make the sparse coding more accurate, a centralized sparsity constraint is introduced by exploiting nonlocal image statistics. The local and nonlocal sparsity constraints are unified in a variational framework for optimization. Extensive experiments show that the proposed CSR method achieves convincing improvement over previous state-of-the-art methods in terms of PSNR and SSIM values.
Keywords: Image restoration, deblurring, super-resolution, sparse representation, regularization
Pages 1386-1395 | Full Text PDF
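The building block this abstract relies on, sparse coding of a patch over a chosen set of bases, can be illustrated with a minimal iterative shrinkage-thresholding (ISTA) solver for the l1-regularized least-squares problem. This is a generic textbook sketch, not the authors' CSR algorithm; the dictionary `D`, step size, and regularization weight are made-up toy values chosen so the solution is known in closed form.

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def ista(D, y, lam, step, iters=500):
    """Minimize ||y - D a||^2 / 2 + lam * ||a||_1 by proximal gradient steps."""
    m, n = len(D), len(D[0])
    a = [0.0] * n
    for _ in range(iters):
        r = [sum(D[i][j] * a[j] for j in range(n)) - y[i] for i in range(m)]  # D a - y
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]         # D^T r
        a = [soft(a[j] - step * g[j], step * lam) for j in range(n)]
    return a

# Toy orthonormal 2-atom dictionary: the lasso solution is soft(D^T y, lam),
# so the small coefficient is driven exactly to zero (the "sparse" part).
D = [[1.0, 0.0], [0.0, 1.0]]
a = ista(D, y=[1.0, 0.02], lam=0.2, step=0.5)
```

The centralized sparsity constraint in the paper modifies the threshold target per coefficient using nonlocal estimates; the proximal iteration itself is of this form.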
3. Data Mining Application to Design a System for Performance Improvisation of Students in their Academic Studies
Kalyani M. Moroney, Dr. Sanjay U. Makh
Abstract: This paper presents a system that can be used to improve students' performance in their academic studies. The system accumulates a vast amount of information that is very valuable for analyzing student performance and can create a gold mine of educational data. Data generated by students and tutors is recorded in the system. However, because of the large quantity of data the system generates daily, it is very difficult to analyze this data manually. A very promising approach to this analysis objective is the use of data mining techniques.
Keywords: Classification, Data mining, Education system, Examination system, Regression
Pages 1396-1401 | Full Text PDF
4. A Process to Comprehend Different Patterns of Data Mining Techniques for Selected Domains
Reshma Sultana, Vani, Managina Deepthi, PV Bhaskhar, Pedada Satish, Koppala KVP Sekhar
Abstract: One of the important problems in data mining is classification rule learning, which involves finding rules that partition given data into predefined classes. In the data mining domain, where millions of records and a large number of attributes are involved, the execution time of existing algorithms can become prohibitive, particularly in interactive applications. Data mining, a field at the intersection of computer science and statistics, is the process of discovering patterns in large data sets. It uses methods drawn from artificial intelligence, machine learning, statistics, and database systems. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. Aside from the raw analysis step, it involves database and data management aspects, data preprocessing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.
Keywords: FP-Growth, Apriori Algorithm, SS-BE, SS-MB
Pages 1402-1405 | Full Text PDF
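Since the keywords name the Apriori algorithm, a compact sketch of its level-wise search may help. This is the textbook algorithm (without the subset-pruning optimization), not the paper's implementation, and the grocery transactions are made up.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise search: every subset of a frequent itemset is frequent,
    so (k+1)-item candidates are joined only from surviving k-itemsets."""
    transactions = [frozenset(t) for t in transactions]
    current = [frozenset([i]) for i in {i for t in transactions for i in t}]
    frequent, k = {}, 1
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        current = {a | b for a, b in combinations(survivors, 2) if len(a | b) == k + 1}
        k += 1
    return frequent

tx = [{"milk", "bread"}, {"milk", "eggs"}, {"milk", "bread", "eggs"}, {"bread"}]
freq = apriori(tx, min_support=2)   # maps each frequent itemset to its support
```

The candidate-explosion problem the abstract mentions is visible here: `current` can grow combinatorially, which is what FP-Growth's tree representation avoids.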
5. Feature Selection and Clustering Approaches to KNN Text Categorization
K. Gayathri, Dr. A. Marimuthu
Abstract: Automatic text classification is a discipline at the crossroads of information retrieval, machine learning, and computational linguistics, and consists in building text classifiers, i.e., software systems capable of assigning texts to one or more categories or classes from a predefined set. This article focuses on feature selection for reducing the dimensionality of the vectors; we then apply one-pass clustering to group related data, apply a classification technique such as KNN to categorize the data, and finally evaluate the results using measures such as precision.
Keywords: Text Categorization, Pre-processing, LSI, one-pass clustering, KNN
Pages 1406-1409 | Full Text PDF
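The pipeline described above (vectorize, then classify by nearest neighbours) can be sketched as follows. This uses plain bag-of-words TF-IDF as a simplified stand-in for the paper's LSI and one-pass-clustering stages, with made-up toy documents.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Sparse TF-IDF vectors as {word: weight} dicts over a shared vocabulary."""
    tfs = [Counter(d.lower().split()) for d in docs]
    df = Counter(w for tf in tfs for w in tf)
    n = len(docs)
    return [{w: c * math.log(n / df[w]) for w, c in tf.items()} for tf in tfs]

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(train_vecs, labels, query_vec, k=3):
    """Majority vote among the k training documents most similar to the query."""
    ranked = sorted(zip(train_vecs, labels),
                    key=lambda p: cosine(p[0], query_vec), reverse=True)
    return Counter(lab for _, lab in ranked[:k]).most_common(1)[0][0]

train = ["football match score goal", "cricket bat ball score",
         "python code bug compile", "java code class compile"]
labels = ["sport", "sport", "tech", "tech"]
vecs = tfidf_vectors(train + ["code compile error"])   # share one IDF vocabulary
label = knn_classify(vecs[:-1], labels, vecs[-1], k=3)
```

Precision and recall are then computed by comparing predicted labels against a held-out labelled set.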
6. Impact of TCP Congestion Control Algorithms on IEEE 802.11n MAC Frame Aggregation
Mustafa Samih and Emad Al-Hemiary
Abstract: The Media Access Control (MAC) layer introduced in IEEE 802.11n reduces the bottleneck of legacy IEEE 802.11 using techniques such as frame aggregation and block acknowledgments. In this paper we investigate the interaction between MAC frame aggregation and the Transmission Control Protocol (TCP). The MAC layer's interaction with TCP is very important, as TCP determines the end-to-end transfer rate. A simulation-based comparison is made among different TCP congestion control techniques: Reno, NewReno, Vegas, Tahoe, and Selective Acknowledgments (SACK). Results show that, across different aggregation sizes, NewReno and SACK outperform the other techniques; for example, a throughput of 80 Mbps is achieved with TCP NewReno and a window size of 8 KB. Because Vegas depends on Round Trip Time (RTT), its throughput remains constant as the offered load increases.
Keywords: Frame Aggregation, IEEE 802.11n, TCP, SACK
Pages 1410-1414 | Full Text PDF
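The behavioural differences among these algorithms come from how the congestion window (cwnd) reacts to loss. A toy per-RTT model of Reno-style slow start, congestion avoidance, and multiplicative decrease (a didactic sketch, not the paper's simulation) illustrates the sawtooth:

```python
def reno_cwnd_trace(rounds, loss_rounds, ssthresh=16):
    """Per-RTT congestion window (in segments) of a simplified Reno sender.
    loss_rounds: rounds where loss is detected via triple duplicate ACKs."""
    cwnd, trace = 1, []
    for r in range(rounds):
        if r in loss_rounds:
            ssthresh = max(cwnd // 2, 2)   # multiplicative decrease
            cwnd = ssthresh                # fast recovery (Reno; Tahoe resets to 1)
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: exponential growth
        else:
            cwnd += 1                      # congestion avoidance: additive increase
        trace.append(cwnd)
    return trace

trace = reno_cwnd_trace(rounds=8, loss_rounds={5})
```

NewReno and SACK refine how multiple losses in one window are recovered, while Vegas adjusts cwnd from RTT measurements instead of loss, which is why its throughput curve behaves differently in the paper's results.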
7. Analysis of Utility-Based Frequent Itemset Mining Algorithms
HarishBabu Kalidasu, B. Prasanna Kumar, Haripriya P.
Abstract: Knowledge discovery plays a major role in business applications. Association analysis, which supports market basket analysis, is one such technique: generating frequent itemsets and association rules from large datasets helps improve the business. However, the Apriori algorithm and the frequent pattern tree generate only frequent patterns, and high-utility itemsets are needed to improve the performance of market basket analysis. Generating high-utility itemsets means finding the itemsets with the highest interestingness in the data source. Although a number of similar approaches have been proposed in recent years, finding frequent itemsets remains a problem when the database contains a large number of transactions, and a huge number of candidate itemsets affects performance. In this paper, we compare two utility-based itemset mining algorithms, FP-Growth and UP-Growth, for finding frequent itemsets in high-utility itemset mining.
Pages 1415-1419 | Full Text PDF
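High utility differs from plain frequency: each item carries a per-transaction quantity and an external unit profit, and an itemset's utility is summed over the transactions that contain it. A brute-force sketch of this definition follows (UP-Growth exists precisely to avoid this exhaustive enumeration; the item names and profits are made up):

```python
from itertools import combinations

def high_utility_itemsets(transactions, profit, min_util):
    """Return every itemset whose total utility meets min_util.
    transactions: list of {item: quantity}; profit: {item: unit profit}."""
    items = sorted({i for t in transactions for i in t})
    result = {}
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            util = sum(sum(t[i] * profit[i] for i in cand)
                       for t in transactions if all(i in t for i in cand))
            if util >= min_util:
                result[cand] = util
    return result

tx = [{"a": 2, "b": 1}, {"a": 1, "c": 3}, {"b": 2, "c": 1}]
profit = {"a": 5, "b": 3, "c": 1}
hui = high_utility_itemsets(tx, profit, min_util=10)
```

Note that utility is not anti-monotone: a superset can have higher utility than its subsets, so the Apriori pruning trick does not apply directly, which motivates the specialized algorithms compared in the paper.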
8. Implementation of a Secure and Efficient Model to Control Data Sharing in Cloud Computing
M. Sri Pavani, K. Ravindra, Y. Ramesh Babu
Abstract: Cloud computing is a rapidly growing segment of the IT industry that brings new service opportunities, with significant reductions in IT capital expenditures and operating costs, on-demand capacity, and pay-per-use pricing models for IT service providers. Among these services are Software-as-a-Service, Platform-as-a-Service, Infrastructure-as-a-Service, Communication-as-a-Service, Monitoring-as-a-Service, and Storage-as-a-Service. Storage-as-a-Service gives data owners a cost-effective way to store massive data and handles efficient routine data backup by utilizing the vast storage capacity of a cloud computing infrastructure. However, shifting data storage to a cloud infrastructure introduces several security threats, because cloud providers may have complete control over the computing infrastructure that underpins the services. These threats include unauthorized data access, compromised data integrity and confidentiality, and less direct control over the data for its owner. The current literature proposes several approaches for storing and sharing data in cloud environments; however, these approaches apply only to specific data formats or encryption techniques. In this paper, unlike previous studies, we introduce a secure and efficient model that allows data owners to retain full control over data sharing in the cloud environment. In addition, it prevents cloud providers from revealing data to unauthorized users. The proposed model can be used in different IT areas, with different data and encryption techniques, to provide secure data sharing for fixed and mobile computing devices.
Keywords: cloud computing; cloud storage; data sharing model; data access control; data owner full control; cloud storage as a service; data encryption
Pages 1420-1426 | Full Text PDF
9. Design and Implementation of a PSO-PID Controller for the MA2000 Robotic Manipulator
Firas Abdullah Thweny Al-Saedi, Ali H. Mohammed
Abstract: In this paper, a complete control system is proposed for the TQ MA2000, a six-degree-of-freedom (DOF) robotic manipulator arm with a complex structure. The kinematics of the MA2000 manipulator were derived and verified with the aid of Matlab. The implemented control scheme is a networked control system (NCS), in which two PCs interact over a computer network to form the overall control and tuning entities. The control system uses the Proportional-Integral-Derivative (PID) algorithm to control the MA2000 manipulator, with each joint treated as a separate Single-Input Single-Output (SISO) system. The Particle Swarm Optimization (PSO) algorithm was used to tune the individual PID controllers of the robotic arm. The tuning process was treated as a nonlinear optimization problem, in which a fitness function was minimized to obtain the desired transient response for each joint. Finally, a set of tests was conducted on the tuned controllers, and satisfactory results were obtained.
Pages 1427-1433 | Full Text PDF
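The tuning loop described in this abstract, PSO minimizing a step-response fitness function per joint, can be sketched for a single joint modelled as a first-order plant. This is a generic PSO/PID sketch with made-up plant and swarm parameters (time constant, bounds, inertia and acceleration coefficients), not the authors' MA2000 controller; diverging gain sets receive a large penalty cost.

```python
import random

def step_cost(kp, ki, kd, sp=1.0, dt=0.01, steps=300, tau=0.5):
    """Time-weighted absolute error (ITAE-style) of a PID loop around the
    first-order plant dy/dt = (u - y) / tau, for a unit step setpoint."""
    y = integ = prev_err = cost = 0.0
    for n in range(steps):
        err = sp - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += dt * (u - y) / tau
        if not -1e6 < y < 1e6:          # diverging response: penalize heavily
            return 1e9
        cost += n * dt * abs(err) * dt  # integral of t * |e(t)|
    return cost

BOUNDS = [(0.0, 5.0), (0.0, 5.0), (0.0, 0.4)]   # (Kp, Ki, Kd) search ranges

def pso_tune(n=20, iters=40, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Standard global-best PSO over the three PID gains."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n)]
    vel = [[0.0, 0.0, 0.0] for _ in range(n)]
    pbest, pcost = [p[:] for p in pos], [step_cost(*p) for p in pos]
    g = min(range(n), key=pcost.__getitem__)
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for d, (lo, hi) in enumerate(BOUNDS):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = step_cost(*pos[i])
            if c < pcost[i]:
                pcost[i], pbest[i] = c, pos[i][:]
                if c < gcost:
                    gcost, gbest = c, pos[i][:]
    return gbest, gcost

gains, cost = pso_tune()
```

Each of the six joints would run this search against its own plant model or measured response, yielding one tuned (Kp, Ki, Kd) triple per joint.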
- Citation Index 0.47
Email: ijcsetonline@gmail.com / editor@ijcset.net