
From The Desk of HOD

In keeping with the motto "Acquire Knowledge and Grow", and under the patronage of our Director, Dr. R.K. Agarwal, the department has published Vol. 6, No. 2 of its half-yearly journal. The aim of publishing the Journal of Computer Application (JCA) is to inculcate the habit of writing and reading technical papers among faculty and students. Topics and contents have been selected to educate our students on the current state of the art of common applications in a simple manner, without going into mathematical details. The length of the articles has deliberately been kept short to accommodate as many writers as possible and to keep the interest of readers. My sincere appreciation goes to all the writers, especially the students of the MCA department. Enjoy the reading, and kindly offer your valuable suggestions for improvement in our subsequent issues.

Looking ahead to your support.

Prof. S. L. Kapoor


CONTENTS

S. No.  TITLE  Page No.

1.  Object Recognition Techniques for Digital Images  5-7
    Dr. B.K. Sharma, Professor, MCA Department, AKGEC, GZB
2.  Genetic Programming Approach for Reverse Engineering  8-10
    Ruchi Gupta, Assistant Professor, MCA Department, AKGEC, GZB
3.  Large Scale Networks Based on SDN  11-14
    Indu Verma, Assistant Professor, MCA Department, AKGEC, GZB
4.  Skinput Technology  15-16
    Harnit Saini, Assistant Professor, MCA Department, AKGEC, GZB
5.  System Overview of Earth Simulator  17-19
    Mani Dwivedi, Assistant Professor, MCA Department, AKGEC, GZB
6.  Big Data in the Cloud: Converging Technologies  20-22
    Anjali Singh, Assistant Professor, MCA Department, AKGEC, GZB
7.  The New Trends in Database  23-26
    Snehlata Kaul, Assistant Professor, MCA Department, AKGEC, GZB
8.  QoS Issues in Web Services  27-29
    Aditya Pratap Singh, Assistant Professor, MCA Department, AKGEC, GZB
9.  Direct Digital Manufacturing  30-32
    Dr. Pratima Singh, Assistant Professor, MCA Department, AKGEC, GZB
10. Basics for Integration of Data Warehouse and Data Mining  33-36
    Dheeraj Kumar Singh, Assistant Professor, MCA Department, AKGEC, GZB
11. The SEO Guidelines for Achieving Higher Ranking in Search Results  37-39
    Dr. Pooja Arora, Assistant Professor, MCA Department, AKGEC, GZB
12. Ambient Intelligence  40-42
    Saroj Bala, Assistant Professor, MCA Department, AKGEC, GZB
13. How to Design a Compiler for Any Language  43-44
    Neha Tayal, Student, MCA 3rd Year, AKGEC, GZB
14. Piracy  45-46
    Ashish Keshari, Student, MCA 3rd Year, AKGEC, GZB
15. DNA Can Store Your Digital Data for Up to 2,000 Years  47-48
    Ankit Verma, Student, MCA 3rd Year, AKGEC, GZB
16. The Rise of Ethical Hacking  49-50
    Himani Jindal, Student, MCA 3rd Year, AKGEC, GZB
17. Study of Algorithms for Identifying and Replacing Web Service Using Service Composition and Genetic Technique  51-53
    Krishna Kumar Singh, Assistant Professor, MCA Department, AKGEC, GZB
18. Wireless Charging of Batteries Using Microwaves  54-56
    Anuj Kumar Dwivedi, Assistant Professor, MCA Department, AKGEC, GZB


OBJECT RECOGNITION TECHNIQUES FOR DIGITAL IMAGES

Dr. B.K. Sharma
Professor, MCA Department, AKGEC, GZB

Email : [email protected]

Abstract— Object recognition is a process for identifying a specific object in a digital image or video. Object recognition algorithms depend on matching, learning, or pattern recognition algorithms using appearance-based or feature-based techniques. Common techniques include edges, gradients, Histogram of Oriented Gradients (HOG), Haar wavelets, and local binary patterns. Object recognition is useful in applications such as video stabilization, automated vehicle parking systems, and cell counting in bio-imaging.

1. INTRODUCTION
Object recognition is one of the most important tasks in image processing and computer vision. It is concerned with determining the identity of an object being observed in an image from a set of known tags. Humans can recognize any object in the real world easily, without any effort, but machines cannot recognize objects by themselves. Algorithmic descriptions of the recognition task have to be implemented on machines, which is an intricate task. Thus, object recognition techniques need to be developed that are less complex and more efficient. Many successful approaches that address the problem of general object detection use a representation of the image objects by a collection of local descriptors of the image content. Global features can provide better recognition, and color and shape features can also be used. Various object recognition techniques are discussed in this paper.

2. OBJECT RECOGNITION TECHNIQUES

2.1. Template Matching
Template matching is a straightforward technique for finding small parts of an image which match a template image. In this technique, template images for different objects are stored. When an image is given as input to the system, it is matched against the stored template images to determine the object in the input image. Templates are frequently used for recognition of characters, numbers, objects, etc., for either color or gray-level images. Matching can be performed either pixel to pixel or feature based. In the feature-based case, the features of the template image are compared to the features of sub-images of the given input image to determine whether the template object is present in the input image. In [1], the authors have proposed a mathematical morphological template matching approach for object detection in inertial navigation systems (INS).

In [2], an approach for measuring similarity between visual images based on matching internal self-similarities is presented. A template image is to be compared to another image. While measuring similarity across images can be complex, the similarity within each image can be easily revealed with a simple similarity measure, such as SSD (Sum of Squared Differences), resulting in local self-similarity descriptors which can be matched across images.
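To make the idea concrete, the following minimal Python sketch (not taken from [1] or [2]; array sizes and names are purely illustrative) slides a template over a grayscale image and scores each position with the SSD measure mentioned above; the position with the lowest score is reported as the best match.

import numpy as np

def ssd_match(image, template):
    """Slide `template` over `image` (both 2-D grayscale arrays) and return
    the SSD score map together with the top-left corner of the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    scores = np.empty((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            window = image[y:y + th, x:x + tw]
            scores[y, x] = np.sum((window - template) ** 2)   # sum of squared differences
    best = np.unravel_index(np.argmin(scores), scores.shape)
    return scores, best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((60, 80))
    tmpl = img[20:30, 35:50].copy()        # crop a known patch to use as the template
    _, (y, x) = ssd_match(img, tmpl)
    print("best match at", (y, x))         # expected: (20, 35)

In practice the double loop would be replaced by a vectorised or FFT-based correlation, but the matching logic is the same.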

2.2. Color Based
Color provides effective information for object recognition. A simple and efficient object detection scheme is to represent and match images on the basis of color histograms. Fahad Khan et al. [3] proposed the use of color attributes as an explicit color representation for object detection. The color information is incorporated into two existing methods for object detection, the part-based detection framework and the Efficient Subwindow Search approach. The three main criteria which should be taken into account when choosing an approach to integrating color into object detection are feature combination, photometric invariance and compactness.
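As a hedged illustration of histogram-based matching (a simplified sketch, not the method of [3]), the snippet below builds a quantised joint RGB histogram for each image and compares the two by histogram intersection; the bin count and image sizes are arbitrary choices.

import numpy as np

def color_histogram(image, bins=8):
    """Joint RGB histogram of an H x W x 3 uint8 image, L1-normalised."""
    quantised = image.reshape(-1, 3) // (256 // bins)          # map 0-255 to 0..bins-1 per channel
    idx = quantised[:, 0] * bins * bins + quantised[:, 1] * bins + quantised[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical colour distributions."""
    return float(np.minimum(h1, h2).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    b = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    print("a vs a:", histogram_intersection(color_histogram(a), color_histogram(a)))   # 1.0
    print("a vs b:", histogram_intersection(color_histogram(a), color_histogram(b)))   # < 1.0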

2.3. Active and Passive
Object detection in a passive manner does not involve local image samples extracted during scanning. Two main object-detection approaches employ passive scanning:

• The window-sliding approach: it uses passive scanning to check whether the object is present at all locations of an evenly spaced grid. This approach extracts a local sample at each grid point and classifies it either as an object or as part of the background [4].
• The part-based approach: it uses passive scanning to determine interest points in an image. This approach calculates an interest value for local samples at all points of an evenly spaced grid. At the interest points, the approach extracts new local samples that are evaluated as belonging to the object or the background [5].

Some methods try to bound the region of the image in which passive scanning is applied. Passive scanning is a computationally expensive and inefficient scanning method: at each sampling point a costly feature extraction is performed, while the probability of detecting an object or a suitable interest point can be low. In active scanning, local samples are used to guide the scanning process. At the current scanning position a local image sample is extracted and mapped to a shifting vector indicating the next scanning position. The method takes successive samples towards the expected object location, while skipping regions unlikely to contain the object. The goal of active scanning is to save computational effort while retaining good detection performance [6].

2.4. Shape Based
Recently, shape features have been extensively explored to detect objects in real-world images. Shape features are more attractive than local features because most object categories are better described by their shape than by texture, such as cows, horses and cups; moreover, for wiry objects like bikes, chairs or ladders, local features unavoidably contain a large amount of background clutter. Thus shape features are often used as a replacement for, or complement to, local features. A. Berg et al. [7] have proposed a new algorithm to find correspondences between feature points for object recognition in the framework of deformable shape matching. The basic subroutine in deformable shape matching takes as input an image with an unknown object (shape) and compares it to a model by solving the correspondence problem between the model and the object. It then performs an aligning transformation and computes a similarity based on both the aligning transform and the residual after applying the aligning transformation. In [8], a new shape-based object detection scheme based on extraction and clustering of edges in images using the Gradient Vector Griding (GVG) method is proposed, which results in a directed graph of detected edges. The algorithm consists of a sequential pixel-level scan and a much smaller second and third pass on the results to determine connectedness. The graph is built on a cell basis: the image is overlaid with a grid formed of equal-sized cells, multiple graph nodes are computed for individual cells and then connected corresponding to the connectivity in the 8-neighbourhood of each cell. Finally, the maximum curvature of the resulting paths is adjusted.

2.5 Local and Global Features
The most common approach to generic object detection is to slide a window across the image and to classify each such local window as containing the target or background. This approach has been successfully used to detect rigid objects such as faces and cars in [9] and [4]. In [10], a method of object recognition and segmentation using the Scale-Invariant Feature Transform (SIFT) and Graph Cuts is presented. SIFT features are invariant to rotations, scale changes, and illumination changes. By combining SIFT and Graph Cuts, the existence of objects is first recognized by vote processing of SIFT keypoints; the object region is then cut out by Graph Cuts using SIFT keypoints as seeds. Both recognition and segmentation are performed automatically under cluttered backgrounds, including occlusion.
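For readers who want to experiment with local features, the sketch below counts SIFT keypoint matches between an object image and a scene image using the opencv-python package (OpenCV 4.4 or newer ships SIFT). It follows the general recognition-by-matching idea rather than the full SIFT-plus-Graph-Cuts pipeline of [10], and the file names are placeholders.

import cv2

def count_sift_matches(object_img, scene_img, ratio=0.75):
    """Count SIFT matches that survive Lowe's ratio test; a high count
    suggests the object is present somewhere in the scene."""
    sift = cv2.SIFT_create()
    _, des_obj = sift.detectAndCompute(object_img, None)
    _, des_scene = sift.detectAndCompute(scene_img, None)
    if des_obj is None or des_scene is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = 0
    for pair in matcher.knnMatch(des_obj, des_scene, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1                                  # keep only distinctive matches
    return good

if __name__ == "__main__":
    obj = cv2.imread("object.jpg", cv2.IMREAD_GRAYSCALE)     # placeholder file names
    scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
    if obj is None or scene is None:
        raise SystemExit("provide object.jpg and scene.jpg")
    print("good matches:", count_sift_matches(obj, scene))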

3. APPLICATIONS OF OBJECT RECOGNITION

3.1. Biometric recognition: Biometric technology uses human physical or behavioral traits to recognize an individual for security and authentication. Biometrics is the identification of an individual based on distinguishing biological features such as fingerprints, hand geometry, retina and iris patterns, DNA, etc. For biometric analysis, object recognition techniques such as template matching can be used.

3.2. Surveillance: Objects can be recognized and tracked for various video surveillance systems. Object recognition is required so that, for example, a suspected person or vehicle can be tracked.

3.3. Industrial inspection: Parts of machinery can be recognized using object recognition and can be monitored for malfunctioning or damage.

3.4. Content-based image retrieval (CBIR): When the retrieval is based on the image content, it is referred to as CBIR. A supervised learning system called OntoPic, which provides automated keyword annotation for images and content-based image retrieval, has been presented.

3.5. Robotics: Research on autonomous robots has been one of the most important topics in recent years. The humanoid robot soccer competition is very popular. Robot soccer players rely heavily on their vision systems when they are in unpredictable and dynamic environments. The vision system helps the robot collect various environment information as the input data for functions such as robot localization, robot tactics and obstacle avoidance. Recognizing the critical objects in the contest field by object features, which can be obtained easily with object recognition techniques, can decrease the computing effort.

3.6. Medical analysis: Tumour detection in MRI images and skin cancer detection are examples of medical imaging applications of object recognition.

3.7. Optical character/digit/document recognition: Characters in scanned documents can be recognized by recognition techniques.

3.8. Human computer interaction: Human gestures can be stored in the system and used for real-time recognition by a computer to interact with humans. The system can be any application on a mobile phone, an interactive game, etc.

3.9. Intelligent vehicle systems: Intelligent vehicle systems are needed for traffic sign detection and recognition, and especially for vehicle detection and tracking. In [11], such a system is developed. In the detection phase, a color-based segmentation method is used to scan the scene in order to quickly establish regions of interest (ROI). Sign candidates within ROIs are detected by a set of Haar wavelet features obtained from AdaBoost training. Then, Speeded Up Robust Features (SURF) are applied for sign recognition. SURF finds local invariant features in a candidate sign and matches these features to the features of template images that exist in the data set. The recognition is performed by finding the template image that gives the maximum number of matches.

4. CONCLUSION
I have discussed various object detection techniques. The template matching technique requires a large database of image templates for correct object recognition; hence it should be used only when a limited number of objects are to be detected. Global features and shape-based methods can give better results and are more efficient than local features. These techniques help in easy access to images, and they find applications in different fields such as biometric recognition, medical analysis, surveillance, etc.

5. REFERENCES
[1] W. Hu, A.M. Gharuib, A. Hafez, "Template Match Object Detection for Inertial Navigation Systems," Scientific Research (SCIRP), pp. 78-83, May 2011.
[2] E. Shechtman, M. Irani, "Matching Local Self-Similarities across Images and Videos," in IEEE International Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2007.
[3] F. Khan, R. Muhammad, et al., "Color Attributes for Object Detection," in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3306-3313, 2012.
[4] P. Viola and M. Jones, "Robust real-time object detection," International Journal of Computer Vision, 57(2), pp. 137-154, 2004.
[5] R. Fergus, P. Perona, A. Zisserman, "Weakly supervised scale-invariant learning of models for visual recognition," International Journal of Computer Vision, 2006.
[6] G. de Croon, "Active Object Detection," in 2nd International Conference on Computer Vision Theory and Applications (VISAPP 2007), Barcelona, Institute for Systems and Technologies of Information, Control and Communication (INSTICC), pp. 97-103, 2007.
[7] A. Berg, T. Berg, J. Malik, "Shape Matching and Object Recognition using Low Distortion Correspondences," in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 26-33, 2005.
[8] H. Moballegh, N. Schmude, and R. Rojas, "Gradient Vector Griding: An Approach to Shape-based Object Detection in RoboCup Scenarios," from: www.ais.uni-bonn.de/robocup.de/papers/RS11_Moballegh.pdf
[9] C. Papageorgiou and T. Poggio, "A trainable system for object detection," International Journal of Computer Vision, 38(1), pp. 15-33, 2000.
[10] A. Suga, K. Fukuda, T. Takiguchi, Y. Ariki, "Object Recognition and Segmentation Using SIFT and Graph Cuts," in 19th International Conference on Pattern Recognition, pp. 1-4, 2008.
[11] K. Murphy, A. Torralba, D. Eaton and W. Freeman, "Object detection and localization using local and global features," Towards Category-Level Object Recognition, 2005.

Dr. B.K. Sharma is a Professor at Ajay Kumar Garg Engineering College, Ghaziabad. He obtained his MCA degree from JNU, New Delhi, M.Tech. from Guru Gobind Singh Indraprastha University, Delhi, and Ph.D. from Shobhit University, Meerut. His areas of specialization are Software Watermarking, Discrete Mathematics, Theory of Computation and Compiler Design. During his teaching career of more than a decade, he has published many research papers in international and national journals and conferences. He has also published many books for engineering students.


GENETIC PROGRAMMING APPROACH FOR REVERSE ENGINEERING

Ruchi Gupta
Assistant Professor, MCA Department, AKGEC, GZB

E-mail: [email protected]

Abstract— This article overviews the application of Search Based Software Engineering (SBSE) to reverse engineering, with a particular emphasis on the growing importance of recent developments in genetic programming and genetic improvement for reverse engineering. This includes work on SBSE for remodularisation, refactoring, regression testing, syntax-preserving slicing and dependence analysis, concept assignment and feature location, bug fixing, and code migration.

Index Terms— Genetic Programming, Reverse Engineering, Search Based Software Engineering.

1. INTRODUCTION
The term 'search based software engineering' was introduced in 2001 [1] to capture the (then emerging) interest in the use of computational search as a means of optimizing software engineering problems. The motivation was that search based optimization was ideal for the multiple conflicting and competing objectives with which software engineers routinely contend. The algorithms used to conduct search based optimization are also known to perform well in the presence of partial, noisy and missing data. This makes them attractive tools with which to approach the complex, noisy and incomplete world in which the software engineer has to find engineering solutions. Since 2001, SBSE has been applied to almost every aspect of software engineering activity. A more detailed survey of the entire field of SBSE can be found elsewhere [2], while a tutorial introduction is also available [3] that assumes no prior knowledge of computational search techniques. In this paper, however, we focus on reverse engineering and the considerable potential of new forms of Genetic Programming (GP) and Genetic Improvement (GI) for reverse engineering. Section 2 presents a summary of the application of SBSE to reverse engineering. Section 3 briefly reviews the application of GP to reverse engineering. Section 4 sets out some directions for future work that form part of a 'GP4RE' research agenda: genetic programming applications for reverse engineering.

2. APPLICATIONS OF SBSE TO REVERSE ENGINEERING

A large number of problems in reverse engineering are amenable to SBSE. In this section we highlight some of the many existing approaches, focusing on the application of SBSE to the problems of re-modularization, refactoring, regression testing, slicing, and concept assignment in particular. We also discuss the general applicability of genetic programming to reverse engineering problems. Our overall conclusion is that these are vibrant and active areas of research with multiple open problems remaining to be tackled.

A. Re-Modularization
Software structure degrades over time, making periodic re-modularization important [2]. The search based modularization approaches discussed in this section assume that we start with some form of module dependence graph, from which we seek to construct suitable module boundaries.
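As a toy illustration of search based re-modularization (a minimal sketch over invented data, not the published algorithms such as [2]), the code below hill-climbs over assignments of code units to modules on a small dependence graph, scoring each assignment by intra-module edges minus inter-module edges.

import random

# Toy module dependence graph: each edge is a dependency between code units.
EDGES = [("a", "b"), ("b", "c"), ("c", "a"),
         ("d", "e"), ("e", "f"), ("f", "d"), ("c", "d")]
UNITS = sorted({u for edge in EDGES for u in edge})

def score(assignment):
    """Cohesion (intra-module edges) minus coupling (inter-module edges)."""
    intra = sum(1 for u, v in EDGES if assignment[u] == assignment[v])
    return intra - (len(EDGES) - intra)

def hill_climb(n_modules=2, steps=2000, seed=0):
    rng = random.Random(seed)
    assignment = {u: rng.randrange(n_modules) for u in UNITS}
    best = score(assignment)
    for _ in range(steps):
        unit = rng.choice(UNITS)                  # neighbour: move one unit to another module
        old_module = assignment[unit]
        assignment[unit] = rng.randrange(n_modules)
        new = score(assignment)
        if new >= best:
            best = new                            # accept improving (or equal) moves
        else:
            assignment[unit] = old_module         # revert worsening moves
    return assignment, best

if __name__ == "__main__":
    modules, fitness = hill_climb()
    print("fitness:", fitness, "assignment:", modules)
    # The two triangles {a, b, c} and {d, e, f} should end up in separate modules.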

B. Refactoring
In this subsection we highlight some SBSE approaches to optimization and refactoring. Early work on search based techniques for program transformation tended to focus on improving program execution time (through parallelization) [3], [4] and size [5]. These early search based approaches sought to go beyond the peep-hole optimizations available to compilers. Search based transformation seeks higher level transformations of source code that improve space and time performance. More recently, authors have focused on the potential of search based approaches to suggest sequences of refactoring steps that might be applied to a system in order to automate or partially automate the refactoring process. There are both single and multiple objective [6] approaches to search based refactoring. Search based refactoring is a widely-studied automated refactoring approach. It can also be used to experimentally validate metrics.

C. Regression Testing
In this subsection we briefly mention SBSE approaches to regression testing. Positive aspects include coverage of the system and early fault discovery, while the negative aspects include cost drivers to be minimized, such as execution time and oracle cost [7]. The constraints come from technical concerns (for example, one test must be executed first because it creates a resource consumed by a subsequent test) or business concerns (for example, we prioritize tests that exercise business-critical features).
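One concrete and widely used formulation is test case prioritisation: order the tests so that coverage is achieved as early and as cheaply as possible. The sketch below uses the greedy "additional coverage per unit cost" heuristic, a common SBSE baseline; the tests, branches and costs are invented for illustration.

# Each test covers a set of branches and has an execution cost (e.g. seconds).
TESTS = {
    "t1": ({"b1", "b2"}, 2.0),
    "t2": ({"b2", "b3", "b4"}, 4.0),
    "t3": ({"b5"}, 1.0),
    "t4": ({"b1", "b3", "b5"}, 3.0),
}

def prioritise(tests):
    """Greedy ordering: repeatedly pick the test with the best ratio of
    not-yet-covered branches to execution cost."""
    remaining, covered, order = dict(tests), set(), []
    while remaining:
        name, (branches, _cost) = max(
            remaining.items(),
            key=lambda item: len(item[1][0] - covered) / item[1][1],
        )
        order.append(name)
        covered |= branches
        del remaining[name]
    return order

if __name__ == "__main__":
    print(prioritise(TESTS))   # -> ['t1', 't3', 't2', 't4'] for the data above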


D. Slicing and Dependence Analysis
In this subsection we relate SBSE and program slicing. Program slicing and dependence analysis have been widely studied in the reverse engineering literature, because slices have applications in comprehension [8], re-use [9], specification mining, understanding bug reports and tool traces, code salvaging and testing. Dependence analysis can also be used to identify anti-patterns or code smells that may be the trigger for reverse engineering interventions.

3. APPLICATIONS OF GP TO REVERSE ENGINEERING
The topics in search based reverse engineering briefly outlined so far have used a variety of search based algorithms (most notably genetic algorithms, but also hill climbing and simulated annealing). In this section we briefly review topics in reverse engineering for which genetic programming has proved to be particularly applicable, to set the scene for Section 4, which presents our GP4RE agenda focusing on novel topics in genetic programming for reverse engineering.
1) Bug Fixing: Work on GP for automated bug fixing has demonstrated that some bugs in existing programs can be automatically patched so that the regression test suite passes.
2) Code Migration: Work on code migration used GP to automatically port the core algorithm of a UNIX utility (gzip) from a standard programming environment written in the C language to the CUDA language on a general purpose graphics processing unit (GPGPU) environment.
3) Code Composition, Reuse and Model Extraction: Genetic programming has been used to construct behavioural models, allowing the reverse engineer to predict the response time of an assembly of re-used components in a novel architecture into which they are to be deployed.

4. GP4RE: NEW GENETIC PROGRAMMING APPLICATIONS FOR REVERSE ENGINEERING

We propose that GP be used to re-implement the new feature in a more automated fashion. Such an approach may yield a transplant that is syntactically re-constructed (through search) to ensure that the transplantation is successful. We could even transplant from one language and platform to another using this approach. At a high level, we identify the key steps likely to occur in a software transplant algorithm to add feature F from source System D (the donor) to destination System H (the host):

1) Localise: Identify and localise the code DF ⊆ D that implements F (this might use, for example, concept and feature location).
2) Abstract: Construct an abstraction AF from DF, retaining control and data flow directly related to F in the donor but abstracting references to D-specific identifiers so that these become parameterised.
3) Target: Find locations HF in the host, H, where code implementing F could be located.
4) Interface: Construct an interface I and add it to the host H, allowing the resulting combination H ∪ I to act as a 'harness' into which candidate transplants can be inserted and evaluated.
5) Insert: Instantiate and concretise a candidate transplant A′F (concretised from AF) at HF.
6) Validate: Validate the resulting transplanted system H ∪ I ∪ A′F.
7) Repeat: Repeat the above steps until a suitably well tolerated transplant is found.

We call this seven-step approach the LATIIVR approach: Localise, Abstract, Target, Interface, Insert, Validate, Repeat.

5. CONCLUSION
In this paper we have briefly described the application of Search Based Software Engineering to reverse engineering. The literature shows a recent upsurge in papers on Genetic Programming (GP) for reverse engineering, with exciting results on automated software repair, migration and improvement using GP. We believe that there are many more equally fruitful applications of GP in problems of slicing, transformation and partial evaluation. We also outlined, in more detail, the use of GP to achieve automated and semi-automated software transplants and advocate its use in software product line extension and branch merging. We believe GP may even provide a means to search for overall reverse engineering strategies. Much of this research agenda (a 'GP4RE research agenda') remains to be explored. We are very interested in collaborating with other researchers and practitioners on these and related topics.

6. REFERENCES
1. Mark Harman and Bryan F. Jones. Search based software engineering. Information and Software Technology, 43(14):833-839, December 2001.
2. Gabriele Bavota, Andrea De Lucia, Andrian Marcus, and Rocco Oliveto. Software re-modularization based on structural and semantic metrics. In 17th Working Conference on Reverse Engineering, WCRE 2010, pages 195-204. IEEE Computer Society, 2010.
3. Andy Nisbet. GAPS: A Compiler Framework for Genetic Algorithm (GA) Optimised Parallelisation. In Peter M. A. Sloot, Marian Bubak, and Louis O. Hertzberger, editors, Proceedings of the International Conference and Exhibition on High-Performance Computing and Networking (HPCN '98), volume 1401 of Lecture Notes in Computer Science, pages 987-989, Amsterdam, The Netherlands, 21-23 April 1998. Springer.
4. Conor Ryan. Automatic Re-Engineering of Software using Genetic Programming, volume 2. Kluwer Academic Publishers, 2000.
5. Andrea Arcuri, David Robert White, John A. Clark, and Xin Yao. Multi-objective improvement of software using co-evolution and smart seeding. In 7th International Conference on Simulated Evolution and Learning (SEAL 2008), pages 61-70, Melbourne, Australia, December 2008. Springer.
6. Mark Harman and Laurence Tratt. Pareto optimal search-based refactoring at the design level. In GECCO 2007: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, pages 1106-1113, London, UK, July 2007. ACM Press.
7. Mark Harman, Phil McMinn, Muzammil Shahbaz, and Shin Yoo. A comprehensive survey of trends in oracles for software testing. Technical Report Research Memoranda CS-13-01, Department of Computer Science, University of Sheffield, 2013.
8. David Binkley, Mark Harman, L. Ross Raszewski, and Christopher Smith. An empirical study of amorphous slicing as a program comprehension support tool. In 8th IEEE International Workshop on Program Comprehension, pages 161-170, Los Alamitos, California, USA, June 2000. IEEE Computer Society Press.
9. Jon Beck and David Eichmann. Program and interface slicing for reverse engineering. In IEEE/ACM 15th Conference on Software Engineering (ICSE'93), pages 509-518, Los Alamitos, California, USA, 1993. IEEE Computer Society Press.

Ruchi Gupta is an Assistant Professor at AKGEC, Ghaziabad, an affiliated college of Uttar Pradesh Technical University, Lucknow (India). She did her MCA and M.Tech (CS) from U.P. Technical University, Lucknow, and is pursuing a Ph.D. (Computer Science) from Sharda University, Greater Noida. Her areas of interest include Genetic Algorithms.


LARGE SCALE NETWORKS BASED ON SDN

Indu Verma
Assistant Professor, MCA Department, AKGEC, GZB

E-mail: [email protected]

Abstract— The explosion of mobile devices and content, server virtualization, and the advent of cloud services are among the trends driving the networking industry to re-examine traditional network architectures. Many conventional networks are hierarchical, built with tiers of Ethernet switches arranged in a tree structure. This design made sense when client-server computing was dominant, but such a static architecture is ill-suited to the dynamic computing and storage needs of today's enterprise data centers, campuses, and carrier environments. This article discusses some of the key computing trends driving the need for a new network paradigm.

INTRODUCTION
Software-defined networking (SDN) is a network architecture that decouples the control and data planes, moving the control plane (network intelligence and policy making) to an application called a controller [1].

The fast growth of the Internet outside of research facilities led to the formation of large networks, turning the interest of researchers and developers towards deploying and experimenting with new ideas for network services. However, it quickly became apparent that a major obstacle in this direction was the high complexity of managing the network infrastructure. Network devices were used as black boxes designed to support specific protocols essential for the operation of the network, without even guaranteeing vendor interoperability. Therefore, modifying the control logic of such devices was not an option, severely restricting network evolution. To remedy this situation, various efforts focused on finding novel solutions for creating more open, extensible and programmable networks [5].

The first years of the 2000s saw major changes in the field of networking. New technologies like ADSL emerged, providing high-speed Internet access to consumers. At that moment it was easier than ever before for an average consumer to afford an Internet connection which could be used for all sorts of activities, from e-mail and teleconference services to large file exchanges and multimedia. This mass adoption of high-speed Internet and of all the new services that accompanied it had cataclysmic effects on networks, which saw their size and scope increase along with traffic volumes. Industrial stakeholders like ISPs and network operators started emphasizing network reliability, performance and quality of service, and required better approaches for performing important network configuration and management functions like routing, which at the time were primitive at best. Additionally, new trends in the storage and management of information, like the appearance of cloud computing and the creation of large data centers, made apparent the need for virtualized environments, accompanied by network virtualization as a means to support their automated provisioning, automation and orchestration. In the telecommunications world, virtual private networks (VPNs) have been a service offering since the mid eighties. A loose definition of a VPN in this context would be a network with a logically closed user group implemented over a public (switched) network. By and large these virtual networks are realized by allowing clever address mappings over which users can exercise some control [9]. A result of this technological shift was the emergence of new and improved network programmability attempts, with the most prominent example being SDN.

SDN means different things to different constituencies. For some, it is the networking manifestation of what Marc Andreessen terms "software eating the world." The main aim is to replace proprietary management and control technology and overpriced switches and routers with commodity hardware built from merchant silicon under the direction of centralized controllers running on virtual servers, themselves running on commodity hardware. In other words, SDN is not just a low-level packet-pushing technology [10]; it is about creating platforms for applications, configuration management and control that enhance network automation and agility, ultimately lowering operational costs. In this construct, SDN enables network service chains that extend to the top of the network stack, "using software to virtually insert services into the flow of network traffic."

SDN encompasses both low-level switching optimization and high-level application orchestration. When SDN is combined with private clouds, it enables fully virtualized data centers [2].

Software-defined networking (SDN) is an approach to computer networking that allows network administrators to manage network services through abstraction of lower-level functionality. This is done by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane). This is how networking can be simplified. To achieve this, SDN requires some method for the control plane to communicate with the data plane. One such mechanism, OpenFlow, is often misunderstood to be equivalent to SDN; in fact, the OpenFlow protocol is a foundational element for building SDN solutions [3].

Software-defined networking (SDN) is not just a technology; as of yet it is largely conceptual, and those concepts vary depending on the approach. The various SDN architectures, OpenFlow, SDN APIs, and overlay networks are being introduced either as if they were interchangeable, or without ever mentioning the other options. It is not surprising that SDN leaves many folks in IT with no clear grasp of its "definition" at all.

The basis of SDN is virtualization, which in its most simplistic form allows software to run separately from the underlying hardware. Virtualization has made cloud computing possible and now allows data centers to dynamically provision IT resources exactly where they are needed, in the cloud. To keep up with the speed and complexity of all this split-second processing, the network must also adapt, becoming more flexible and automatically responsive. Applying the idea of virtualization to the network as well, separating the function of traffic control from the network hardware, results in SDN.

As virtualization, cloud, and mobility create more complex environments, networks must be able to adapt in terms of security, scalability, and manageability. Legacy networks have serious limitations, and old methods simply will no longer work. Most enterprise networks still rely on fixed boxes and appliances requiring a great deal of manual administration. Changing or expanding these networks for new capabilities, applications, or users requires reconfiguration that is time consuming and expensive. Software-defined networks take a lesson from server virtualization and introduce an abstraction layer separating network intelligence and configuration from physical connections and hardware. In this way, SDN offers programmatic control over both physical and virtual network devices that can dynamically respond to changing network conditions using OpenFlow or some other programmable and controllable packet/flow processing protocol. There are several approaches to SDN, described below. Though the technology is very much in the midst of its development, vendors and industry organizations are working to make the technology open and flexible while adhering to existing Internet standards. At its core, SDN promises to enable network technology innovation and versatility while reducing complexity and administrative overhead.

Software-Defined Networking (SDN) has attracted interest in the field of network management [4]:
• It enables flexible and uniform management.
• It is expected to overcome the issues of network administration.
• It reduces human error by reducing human intervention.
• It provides a high-quality network at low cost by integrating network resources.

Figure 1: Software Defined Network Architecture

ARCHITECTURE
The SDN architecture consists of three layers. At the top is the application layer, which includes applications that deliver services, such as switch/network virtualization, firewalls, and load balancers. These are abstracted from the bottom layer, which is the underlying physical network layer. In between lies the SDN controller, the most critical element of SDN. The controller removes the control plane from the network hardware and runs it as software, but must integrate with all the physical and virtual devices in the network [11]. In this way, the controller facilitates automated network management and makes it easier to integrate and administer business applications.

OPENFLOW ENABLES SDN
The OpenFlow protocol, originally developed at Stanford University, is being adopted as the basis of SDN strategies. But OpenFlow is not the only way to do SDN and should not be equated with it. The OpenFlow specification is managed by the Open Networking Foundation (ONF). The goal is to create a common "language" for programming network switches. OpenFlow is used between a controller and a switch to tell the controller about traffic flows and to communicate to the switch how to forward those flows. An OpenFlow switch is composed of two logical components. The first component contains one or more flow tables responsible for maintaining the information required by the switch in order to forward packets. The second component is an OpenFlow client, which is essentially a simple API allowing the communication of the switch with the controller.

Page 13: From The Desk of HOD - Ajay Kumar Garg … 2015.pdf · From The Desk of HOD ... 4 Skinput Technology 15-16 Harnit Saini, Assistant Professor, MCA Department, AKGEC, GZB 5 System Overview

13

The flow tables consist of flow entries, each of which defines a set of rules determining how the packets belonging to that particular flow will be managed by the switch (i.e. how they will be processed and forwarded). Each entry in the flow table has three fields: i) a packet header defining the flow, ii) an action determining how the packet should be processed, and iii) statistics, which keep track of information like the number of packets and bytes of each flow and the time since a packet of the flow was last forwarded. Once a packet arrives at the OpenFlow switch, its header is examined and the packet is matched to the flow whose packet header field is the closest match. If a matching flow is found, the action defined in the Action field is performed [6]. These actions include forwarding the packet to a particular port in order to be routed through the network, forwarding the packet to be examined by the controller, or rejecting the packet. If the packet cannot be matched to any flow, it is treated according to the action defined in a table-miss flow entry.
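The flow-table logic described above can be sketched in a few lines of Python. This is a simplified, first-match illustration with invented field names (a real OpenFlow table matches many more header fields and uses entry priorities), but it shows the match/action/statistics structure and the table-miss behaviour.

from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict      # header fields that must match, e.g. {"dst_ip": "10.0.0.2"}
    action: str      # e.g. "forward:3", "controller" or "drop"
    packets: int = 0 # per-entry statistics
    bytes: int = 0

def lookup(table, packet, table_miss_action="controller"):
    """Return the action for `packet` and update the matching entry's counters.
    `packet` is a dict of header fields plus its 'size' in bytes."""
    for entry in table:
        if all(packet.get(field) == value for field, value in entry.match.items()):
            entry.packets += 1
            entry.bytes += packet["size"]
            return entry.action
    return table_miss_action       # behaviour of the table-miss flow entry

if __name__ == "__main__":
    table = [
        FlowEntry({"dst_ip": "10.0.0.2"}, "forward:3"),
        FlowEntry({"dst_ip": "10.0.0.9", "tcp_port": 22}, "drop"),
    ]
    pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "tcp_port": 80, "size": 1500}
    print(lookup(table, pkt))                                 # forward:3
    print(lookup(table, {"dst_ip": "10.0.0.7", "size": 64}))  # controller (table miss)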

The exchange of information between the switch and the controller happens by sending messages through a secure channel in a standardized way defined by the OpenFlow protocol. This way, the controller can manipulate the flows found in the flow table of the switch (i.e. add, update or delete a flow entry) either proactively or reactively, as discussed in the basic controller principles. Since the controller is able to communicate with the switch using the OpenFlow protocol, there is no longer a need for network operators to interact directly with the switch.

OpenFlow first gained popularity with service providers including Google, and many hardware and software vendors, including Alcatel-Lucent, Brocade, Cisco, Dell, F5, HP, Juniper Networks, NEC, Plexxi, and VMware, support it as members of the ONF. The foundation has not released the standard as an open source specification, but instead allows members to license it for use in products.

Fig. 2: The OpenFlow model

SDN USING APIS
Application programming interfaces (APIs) are an alternate way to provide the abstraction necessary for SDN, along with a highly programmable infrastructure. APIs provide a channel by which instructions can be sent to a device to program it. Programmers can read API documentation to understand the device and code the appropriate commands into their applications. In SDN, APIs are called "northbound" or "southbound," depending on where they function in the architecture.

APIs that reside on a controller and are used by applications to send instructions to the controller are northbound, because the communication takes place north of the controller. Southbound APIs reside on network devices such as switches. These are used by the SDN controller to provision the network, with the communication taking place south of the controller.
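As a hedged illustration of a northbound API, the snippet below pushes a flow rule to a controller over REST with the requests library. The URL, JSON schema and field names are entirely hypothetical: each controller (OpenDaylight, ONOS, Ryu, and so on) defines its own northbound endpoints, so consult the controller documentation for the real interface.

import requests

# Hypothetical northbound endpoint and payload; not a real controller API.
CONTROLLER_URL = "http://192.0.2.10:8181/api/flows"

flow_rule = {
    "switch": "openflow:1",
    "match": {"dst_ip": "10.0.0.2/32"},
    "action": {"output_port": 3},
    "priority": 100,
}

def push_flow(rule):
    """Send the rule to the controller (northbound); the controller then
    programs the switches southbound, e.g. via OpenFlow."""
    response = requests.post(CONTROLLER_URL, json=rule, timeout=5)
    response.raise_for_status()
    return response.status_code

if __name__ == "__main__":
    print("controller replied with HTTP", push_flow(flow_rule))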

SDN NETWORK OVERLAY
Another SDN option is a network overlay. Virtual, or overlay, networks are a way to build a logical network on top of another network for purposes like emulation of physical resources, development of new protocols, reserved services and customized topologies [8]. In this case, rather than building an entire logical SDN network from scratch, the SDN implementation is built as an overlay in order to leverage a physical network that already exists. The overlay is created using virtual switches inside hypervisors.

Figure 3: SDN using APIs

These set up tunnels that make use of the underlying physical network, but do not need to actually configure the hardware to send traffic to its destination.

Emerging protocols including VXLAN, STT, and NVGRE make this possible by using network encapsulation. Several vendors, most notably VMware, offer overlay network solutions. Network virtualization (an abstraction of the physical network in terms of a logical network) clearly does not require SDN. Similarly, SDN (the separation of a logically centralized control plane from the underlying data plane) does not imply network virtualization. Interestingly, however, a symbiosis between network virtualization and SDN has emerged, which has begun to catalyze several new research areas [12].

BENEFITS OF SDN
Why should SDN be considered, especially if it is still in development? The technology has the potential to make significant improvements to service request response times, security, and reliability. It could also reduce costs by automating many processes that are currently done manually and by allowing IT departments to replace (at least in some cases) high-margin devices with commodity hardware. The SDN architecture is:

• Directly programmable: Network control is directly programmable because it is decoupled from forwarding functions.
• Agile: Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flow to meet changing needs.
• Centrally managed: Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
• Programmatically configured: SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software.
• Open standards-based and vendor-neutral: When implemented through open standards, SDN simplifies network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols.

The other benefits of applying SDN principles in different types of networks, the unification of heterogeneous environments, and the wide range of applications that this paradigm offers demonstrate its very high potential to become a major commercial driving force in the very near future, especially for cloud-service providers, network operators and mobile [7] carriers. It remains to be seen whether these predictions will be confirmed and to what extent SDN will deliver on its promises.

CONCLUSION
Having seen the basic concepts of SDN and some important applications of this approach, the impact of SDN on the research community and the industry can be seen easily. While the focus of each interested party might be different, from designing novel solutions exploiting the benefits of SDN to developing SDN-enabled products ready to be deployed in commercial environments, their involvement in the evolution of SDN helps in shaping the future of this technology.

Looking at the motivation and focus of current SDN-related efforts provides an indication of what will potentially drive future research in this field. By applying SDN principles, organisations are able to choose networking hardware according to the features they require, while developing innovative software solutions.

REFERENCES
[1] White Paper, "Software-Defined Networking: Why We Like It and How We Are Building On It."
[2] "SDN Vendor Comparison," available from http://www.networkcomputing.com/networking/sdn-vendor-comparison-launches/d/d-id/1234183?
[3] "Introduction of SDN," available from http://en.wikipedia.org/wiki/Software_defined_networking
[4] Hiroki Nakayama, "An implementation model and solutions for stepwise introduction of SDN."
[5] Xenofon Foukas, Mahesh K. Marina, Kimon Kontovasilis, "Software Defined Networking Concepts."
[6] McKeown, Nick, et al. "OpenFlow: enabling innovation in campus networks." ACM SIGCOMM Computer Communication Review 38.2 (2008): 69-74.
[7] Campbell, Andrew T., et al. "Open signaling for ATM, internet and mobile networks (OPENSIG'98)." ACM SIGCOMM Computer Communication Review 29.1 (1999): 97-108.
[8] Tennenhouse, David L., et al. "A survey of active network research." IEEE Communications Magazine, 35.1 (1997): 80-86.
[9] Van der Merwe, Jacobus E., et al. "The tempest - a practical framework for network programmability." IEEE Network 12.3 (1998): 20-28.
[10] "Devolved Control of ATM Networks," available from http://www.cl.cam.ac.uk/research/srg/netos/old-projects/dcan/.
[11] Qadir, Junaid, Nadeem Ahmed, and Nauman Ahad. "Building Programmable Wireless Networks: An Architectural Survey." arXiv preprint arXiv:1310.0251 (2013).
[12] Feamster, Nick, Jennifer Rexford, and Ellen Zegura. "The road to SDN." ACM Queue 11.12 (2013): 20-40.

Indu Verma is working as an Assistant Professor at Ajay Kumar Garg Engineering College, Ghaziabad, U.P. (India). She obtained her M.Tech (Computer Engineering) with Hons. from Shobhit University and her MCA from U.P. Technical University, Lucknow (U.P.). She has been teaching for the last 8 years and has been a member of several academic and administrative committees. During her teaching tenure she has also coordinated a National Conference and many technical fests at the college level. She has attended several seminars, workshops and conferences at various levels, and has papers published at national and international conferences. Her areas of research include Computer Networks, Network Security and Databases.


SKINPUT TECHNOLOGY

Harnit Saini

Assistant Professor, MCA Department, AKGEC, GZB
E-mail: [email protected]

Abstract— Microsoft has developed Skinput, a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as an input surface. In particular, it resolves the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. It collects these signals using a novel array of sensors worn as an armband. This approach provides an always-available, naturally portable, on-body finger input system. The authors assess the capabilities, accuracy and limitations of the technique through a two-part, twenty-participant user study. To further illustrate the utility of the approach, they conclude with several proof-of-concept applications they developed.

1. INTRODUCTION
The primary goal of Skinput is to provide an always-available mobile input system, that is, an input system that does not require a user to carry or pick up a device. A number of alternative approaches have been proposed that operate in this space. Techniques based on computer vision are popular. These, however, are computationally expensive and error-prone in mobile scenarios (where, e.g., non-input optical flow is prevalent). Speech input is a logical choice for always-available input, but is limited in its precision in unpredictable acoustic environments, and suffers from privacy and scalability issues in shared environments. Other approaches have taken the form of wearable computing.

This typically involves a physical input device built in a form considered to be part of one's clothing. For example, glove-based input systems allow users to retain most of their natural hand movements, but are cumbersome, uncomfortable, and disruptive to tactile sensation. Post and Orth present a "smart fabric" system that embeds sensors and conductors into fabric, but taking this approach to always-available input necessitates embedding technology in all clothing, which would be prohibitively complex and expensive. The SixthSense project proposes a mobile, always-available input/output capability by combining projected information with a color-marker-based vision tracking system. This approach is feasible, but suffers from serious occlusion and accuracy limitations. For example, determining whether a finger has tapped a button, or is merely hovering above it, is extraordinarily difficult.

2. BIO-SENSING
Skinput leverages the natural acoustic conduction properties of the human body to provide an input system, and is thus related to previous work in the use of biological signals for computer input [1]. Signals traditionally used for diagnostic medicine, such as heart rate and skin resistance, have been appropriated for assessing a user's emotional state. These features are generally subconsciously driven and cannot be controlled with sufficient precision for direct input. Similarly, brain sensing technologies such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIR) have been used by HCI researchers to assess cognitive and emotional state; this work also primarily looked at involuntary signals. In contrast, brain signals have been harnessed as a direct input for use by paralyzed patients, but direct brain-computer interfaces (BCIs) still lack the bandwidth required for everyday computing tasks, and require levels of focus, training, and concentration that are incompatible with typical computer interaction [2].

There has been less work relating to the intersection of finger input and biological signals. Researchers have harnessed the electrical signals generated by muscle activation during normal hand movement through electromyography (EMG). At present, however, this approach typically requires expensive amplification systems and the application of conductive gel for effective signal acquisition, which would limit the acceptability of this approach for most users [3]. The input technology most related to Skinput is that of Amento et al., who placed contact microphones on a user's wrist to assess finger movement. However, this work was never formally evaluated and is constrained to finger motions in one hand. The Hambone system employs a similar setup and, through an HMM, yields classification accuracies around 90% for four gestures (e.g., raise heels, snap fingers). Performance of false positive rejection remains untested in both systems at present. Moreover, both techniques required the placement of sensors near the area of interaction (e.g., the wrist), increasing the degree of invasiveness and visibility. Finally, bone conduction microphones and headphones, now common consumer technologies, represent an additional bio-sensing technology that is relevant to the present work [4]. These leverage the fact that sound frequencies relevant to human speech propagate well through bone. Bone conduction microphones are typically worn near the ear, where they can sense vibrations propagating from the mouth and larynx during speech. Bone conduction headphones send sound through the bones of the skull and jaw directly to the inner ear, bypassing transmission of sound through the air and outer ear, leaving an unobstructed path for environmental sounds.

Figure: The Skinput system rendering a series of buttons on the arm. Users can press the buttons directly, with their fingers, much like a touch screen [5].

3. CONCLUSION
Skinput is an input technology that uses bio-acoustic sensing to localize finger taps on the skin. When augmented with a pico-projector, the device can provide a direct-manipulation, graphical user interface on the body. The technology was developed by Chris Harrison, Desney Tan, and Dan Morris at Microsoft Research's Computational User Experiences Group. Skinput represents one way to decouple input from electronic devices with the aim of allowing devices to become smaller without simultaneously shrinking the surface area on which input can be performed. While other systems, like SixthSense, have attempted this with computer vision, Skinput employs acoustics, which take advantage of the human body's natural sound conductive properties (e.g., bone conduction). This allows the body to be annexed as an input surface without the need for the skin to be invasively instrumented with sensors, tracking markers, or other items. Microsoft has not commented on the future of the project, other than that it is under active development. It has been reported that this may not appear in commercial devices for at least two years.

Ten channels of acoustic data are generated by three finger taps on the forearm, followed by three taps on the wrist; the exponential average of the channels and the segmented input windows show how different sensing elements are actuated by the two locations.

Skinput has been publicly demonstrated as an armband, which sits on the biceps. This prototype contains ten small cantilevered piezo elements configured to be highly resonant, sensitive to frequencies between 25 and 78 Hz. This configuration acts like a mechanical Fast Fourier transform and provides extreme out-of-band noise suppression, allowing the system to function even while the user is in motion. From the upper arm, the sensors can localize finger taps provided to any part of the arm, all the way down to the fingertips, with accuracies in excess of 90% (as high as 96% for five input locations). Classification is driven by a support vector machine using a series of time-independent acoustic features that act like a fingerprint. Like speech recognition systems, the Skinput recognition engine must be trained on the "sound" of each input location before use.

After training, locations can be bound to interactive functions, such as pause/play song, increase/decrease music volume, speed dial, and menu navigation. With the addition of a pico-projector to the armband, Skinput allows users to interact with a graphical user interface displayed directly on the skin. This enables several interaction modalities, including button-based hierarchical navigation, list-based sliding navigation (similar to an iPod/smartphone/MID), text/number entry (e.g., a telephone number keypad), and gaming (e.g., Tetris, Frogger).
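To make the classification step concrete, here is a minimal sketch that trains a support vector machine on synthetic feature vectors standing in for the per-channel acoustic features; it uses scikit-learn in Python and is purely illustrative, not the authors' implementation or feature set.

```python
# Illustrative SVM classification of tap locations from acoustic feature vectors.
# Synthetic data only -- not the Skinput authors' features or code.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_locations, taps_per_location, n_features = 5, 30, 40

# Fake training data: each input location yields its own cluster of feature vectors,
# standing in for the "fingerprint" of time-independent acoustic features.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(taps_per_location, n_features))
               for i in range(n_locations)])
y = np.repeat(np.arange(n_locations), taps_per_location)

clf = SVC(kernel="rbf")   # an SVM, as in the paper; the kernel choice here is arbitrary
clf.fit(X, y)             # "training on the sound" of each input location

new_tap = rng.normal(loc=2, scale=0.5, size=(1, n_features))
print("predicted location:", clf.predict(new_tap)[0])   # expected: location 2
```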

REFERENCES1. "Skinput:Appropriating the Body as an Input Surface".

Microsoft Research Computational User ExperiencesGroup. Retrieved 26 May 2010.

2. Harrison, Chris; Tan, Desney; Morris, Dan (10–15 April2010). "Skinput: Appropriating the Body as an InputSurface" (PDF). proceedings of the ACM CHI conference2010.

3. Goode, Lauren (26 April 2010). "The Skinny on TouchTechnology". Wall Street Journal.

4. Sutter, John (19 April 2010). "Microsoft's Skinput turnshands, arms into buttons". CNN.

5. Ward, Mark (26 March 2010). "Sensors turn skin intogadget control pad". BBC News.

Harnit Saini did her Bachelor of Computer Applications from Kanya Maha Vidyalya, Jalandhar, Punjab, affiliated to Guru Nanak Dev University, Amritsar, Punjab, in the year 2002. She did her Master of Computer Applications with honours from Punjab Institute of Engineering and Technology, Mandi Gobindgarh, Punjab, affiliated to Punjab Technical University, Jalandhar, Punjab, in the year 2005. She did her Master of Technology degree in Computer Science and Engineering from Ajay Kumar Garg Engineering College, Ghaziabad in the year 2013. She attended the national conference on Development of Reliable Information Systems, Techniques and Related Issues during her M.Tech. at Ajay Kumar Garg Engineering College, Ghaziabad in February 2012. She also attended a workshop on Formal Languages, Automata Theory and Computations at Ajay Kumar Garg Engineering College, Ghaziabad in April 2012. She has published two papers during her M.Tech in various international journals. She is an active member of IEEE. She possesses good moral values and calmness. She is ready to face challenges at every moment of life. Faith in God is her biggest strength.


SYSTEM OVERVIEW OF EARTH SIMULATOR
Mani Dwivedi

Assistant Professor, MCA Department, AKGEC, GZB
E-mail: [email protected]

Abstract— The Earth Simulator (ES), developed under the Japanese government's "Earth Simulator Project" initiative, was a highly parallel vector supercomputer system for running global climate models to evaluate the effects of global warming and problems in solid earth geophysics. The system was developed for the Japan Aerospace Exploration Agency, the Japan Atomic Energy Research Institute, and the Japan Marine Science and Technology Center (JAMSTEC) in 1997. Construction started in October 1999, and the site officially opened on March 11, 2002. The project cost 60 billion yen. ES was replaced by the Earth Simulator 2 (ES2) in March 2009 [1]. ES2 is an NEC SX-9/E system with a quarter as many nodes, each of 12.8 times the performance (3.2x clock speed, four times the processing resource per node), for a peak performance of 131 TFLOPS. With a delivered LINPACK performance of 122.4 TFLOPS [2], ES2 was the most efficient supercomputer in the world at that point. In November 2010, NEC announced that ES2 topped the Global FFT, one of the measures of the HPC Challenge Awards, with a performance number of 11.876 TFLOPS [3].

1. INTRODUCTION
The Earth Simulator Center has several special features that help to protect the computer from natural disasters or occurrences. A wire nest hangs over the building, which helps to protect it from lightning. The nest itself uses high-voltage shielded cables to release lightning current into the ground. A special light propagation system utilizes halogen lamps, installed outside of the shielded machine room walls, to prevent any magnetic interference from reaching the computers. The building is constructed on a seismic isolation system, composed of rubber supports, that protects the building during earthquakes. Built by NEC, ES was based on their SX-6 architecture. It consisted of 640 nodes with eight vector processors and 16 gigabytes of computer memory at each node, for a total of 5120 processors and 10 terabytes of memory. Two nodes were installed per 1 metre x 1.4 metre x 2 metre cabinet. Each cabinet consumed 20 kW of power. The system had 700 terabytes of disk storage (450 for the system and 250 for the users) and 1.6 petabytes of mass storage in tape drives. It was able to run holistic simulations of global climate in both the atmosphere and the oceans down to a resolution of 10 km. Its performance on the LINPACK benchmark was 35.86 TFLOPS, which was almost five times faster than its predecessor, ASCI White [1,2].

2. HARDWARE
The Earth Simulator was developed as a national project by three governmental agencies: the National Space Development Agency of Japan (NASDA), the Japan Atomic Energy Research Institute (JAERI), and the Japan Marine Science and Technology Center (JAMSTEC). The ES is housed in the Earth Simulator Building (approx. 50 m x 65 m x 17 m). The upgrade of the Earth Simulator was completed in March 2009; the renewed system (ES2) uses 160 nodes of NEC's SX-9E.

3. SYSTEM OVERVIEW
3.1 System Configuration
The ES is a highly parallel vector supercomputer system of the distributed-memory type, consisting of 160 processor nodes connected by a Fat-Tree network. Each processor node is a shared-memory system consisting of 8 vector-type arithmetic processors and a 128-GB main memory system. The peak performance of each arithmetic processor is 102.4 Gflops. The ES as a whole thus consists of 1280 arithmetic processors with 20 TB of main memory and a theoretical peak performance of 131 Tflops [1].
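As a quick sanity check, the aggregate figures follow directly from the per-node specification; the short Python sketch below simply reproduces the arithmetic.

```python
# Illustrative arithmetic for the ES2 configuration described above.
nodes = 160                # processor nodes
aps_per_node = 8           # vector arithmetic processors per node
gflops_per_ap = 102.4      # peak Gflops per arithmetic processor
mem_per_node_gb = 128      # main memory per node in GB

total_aps = nodes * aps_per_node                   # 1280 arithmetic processors
peak_tflops = total_aps * gflops_per_ap / 1000.0   # about 131 Tflops
total_mem_tb = nodes * mem_per_node_gb / 1024.0    # 20 TB of main memory

print(total_aps, round(peak_tflops, 1), total_mem_tb)
```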

3.2 Construction of CPU
Each CPU consists of a 4-way super-scalar unit (SU), a vector unit (VU), and a main memory access control unit on a single LSI chip. The CPU operates at a clock frequency of 3.2 GHz. Each VU has 72 vector registers, each of which has 256 vector elements, along with 8 sets of six different types of vector pipelines: addition/shifting, multiplication, division, logical operations, masking, and load/store. Vector pipelines of the same type work together under a single vector instruction, and pipelines of different types can operate concurrently.

3.3 Processor Node (PN)
The processor node is composed of 8 CPUs and 10 memory modules.

3.4 Interconnection Network (IN)
The RCU is directly connected to the crossbar switches and controls inter-node data communications at a 64 GB/s bidirectional transfer rate for both sending and receiving data. Thus the total bandwidth of the inter-node network is about 10 TB/s.

3.5 Processor Node (PN) Cabinet
One cabinet houses two processor nodes and consists of the power supply part, 8 memory modules, and a PCI box with 8 CPU modules.


3.6 Software
All of the software available on the ES2 system is designed and developed so that users can fully and readily exploit the outstanding performance of the world's largest computer. Below is a description of the software technologies used in the operating system, job scheduling, and the programming environment of ES2 [1].

3.7 Operating system
The operating system running on ES is developed for NEC's SX Series supercomputers. (The SX series are vector supercomputers designed, manufactured, and marketed by NEC.) SUPER-UX is an operating system based on UNIX System V that takes functions from BSD and SVR4.2MP and adds the functionality necessary for a supercomputer. (SUPER-UX, Berkeley Software Distribution (BSD), and SVR4.2MP are Unix-based operating systems.) [1,2]

3.8 Mass Storage File System
If a large parallel job running on 640 PNs reads from or writes to one disk installed in a PN, each PN accesses the disk in sequence and performance degrades terribly. Although local I/O, in which each PN reads from or writes to its own disk, solves this problem, it is very hard to manage such a large number of partial files. ES therefore adopts Staging and a Global File System (GFS) that offer high-speed I/O performance [2].

3.9 Job Scheduling
ES is basically a batch-job system. Network Queuing System II (NQSII) is introduced to manage the batch jobs. ES has two types of queues: the S batch queue is designed for single-node batch jobs, and the L batch queue is for multi-node batch jobs. The S batch queue is aimed at pre-runs or post-runs for large-scale batch jobs (making initial data, processing the results of a simulation, and other processes), and the L batch queue is for production runs. As a result, large-scale parallel programs can be executed efficiently. PNs of the L-system are prohibited from accessing the user disk to ensure sufficient disk I/O performance; therefore, the files used by a batch job are copied from the user disk to the work disk before job execution. This process is called "stage-in." It is important to hide this staging time in the job scheduling. The main steps of job scheduling are summarized as follows:
1. Node Allocation
2. Stage-in (copies files from the user disk to the work disk automatically)
3. Job Escalation (rescheduling for an earlier estimated start time, if possible)
4. Job Execution
5. Stage-out (copies files from the work disk to the user disk automatically).

When a new batch job is submitted, the scheduler searches for available nodes (Step 1). After the nodes and the estimated start time are allocated to the batch job, the stage-in process starts (Step 2). The job then waits until the estimated start time after the stage-in process is finished. If the scheduler finds an earlier start time than the estimated start time, it allocates the new start time to the batch job; this process is called "Job Escalation" (Step 3). When the estimated start time has arrived, the scheduler executes the batch job (Step 4). The scheduler terminates the batch job and starts the stage-out process after the job execution is finished or the declared elapsed time is over (Step 5). To execute a batch job, the user logs into the login server and submits the batch script to ES, then waits until the job execution is done. During that time, the user can see the state of the batch job using a conventional web browser or user commands. Node scheduling, file staging, and other processing are automatically performed by the system according to the batch script [1].
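The five steps can be pictured as a small loop around the work disk; the toy Python sketch below only illustrates the flow with made-up names and timings and is not NQSII code.

```python
# Toy walk-through of the five ES batch-job steps described above.
# Purely illustrative: names and timings are hypothetical, not NQSII behaviour.
import time

def run_batch_job(job_name, input_files):
    print(f"[1] Node allocation for {job_name}")
    estimated_start = time.time() + 2.0              # scheduler's estimated start (toy value)

    print(f"[2] Stage-in: copy {input_files} from the user disk to the work disk")

    earlier_slot = time.time() + 1.0                 # an earlier slot became free
    if earlier_slot < estimated_start:
        print("[3] Job escalation: start time moved earlier")
        estimated_start = earlier_slot

    time.sleep(max(0.0, estimated_start - time.time()))
    print(f"[4] Job execution of {job_name} against the work disk")

    print(f"[5] Stage-out: copy results of {job_name} back to the user disk")

run_batch_job("climate_run_01", ["init_data.nc"])
```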

3.10 Programming Model in ES
The ES hardware has a 3-level hierarchy of parallelism: vector processing in an AP, parallel processing with shared memory in a PN, and parallel processing among PNs via the IN. To bring out the high performance of ES fully, you must develop parallel programs that make the most of this parallelism. The 3-level hierarchy of parallelism of ES can be used in two manners, called hybrid and flat parallelization. In hybrid parallelization, the inter-node parallelism is expressed by HPF or MPI and the intra-node parallelism by microtasking or OpenMP, so you must consider the hierarchical parallelism when writing your programs. In flat parallelization, both inter- and intra-node parallelism can be expressed by HPF or MPI, and it is not necessary to consider such complicated parallelism. Generally speaking, hybrid parallelization is superior to flat parallelization in performance, and vice versa in ease of programming. Note that the MPI libraries and the HPF runtimes are optimized to perform as well as possible in both hybrid and flat parallelization.
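As a small illustration of the flat style, the sketch below divides a reduction across MPI processes; it uses Python with mpi4py purely to keep the examples in one language, whereas on ES itself this would be Fortran 90 or C with the vendor MPI (plus microtasking or OpenMP inside a node for the hybrid style).

```python
# Flat-parallelization sketch: every MPI process handles an equal slice of the work.
# Illustrative only -- on ES this would be Fortran 90 or C with the vendor MPI.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 1_000_000
chunk = n // size
lo = rank * chunk
hi = n if rank == size - 1 else lo + chunk          # last rank absorbs the remainder

local_sum = np.arange(lo, hi, dtype=np.float64).sum()   # each process sums its slice
total = comm.reduce(local_sum, op=MPI.SUM, root=0)      # combine the partial results

if rank == 0:
    print("sum over all processes:", total)
```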

3.11 Languages
Compilers for Fortran 90, C and C++ are available. All of them have advanced capabilities for automatic vectorization and microtasking. Microtasking is a sort of multitasking originally provided for Cray supercomputers, and it is also used for intra-node parallelization on ES. Microtasking can be controlled by inserting directives into source programs or by using the compiler's automatic parallelization. (Note that OpenMP is also available in Fortran 90 and C++ for intra-node parallelization.)

3.12 Message Passing Interface (MPI)
MPI is a message passing library based on the MPI-1 and MPI-2 standards and provides high-speed communication capability that fully exploits the features of the IXS and shared memory.


It can be used for both intra- and inter-node parallelization. An MPI process is assigned to an AP in flat parallelization, or to a PN that contains microtasks or OpenMP threads in hybrid parallelization. MPI libraries are designed and carefully optimized to achieve the highest communication performance on the ES architecture in both parallelization manners.

3.13 High Performance Fortran (HPF)
The principal users of ES are considered to be natural scientists who are not necessarily familiar with parallel programming, or who rather dislike it. Accordingly, a higher-level parallel language is in great demand, and HPF/SX provides easy and efficient parallel programming on ES to meet this demand. It supports the specifications of HPF 2.0, its approved extensions, HPF/JA, and some unique extensions for ES.

3.14 Tools
(1) Integrated development environment (PSUITE): PSUITE is an integration of various tools for developing programs that run under SUPER-UX. Because PSUITE makes these tools available through a GUI and coordinates them with one another, programs can be developed more efficiently and easily than with earlier development methods [3].

(2) Debug Support: In SUPER-UX, the following are prepared as strong debugging support functions to aid program development.

3.15 Lightning Protection System
Three basic features:
• Four poles at both sides of the Earth Simulator Building compose a wire nest to protect the building from lightning strikes.
• A special high-voltage shielded cable is used as the inductive wire, which releases lightning current into the earth.
• Ground plates are laid about 10 meters apart from the building.

3.16 Illumination
Lighting: a light propagation system inside a tube (255 mm diameter, 44 m (49 yd) length, 19 tubes). Light source: 1 kW halogen lamps. Illumination: 300 lx at the floor on average. The light sources are installed outside the shielded machine room walls.

3.17 Seismic Isolation System
11 isolators (1 ft height, 3.3 ft diameter, 20-layered rubbers) support the bottom of the ES building.

4. CURRENT PROJECTS
The following is a list of the most recent and most important projects performed on the Earth Simulator. Most projects have a corresponding research page:

• Atmosphere and Ocean Simulation Research Group
• Solid Earth Simulation Research Group
• Multiscale Simulation Research Group
• Advanced Perception Research Group
• Holistic Simulation Research Project
• Industrial Application Group

5. CONCLUSION
An overview of the Earth Simulator has been discussed in this paper. ES was the fastest supercomputer in the world from 2002 to 2004; its capacity was surpassed by IBM's Blue Gene/L prototype on September 29, 2004. The ES hardware and system overview have also been discussed.

REFERENCES1. "Japan's Earth Simulator 2 open for business". 1 March

2009.2. "Earth Simulator update breaks efficiency record". 5 June

2009.3. "Earth Simulator" Wins First Place in the HPC Challenge

Awards". 17 November 2010.

Mani Dwivedi did her MCA from IIMT Management College, Meerut, affiliated to Mahamaya Technical University, Noida, in the year 2008. She also received her M.Tech (Computer Science Engineering) degree from Mahamaya Technical University, Noida. She is currently working in the MCA department of Ajay Kumar Garg Engineering College, Ghaziabad.


BIG DATA IN THE CLOUD: CONVERGING TECHNOLOGIES

Anjali Singh
Assistant Professor, MCA Department, AKGEC, GZB
E-mail: [email protected]

Abstract— Two IT initiatives are currently top of mind for organizations across the globe: big data analytics and cloud computing. Big data analytics offers the promise of providing valuable insights that can create competitive advantage, spark new innovations, and drive increased revenues. As a delivery model for IT services, cloud computing has the potential to enhance business agility and productivity while enabling greater efficiencies and reducing costs. Both technologies continue to evolve. Organizations are moving beyond questions of what and how to store big data to addressing how to derive meaningful analytics that respond to real business needs. As cloud computing continues to mature, a growing number of enterprises are building efficient and agile cloud environments, and cloud providers continue to expand service offerings.

1. INTRODUCTION
This paper describes how cloud and big data technologies are converging to offer a cost-effective delivery model for cloud-based big data analytics. It also includes:
• How cloud computing is an enabler for advanced analytics with big data.
• How IT can assume leadership for cloud-based big data analytics in the enterprise by becoming a broker of cloud services.
• Analytics-as-a-service (AaaS) models for cloud-based big data analytics.
• Practical next steps to get you started on your cloud-based big data analytics initiative.
It makes sense, then, that IT organizations should look to cloud computing as the structure to support their big data projects. Big data environments require clusters of servers to support the tools that process the large volumes, high velocity, and varied formats of big data. Clouds are already deployed on pools of server, storage, and networking resources and can scale up or down as needed. Cloud computing offers a cost-effective way to support big data technologies and the advanced analytics applications that can drive business value.

Major problems to be discussed:
• How cloud computing is an enabler for advanced analytics with big data.
• How IT can assume leadership for cloud-based big data analytics in the enterprise by becoming a broker of cloud services.
• Analytics-as-a-service models for cloud-based big data analytics.
• Practical next steps to get you started on your cloud-based big data analytics initiative.

2. CLOUD AND BIG DATA: A COMPELLING COMBINATION

Using cloud infrastructure to analyze big data makes sense because:

Investments in big data analysis can be significant and drive a need for efficient, cost-effective infrastructure. The resources to support distributed computing models in-house typically reside in large and midsize data centers. Private clouds can offer a more efficient, cost-effective model to implement analysis of big data in-house, while augmenting internal resources with public cloud services. This hybrid cloud option enables companies to use on-demand storage space and computing power via public cloud services for certain analytics initiatives (for example, short-term projects), and provides added capacity and scale as needed.

Big data may mix internal and external sources. While enterprises often keep their most sensitive data in-house, huge volumes of big data (owned by the organization or generated by third-party and public providers) may be located externally—some of it already in a cloud environment. Moving relevant data sources behind your firewall can be a significant commitment of resources. Analyzing the data where it resides—either in internal or public cloud data centers or in edge systems and client devices—often makes more sense.

Data services are needed to extract value from big data. Depending on requirements and the usage scenario, the best use of your IT budget may be to focus on analytics as a service (AaaS)—supported by your internal private cloud, a public cloud, or a hybrid model.

Analytics as a Service Insight Framework
You can address user needs across the full range of analytics requirements with cloud-based AaaS—from data delivery and management to data usage. By developing a comprehensive cloud-based big data strategy, you can define an insight framework and optimize the total value of enterprise data.


An AaaS insight framework encompasses the following key capabilities:

Capturing and extracting structured and unstructured data from trusted sources, including prioritizing the most critical data and identifying what to retain and for how long.

Managing and controlling data under comprehensive policy and governance guidelines across a global enterprise and in compliance with specific industry requirements.

Performing data integration, analysis, transformation, and visualization to deliver the right information to the right location at the right time.

Intel Infrastructure Technologies for Cloud and Big Data Analytics

Servers based on the Intel® Xeon® processor E5 and E7 families are at the heart of infrastructure that supports both cloud and big data environments, providing industry-leading, highly efficient, high-performance computing. In addition:

Intel Xeon processor E5 family-based storage servers support advanced storage capabilities such as compression, encryption, automated tiering of data, data deduplication, erasure coding, and thin provisioning, and are ideal for storing and processing large volumes of data. These compute-intensive storage tasks provide enhanced security, greater efficiencies, and better total cost of ownership through a reduced storage footprint.

Intel Solid-State Drives (SSDs) are high-throughput, high-endurance drives for raw storage. Intel Ethernet 10 Gigabit Converged Network Adapters provide high-throughput connections for large data sets.

Intel also provides hardware-enhanced security capabilities, including Intel Data Protection Technology with Advanced Encryption Standard New Instructions (Intel AES-NI) [4], which speeds data encryption and decryption up to 10 times. Also, Intel Platform Protection Technology with Trusted Execution Technology (Intel TXT) can provide a hardware root of trust to ensure that data is processed on or migrated to trusted pools of servers.

IT Playing to Win with Big Data Analytics
In a growing number of companies, business users already consume IT as a service. IT can continue to extend this role to brokering cloud-based big data analytics services. As a cloud services broker, your role is to weigh user needs against the available delivery options for your organization. This means developing a strategy for private, public, and hybrid services; driving discipline into the selection of cloud service providers; and negotiating and establishing contracts with potential cloud service providers, among other similar tasks. Organizationally, this can reduce risk and better utilize existing investments in private cloud technologies. Individual users benefit by getting the right solution to meet their needs.


IT as a Broker of Cloud Services
Cloud computing and the myriad of public cloud services available to businesses have made it easier for two of the heaviest users of analytics—line-of-business owners and the chief marketing officer—to bypass IT and purchase services directly. However, uninformed business users may be tempted to buy "instant analytics," and ad hoc methods for adopting public cloud services throughout your organization can cause significant problems—such as choosing the wrong vendor, losing control of your sensitive data, and getting a poor return on investment, to name a few. IT offers specific services, perspectives, and skills that can reduce the risk of using public clouds and better utilize existing private cloud resources.

Big data also demands a new set of skills in the enterprise—many of which reside in IT. IT departments can offer the technology know-how needed to help make cloud computing and big data work in your organization, including Hadoop administrators and developers and specialists in Hadoop components such as the Apache HBase* database. Big data analytics projects involve multidisciplinary teams, and IT members must be active collaborators with data scientists, another emerging big data–related role. Data scientists are individuals who apply big data to complex business problems and make sense of the results. While they may sit in the business, they also can be part of the IT organization.

IT Playing to Win with Big Data Analytics
IT can quickly demonstrate value to the business by partnering with users to:
• Select the right private or public cloud implementation for their needs by defining technology requirements, assessing risk, and specifying deployment requirements based on corporate governance policies and regulatory compliance requirements.


For example, certain workloads may have to be managed in a private cloud in a specific location.
• Build or work effectively with a technology partner to develop services as required.
• Evaluate and vet outside services for design, delivery, customization, pricing, privacy, integration, security, and support.
• Provision services from internal and external sources so that they appear seamless to users.
• Develop relationships with vetted cloud service providers.
• Manage existing services, including service level agreements (SLAs) and service life cycle.

As a service broker, IT collaborates with the business on the best way to use technology for competitive advantage. With cloud-based big data analytics, the objective must be to provide the right solution for users' needs, balanced against corporate governance policies, existing IT resources, performance requirements, and overall business goals. In most IT departments today, providing this consultative approach to service will require IT to reorganize to remove silos, hire or develop team members with new skills, and encourage a strong partnership with the business.

REFERENCES
1. Groenfeldt, Tom. "Big Data—Big Money Says It Is a Paradigm Buster." Forbes (January 6, 2012). forbes.com/sites/tomgroenfeldt/2012/01/06/big-data-big-money-says-it-is-a-paradigm-buster/
2. Peer Research: Big Data Analytics: Intel's IT Manager Survey on How Organizations Are Using Big Data. Intel IT Center (August 2012).
3. Ubuntu 2013 Server and Cloud Survey. Ubuntu Server (September 10, 2013).
4. No computer system can provide absolute security. Requires an enabled Intel processor and software optimized for use of the technology. Consult your system manufacturer and/or software vendor for more information.

Ms. Anjali Singh has teaching experience of 13 years and pursued her MCA and M.Tech degrees in the years 2001 and 2013 respectively. Her experience includes teaching different subjects such as Computer Networks, Multimedia Systems, Mobile Computing, Modelling and Simulation, Cyber Security, Human Values, etc.


THE NEW TRENDS IN DATABASE
Snehlata Kaul

Assistant Professor, MCA Department, AKGEC, GZB
E-mail: [email protected]

Abstract— The dynamic and swiftly evolving nature of systems-driven research imposes special requirements on the technology, approach, design and architecture of computational infrastructure, including the database management system. Several solutions have been proposed to meet these requirements, and novel methods have been developed to address the persisting problems of data integration. In this review I discuss the new and current trends and the key technologies underlying some of the prominent databases.

1. INTRODUCTION
Relational Database Management Systems (RDBMSs) continue to do the heavy lifting in data management, while newer database management systems are taking on prominent roles as well. NoSQL database systems such as Key-Value, Column Family, Graph, and Document databases are gaining acceptance due to their ability to handle unstructured and semi-structured data. MultiValue, sometimes called the fifth NoSQL database, is also a well-established database management technology which continues to evolve to address new enterprise requirements. Through this paper I will try to discuss some of the latest databases that address requirements a relational database cannot, such as:

• MongoDB
• CouchDB
• Oracle NoSQL Database
• OrientDB

1.1 MongoDB
MongoDB (from humongous) is a cross-platform document-oriented database. Classified as a NoSQL database, MongoDB eschews the traditional table-based relational database structure in favor of JavaScript Object Notation (JSON)-like documents with dynamic schemas, making the combination of data in certain types of applications easier and faster. Released under a combination of the GNU Affero General Public License and the Apache License, MongoDB is free and open-source software.

First developed by the software company 10gen (now MongoDB Inc.) in October 2007 as a component of a planned platform-as-a-service product, the company shifted to an open source development model in 2009, with 10gen offering commercial support and other services.[1] Since then, MongoDB has been adopted as backend software by a number of major websites and services, including Craigslist, eBay, Foursquare, SourceForge, Viacom, and The New York Times, among others. As of 2014, MongoDB was the most popular NoSQL database system.[2]

1.1.1 Features of MongoDB
Document-oriented
Instead of taking a business subject and breaking it up into multiple relational structures, MongoDB can store the business subject in a minimal number of documents.[3]

Ad hoc queries
MongoDB supports search by field, range queries, and regular-expression searches. Queries can return specific fields of documents and can also include user-defined JavaScript functions.

Indexing
Any field in a MongoDB document can be indexed (indices in MongoDB are conceptually similar to those in RDBMSes). Secondary indices are also available.
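As a brief illustration of ad hoc queries and secondary indexes, the sketch below uses the Python driver (PyMongo) against a hypothetical "shop" database; the collection and field names are invented for the example.

```python
# Ad hoc queries and indexing with PyMongo (hypothetical "shop" database).
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017/")
products = client["shop"]["products"]

products.insert_many([
    {"name": "lamp", "price": 20, "tags": ["home"]},
    {"name": "desk", "price": 150, "tags": ["office", "home"]},
])

# Secondary index on a field, conceptually similar to an RDBMS index.
products.create_index([("price", ASCENDING)])

# Range query combined with a regular-expression filter, returning selected fields only.
cheap = products.find(
    {"price": {"$lt": 100}, "name": {"$regex": "^l"}},
    {"name": 1, "price": 1, "_id": 0},
)
print(list(cheap))
```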

Replication
MongoDB provides high availability with replica sets. A replica set consists of two or more copies of the data. Each replica set member may act in the role of primary or secondary replica at any time. The primary replica performs all writes and reads by default. Secondary replicas maintain a copy of the data of the primary using built-in replication.

Load balancing
MongoDB scales horizontally using sharding. The user chooses a shard key, which determines how the data in a collection will be distributed. The data is split into ranges (based on the shard key) and distributed across multiple shards. (A shard is a master with one or more slaves.) MongoDB can run over multiple servers, balancing the load and/or duplicating data to keep the system up and running in case of hardware failure. [4]

File storageMongoDB can be used as a file system, taking advantage ofload balancing and data replication features over multiplemachines for storing files.

Aggregation
MapReduce can be used for batch processing of data and aggregation operations.


The aggregation framework enables users to obtain the kind of results for which the SQL GROUP BY clause is used.
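For instance, a grouping that SQL would express with GROUP BY can be written as an aggregation pipeline; the PyMongo sketch below reuses the hypothetical "shop" collection from the earlier example.

```python
# Aggregation pipeline: total price per tag (roughly what SQL's GROUP BY does).
from pymongo import MongoClient

products = MongoClient("mongodb://localhost:27017/")["shop"]["products"]

pipeline = [
    {"$unwind": "$tags"},                                      # one document per tag value
    {"$group": {"_id": "$tags", "total": {"$sum": "$price"}}},
    {"$sort": {"total": -1}},
]
for row in products.aggregate(pipeline):
    print(row["_id"], row["total"])
```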

Server-side JavaScript execution
JavaScript can be used in queries and aggregation functions (such as MapReduce), and can be sent directly to the database to be executed.

Capped collections
MongoDB supports fixed-size collections called capped collections. This type of collection maintains insertion order and, once the specified size has been reached, behaves like a circular queue.

1.2 CouchDB
CouchDB is a database that completely embraces the web. Store your data with JSON documents. Access your documents and query your indexes with your web browser, via HTTP. Index, combine, and transform your documents with JavaScript. CouchDB works well with modern web and mobile apps. You can even serve web apps directly out of CouchDB. And you can distribute your data, or your apps, efficiently using CouchDB's incremental replication. CouchDB supports master-master setups with automatic conflict detection.[5]

CouchDB comes with a suite of features, such as on-the-fly document transformation and real-time change notifications, that makes web app development a breeze. It even comes with an easy-to-use web administration console. You guessed it, served up directly out of CouchDB! We care a lot about distributed scaling. CouchDB is highly available and partition tolerant, but is also eventually consistent. And we care a lot about your data. CouchDB has a fault-tolerant storage engine that puts the safety of your data first.[6]

1.2.1 Features of CouchDB
Document Storage
CouchDB stores data as "documents", as one or more field/value pairs expressed as JSON. Field values can be simple things like strings, numbers, or dates, but ordered lists and associative arrays can also be used. Every document in a CouchDB database has a unique id and there is no required document schema.

ACID Semantics
CouchDB provides ACID semantics. It does this by implementing a form of Multi-Version Concurrency Control, meaning that CouchDB can handle a high volume of concurrent readers and writers without conflict.

Map/Reduce Views and Indexes
The stored data is structured using views. In CouchDB, each view is constructed by a JavaScript function that acts as the Map half of a map/reduce operation. The function takes a document and transforms it into a single value, which it returns. CouchDB can index views and keep those indexes updated as documents are added, removed, or updated.[7]

Distributed Architecture with Replication
CouchDB was designed with bi-directional replication (or synchronization) and off-line operation in mind. That means multiple replicas can have their own copies of the same data, modify it, and then sync those changes at a later time.

REST API

All items have a unique URI that gets exposed via HTTP. REST uses the HTTP methods POST, GET, PUT and DELETE for the four basic CRUD (Create, Read, Update, Delete) operations on all resources.
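To make the REST interface concrete, the sketch below performs the four CRUD operations against a local CouchDB instance over plain HTTP using Python's requests library; the database name, document and credentials are placeholders, and a real server will normally require authentication.

```python
# Basic CouchDB CRUD over its REST/HTTP interface (hypothetical local server).
import requests

base = "http://admin:secret@localhost:5984"    # placeholder credentials
db = f"{base}/articles"

requests.put(db)                                         # Create the database
requests.put(f"{db}/jca-2015-07",                        # Create a document at its URI
             json={"title": "The New Trends in Database", "year": 2015})

doc = requests.get(f"{db}/jca-2015-07").json()           # Read it back
print(doc["title"], doc["_rev"])

doc["pages"] = 4                                         # Update: resend with current _rev
updated = requests.put(f"{db}/jca-2015-07", json=doc).json()

requests.delete(f"{db}/jca-2015-07",                     # Delete, quoting the latest revision
                params={"rev": updated["rev"]})
```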

Eventual Consistency
CouchDB guarantees eventual consistency to be able to provide both availability and partition tolerance.

Built for Offline
CouchDB can replicate to devices (like smartphones) that can go offline and handle data sync for you when the device is back online. CouchDB also offers a built-in administration interface, accessible via the web, called Futon.[8]

1.3 The Oracle NoSQL Database
The Oracle NoSQL Database is a distributed key-value database. It is designed to provide highly reliable, scalable and available data storage across a configurable set of systems that function as storage nodes.

Data is stored as key-value pairs, which are written to particular storage node(s) based on the hashed value of the primary key. Storage nodes are replicated to ensure high availability, rapid failover in the event of a node failure and optimal load balancing of queries. Customer applications are written using an easy-to-use Java/C API to read and write data.
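The routing idea of writing a key-value pair to a particular storage node based on the hash of its primary key can be sketched in a few lines; this is a conceptual toy in Python, not the Oracle NoSQL Driver API.

```python
# Conceptual illustration of hashing a primary key to pick a shard / storage-node group.
# This is NOT the Oracle NoSQL API -- it only illustrates the routing idea above.
import hashlib

NUM_SHARDS = 4   # hypothetical number of shards

def shard_for(primary_key: str) -> int:
    digest = hashlib.md5(primary_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

for key in ("user/1001", "user/1002", "order/77"):
    print(key, "-> shard", shard_for(key))
```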

The Oracle NoSQL Driver links with the customer application, providing access to the data via the appropriate storage node for the requested key. A web-based console as well as a command line interface is available for easy administration of the cluster.

1.3.1 Features of Oracle NoSQL Database
Architecture
Oracle NoSQL Database is built upon the Oracle Berkeley DB Java Edition high-availability storage engine. On top of that it adds a layer of services for use in distributed environments to provide distributed, highly available key/value storage suited for large-volume, latency-sensitive applications.[9]

Sharding and replication
Oracle NoSQL Database is a client-server, sharded, shared-nothing system.


The data in each shard are replicated on each of the nodes that comprise the shard. It provides a simple key-value paradigm to the application developer.

High availability and fault-tolerance
Oracle NoSQL Database provides single-master, multi-replica database replication. Transactional data is delivered to all replica nodes with flexible durability policies per transaction. In the event the master replica node fails, a consensus-based (PAXOS) automated fail-over election process minimizes downtime. As soon as the failed node is repaired, it rejoins the shard, is brought up to date and then becomes available for processing read requests. [10]

Transparent load balancing
The Oracle NoSQL Database Driver partitions the data in real time and evenly distributes it across the storage nodes.

ACID compliant transactions
Oracle NoSQL Database provides ACID-compliant transactions for full Create, Read, Update and Delete (CRUD) operations, with adjustable durability and consistency transactional guarantees.

JSON data format
Oracle NoSQL Database has support for Avro data serialization, which provides a compact, schema-based binary data format. Avro allows you to define a schema (using JSON) for the data contained in a record's value, and it also supports schema evolution.

Elastic configuration
"Elasticity" refers to dynamic online expansion of the deployed cluster. One can add more storage nodes to increase the capacity, performance, reliability, or all of the above. Oracle NoSQL Database includes a topology planning feature, with which an administrator can modify the configuration of a NoSQL database while the database is still online. [11]

1.4 OrientDB
OrientDB is a 2nd Generation Distributed Graph Database with the flexibility of Documents in one product, with an open source commercial-friendly license (Apache 2 license). First-generation graph databases lack the features that Big Data demands: multi-master replication, sharding and more flexibility for modern complex use cases.

OrientDB is incredibly fast: it can store 220,000 records per second on common hardware. Even for a document-based database, the relationships are managed as in graph databases, with direct connections among records. You can traverse parts of or entire trees and graphs of records in a few milliseconds. It supports schema-less, schema-full and schema-mixed modes, has a strong security profiling system based on users and roles, and supports SQL amongst the query languages. Thanks to the SQL layer, it is straightforward to use for those skilled in the relational database world.

1.4.1 Features of OrientDB
Fully transactional:
Supports ACID transactions, guaranteeing that all database transactions are processed reliably and that, in the event of a crash, all pending documents are recovered and committed.

Graph structured data model:
Native management of graphs. Fully compliant with the Apache TinkerPop Gremlin (previously known as Blueprints) open source graph computing framework.

SQL:
Supports SQL queries with extensions to handle relationships without SQL joins, and to manage trees and graphs of connected documents.

Web technologies:
Natively supports HTTP, a RESTful protocol, and JSON without additional libraries or components.

Distributed:
Full support for multi-master replication, including geographically distributed clusters.

Run anywhere:
Implemented in pure Java, allowing it to run on Linux, OS X, Windows, or any system with a compliant JVM.

Embeddable:
Local mode to use the database bypassing the server; perfect for scenarios where the database is embedded.

Apache 2 License:
Always free for any usage. No fees or royalties are required to use it.

Lightweight:
A full server has a footprint of about 1 MB. Commercial support is available from Orient Technologies.[12]

2. CONCLUSION
New types of applications have created a situation where the existing database management technology is not sufficient. Both the size and the complexity of databases have increased dramatically. As a result, the relational database alone is no longer sufficient, and new database technologies are required to handle current data. Through this article I have briefly reviewed some of the latest and current database technologies.


3. REFERENCES
[1] "10gen embraces what it created, becomes MongoDB Inc.". Gigaom. Retrieved 27 August 2013.
[2] "Popularity ranking of database management systems". db-engines.com. Solid IT. Retrieved 2015-07-04.
[3] The MongoDB NoSQL Database Blog, The AGPL.
[4] Eliot Horowitz. "Renaming Our Upcoming Release to MongoDB 3.0". Retrieved 2015-02-23.
[5] Apache Software Foundation. "Apache CouchDB". Retrieved 15 April 2012.
[6] Smith, Jason. "What is the CouchDB replication protocol? Is it like Git?". StackOverflow. Stack Exchange. Retrieved 14 April 2012.
[7] Oracle NoSQL High Availability.
[8] Intelligent Drivers.
[9] "Deploying Cassandra across Multiple Data Centers". DataStax. Retrieved 11 December 2014.
[10] "NoSQL Relational Database Management System: Home Page". Strozzi.it. 2 October 2007. Retrieved 29 March 2010.
[11] "Oracle NoSQL ACID Compliance Transactions".
[12] "Multi-Model Database - OrientDB Manual".

Snehlata Kaul is working as an Assistant Professor in Ajay Kumar Garg Engineering College, Ghaziabad, U.P. (India). She obtained her MCA from Dr. B. R. Ambedkar Marathwada University, Aurangabad, Maharashtra and her M.Tech. from KSOU, Mysore. She has more than a decade of teaching experience. Her research areas include multi-agent systems, DBMS, ADBMS and SPM. She has attended several seminars, workshops and conferences at various levels. She has published many papers in national and international journals and conferences.


QoS ISSUES IN WEB SERVICES
Aditya Pratap Singh

Assistant Professor, MCA Department, AKGEC, GZB
E-mail: [email protected]

Abstract— Web Services have emerged as a new Web-based technology for exchanging information on the Internet using platform-neutral standards, such as XML, and adopting Internet-based protocols. Web services have become a promising technology to design and build complex inter-enterprise business applications. This paper describes the fundamental concepts of web services and the quality of web services. Due to huge demand, web services are being developed with quality concerns in mind. Quality of service (QoS) issues such as performance, security, reliability, degree of interoperability and reusability are of concern. This paper presents a comprehensive study of the positive and negative effects of software Quality of Service attributes involved in developing distributed applications. This paper also describes the issues related to the quality of web services and the classification of web services.

1. INTRODUCTION
A Web service is seen as an application accessible to other applications over the Web. Web Services are rapidly emerging as a popular standard for sharing data and applications over the web. Many enterprises are moving towards using and investing in Web Services in place of their traditional client-server computing and in-house servers [1]. Web services extend the World Wide Web infrastructure to provide the means for software to connect to other software applications. They are implemented using a collection of several related, established and emerging technologies and communication protocols that include HTTP, XML, Simple Object Access Protocol (SOAP), Universal Description Discovery and Integration (UDDI), Web Services Description Language (WSDL), Common Object Request Broker Architecture (CORBA), Java Remote Method Invocation (RMI), and .NET. Applications access Web services via ubiquitous Web protocols and data formats such as HTTP, XML, and SOAP, with no need to worry about how each Web service is implemented. Web services combine the best aspects of component-based development and the Web [2].

SOAP works on the request/response model of making remote procedure calls. SOAP can use the HTTP protocol to employ application-level security coupled with secure sockets. The components of a SOAP server over HTTP are shown in Fig. 1. Firewalls are able to filter SOAP packets. Secure sockets examine the SOAP header and filter the packets based upon the object name or the particular method. A SOAP message consists of the SOAP header and body. The header is optional, but if it exists it contains one or more header elements that provide meta-information regarding the method call. The body contains the serialized method arguments. The method call's XML element has to have the same name as the remote method [3].

Figure 1 Components of a SOAP server over HTTP
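To illustrate the request/response model, the sketch below posts a minimal SOAP 1.1 envelope over HTTP using Python's requests library; the service URL, namespace and GetQuote operation are hypothetical placeholders.

```python
# Posting a minimal SOAP 1.1 request over HTTP (hypothetical stock-quote service).
import requests

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <GetQuote xmlns="http://example.com/stock">
      <Symbol>ACME</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "http://example.com/StockService",
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",            # SOAP 1.1 over HTTP
        "SOAPAction": "http://example.com/stock/GetQuote",    # names the remote method
    },
)
print(response.status_code)
print(response.text)   # the SOAP response envelope (header optional, body required)
```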

Web Services Description Language (WSDL)
WSDL is a document which describes the service's location on the web and the functionality the service provides. It is a standard method of describing web services and their capabilities. A WSDL document defines services as collections of network endpoints, or ports. In WSDL, the abstract definition of endpoints and messages is separated from their concrete network deployment or data format bindings. This allows the reuse of abstract definitions: messages, which are abstract descriptions of the data being exchanged, and port types, which are abstract collections of operations. The concrete protocol and data format specifications for a particular port type constitute a reusable binding. A port is defined by associating a network address with a reusable binding, and a collection of ports defines a service. Hence, a WSDL document uses the following elements in the definition of network services:

Types – a container for data type definitions using some type system (such as XSD).
Message – an abstract, typed definition of the data being communicated.
Operation – an abstract description of an action supported by the service.


Port Type – an abstract set of operations supported by one or more endpoints.
Binding – a concrete protocol and data format specification for a particular port type.
Port – a single endpoint defined as a combination of a binding and a network address.
Service – a collection of related endpoints.

WSDL uses XML as the underlying format and is a key part of the UDDI initiative. Messages contain either procedure-oriented or document-oriented information. WSDL describes operations and messages abstractly. Those operations and messages are then bound to a concrete network protocol and message format to define an endpoint. The WSDL document specification improves interoperability between applications; it does not make a difference what protocol or encoding scheme is used. An overview of how WSDL is used in the Web Service communication process can be seen in Fig. 2. The concrete protocol and data format specifications for a particular port type constitute a reusable binding, and a port is defined by associating a network address with such a binding [4].

Figure 2 WSDL overview

WSDL supported protocols are: SOAP, HTTP GET, HTTP POST, and MIME.
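For a client, the practical value of a WSDL document is that a toolkit can construct the call from the service description; below is a hedged sketch using the third-party Python library zeep and a hypothetical WSDL URL.

```python
# Consuming a WSDL-described service with the zeep SOAP client (third-party library).
# The WSDL URL and the GetQuote operation are hypothetical placeholders.
from zeep import Client

client = Client("http://example.com/StockService?wsdl")   # parse types, messages, bindings

# zeep builds the SOAP envelope from the WSDL's port type, binding and service/port entries.
result = client.service.GetQuote(Symbol="ACME")
print(result)
```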

QUALITY ISSUES IN WEB SERVICES
QoS plays an important role in finding out the performance of web services. Earlier, QoS was used in networking and multimedia applications; recently, there is a trend toward adopting this concept for web services. The basic aim is to identify the QoS attributes for improving the quality of web services through replication of services [4], load distribution, and service redirection. To measure the QoS of a web service, attributes like response time, throughput, availability, reliability, and cost are considered. Table 1 shows the quality issues of web services.

Table 1. Quality of service requirements for web services

1. Performance – How fast a service request can be completed, measured by throughput, response time, latency, execution time, and transaction time.
2. Reliability – Assured and ordered delivery of messages being transmitted and received by service requestors and service providers.
3. Scalability – The capability of increasing the computing capacity of the service provider's computer system, and the system's ability to process more users' requests, operations or transactions in a given time interval.
4. Capacity – Web services should support the required number of simultaneous connections.
5. Robustness – The degree to which a web service can function correctly even in the presence of invalid, incomplete or conflicting input values.
6. Exception Handling – Web services should be provided with the functionality of exception handling.
7. Accuracy – Web services should be provided with high accuracy and a low error rate.
8. Integrity – Web services should be able to prevent unauthorized access: (i) data integrity, (ii) transaction integrity.
9. Accessibility – Web services should be provided with high accessibility; accessibility represents whether the web service is capable of serving the client's requests.
10. Availability – The probability that the system is up; related to reliability. Time-to-Repair (TTR) is associated with availability.
11. Interoperability – Web services should be interoperable between the different development environments used to implement services, so that developers using those services do not have to think about which programming language or operating system the services are hosted on.
12. Security – Web services should be provided with the required security. Security for web services means providing authentication, authorization, confidentiality, traceability/auditability, and data encryption.
13. Network-Related QoS Requirements – QoS mechanisms operating at the web service application level must operate together with the QoS mechanisms operating in the transport network, for example RSVP (signaling messages between sender and receiver), Multi Protocol Label Switching (MPLS), etc. Basic network-level QoS parameters include network delay, delay variation, and packet loss [9].

The QoS measure is observed by Web services users. These users are not human beings but programs that send requests for services to Web service providers. QoS issues in Web services have to be evaluated from the perspective of the providers of Web services and from the perspective of the users of these services. The type of application in which a web service is to be utilized decides the importance of the QoS attributes. Different perspectives give importance to different quality attributes, while some issues are common to both perspectives.

CONCLUSION
Current technologies help establish a solid foundation to enable the functionalities of Web services. Communication protocols, XML-based standards, and management facilities provide good support for interoperation among Web services. Web services describe a service-oriented, component-based application architecture that is based on an open, Internet-centric infrastructure and protocols. This paper presents a comprehensive study of the positive and negative effects of software Quality of Service attributes involved in developing distributed applications using web services. This paper surveys and studies web services, SOAP and WSDL. It also describes various issues related to the quality of web services.

REFERENCES
[1]. Ladan M. I., "Web Services Metrics: A Survey and A Classification", 2011 International Conference on Network and Electronics Engineering, Singapore, vol. 11.
[2]. Web Services – MSDN, https://msdn.microsoft.com/en-us/magazine/cc163950.aspx
[3]. Vikas Pandya, "Cross Language Barriers with SOAP and a Java Web Service", http://www.devx.com/java/free/articles/pandya01/pandya01.asp
[4]. Aaron E. Walsh, "UDDI, SOAP, and WSDL: The Web Services Specification Reference Book", 2002.

Aditya Pratap Singh received his Master of Computer Applications degree from Uttar Pradesh Technical University, Lucknow in 2003. He is pursuing a PhD from Gautam Buddha University, Greater Noida. He is an assistant professor in the Department of MCA at Ajay Kumar Garg Engineering College, Ghaziabad. His current research interests are in component-based software engineering and software measurement. He has presented his work in several national and international conferences. His work has appeared in IEEE Xplore. He has served on program committees relating to national conferences on cyber security issues.


DIRECT DIGITAL MANUFACTURING
Dr. Pratima Singh

Assistant Professor, MCA Department, AKGEC, GZB
E-mail: [email protected]

Abstract— Additive fabrication technologies, which until recently were relegated to rapid prototyping applications, have entered into manufacturing processes. The use of additive fabrication processes is what differentiates direct digital manufacturing (DDM) from conventional manufacturing methods, and it is from these technologies that unique advantages and opportunities arise. Direct from 3D digital data, a component is manufactured—layer by layer—without machining, moulding or casting. Direct Digital Manufacturing (DDM) is a process that produces parts directly from a CAD file. This eliminates the investment in tooling and removes the time lag between design and production. Its usage is not limited to prototype production; it is actually being used for mass production. The following contents illustrate the concept of DDM and its application areas, and include a brief discussion of how this technology has the potential to bring about an industrial revolution.

Keywords—Direct Digital Manufacturing, AdditiveManufacturing, Rapid Prototyping, 3D Printing.

I. INTRODUCTIONDirect Digital Manufacturing (DDM )is a unique, newalternative for the manufacture of end-use items. Having littlein common with conventional manufacturing methods, thedistinctiveness of DDM overturns old principles, creates newcriteria and changes the decision-making process. The mainbenefit of DDM is that it eliminates limitations imposed bytraditional manufacturing procedures, for example die castingor injection molding. DDM essentially changes many of theprinciples and “facts” that govern conventional manufacturingenterprises. Because DDM is an additive procedure thateliminates tooling, there are several benefits that are notavailable with traditional procedures.

With DDM, time-to-market is no longer restricted by the lead-time for tooling. Because DDM eliminates tooling, a productcan be manufactured on the same day that the design iscompleted. This enables companies to produce an instantprototype and react faster to the demands of the user. It alsoreduced the capital investment of Manufacturer and time gapbetween design, Manufacturing and Testing and launchingthe product commercially

The remainder of the article is divided into the following sections for a better understanding of DDM technology and of how it has the potential to drive industrial development. Section II explains the concept and advantages of Direct Digital Manufacturing, followed by its usage and application areas in Section III. Section IV concludes the article.

II. CONCEPT AND ADVANTAGES OF DIRECT DIGITAL MANUFACTURING

Direct Digital Manufacturing (DDM) is a process that produces parts directly from a CAD file. When discussing DDM, many will use terms such as rapid manufacturing, additive manufacturing or freeform fabrication, all of which are synonymous with DDM [5].

The move from rapid prototyping to direct digital manufacturing deals with various aspects of joining materials to form parts. Additive Manufacturing (AM) is an automated technique for the direct conversion of 3D CAD data into physical objects using a variety of approaches [4]. Manufacturers have been using these technologies in order to reduce development cycle times and get their products to the market quicker, more cost effectively, and with added value due to the incorporation of customizable features. Realizing the potential of AM applications, a large number of processes have been developed, allowing the use of various materials ranging from plastics to metals for product development [3].

DDM is often used as a beta (for prototyping a design) to gain market validation throughout the design process, and to build parts that require functionality and durability without the need for cosmetic value. The primary advantage of DDM is that it removes constraints imposed by traditional manufacturing processes, such as injection molding or die casting. DDM fundamentally alters many of the "facts" that are taken for granted during the efforts to design, manufacture and assemble a product. It also shifts the logic applied to business decisions in the sales, marketing and finance departments as well as those in the executive suite. Because DDM imparts fundamental changes, many have difficulty grasping the key concepts and benefits. To appreciate what DDM can do for a manufacturing company, an understanding of the fundamental changes that it imparts is necessary.

Following are the commercial advantages of using a DDM technology [4].

1. Time-to-market is no longer constrained by the lead time for tooling. Since DDM eliminates tooling, a product may be manufactured the same day that a design is completed. This allows companies to respond faster and to do so with less initial cash outlay for tooling and capital equipment.

2. Product design is no longer constrained by the rules for design for manufacturability (DFM) or the rules for DFA (design for assembly). Since components are manufactured using additive fabrication technologies, such as FDM (Fused Deposition Modelling), there is no limit to design complexity. Complex parts are made without additional expense or production delays.

3. Product revisions are no longer constrained by inventory levels, the expense of new tooling or the time for tooling rework. Because there is no tooling, short-run manufacturing principles may be used. This on-demand strategy reduces inventory levels and minimizes losses when product modifications are implemented. Also, companies may change the design at any time, and as frequently as desired, so that products remain innovative, optimized and cost effective. Manufacturers no longer face the difficult decision of postponing a desperately needed modification because of lost time and high expense.

4. Production schedules are no longer bound to inaccurate sales forecasts or rigid economic order quantities. The on-demand, just-in-time (JIT) manufacturing practice promoted by DDM diminishes the need to build a production schedule from a sales forecast that is too far in the future to be reliable.

Additionally, unlike traditional manufacturing methods, there is no time-consuming set-up or changeover. Therefore, there is no penalty for frequent product changes in the manufacturing schedule.

The above flexibility holds the potential to revolutionize the manufacturing process.

III. APPLICATION AREAS OF DDM
DDM is the fabrication of components in a seamless manner from computer design to actual part in hand. Also known as "3D printing" or "additive," "rapid," "instant," or "on-demand" manufacturing, DDM uses 3D computer-aided design files to drive the computer-controlled fabrication of parts [5].

Unlike traditional machining methods, which involve working from a rough mold and then cutting away to achieve the desired complex shape, direct digital manufacturing creates the shape precisely and instantly, using additive fabrication.

DDM is commonly explained through the example of creating a coffee cup [5]. An old-style craftsman might slowly shape a piece of clay by hand into a handmade mug. Designers and machinists in a factory would build a series of metallic molds and then create a series of tools to mill metal into the key components of the cup (handle, bottom, etc.), which would then be assembled on a production line, often through welding. By contrast, a DDM designer would create a digital 3D model of the cup, then turn production over to the computer, which would digitally slice it into a stack of wafer-thin layers, print them out, and fuse them together.
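The slicing step mentioned above can be illustrated with a short sketch. This is a minimal, hypothetical example: the triangle mesh and the layer height are invented, and real slicers additionally compute the exact intersection contours and tool paths for each layer.

# Minimal sketch of "digital slicing": cut a triangle mesh into horizontal layers.
# The mesh below is made-up data, not real CAD geometry.
LAYER_HEIGHT = 0.2  # assumed layer thickness in millimetres

# Each triangle is three (x, y, z) vertices.
mesh = [
    ((0, 0, 0), (10, 0, 0), (0, 10, 0)),
    ((0, 0, 0), (10, 0, 0), (5, 5, 8)),
    ((10, 0, 0), (0, 10, 0), (5, 5, 8)),
    ((0, 10, 0), (0, 0, 0), (5, 5, 8)),
]

def slice_mesh(mesh, layer_height):
    """Yield (z, triangles crossed by the cutting plane) for each layer."""
    zs = [v[2] for tri in mesh for v in tri]
    z = min(zs) + layer_height / 2          # cut through the middle of each layer
    while z < max(zs):
        crossed = [tri for tri in mesh
                   if min(v[2] for v in tri) <= z <= max(v[2] for v in tri)]
        yield z, crossed
        z += layer_height

for z, tris in slice_mesh(mesh, LAYER_HEIGHT):
    print(f"layer at z={z:.2f} mm intersects {len(tris)} triangle(s)")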

One of the biggest beneficiaries of DDM is the defence industry. The ability to design, simulate and test in the virtual world allows a much more rapid turnaround than traditional techniques. To this we can now add the ability to rapidly fabricate the real part or system from the same digital file. This also applies to any changes or improvements to a design, as the entire manufacturing line does not need to be retooled for a small change or a new Pentagon requirement.

DDM can more easily produce organic shapes than the linear ones commonly used in defence manufacturing. That promises to introduce more complex shapes and geometries, which are very important for energy efficiency and even stealth characteristics. At the same time, the lower energy demands and material waste mean lower manufacturing costs, producing savings that could be passed on to the Defense Department.

Among the projects supported by the Defense Advanced Research Projects Agency and the Office of Naval Research are programs to manufacture superalloy airfoils (for example, the wings of fixed-wing aircraft and helicopter rotor blades) and work at Boeing on on-demand rapid manufacturing of certain non-flight-critical hardware for military aircraft. Other DoD-supported research projects are poised to take DDM to the next technical level, such as efforts to develop ever more sophisticated lasers and manufacturing systems, further work on industrial- and military-strength products, and efforts to make DDM work at an even more scalable level, including through the use of nanotechnology.

DDM is defined as a process that is used to perform one of the following three manufacturing functions [1]:

• Manufacture end-use, sellable goods: The manufactured items are the components and subassemblies that go into the products that a company sells to its customers.

• Produce end-use items that are used by the manufacturer: Rather than making a company's products, the process is used to manufacture devices that aid in the production of the sellable products. These fabrication and assembly tools include jigs and fixtures.

• Create tooling for the molding, casting or forming of products: This application can be either the direct production of tooling, or the indirect creation of tooling from a pattern that is constructed with an additive fabrication technology.

IV. CONCLUSIONS
DDM must be viewed as an alternative to traditional manufacturing methods rather than as a replacement technology. DDM is not a global replacement for manufacturing processes that are performing as needed, expected or desired. DDM is not a cure-all or a magical solution to all of the ills on the manufacturing floor. Rather, DDM is an alternative that should be considered when the limitations and constraints of existing manufacturing methods impact the ability to manufacture a desired product practically, efficiently or affordably.

DDM produces finished parts directly from digital data. It does not just produce prototypes; fully functional parts are created using commercial-grade thermoplastics [2]. This reduces the cycle time between conceptualization, modelling, manufacturing and testing of the product. Thus companies can adopt DDM to achieve the following benefits:

• Eliminate the investment in tooling
• Remove the time lag between design and production
• Eliminate the penalty for redesign
• Reduce the size of an economical lot

All the benefits listed above come with their own challenges of reliability, safety and maintainability [5]. 3D printing bureaus and do-it-yourself rapid prototyping machines are located around the world. If a university team can already design and build a small aircraft for fun, so can someone with more wicked purposes in mind. Thus one of the challenges faced by DDM will be developing processes to work through the "ilities" (reliability, safety, maintainability).

REFERENCES
[1] Scott Crump, "Direct Digital Manufacturing", White Paper.
[2] Gunter Schmid and Ulrich Eidenschink, "Rapid Manufacturing with FDM in Jig and Fixture Construction", cimetrixsolutions.com/resources/white paper.
[3] Ian Gibson, David W. Rosen and Brent Stucker, "Additive Manufacturing Technologies: Rapid Prototyping to Direct Digital Manufacturing", http://link.springer.com/book/10.1007%2F978-1-4419-1120-9
[5] Larry Schutte and Peter W. Singer, "DDM: Industrial Game Changer", www.brookings.edu/research/articles/2011/10/10-digital-manufacturing-singer

The author has been working as an Assistant Professor at Ajay Kumar Garg Engineering College, Ghaziabad, U.P. since 2008. She was earlier associated with Babu Banarasi Das National Institute of Technology & Management, Lucknow from Aug. 2001 to Aug. 2008. She completed her M.Tech. (UP Technical University) and PhD from IIT BHU. She cleared the ISTQB (International Software Testing Qualification Board) certification in 2008. Her areas of interest are Software Engineering, Software Project Management, Software Testing, Computer Organization and Digital Electronics, and Digital Logic Design.


BASICS FOR INTEGRATION OF DATA WAREHOUSE AND DATA MINING

Dheeraj Kumar Singh
Assistant Professor, MCA Department, AKGEC, GZB

E-mail: [email protected]

Abstract— Data Warehouse and OLAP are essential elements of every decision support system, which has increasingly become a focus of the database industry. Data Mining and Knowledge Discovery, meaning knowledge extraction from a data warehouse, is likewise one of the fastest growing computer science fields. Traditional query and report tools have been used to describe and extract what is in a database: the user forms a hypothesis about a relationship and verifies it with a series of queries. Data mining, in contrast, can be used to generate a hypothesis for business intelligence. But there are also other possibilities for collaboration between Data Warehouse and Data Mining technologies.

1. INTRODUCTION
The role of information in creating competitive advantage for businesses and other enterprises is now a business axiom: whoever controls critical information can leverage that knowledge for profitability. The difficulties associated with dealing with the mountains of data produced in businesses brought about the concept of information architecture, which has spawned projects such as Data Warehousing (DW) [1]. The goal of a data warehouse system is to provide the analyst with an integrated and consistent view of all enterprise data relevant to business analysis. On the other hand, Data Mining and Knowledge Discovery is one of the fastest growing computer science fields. Its popularity is caused by an increased demand for tools that help with the analysis and understanding of huge amounts of data [3]. The question arises: where is the intersection of these powerful technologies nowadays? Three basic approaches to integration and interaction between them are discussed here:

1. Integration at the front-end level: combining On-Line Analytical Processing (OLAP) and data mining tools into a homogeneous graphical user interface;

2. Integration at the database level: adding data mining components directly into the DBMS;

3. Interaction at the back-end level: the use of data mining techniques during the data warehouse design process.

2. DATA WAREHOUSE, OLAP AND DATA MINING
DW and OLAP are essential elements of decision support. They are complementary technologies: a DW stores and manages data, while OLAP transforms the data into possibly strategic information. Decision support usually requires consolidating data from many heterogeneous sources; these might include external sources in addition to several operational databases. In data warehouses, historical and summarized data is more important than detailed data. Many organizations want to implement an integrated enterprise warehouse that collects information about all subjects spanning the whole organization. Some organizations settle for data marts instead, which are departmental subsets focused on selected subjects (e.g., a marketing data mart or a personnel data mart).

Knowledge Discovery (KD) is a nontrivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns from large collections of data [5]. One of the knowledge discovery steps is Data Mining (DM). DM is the step concerned with the actual extraction of knowledge from data, in contrast to the KD process, which is also concerned with many other things, such as understanding and preparation of the data, verification of the mining results, etc.

Fig. 1. Cube presentation of data in multidimensional model

The knowledge discovery goals are defined by the intended use of the system. We can distinguish two types of goals: verification and discovery. With verification, the system is limited to verifying the user's hypothesis. That kind of functionality is mainly supported by OLAP technology. Data mining involves fitting models to, or determining patterns from, observed data [1]. Most data-mining methods are based on tried and tested techniques from machine learning, pattern recognition, and statistics: classification, clustering, regression, and so on [4]. We briefly introduce only those that relate to the rest of the paper:

Classification is learning a function that maps (classifies) a data item into one of several predefined classes. Classification-type problems are generally those where one attempts to predict values of a categorical dependent variable (class, group membership, etc.) from one or more continuous and/or categorical predictor variables.

Regression is learning a function that maps a data item to a real-valued prediction variable. Regression-type problems are generally those where one attempts to predict the values of a continuous variable from one or more continuous and/or categorical predictor variables. There is a large number of methods that an analyst can choose from when analyzing classification or regression problems; tree-based classification and regression techniques, which produce predictions based on a few logical if-then conditions, are a frequently used approach [8].

Clustering is a common descriptive task where one seeks to identify a finite set of categories or clusters to describe the data. The categories can be mutually exclusive and exhaustive, or consist of a richer representation, such as hierarchical or overlapping categories. Closely related to clustering is the task of probability density estimation, which consists of techniques for estimating from data the joint multivariate probability density function of all the variables.

Change and deviation detection focuses on discovering the most significant changes in the data from previously measured or normative values.
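To make the clustering task concrete, the following is a minimal k-means sketch in plain Python. The two-dimensional points and the choice of k = 2 are invented for illustration and do not come from the paper; production systems would normally rely on an existing library implementation.

import random

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iterations=20, seed=0):
    """Tiny k-means: returns the final centroids and the cluster index of each point."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)                 # pick k initial centroids
    assignment = [0] * len(points)
    for _ in range(iterations):
        # Assignment step: attach every point to its nearest centroid.
        assignment = [min(range(k), key=lambda c: squared_distance(p, centroids[c]))
                      for p in points]
        # Update step: move each centroid to the mean of its assigned points.
        for c in range(k):
            members = [p for p, a in zip(points, assignment) if a == c]
            if members:
                centroids[c] = tuple(sum(coord) / len(members) for coord in zip(*members))
    return centroids, assignment

# Invented example data: two loose groups of 2-D points.
data = [(1, 1), (1.5, 2), (2, 1.2), (8, 8), (8.5, 9), (9, 8.2)]
centroids, labels = kmeans(data, k=2)
print(centroids)
print(labels)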

Fig. 2. Typical data warehousing architecture and points of integration

3. INTEGRATION OF OLAP WITH DATA MINING
Most data mining tools need to work on integrated, consistent, and cleaned data, which requires costly data cleaning, data transformation, and data integration as preprocessing steps. A data warehouse constructed by such preprocessing serves as a valuable source of high-quality data for OLAP as well as for data mining. Effective data mining needs exploratory data analysis: a user will often want to traverse through a database, select portions of relevant data, analyze them at different granularities, and present knowledge/results in different forms [8]. By integrating OLAP and data mining, OLAP mining (also called On-Line Analytical Mining) facilitates flexible mining of interesting knowledge in data cubes, because data mining can be performed at multi-dimensional and multi-level abstraction spaces in a data cube. Cubing and mining functions can be interleaved and integrated to make data mining a highly interactive and interesting process. The desired OLAP mining functions are:

Cubing then mining: With the availability of data cubes and cubing operations, mining can be performed on any layer and any portion of a data cube. This means that one can first perform cubing operations to select the portion of the data and set the abstraction layer (granularity level) before a data mining process starts. For example, one may first tailor a cube to a particular subset, such as "year = 2004", and to a desired level, such as the "city level" for the dimension "store", and then execute a prediction mining module.

Mining then cubing: Data mining can first be performed on a data cube, and then particular mining results can be analyzed further by cubing operations. For example, one may first perform classification on a "market" data cube according to a particular dimension or measure, such as profit made. Then for each obtained class, such as the high-profit class, cubing operations can be performed, e.g., drilling down to detailed levels and examining its characteristics.

Comparative mining: A flexible data miner should allow comparative data mining, that is, the comparison of alternative data mining processes. For example, a data miner may contain several cluster analysis algorithms. One may like to compare the clustering quality of different algorithms side by side, or even examine them while performing cubing operations, such as when drilling down to detailed abstraction layers.

It is possible to have other combinations in OLAP mining. For example, one can perform "mining then mining", such as first performing classification on a set of data and then finding association patterns for each class. In a large warehouse containing a huge amount of data, it is crucial to provide flexibility in data mining so that a user may traverse a data cube, select the mining space and the desired levels of abstraction, and test different mining modules and alternative mining algorithms [6].
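As an illustration of the "cubing then mining" pattern described above, the sketch below uses a tiny pandas DataFrame as a stand-in for a sales cube. The table contents, the column names and the year/city slice are all invented for the example, and the "mining" step is reduced to a trivial threshold rule.

import pandas as pd

# Invented miniature "sales cube" held as a flat fact table.
facts = pd.DataFrame({
    "year":  [2004, 2004, 2004, 2005, 2005],
    "city":  ["Delhi", "Delhi", "Mumbai", "Delhi", "Mumbai"],
    "store": ["D1", "D2", "M1", "D1", "M1"],
    "sales": [120.0, 90.0, 150.0, 130.0, 160.0],
})

# Cubing step: slice to year = 2004 and roll up from store level to city level.
cube_2004 = (facts[facts["year"] == 2004]
             .groupby("city", as_index=False)["sales"].sum())

# Mining step (placeholder): flag cities whose 2004 sales exceed the mean.
threshold = cube_2004["sales"].mean()
cube_2004["high_sales"] = cube_2004["sales"] > threshold
print(cube_2004)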


4. RDBMS INTEGRATION WITH DATA MINING
Traditionally, data mining tasks are performed assuming that the data is in memory. In the last decade there has been an increased focus on scalability, but research work in the community has focused on scaling analysis over data that is stored in the file system [7]. This tradition of performing data mining outside the DBMS has led to a situation where starting a data mining project requires its own separate environment. Data is dumped or sampled out of the database, and then a series of special-purpose programs are used for data preparation. Over the life cycle of an enterprise application, such deployment and management of models remains one of the most important tasks [8]. The objective of integration at the database level is to alleviate problems in the deployment of models and to ease data preparation by working directly on relational data. The business community has already tried to integrate the KD process: the biggest DBMS vendors, such as IBM, Microsoft and Oracle, have integrated some DM tools into their commercial systems. Their goal is to make the use of DM methods easier, in particular for users of their DBMS products [3].

5. THE FEASIBILITY OF DATA MINING TECHNIQUES DURING THE DATA WAREHOUSE DESIGN PROCESS

The design phase is the most costly and critical part of a data warehouse implementation. In different steps of this process, problems arise that can be solved by the use of data mining techniques. For a better understanding of where data mining can support the design phase, in [2] the process is divided into the following steps: Data Source Analysis, Structural Integration of Data Sources, Data Cleansing, Multidimensional Data Modeling and Physical DW Design.

The first step of the warehouse design cycle is the analysis of these sources. They are often not adequately documented. Data mining algorithms can be used to discover implicit information about the semantics of the data structures.

Data Source Analysis
Often, the exact meaning of an attribute cannot be deduced from its name and data type. The task of reconstructing the meaning of attributes would be optimally supported by dependency modeling using data mining techniques and by mapping this model against expert knowledge, e.g., business models. Association rules are suited for this purpose. Other data mining techniques, e.g., classification tree and rule induction, and statistical methods, e.g., multivariate regression and probabilistic networks, can also produce useful hypotheses in this context.

Structural Integration of Data Sources
Data mining methods can discover functional relationships between different databases when they are not too complex. A linear regression method would discover the corresponding conversion factors. If the type of functional dependency (linear, quadratic, exponential, etc.) is not known a priori, model search instead of parameter search has to be applied.
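For instance, if one source stores the same quantity in a different unit than another, the conversion factor can be estimated with a closed-form least-squares fit. The sketch below is a minimal illustration with invented paired values; it is not tied to any particular schema.

# Estimate a linear conversion factor y ≈ a * x between two sources,
# using the closed-form least-squares solution without an intercept.
# The paired values below are invented for illustration.
source_a = [10.0, 25.0, 40.0, 55.0]          # amounts as stored in source A
source_b = [16.1, 40.2, 64.5, 88.4]          # the same facts as stored in source B

a = sum(x * y for x, y in zip(source_a, source_b)) / sum(x * x for x in source_a)
print(f"estimated conversion factor: {a:.3f}")   # roughly 1.61 for this data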

Data Cleansing
Data cleansing is a non-trivial task in data warehouse environments. The main focus is the identification of missing or incorrect data (noise) and of conflicts between data from different sources, and the correction of these problems. Typically, missing values are indicated either by blank fields or by special attribute values. A way to handle such records is to replace the missing value with the mean or the most frequent value, or with the value that is most common among similar objects. Advanced data mining methods for completion could be similarity-based methods or methods for dependency modeling that produce hypotheses for missing values.
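A minimal sketch of the simple replacement strategies mentioned above (mean for numeric fields, most frequent value for categorical ones) is given below; the columns and the None markers are invented for illustration.

from collections import Counter

# Invented columns with missing values marked as None.
ages   = [25, 31, None, 40, None, 28]
cities = ["Delhi", None, "Delhi", "Mumbai", "Delhi", None]

def fill_with_mean(values):
    known = [v for v in values if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in values]

def fill_with_mode(values):
    known = [v for v in values if v is not None]
    mode = Counter(known).most_common(1)[0][0]
    return [mode if v is None else v for v in values]

print(fill_with_mean(ages))    # None replaced by the mean of the known ages
print(fill_with_mode(cities))  # None replaced by the most frequent city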

6. CONCLUSIONS
In this paper we have looked at basic approaches for the integration of data warehousing, OLAP and knowledge discovery, and presented the main characteristics of On-Line Analytical Mining. Sometimes one needs to perform interactive exploratory analysis to observe which function discloses the most interesting features in the database. OLAP mining provides an exploratory analysis tool; however, further study should be performed on the automatic selection of data mining functions for particular applications.

Finally, we looked at how data mining methods can be used to support the most important and costly tasks of data warehouse design. Hundreds of tools are available to automate portions of the tasks associated with auditing, cleansing, extracting, and loading data into data warehouses, but tools that use data mining techniques are still rare.

REFERENCES
[1] Trillium Software System: Achieving Enterprise Wide Data Quality, White Paper, 2000.
[2] Sapia C., Höfling G., Müller M., Hausdorf C., Stoyan H., Grimmer U.: On Supporting the Data Warehouse Design by Data Mining Techniques. Proc. GI-Workshop Data Mining and Data Warehousing, 1999.
[3] Cios K. J., Kurgan L.: Trends in Data Mining and Knowledge Discovery. Knowledge Discovery in Advanced Information Systems, Springer, 2002.
[4] Fayyad U., Piatetsky-Shapiro G., Smyth P.: From data mining to knowledge discovery in databases. AI Magazine, 17(3):37-54, 1996.
[5] Fayyad U., Piatetsky-Shapiro G., Smyth P.: Knowledge discovery and data mining – Towards a unifying framework. Proc. 2nd Int. Conf. on KDD'96, Portland, Oregon, pp. 82-88, 1996.
[6] Han J.: OLAP Mining: An integration of OLAP with data mining. Proc. of IFIP ICDS, 1997.
[7] Han J., Chee S., Chiang J. Y.: Issues for On-Line Analytical Mining of Data Warehouses. Proc. of the 1998 SIGMOD Workshop on Research Issues on Data Mining and Knowledge Discovery (DMKD'98), 1998, pp. 2:1-2:5.
[8] Netz A., Chaudhuri S., Fayyad U., Bernhardt J.: Integrating data mining with SQL databases: OLE DB for data mining. ICDE 2001, pp. 379-387.

Dheeraj Kumar Singh is working as an assistant professor in Ajay Kumar Garg Engineering College, Ghaziabad, U.P. (India). He obtained his M.Tech. (Information Technology) from GGSIP University, Delhi, and his MCA from UPTU, Lucknow (U.P.). He has been teaching for the last 10+ years and has been a member of several academic and administrative committees. During his teaching tenure he has worked as a committee member in organizing a National Conference and many technical fests at college level. He has attended several seminars, workshops, and conferences at various levels. He also has papers published in national and international conferences and journals. His areas of research include Programming Languages, Software Reliability, DBMS, Data Mining, and Data Warehousing.


THE SEO GUIDELINES FOR ACHIEVING HIGHER RANKING IN SEARCH RESULTS

Dr. Pooja Arora
Assistant Professor, MCA Department, AKGEC, GZB

E-mail: [email protected]

Abstract— Search Engine Optimization (SEO) relates to the activity of optimizing individual web pages or the entire website to make them friendlier to search engines and so acquire a higher ranking in the search results. All the major search engines, such as Google, Yahoo, Ask and Bing, rank web pages based on certain factors that affect their ranking; therefore, SEO aims at generating the right types of signals on the web pages. The core methodology used in SEO is to upgrade both the content and the associated coding of the website to improve its visibility and prominence in organic searches made by the search engines. The optimized websites obtain better ranks and typically get a higher number of visitors. This paper pinpoints certain gaps in the known search engine optimization techniques. Finally, I put forward some pertinent guidelines for optimizing websites.

Keywords— Search Engine Optimization, Search Ranking, Web Crawling, Internet Marketing, Website Promotion, Re-ranker.

1. INTRODUCTION
Search engine optimization is a process that uses search principles to acquire a higher ranking for web pages or websites and thus enhance the probability of website access. Since the emergence of Google, Yahoo, Ask, Blekko, Volunia and Bing, SEO principles and search engine innovations have been in a continuous process of development. A search engine is based on different strategies that use a computer program to gather information from the Internet, and to organize and process the searched information for displaying the search results to the users in a systematic way [5]. A search engine has four major areas: searcher, indexer, hunter, and user. A search engine is not only a necessary function of a website, providing convenience for users; it is also an effective tool for understanding web users' behavior. An efficient search engine allows users to find the target information quickly and accurately. At present, the major Internet search engines are Google, Yahoo, Ask, Bing and so on.

Search engine optimization relates to the activity of optimizing individual web pages or the website as a whole to obtain a higher position in the search results. The SEO technique entails two processes: on-site optimization and off-site optimization. On-site optimization pertains to the practices used during the development of a website. The following are the important aspects that need meticulous attention during development to make a website search engine friendly. The domain name and the titles of the web pages should contain the keyword for which the website is to be optimized. In addition, descriptive text should be chosen for the website contents, followed by figuring out the keyword phrases through which the website is meant to be optimized. These keywords should be placed within the headings, anchor tags and alternative tags (Alt tags) as well as in the main contents. For SEO, I recommend using the latest tools and software for website development, and regularly updating the website with fresh and unique contents so that the website stays compliant with ever-changing Internet trends. Ensuring that the web pages contain the most recent keywords pertinent to their contents adds value to the SEO methodology. The keywords used for optimization should also be reflected in all the linking structures on the web pages. Similarly, videos, images, audio and other contents should be optimized on the same lines.
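Keyword placement and density, mentioned above, can be checked with a small script. The sketch below is a hypothetical illustration (the page text and the target keyword are made up); it simply counts how often a keyword phrase occurs relative to the total word count.

import re

def keyword_density(text, keyword):
    """Fraction of the words in `text` accounted for by occurrences of `keyword`."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    kw = keyword.lower().split()
    if not words or not kw:
        return 0.0
    hits = sum(words[i:i + len(kw)] == kw
               for i in range(len(words) - len(kw) + 1))
    return hits * len(kw) / len(words)

# Made-up page text and keyword, for illustration only.
page_text = """Affordable engineering courses in Ghaziabad. Our engineering
courses cover computer applications, and the engineering courses are
updated every year."""
print(f"{keyword_density(page_text, 'engineering courses'):.1%}")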

Off-site optimization relates to the practices through which the website and its contents are propagated over the Internet to increase its traffic, which in turn helps enhance the Google ranking of the website [1], [2].

2. CRITICAL EVALUATION
The critical evaluation of the SEO techniques is shown in Table 1.

Table 1. The Critical Evaluation of the SEO Techniques

3. PROPOSED GUIDELINES FOR SEARCH ENGINE OPTIMIZATION

SEO relates to the activity of optimizing individual web pages or the entire website to make them more search engine friendly, thus obtaining a higher position/ranking in search results. All the major search engines, such as Google, Yahoo, Ask and Bing, rank web pages using the right combination of ranking factors or signals; therefore, SEO aims at generating the right type of signals on the web pages [3]. The core methodology used in SEO is to upgrade both the contents and the associated coding of the website to improve its visibility and prominence in organic/natural searches made by the search engines. The optimized websites appear at better ranks in search engines and ordinarily get a higher number of visitors. For practical purposes, the rule of thumb for optimizing a website is to make its entire contents "search engine only index text." WordPress (http://wordpress.com) is a website generator/creator and optimizer tool which has the following functionalities:
• Create a website
• Create pages and directories
• Update (pages and directories), edit and delete
• Create website links same as the title page or an article
• Install new themes (layout, fonts, textual representation, etc.)

The SEO technique entails two major processes: on-site optimization and off-site optimization.

a. On-site Optimization: On-site optimization pertains to the practices used during the development of a website. Listed below are the important aspects that need meticulous attention during the development of a website in order to make it search engine friendly. The following guidelines are suggested for on-site optimization:
- The domain name and the title of the website/web pages should contain the keyword for which the website is to be optimized.
- Choose descriptive text for the website contents.
- Figure out the keyword phrases through which the website is meant to be optimized and put these keywords in headings, anchor tags and alternative (Alt) tags as well as in the main contents/text.
- Create a clean architecture for the website that can be easily crawled. Use the latest tools and software for website development (e.g., WordPress).
- Regularly update the website with fresh and unique contents so that the website's contents comply with ever-changing Internet trends.
- Ensure that the web pages contain the latest keywords/buzzwords that are most pertinent to the website contents.
- The keywords used for optimization should also be reflected in all the linking structures placed on the web pages.
- Similarly, video, images, audio and other contents should be optimized on the same lines.
- Select keywords after analyzing their density using SEO tools (such as Quake, Xenu). Use keywords that can be crawled easily by the Google spider.
- Place the keyword in the title tag of the page.
- The content of the page should contain keywords that can be easily found on the pages during optimization.
- Use brief and concise URLs for the websites.

b. Off-site Optimization: Off-site optimization relates to the practices through which the website and its contents are propagated/proliferated over the Internet to increase its traffic, which in turn helps enhance the Google ranking of the website. The following considerations are suggested for off-site optimization:

1. Back linking: Place your website links on other well-reputed websites. This helps enhance page rank. Other courses of action include:
- Choose a reputed web hosting company.
- The website should have its own static IP address.
- Ensure article distribution to other websites.
- Include forum and blog commenting.
- Submit your blog and RSS feed.
- List your site in directories.
- Get ".gov" and ".edu" links.
- Include social bookmarking.
- Check shares of your website content on different media periodically.
- Use clear text for links so that they can be found on the site.

2. Social Network: Place links to social networking websites such as Twitter, Facebook and Google+ on your website. Proliferate your website's content onto these social networking sites by creating your web links/web pages on these sites. This helps other people connect to your accounts and ultimately visit your website.

3. Groups: Join groups such as Yahoo, MSN and Google groups and post comments on them. Your comments should always have a reference/link to your website or to the specific web page that you wish readers to read. Sending email to the masses is also covered by this strategy.

4. Online community: Join online communities and discussion boards, and place comments containing links to your website.

5. Trusted websites: Use trusted websites to share your links and also insert links to reputed sites in your web pages.

6. Country location: Check the country location where your website is hosted.

4. TIPS TO IMPROVE THE GOOGLE RANKING OF A WEBSITE

- The Panda update was designed by Google to improve the user experience by catching and demoting low-quality sites that did not provide useful original content or otherwise add much value. At the same time, it provides better rankings for high-quality sites, i.e., sites with original content and information such as research, in-depth reports, thoughtful analysis and so on [4].

- Social media plays a role in influencing the search engine results by giving preference based on the authority of the author and the number of times a piece of content is shared on social networking sites. Therefore, this aspect must be practiced in order to get higher ranks in the search results.

- Websites which are already online can progress upward more swiftly than a newly uploaded website, because Google and other search engines index old websites more frequently, which ultimately helps in easy and fast crawling. It is recommended to link your website with some old, famous website.

- In general, webmasters can improve the rank of their sites by increasing the number of high-quality sites that link to their pages (a search engine optimization characteristic known as back links or back linking).

- Alexa provides traffic data, global rankings and other information on thousands of websites. It is suggested to check the ranking of your website periodically through such websites that provide web analytics.

- The Google Penguin algorithm is used to counter spamming and other manipulative ranking practices.

- Link building methods are usually used for the promotion of sites.

5. CONCLUSION
Optimized websites appear at better ranks in search engines and ordinarily get a higher number of visitors. This research is based on reviewing the different available techniques for optimizing individual web pages or the entire website to make them search engine friendly. I have also suggested my own observed methods for search engine optimization. As a future dimension of this research, I intend to develop an effective and accurate system for search engine optimization for obtaining a higher rank for websites in the search results.

REFERENCES
[1] M. Cui and S. Hu, "Search Engine Optimization Research for Website Promotion", ICM Transp. Manage., China, vol. 4, (2011), pp. 100-103.
[2] S. Wang, K. Xu, Y. Zhang and F. Lie, "Search Engine Optimization based on Algorithm of BP Neural Networks", CIS Inst. of Eng. & Tech., China, (2011), pp. 390-394.
[3] H. L. Hsing, H. J. Chen, L. Me. Huang and H. H. Yi, "The application of search engine optimization for internet marketing: An example of the motel web sites", ICCAE, Chung Hua University, Taiwan, vol. 1, (2010), pp. 380-383.
[4] P. T. Chung, S. H. Chung and C. K. Hui, "A web server design using search engine optimization techniques for web intelligence for small organizations", LISAT, IEEE, Long Island University, Brooklyn, NY, USA, (2012), pp. 1-6.
[5] M. Yunfeng, "A Study on Tactics for Corporate Website Development Aiming at Search Engine Optimization", ETCS, Zhejiang University, China.

Dr. Pooja Arora is working as an assistant professor in the MCA department at Ajay Kumar Garg Engineering College, Ghaziabad. She did her B.Sc. (Computer Science) from Delhi University in 1998, MCA from Banasthali Vidyapith in 2001 and Ph.D. in Computer Science in 2014 from Banasthali Vidyapith. Her area of interest is information retrieval.


AMBIENT INTELLIGENCE
Saroj Bala

Assistant Professor, MCA Department, AKGEC, GZB
E-mail: [email protected]

Abstract— Ambient intelligence is an emerging discipline that brings intelligence to our everyday environments and makes those environments sensitive to us. Ambient intelligence (AmI) research builds upon advances in sensors and sensor networks, pervasive computing, and artificial intelligence. Because these contributing fields have experienced tremendous growth in the last few years, AmI research has strengthened and expanded. As AmI research matures, the resulting technologies promise to revolutionize daily human life by making people's surroundings flexible and adaptive. In this article, we provide a survey of the technologies that comprise ambient intelligence.

1. INTRODUCTION
Computer science is a relatively new branch of science, and as such it has gone through rapid yet important transformations during the first decades of its existence. Those transformations have produced a very interesting mix of available experience and expectations, which is making possible the creation and deployment of technology that ultimately improves the way our environments help us. This technical possibility is being explored in an area called Ambient Intelligence [1-3]. Here we survey the field of Ambient Intelligence. Specifically, we review the technologies that led to and that support research in AmI.

The article is structured as follows: Section 2 introduces Ambient Intelligence, and Section 3 outlines the contributing technologies, which are explored in Sections 4 to 8.

2. WHAT IS AmI?
"Ambient Intelligence" (AmI) is not a prediction of the future but a vision. It refers to the future of the information society stemming from the convergence of ubiquitous computing, ubiquitous communication and intelligent user-friendly interfaces, as envisaged by the European Information Society Technology Advisory Group. It puts the emphasis on user-friendliness, user empowerment and support for human interactions. ICT-based artifacts and computers would fade into the background while people would be surrounded by intelligent and intuitive interfaces embedded in all kinds of objects. The environment would recognize individuals and some of their needs and wants, as well as changes in individuals, changes in needs and wants, and changes in the environment. It would respond in a seamless, unobtrusive and often invisible way, nevertheless remaining under the control of humans. Intelligent agents would eventually make decisions that automatically serve a person or notify a person of a need to make a decision or to carry out an action. The AmI vision is by definition normative and aims more at contributing to shaping a desirable future than at describing it realistically. However, as AmI puts a major claim on being people-oriented, user-friendly, unobtrusive and controllable, it is thereby also assumed that AmI will be inclusive of a large share of society. The key question is therefore whether AmI will fulfil some or most of the promises researchers are making, or whether it is just an illusion, offering apparently easy living while actually increasing the complexity of life.

3. CONTRIBUTING TECHNOLOGIES
AmI has a decisive relationship with many areas of computer science. We organize the contributing technologies into five areas [1], shown in the figure below.

A key factor in AmI research is the presence of intelligence. The AmI algorithm perceives the state of the environment and users with sensors, reasons about the data using a variety of AI techniques, and acts upon the environment using controllers in such a way that the algorithm achieves its intended goal. Hence, we focus on technologies that assist with sensing, reasoning, and acting. On the other hand, while AmI draws from the field of AI, it should not be considered synonymous with AI. The IST Advisory Group lists five key technologies that are required to make AmI a reality. Two of these technologies clearly fall outside the typical scope of AI research: human-centric computer interfaces and secure systems and devices.

The agent perceives the state of the environment and residents using sensors. The agent models and reasons about this information, ultimately using it to make a decision about how to act on the information. The state of the environment is changed through actions such as power-line control of devices and robotic assistance.
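The perceive-reason-act cycle described above can be sketched as a simple control loop. The sensor reading, the comfort threshold and the actuator below are invented placeholders, not part of any real AmI platform.

import random
import time

# Hypothetical sensor and actuator, used only for illustration.
def read_temperature():
    """Pretend sensor: returns a room temperature in degrees Celsius."""
    return random.uniform(16.0, 30.0)

def set_heater(on):
    """Pretend actuator: would switch a heater via power-line control."""
    print("heater ->", "ON" if on else "OFF")

COMFORT_MIN = 20.0   # assumed comfort threshold

def ambient_loop(cycles=5, period=0.1):
    for _ in range(cycles):
        temp = read_temperature()          # Sense the current state of the environment.
        too_cold = temp < COMFORT_MIN      # Reason: a trivial rule stands in for richer AI reasoning.
        print(f"sensed {temp:.1f} C", end="  ")
        set_heater(too_cold)               # Act on the environment through the actuator.
        time.sleep(period)

if __name__ == "__main__":
    ambient_loop()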

4. SENSING
Because Ambient Intelligence is designed for real-world, physical environments, effective use of sensors is vital. Without physical components that allow an intelligent agent to sense and act upon the environment, we end up with theoretical algorithms that have no practical use. Sensors are the key that links available computational power with physical applications. Ambient Intelligence algorithms rely on sensory data from the real world. The software algorithm perceives the environment and uses this information to reason about the environment and about the actions that can be taken to change its state. Perception is accomplished using a variety of sensors. Sensors have been designed for position measurement, for detection of chemicals and humidity sensing, for readings of light, radiation, temperature, sound, strain, pressure, position, velocity and direction, and for physiological sensing to support health monitoring.

5. REASONING
Sensing and acting provide links between intelligent algorithms and the real world in which they operate. In order to make such algorithms responsive, adaptive, and beneficial to users, a number of types of reasoning must take place. These include user modeling, activity prediction and recognition, decision making, and spatial-temporal reasoning. One feature that separates general computing algorithms from those that are responsive to the user is the ability to model user behavior. If such a model can be built, it can be used to customize the behavior of the AmI software toward the user. A second contribution that reasoning algorithms offer is the ability to predict and recognize activities that occur in AmI environments. Much of this work has occurred in smart environments research, where the AmI application is focused on a single environment which is outfitted with sensors and designed to improve the experience of the resident in the environment. Researchers have found that different types of sensor information are effective for classifying different types of activities. When trying to recognize actions that involve repetitive body motions (e.g., walking, running, sitting, standing, climbing stairs), data collected from accelerometers positioned on the body has been used. Over the last few years, supporting technologies for Ambient Intelligence have emerged, matured, and flourished. Building a fully automated AmI application on top of these foundations is still a bit of a rarity. Automated decision-making and control techniques are available for this task. Very little can be done within an AmI system without an explicit or implicit reference to where and when the meaningful events occurred. For a system to make sensible decisions, it has to be aware of where the users are and have been during some period of time. These insights, together with other information, will provide important clues on the type of activities the user is engaged in and the most adequate response. Both dimensions, space and time, are useful for understanding key elements of a situation under development.

6. ACTING
AmI systems tie reasoning to the real world through sensing and acting. Intelligent and assistive devices provide a mechanism by which AmI systems can execute actions and affect the system users. Another mechanism is through robots. Relationships between humans and machines have been explored extensively in science fiction stories.

7. HUMAN COMPUTER INTERACTION
A characteristic that the IST Advisory Group highlighted as necessary to further societal acceptance of AmI is that AmI should be made easy to live with. This is further detailed as a need to define human-centric computer interfaces that are context aware and natural.

8. SECURITY CHALLENGES
Ambient Intelligence offers great benefits to users by customizing their environments and unobtrusively meeting their needs. AmI potentially gives more control to humans by making their environments more responsive to intended actions, by supplying humans with customized information, and by reducing the cognitive or physical effort that is required to perform a task. At the same time, AmI can take away control when the environment performs the wrong action, when it forces humans to perform extra or corrective actions, when it shares information with third parties, and when it gives monitoring and data collection access to third parties. Researchers argue that delivering personalized services opens up the possibility for the corresponding personal information to be stored and shared. In a 2003 survey, respondents indicated that privacy protection was more important to them than any potential benefits provided by technologies found in Ambient Intelligence applications.

9. CONCLUSION
Ambient Intelligence is fast establishing itself as an area where a confluence of topics can converge to help society through technology. We have summarized the flexibility of the idea, the current state of the art and current trends at research labs and companies. There are still many challenges ahead, and improvements are needed at all levels: infrastructure, algorithms and human-computer interaction, for AmI systems to be widely accepted and, most important of all, to be useful to society. We are conscious that the realization of the aims set for AmI is not easily reachable, but the field is gaining momentum. Many important elements are advancing and we are optimistic that this will bring the synergy that is needed to materialize the goal of Ambient Intelligence.

REFERENCES
[1] Diane J. Cook, Juan C. Augusto, Vikramaditya R. Jakkula, "Ambient intelligence: Technologies, applications, and opportunities", Pervasive and Mobile Computing 5, 277-298, 2009.
[2] Michael Friedewald, Elena Vildjiounaite, Yves Punie, David Wright, "Privacy, identity and security in ambient intelligence: A scenario analysis", Telematics and Informatics 24, 15-29, 2007.
[3] Michael Friedewald, Olivier Da Costa, Yves Punie, "Perspectives of ambient intelligence in the home environment", Telematics and Informatics 22, 221-238, 2005.

Saroj Bala is working as an Assistant Professor in Ajay Kumar Garg Engineering College, Ghaziabad, U.P. (India). She has obtained her MCA from Punjabi University, Patiala and B.Sc. from Kurukshetra University, Kurukshetra. She has over 16 years of teaching experience. Her research areas include data clustering, swarm intelligence, image processing and green computing. She has attended several seminars, workshops and conferences at various levels. She has published many papers in national and international journals.


HOW TO DESIGN A COMPILER FOR ANY LANGUAGE
Neha Tayal

Student, MCA 3rd Year, AKGEC, GZB
E-mail: [email protected]

Abstract— This article gives an overview of how to build a compiler for any language. It covers the basic steps needed to construct a compiler in a systematic way. After every step an output is obtained which serves as the input for the next step. The article focuses on the design of a compiler for any language.

Index Terms— Compiler, High Level Language, Low Level Language.

1. INTRODUCTION
A compiler is a computer program that implements a programming language specification to "translate" programs, usually supplied as a set of files constituting the source code written in the source language, into their equivalent machine-readable instructions (the target language, often having a binary form known as object code). This translation process is called compilation. We compile the source program to create the compiled program. The compiled program can then be run (or executed) to do what was specified in the original source program. The source language is always a higher-level language in comparison to machine code, written using some mixture of English words and mathematical notation, with assembly language being the lowest compilable language (an assembler being a special case of a compiler that translates assembly language into machine code). Higher-level languages are the most complex to support in a compiler or interpreter, not only because they increase the level of abstraction between the source code and the resulting machine code, but because increased complexity is required to formalize those abstract structures. The target language is normally a low-level language such as assembly, written with somewhat cryptic abbreviations for machine instructions; in these cases the compiler will also run an assembler to generate the final machine code. Some compilers, however, can directly generate machine code for some actual or virtual computer, e.g., byte-code for the Java Virtual Machine.

2. PHASES OF COMPILER

There are six phases of a compiler.

1) Lexical Analysis
The first phase works as a text scanner. It scans the source code as a stream of characters and converts it into meaningful lexemes. The lexical analyzer represents these lexemes in the form of tokens as:

<token-name, attribute-value>
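A minimal sketch of such a scanner for simple arithmetic expressions is shown below. The token categories and regular expressions are chosen only for illustration and do not describe any particular compiler.

import re

# Token categories and their patterns, chosen only for this small example.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(source):
    """Turn a source string into a list of <token-name, attribute-value> pairs."""
    tokens = []
    for match in TOKEN_RE.finditer(source):
        kind = match.lastgroup
        if kind != "SKIP":                 # drop whitespace
            tokens.append((kind, match.group()))
    return tokens

print(tokenize("area = width * (height + 2)"))
# [('IDENT', 'area'), ('OP', '='), ('IDENT', 'width'), ('OP', '*'), ...]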

2) Syntax Analysis
The next phase is called syntax analysis or parsing. It takes the tokens produced by lexical analysis as input and generates a parse tree (or syntax tree). In this phase, token arrangements are checked against the source code grammar, i.e. the parser checks whether the expression made by the tokens is syntactically correct.
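Continuing the example, a small recursive-descent parser can turn the token list into nested tuples standing in for the parse tree. The grammar (expressions built from terms, terms built from factors) is invented for illustration and assumes the tokenize function sketched above.

def parse_expression(tokens, pos=0):
    """expression := term (('+' | '-') term)* ; returns (tree, next position)."""
    tree, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos][1] in ("+", "-"):
        op = tokens[pos][1]
        right, pos = parse_term(tokens, pos + 1)
        tree = (op, tree, right)
    return tree, pos

def parse_term(tokens, pos):
    """term := factor (('*' | '/') factor)*"""
    tree, pos = parse_factor(tokens, pos)
    while pos < len(tokens) and tokens[pos][1] in ("*", "/"):
        op = tokens[pos][1]
        right, pos = parse_factor(tokens, pos + 1)
        tree = (op, tree, right)
    return tree, pos

def parse_factor(tokens, pos):
    """factor := NUMBER | IDENT | '(' expression ')'"""
    kind, value = tokens[pos]
    if kind in ("NUMBER", "IDENT"):
        return value, pos + 1
    if kind == "LPAREN":
        tree, pos = parse_expression(tokens, pos + 1)
        if tokens[pos][0] != "RPAREN":
            raise SyntaxError("expected ')'")
        return tree, pos + 1
    raise SyntaxError(f"unexpected token {value!r}")

tree, _ = parse_expression(tokenize("width * (height + 2)"))
print(tree)   # ('*', 'width', ('+', 'height', '2'))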

3) Semantic Analysis
Semantic analysis checks whether the parse tree constructed follows the rules of the language: for example, that values are assigned only between compatible data types, or that a string is not added to an integer. The semantic analyzer also keeps track of identifiers, their types and expressions, and of whether identifiers are declared before use. The semantic analyzer produces an annotated syntax tree as its output.

4) Intermediate Code Generation
After semantic analysis the compiler generates an intermediate code of the source code for the target machine. It represents a program for some abstract machine and lies between the high-level language and the machine language. This intermediate code should be generated in such a way that it is easy to translate into the target machine code.

5) Code Optimization
The next phase performs optimization of the intermediate code. Optimization can be thought of as something that removes unnecessary code lines and arranges the sequence of statements so as to speed up program execution without wasting resources (CPU, memory).

6) Code Generation
In this phase, the code generator takes the optimized representation of the intermediate code and maps it to the target machine language. The code generator translates the intermediate code into a sequence of (generally) re-locatable machine code. This sequence of machine instructions performs the same task as the intermediate code would.

Symbol Table
The symbol table is a data structure maintained throughout all the phases of a compiler. All identifier names along with their types are stored here, which makes it easier for the compiler to quickly search for an identifier record and retrieve it. The symbol table is also used for scope management.
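A symbol table is often implemented as a stack of per-scope dictionaries. The sketch below (an assumed, minimal design, not taken from the article) supports declaration, lookup and scope management:

# Illustrative symbol table: a stack of scopes, with the innermost scope last.

class SymbolTable:
    def __init__(self):
        self.scopes = [{}]                       # global scope

    def enter_scope(self):
        self.scopes.append({})

    def exit_scope(self):
        self.scopes.pop()

    def declare(self, name, type_):
        if name in self.scopes[-1]:
            raise NameError(f"'{name}' already declared in this scope")
        self.scopes[-1][name] = {"type": type_}

    def lookup(self, name):
        for scope in reversed(self.scopes):      # search inner scopes first
            if name in scope:
                return scope[name]
        raise NameError(f"'{name}' used before declaration")

table = SymbolTable()
table.declare("rate", "float")
table.enter_scope()
table.declare("i", "int")
print(table.lookup("rate"))                      # found in the enclosing scope
table.exit_scope()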

Error Handler
The error handler is used to detect errors in the various phases of the compiler and to take actions to recover from them.

3. CROSS COMPILER IN COMPILER DESIGN

A cross-compiler is a compiler that runs on one machine and produces object code for another machine. Cross-compilers are often used when implementing a new compiler, and any compiler is characterized by three languages:
1. the source language,
2. the object (target) language, and
3. the language in which it is written.

If a compiler has been implemented in its own language, then this arrangement is called a "bootstrap" arrangement.

Suppose we want to write a compiler that compiles a source language S and produces target code in a language T, and that the compiler itself is written in a language I. Such a compiler is denoted by C(S→T, in I). It is also commonly represented by a T-diagram, with the source language S on the left arm, the target language T on the right arm, and the implementation language I at the base.

4. CONCLUSION

Compiler construction is not as complex as it is commonly perceived to be. In this paper, we showed that constructing a compiler for a large subset of Scheme that targets real hardware is simple. The basic compiler is achieved by concentrating on the essential aspects of compilation and freeing the compiler from sophisticated analysis and optimization passes. This helps novice compiler writers build intuition for the inner workings of compilers without being distracted by details. First-hand experience in implementing a basic compiler gives the implementer a better feel for the compiler's shortcomings and thus provides the motivation for enhancing it. Once the basic compiler is mastered, the novice implementer is better equipped to tackle more ambitious tasks.

5. REFERENCES
[1]. Aho, A.V., Sethi, R., and Ullman, J.D. Compilers: Principles, Techniques, and Tools. 1986.
[2]. Appel, A.W. Modern Compiler Implementation in ML. Cambridge University Press, Cambridge, UK, 1998.
[3]. Muchnick, S.S. Advanced Compiler Design and Implementation. Morgan Kaufmann Publishers, 1997.

Neha Tayal is a student of MCA 3rd year at AKGEC, Ghaziabad, an affiliated college of Uttar Pradesh Technical University, Lucknow (India). She did her BCA from CCS University, Meerut.


PIRACY

Ashish Keshari

Student, MCA 3rd Year, AKGEC, GZB
E-mail: [email protected]

Abstract— This article is about piracy, the unauthorized duplication of copyrighted content that is then sold at substantially lower prices in the 'grey' market. Our emphasis is on the different types of piracy; one type, software piracy, is elaborated in more detail. The other issue covered here is the economic impact of piracy on the welfare of society and the loss of revenue due to it.

1. INTRODUCTION

Piracy is the illegal reproduction (copying or counterfeiting) of work such as software, recordings or motion pictures. With the evolution of technology, piracy has become easier and, at the same time, more prevalent. Protected music can be downloaded from the Internet without paying for it. Computer software can be reproduced by cheap workers offshore who copy the design behind the original software. Although most people know about software piracy, many do not fully understand its impact. In other words, piracy is the duplication of anything (so that it looks similar to the original product) and the sale of the copy in the market at low prices.

2. TYPES OF PIRACY

Piracy is done in many ways:

A. SOFTWARE PIRACY
Software piracy is a serious matter. Besides violating the law and the intellectual property rights of software authors, counterfeit software can seriously damage your PC and compromise its security. Counterfeit software is usually sold on bogus websites or through classified ads. Pirated software can crash your computer; you lose time, you could lose irreplaceable files or data, and you could even destroy your PC and all your other software. Counterfeit software can contain spyware that loads onto your computer and reports personal information without your knowledge, such as credit card and bank account numbers, passwords, and address books. Stolen information can be exploited immediately by identity thieves.

An entirely different approach to software piracy, called shareware, acknowledges the futility of trying to stop people from copying software and instead relies on people's honesty. Shareware publishers encourage users to give copies of programs to friends and colleagues but ask everyone who uses a program regularly to pay a registration fee to the program's author directly.

Some common types of software piracy include:
• Counterfeit software,
• Soft lifting,
• Hard disk loading,
• Corporate software piracy, and
• Internet software piracy.

B. DVD/VCD PIRACY
DVD/VCD piracy of Indian films happens in international markets. The prints sent for overseas screening of a film are pirated, typically at a Middle East country airport. DVD/VCD prints are prepared and sent to Pakistan; from Pakistan, these prints may even travel to Nepal and enter the country by land.

C. MUSIC PIRACY
Music piracy refers to the unauthorized replication of music cassettes that flood the market as soon as a new release is launched. Music companies' revenues are hit hard by pirated compact discs and cassettes, as these are available at substantially lower prices than those in stores.

D. CABLE PIRACY
Cable piracy refers to the unauthorized transmission of films through a cable network. Very often, films, especially new releases, are shown through cable without permission from the rights holder. Piracy is a rare phenomenon on satellite channels because these are organized and generally do not show films without buying the proper rights.

E. VIDEO PIRACY
Video piracy takes place when a film is produced in the form of a videocassette without proper authorization from the rights holder, i.e. the producer. Often, film producers sell video rights to another party (generally after six weeks or more of release in theatres), which makes video cassettes for sale or lending.

3. LOSS OF REVENUE DUE TO PIRACY

Piracy is a real problem for many businesses. While the numbers on a global scale can be staggering, those numbers also equate to real losses for the small businesses that sell products that compete with pirated versions. Whether you're in the business of selling recorded music or designer clothing, your sales are affected by competition from illegal products and, sometimes, by competition from legal solutions to piracy.


A. SOFTWARE PIRACY
The pervasiveness of the illegal copying of software is a worldwide phenomenon. It has a profound effect not only on users of the software but also on the software industry as a whole. If you're in the software business, piracy is also hurting you. According to the Business Software Alliance's 2011 Piracy Study, the global piracy rate is 42 percent: almost half of the copies of software in use are pirated. The study claims that the pirated software in use has a total value of $63.4 billion. The piracy rate is much higher in emerging markets than in developed countries such as the U.S.

B. MUSIC PIRACY
According to the Recording Industry Association of America, music piracy results in $12.5 billion of economic loss every year. While this loss directly affects music retailers, it is only one of the three ways that music piracy could be affecting your business. One of the solutions to music piracy has been to ramp up online sales of legal digital downloads. While this is good for the industry, it displaces sales that would otherwise come from retail stores. Even if you aren't in the music retail business, music piracy could be impacting your business if your employees are doing it at work and overtaxing your Internet connection, requiring you to buy more bandwidth to provide adequate performance for your business applications.

C. VIDEO PIRACY
The impacts of video piracy on both the greater market and on small businesses are similar to those of music piracy. A report from NetNames indicates that 96.3 million people streamed pirated video in January 2013, while 148.6 million people downloaded pirated video content. According to the "Wall Street Journal," Stan Liebowitz of the University of Texas at Dallas estimates the cost of video piracy to be as much as $18.5 billion in lost sales every year. If you're in the business of selling video content, you're probably losing sales because of this piracy.

D. OTHER FACTS AND FIGURES REGARDING PIRACY
In 1995, the Business Software Alliance (BSA) claimed that the industry lost "$13 billion per year," "$35 million per day," and "$407 per second" to software piracy. The 1998 Global Software Piracy Report, released in May 1999 by the BSA and the Software & Information Industry Association (SIIA), the two leading trade associations for the software industry, estimated that of 615 million new business software applications installed worldwide during 1998, 231 million, or 38%, were pirated. In 2001, the corresponding figure remained at 40%.

As a matter of fact, PricewaterhouseCoopers estimated in 1998 that if world governments had reduced software piracy rates to certain benchmark levels, direct and indirect employment would have increased by 521,663 jobs and tax revenues by as much as $13.7 billion in 1996/1997 alone. For the U.S. economy, the reduction would have generated an additional 130,000 jobs and nearly $1.0 billion in tax revenues in 1996.

A 2011 study conducted by Frontier Economics estimated that piracy cost G20 nations between $425 billion and $575 billion per year, and estimated that piracy's cost in 2015 would be between $1.14 trillion and $1.53 trillion. This excludes digital piracy but includes a broad range of pirated and counterfeited products such as fashion products, pharmaceuticals and even food and drink. The broad reach of the piracy industry affects businesses in two ways. First, legitimate products compete against lower-priced pirated competitors that may seem alike to nondiscerning customers. Second, you're at risk of unintentionally purchasing pirated products for your business and having to absorb the cost of disposing of them when you realize they aren't the real thing.

4. CONCLUSION

In this article we have briefly discussed piracy and its types. We have also looked at some facts and figures on the losses to the economy due to piracy, including the loss of job opportunities in countries such as the USA.

REFERENCES
[1] Recording Industry Association of America: For Students Doing Reports
[2] NetNames: Sizing the Piracy Universe
[3] The Wall Street Journal: Putting a Price Tag on Film Piracy
[4] Business Software Alliance: 2011 Piracy Study
[5] Frontier Economics / BASCAP: Estimating the Global Economic and Social Impacts of Counterfeiting and Piracy
[6] http://economictimes.indiatimes.com/definition/piracy
[7] https://books.google.co.in/books?id=Vsi2DeudL0C&pg=PA176&lpg=PA176&dq=loss+due+to+piracy+in+world+in+software+industry
[9] http://smallbusiness.chron.com/effect-piracy-business-24541.html
[10] http://www.webopedia.com/TERM/S/software_piracy.html
[11] http://smallbusiness.chron.com/loss-revenue-due-piracy-76178.html

Ashish Keshari is a student pursuing MCA at AKGEC, Ghaziabad, an affiliated college of Uttar Pradesh Technical University, Lucknow (India). He did his BCA from Mahatma Gandhi Kashi Vidyapeeth University, Varanasi.


DNA CAN STORE YOUR DIGITAL DATA FOR UP TO 2,000 YEARS

Ankit Verma
Student, MCA 3rd Year, AKGEC, GZB

Email: [email protected]

Abstract— Scientists have developed a way of storing vast quantities of information for up to a million years in a single molecule of DNA. DNA digital data storage refers to any scheme to store digital data in the base sequence of DNA. This technology uses artificial DNA made with commercially available oligonucleotide synthesis machines for storage and DNA sequencing machines for retrieval. This type of storage system is more compact than current magnetic tape or hard drive storage systems due to the data density of DNA.

INTRODUCTION

The idea and the general considerations about the possibility of recording, storage and retrieval of information on DNA molecules were originally made by Mikhail Neiman and published in 1964–65 in the Radiotekhnika journal.

On August 16, 2012, the journal Science published research by George Church and colleagues at Harvard University, in which DNA was encoded with digital information that included an HTML draft of a 53,400-word book written by the lead researcher, eleven JPG images and one JavaScript program. Multiple copies for redundancy were added, and 5.5 petabits can be stored in each cubic millimetre of DNA.

The researchers used a simple code in which bits were mapped one-to-one with bases. This had the shortcoming that it led to long runs of the same base, the sequencing of which is error-prone. The result showed that, besides its other functions, DNA can also serve as a storage medium, like hard drives and magnetic tape.

DESCRIPTION

Your DNA holds an incredible amount of information in a very small space. Recently, researchers have looked to DNA as a way to store large amounts of digital information that are currently saved on hard drives. DNA storage has other advantages, too: it can last up to 2,000 years without breaking down, needs no active maintenance, and is highly reliable. It stores digital files without electricity for thousands of years and is long-lived, stable and easily synthesized.

Just one gram of DNA can store the equivalent of 14,000 Blu-ray discs. In 2013, researchers demonstrated that data could be saved to and read from DNA, concluding that the DNA holds up because it is 'encapsulated and protected'. Scientists have stored audio and text on fragments of DNA and then retrieved them with near-perfect fidelity, a technique that eventually may provide a way to handle the overwhelming data of the digital age.

METHODOLOGY

A bioengineer and geneticist at Harvard's Wyss Institute have successfully stored 5.5 petabits of data, around 700 terabytes, in a single gram of DNA, smashing the previous DNA data density record by a thousand times. The work, carried out by George Church and Sri Kosuri, basically treats DNA as just another digital storage device. Instead of binary data being encoded as magnetic regions on a hard drive platter, strands of DNA that store 96 bits are synthesized, with each of the bases (T, G, A, C) representing a binary value (T and G = 1, A and C = 0). To read the data stored in DNA, you simply sequence it, just as if you were sequencing the human genome, and convert each of the bases back into binary. To aid with sequencing, each strand of DNA has a 19-bit address block at the start, so a whole vat of DNA can be sequenced out of order and then sorted into usable data using the addresses. A team of Swiss engineers hopes to employ DNA as a method to store even more data. Today, our hard drives can store up to five terabytes of data, encoded with the zeros and ones of binary code. But if data were stored in DNA, the four chemical nucleotides (A, C, G, and T) could theoretically hold up to 300,000 terabytes. The researchers also wanted to see if the DNA would be able to retain information longer than a hard drive does now (about a few decades). They encoded DNA with 83 kilobytes of text written in the 13th and 10th centuries. They stored the DNA in silica spheres to protect it, and then warmed it to 160 degrees Fahrenheit for a week, the equivalent of keeping it at 50 degrees for 2,000 years. When the researchers decoded the DNA, they found no errors, indicating that the DNA held up well and the information they encoded stayed intact.
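To illustrate the one-bit-per-base idea described above, here is a simplified sketch (the details are assumptions of this example only; real schemes add the address blocks, error correction, and rules that avoid long runs of the same base):

# Illustrative DNA encoding: T and G stand for 1, A and C stand for 0.
import random

ONE_BASES, ZERO_BASES = "TG", "AC"

def encode(bits):
    """Map each bit to a randomly chosen base from its pair."""
    return "".join(random.choice(ONE_BASES if b == "1" else ZERO_BASES) for b in bits)

def decode(strand):
    """Map each base back to the bit it represents."""
    return "".join("1" if base in ONE_BASES else "0" for base in strand)

data = "01000001"                 # the ASCII byte for the letter 'A'
strand = encode(data)
print(strand)                     # e.g. CTACACAG (one of many possible strands)
assert decode(strand) == data     # round-trips back to the original bits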

As a future enhancement, DNA can also be used in DNA computing and DNA nanotechnology.


CONCLUSION

The longer you want to store information, the more attractive DNA becomes, and the cost of sequencing and synthesising DNA is falling fast. Digital archives may start using DNA sooner than you think; it no longer makes sense to throw our digitized cultural documents into an informational black hole. We cannot get hung up on the fact that it may not be practical today: any reasonable projection of current trends 5 to 10 years into the future puts this technology in the sweet spot.

“Natural selection and evolution have used DNA to store andread out to make our bodies and we are now using DNA tostore and archive the products of our brains.” – What a twist!

REFERENCES
[1]. Goldman, N.; Bertone, P.; Chen, S.; Dessimoz, C.; Leproust, E. M.; Sipos, B.; Birney, E. (2013). "Towards practical, high-capacity, low-maintenance information storage in synthesized DNA"
[2]. Gao, Y.; Kosuri, S. (2012). "Next-Generation Digital Information Storage in DNA"
[3]. http://www.popsci.com/dna-data-storage-can-last-2000-years
[4]. https://en.wikipedia.org/wiki/DNA_digital_data_storage
[5]. Edwards, Lin (August 17, 2012). "DNA used to encode a book and other digital information"

Ankit Verma is pursuing his M.C.A. at AKGEC, Ghaziabad, an affiliated college of Uttar Pradesh Technical University, Lucknow, India. He did his B.C.A. at Mahatma Gandhi Kashi Vidyapith University, Varanasi, India, where he was a gold medallist. His favourite thought is: "Success means having the courage, the determination, and the will to become the person you believe you were meant to be."


THE RISE OF ETHICAL HACKING

Himani Jindal

Student, MCA 3rd Year, AKGEC, GZB
E-mail: [email protected]

Abstract— This article gives a brief overview of ethical hacking and its relation to corporate security. The intent of ethical hacking is to discover vulnerabilities from a hacker's viewpoint so that systems can be better secured. Protecting systems from the nuisance of hacking means promoting the people who can push back against illegal attacks on computer systems: the ethical hackers. This article develops a central idea of ethical hacking and its aspects as a whole.

1. INTRODUCTION

In today's context, the communication techniques that have brought the world together have also brought anxiety to system owners all over the globe. The main reason behind this insecurity is hacking, more specifically the cracking of computer systems. If you were to ask someone what they thought a hacker is, the response would most likely revolve around negative things such as data fraud, identity theft, and maybe even cyber terrorism.

However, not all hacking is necessarily the criminal, destructive act that we see increasingly reported in our media. Indeed, forms of 'hacking' have often been carried out by organizations or institutions as a method of testing defences. For example, in the 1970s, the United States gave a green light for a team of experts to attack its own computer systems to assess their security robustness.

Fast-forward to the present day, and we now have a thriving industry that is roughly based around the concept of ethical hacking.

2. HOW DID ETHICAL HACKING COME ABOUT?

In 2011, there was a breach in the online security framework related to Sony networks: around 77 million customer records were released online.

It goes without saying, then, that alongside the ubiquity of the internet and the inexorable rise of the digital age, quite naturally governments, institutions, and individuals have growing concerns about cyber security. Despite security protocols and rigorously designed information encryption techniques, can the vast realms of data flowing around the globe remain safe?

In order to better protect that information, companies will often bring in ethical hacking experts from outside to test IT security. These experts carry out 'penetration tests' that seek to probe security weak spots, so that companies can rectify the problems and upgrade their IT security protocols before the 'evil' hackers can infiltrate their systems.

3. HAS ETHICAL HACKING BECOME A CAREER CHOICE?

Ethical hacking is perhaps a derogatory term for a raft of IT specialists better referred to as computer security experts. Their sought-after skill sets give forward-thinking companies access to personnel who are well versed in code and programming languages, have intimate knowledge of operating systems, and can decipher TCP/IP protocols.

There are a growing number of accredited courses that aspiring ethical hackers can take. Once turned professional, contracting penetration testers in the UK can make as much as £500 per day, with a wide range of international projects attracting more legitimate hacking talent to its shores.

4. THE ETHICAL HACKING INDUSTRY

In the USA alone, ethical hacking represents an industry worth nearly $4 billion. The analyst firm Frost and Sullivan estimates the industry is growing at 21% per year, with nearly 2.3 million security professionals providing internet security across the globe.

Governments also chip in to try to grab talent before it is scooped up by the private sector. In 2011, the UK government launched a hacking challenge as a way of kicking off a recruitment drive targeting the most talented hackers in the country. GCHQ has also announced several new groupings of intelligence personnel purely focused on fighting cyber attacks. Apart from governments, the businesses that most frequently need the services of ethical hackers are, most notably, banks, which spend around $25 billion annually defending themselves against cyber attacks.

5. TRICKS OF THE TRADE

Ethical hackers are expert coders, capable of immersing themselves completely in whatever computer security network confronts them. Well-coordinated penetration tests can often be months in the making, and include outlining the systems to be targeted and the amount of 'damage' to be inflicted. Hackers have also developed a fine art in monitoring the social media use of employees to give them an edge in cracking company IT systems. This technique is known as water-holing, and involves ethical hackers scouring the social media hangouts of company employees. There they mine for information that could give them an 'in' to an employee login or more general company details.

Ethical hacking is an overall response to the threats posed to cyber security by the other, less palatable face of hacking. Back in 2010, in a relatively unheralded announcement, the UK Government pledged to spend £650 million to bolster cyber security. With the country as a whole, as well as the Government, facing in excess of 50 million cyber attacks each year, it is no wonder institutions are turning to that very hacking community in a bid to shore up defences.

6. TYPES OF HACKS

There are four basic kinds of hacks that can be done:
• IP Hack: You hire someone to hack a specific IP address, giving them little or no information beforehand. (Be careful if the IP address is an overseas server; you don't want hackers hacking the wrong IP address, like a foreign government's computers, causing an international incident.)
• Application Hack: A much more sophisticated hack that can delve deep into databases and take down production servers. Only experienced hackers, with strict guidelines governing their actions, should be allowed to perform such tests. Never hire a "reformed" black-hat hacker for this type of test.
• Physical Infrastructure Hack: This is where people try to get into your facilities to access your systems, or go dumpster diving looking for confidential information such as passwords discarded on sticky notes.
• Wireless Hack: War-driving is the term for this type of attack, where wireless access points are exploited from the back of a van. Ethical hackers do the same thing, but report their findings back to you instead of stealing your passwords. Have them check out your teleworkers as well, to see if home offices are a source of entry to your network.

7. CONCLUSION

Ethical hacking is the process of analyzing the threat to a given system or network by modelling the actions of an adversary; this article has explained the idea in some detail. Successful ethical hackers possess a variety of skills. First and foremost, they must be completely trustworthy, since the ethical hacker often holds the "keys to the company". Modern security efforts have to plan for the unplanned and anticipate attacks before they occur. Ethical hacking is booming, and it is high time every company recognizes the need for a capable professional ethical hacker. Ethical hacking is not just necessary, it is inevitable.

8. REFERENCES
[1]. Twincling Society Ethical Hacking Seminar. 2006. Retrieved March 27, 2009.
[2]. Krutz, Ronald L. and Vines, Russell Dean. The CEH Prep Guide: The Comprehensive Guide to Certified Ethical Hacking. John Wiley and Sons, 2007.
[3]. Palmer, Charles. Ethical Hacking. IBM Systems Journal: End-to-End Security, Volume 40, Issue 3, 2001.
[4]. Tiller, James S. The Ethical Hack: A Framework for Business Value Penetration Testing. CRC Press, 2005.
[5]. Beaver, Kevin and McClure, Stuart. Hacking For Dummies. For Dummies, 2006.
[6]. Certified Ethical Hacking Seminar. 2006. Retrieved March 27, 2009.
[7]. Certified Ethical Hacking EC-Council. 2009. Retrieved March 27, 2009.
[8]. Ethical Hacking Jobs. 2009. Retrieved March 27, 2009.
[9]. D'Ottavi, Alberto. Interview: Father of the Firewall. 2003. Retrieved March 27, 2009.
[10]. Livermore, Jeffery. What Are Faculty Attitudes Toward Teaching Ethical Hacking and Penetration Testing?. Proceedings of the 11th Colloquium for Information Systems Security Education, 2007.

Himani Jindal is a student at AKGEC, Ghaziabad, an affiliated college of Uttar Pradesh Technical University, Lucknow (India). She did her BCA from Chaudhary Charan Singh University, Meerut, and is pursuing MCA from AKGEC, Ghaziabad. Her areas of interest include Database Management Systems.


STUDY OF ALGORITHMS FOR IDENTIFYING AND REPLACING WEB SERVICE USING SERVICE COMPOSITION AND GENETIC TECHNIQUE

Krishna Kumar Singh
Assistant Professor, MCA Department, AKGEC, GZB

Email: [email protected]

Abstract— A web service is a middleware service between business transactions. If the service causes errors such as deadlock or network failure in the current system, the business transaction fails completely. The problem is identified and tolerated through the Fast Bully algorithm, which identifies the failed service and replaces it with a new service through an election process, thereby minimizing response time. The master service allocates a composite of slave services with the best availability, bandwidth and fit; when a master service sends an IAMUP message to a slave service, the slave takes over the master's position. The Fast Bully algorithm is used so that services can be identified and replaced within the business transaction using genetic techniques. The approach avoids the violation of QoS [1] constraints after replanning by defining and evaluating a replaceability property: a proactive approach that searches for an optimal plan with a lower risk of violating the constraints in the event that re-composition is needed.

1. INTRODUCTION

A web service is a method of communication between two electronic devices over a network. Web services are open-standard (XML, SOAP, HTTP etc.) based web applications that interact with other web applications for the purpose of exchanging data.

Service provider
The service provider describes the web service in a standard format, which is XML (eXtensible Markup Language), and publishes it in a central service registry (UDDI, Universal Description, Discovery and Integration [2]).

Service registry
Because web services can be developed and deployed by many vendors, there are often multiple web services that can perform similar tasks with varying Quality of Service (QoS) attributes. When composing services into a workflow or plan to jointly accomplish processing toward a final result, the services must be compatible with respect to their input, output, and functionality for a temporally ordered interaction that can successfully complete the required task or query. The QoS of the plan depends on the individual service selections for these designated interactions.

Service consumer
The service consumer retrieves the information from the registry and uses the service description obtained to bind to and invoke the web service.

Benefits of using Web Services
• Exposing existing functions onto the network
• Interoperability: connecting different applications [3]
• Standardized protocol
• Low cost of communication
• Loosely coupled
• Ease of integration
• Service reuse

A major problem with web service composition is that QoS values can change at execution time from the original estimations. A service may become unavailable, unreliable, or no longer provide the best fit for the solution. Other services must then be dynamically evaluated to complete the plan. These services are chosen from the same abstract type, a group of services with functionalities that can substitute or replace any service of their type.

Changing QoS values can disrupt the expected compliance of the plan with certain thresholds, such as cost and response time. The impact is even more dramatic if the service lies within a loop with a large number of iterations in the composition. These non-periodic changes require a dynamic planning environment in which certain events force reselection from the physical services of the same abstract type in which the change occurred, to form a new, yet compliant, plan.

QoS attributes are either increasing-dimension or decreasing-dimension. Availability and reliability are increasing-dimension attributes because the resulting plan should incorporate the highest values associated with them. Cost and response time are decreasing-dimension attributes because the plan should incorporate the lowest values associated with them. Techniques for web service composition based on QoS optimization aim to maximize increasing-dimension attributes and minimize decreasing-dimension attributes, while at the same time maintaining any quality constraints imposed on the plan itself. These characteristics make the composition fall into the domain of multi-objective optimization. However, none of the different techniques explored in this domain takes into account redundancy as an inherent property of composition [4].

Ours is a proactive approach that searches for an optimal plan with a lower risk of violating the constraints in the event that re-composition is required. The approach introduces replaceability as a metric applied to plan composition. We define replaceability as the degree to which a plan or a service is changeable with one that accomplishes the same goal or processing, respectively. By including a replaceability metric in the selection process, we significantly reduce the potential violation of constraints during plan execution that can result from QoS changes requiring service reselection. A major challenge is the impact of service reselection on the plan, since substituting a service for one whose QoS values have caused the plan to violate at least one constraint consumes time. Pre-knowledge of alternatives based on replaceability values counteracts this added factor and reduces the time spent in the process [5].
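To make the selection idea concrete, the sketch below scores the candidate services of one abstract type by a normalized QoS utility combined with a simple replaceability term (the weights, the service data and the particular replaceability formula are assumptions of this illustration, not the formulation proposed above):

# Illustrative QoS-plus-replaceability scoring for one abstract service type.

candidates = {                         # hypothetical services and their QoS values
    "svcA": {"availability": 0.99, "cost": 5.0, "response_ms": 120},
    "svcB": {"availability": 0.95, "cost": 2.0, "response_ms": 200},
    "svcC": {"availability": 0.97, "cost": 3.5, "response_ms": 150},
}
increasing = ["availability"]          # higher is better
decreasing = ["cost", "response_ms"]   # lower is better

def normalise(attr, value, higher_is_better):
    values = [qos[attr] for qos in candidates.values()]
    lo, hi = min(values), max(values)
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return score if higher_is_better else 1.0 - score

def utility(qos):
    scores = [normalise(a, qos[a], True) for a in increasing]
    scores += [normalise(a, qos[a], False) for a in decreasing]
    return sum(scores) / len(scores)

# Simple replaceability: the fraction of same-type alternatives left if one fails.
replaceability = (len(candidates) - 1) / len(candidates)

ranked = sorted(candidates,
                key=lambda s: 0.8 * utility(candidates[s]) + 0.2 * replaceability,
                reverse=True)
print(ranked)                          # best candidate first under these assumed weights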

2. LITERATURE SURVEY

2.1 On the Evolution of Services
Managing and controlling the evolution of services is an important goal for the service-oriented paradigm. A unifying theoretical framework has been proposed for controlling the evolution of services that deals with structural, behavioural, and QoS-level-induced service changes in a type-safe manner, ensuring correct versioning transitions so that previous clients can use a versioned service in a consistent manner.

2.2 A fast and elitist multiobjective genetic algorithm: NSGA-II
Web services are grouped as a community to facilitate and speed up the process of web service discovery. The web service community can continue providing services even when the master web service fails operationally. The solution customizes a distributed election algorithm called the Fast Bully Algorithm to identify a temporary master web service when there is any operational failure in the existing master web service of the community. The permanent master web service takes back the mastering responsibilities from the temporary master when it resumes.

2.3 Understanding Approaches for Web Service Composition and Execution
Web services have received much interest due to their potential in facilitating business-to-business or enterprise application integration. Composition involves the creation of a workflow that realizes the functionality of a new service and its subsequent deployment and execution in a runtime environment. A significant number of solutions for the composition and execution of web services have been analyzed, along with their strengths and weaknesses, based on multiple metrics that are critical for a WSCE system, e.g. composition effort, composition control, and the ability to handle failures.

2.4 QoS Aware Middleware for Web Services Composition
The paradigmatic shift from a Web of manual interactions to a Web of programmatic interactions driven by web services is creating opportunities for the formation of online business-to-business collaborations. Many available web services provide identical functionality, though with different Quality of Service. AgFlow's algorithm selects web services for composition in a way that maximizes user satisfaction, expressed as utility functions over QoS attributes. Two selection approaches are described and compared: one based on local (task-level) selection of services, and the other based on global allocation of tasks to services using integer programming.

2.5 Quality of Service and Semantic Composition of Workflows
For the composition of web services, non-functional characteristics are commonly considered criteria for finding and selecting available services, using workflow patterns for the aggregation of QoS. This work focuses on a mechanism that determines the overall Quality of Service (QoS) of a composition by aggregating the QoS of the individual services. With the aggregated QoS it can be verified whether a set of services satisfies the QoS requirements for the whole composition or not [6]. The aggregation builds upon abstract composition patterns, which model basic structural elements of a composition like parallel paths, a sequence, or a looped execution.
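The following sketch shows what such pattern-based aggregation can look like (the aggregation rules below are common simplifying assumptions, not the cited work's exact model): costs add up everywhere, response time adds up in a sequence but is the slowest branch across parallel paths, and availability multiplies because every part must be up.

# Illustrative QoS aggregation over composition patterns.
from math import prod

def sequence(parts):
    return {"cost": sum(p["cost"] for p in parts),
            "response_ms": sum(p["response_ms"] for p in parts),
            "availability": prod(p["availability"] for p in parts)}

def parallel(parts):
    return {"cost": sum(p["cost"] for p in parts),
            "response_ms": max(p["response_ms"] for p in parts),
            "availability": prod(p["availability"] for p in parts)}

def loop(part, iterations):
    return {"cost": part["cost"] * iterations,
            "response_ms": part["response_ms"] * iterations,
            "availability": part["availability"] ** iterations}

s1 = {"cost": 2.0, "response_ms": 100, "availability": 0.99}
s2 = {"cost": 3.0, "response_ms": 150, "availability": 0.98}
plan = sequence([s1, parallel([s1, s2]), loop(s2, 5)])
print(plan)    # aggregated QoS of the whole composition, to check against constraints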

3. CONCLUSION

Implementation is the most crucial stage in achieving a successful new system and in giving users confidence that the new system will work efficiently and effectively. The system can be implemented only after thorough testing is done and it is found to work according to the specification.

It involves careful planning, investigation of the current system and its constraints on implementation, design of methods to achieve the changeover, and an evaluation of changeover methods. Two major tasks in preparing for implementation are the education and training of the users and the testing of the system. System analysis and design efforts will be greater for more complex systems being implemented. Based on the policies of the individual organization, an implementation coordinating committee is appointed.


The implementation process begins with preparing a plan for the implementation of the system. According to this plan, the other activities are carried out. In this plan, discussion is made regarding the equipment, the resources and how to test the activities. Thus a clear plan is prepared for the activities.

REFERENCES
[1] Gennaro (Jerry) Cuomo, IBM SOA "on the Edge", SIGMOD 2005.
[2] Mira Kajko-Mattsson, Grace A. Lewis, Dennis B. Smith, "A Framework for Roles for Development, Evolution and Maintenance of SOA-Based Systems", International Workshop on Systems Development in SOA Environments (SDSOA'07), 2007.
[3] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II", IEEE Trans. on Evolutionary Computation, 2002.
[4] V. Agarwal, G. Chafle, S. Mittal, and B. Srivastava, "Understanding Approaches for Web Service Composition and Execution", Proc. 1st Bangalore Computing Conference, 2008.
[5] G. Canfora, M. Penta, R. Esposito, and M. L. Villani, "An Approach for QoS-aware Service Composition based on Genetic Algorithms", 2006.
[6] L. Zeng, B. Benatallah, A. H. H. Ngu, M. Dumas, J. Kalagnanam, and H. Chang, "QoS-Aware Middleware for Web Services Composition", IEEE Trans. on Software Engineering, 30(5), 2004.

Krishna Kumar Singh is working as an assistant professor at Ajay Kumar Garg Engineering College, Ghaziabad, U.P. (India). He obtained his M.Tech (Information Technology) from GGSIP University and his MCA from VBS Purvanchal University, Jaunpur (U.P.). He has been teaching for more than 11 years and has been a member of several academic and administrative committees. During his teaching tenure he has worked as a committee member in organizing a national conference and many technical fests at the college level. He has attended several seminars, workshops and conferences at various levels, and has papers published in national and international conferences and journals. His areas of research include software reliability, the semantic web and cloud computing.


WIRELESS CHARGING OF BATTERIES USING MICROWAVES

Anuj Kumar Dwivedi
Assistant Professor, MCA Department, AKGEC, GZB

Email: [email protected]

Abstract— It is a hectic task to carry the charger of a mobile phone or any other electronic gadget everywhere while travelling, and it is frustrating when your mobile phone switches off just when you urgently need it. This is a major problem with today's electronic gadgets. Though the world is leading the way in technological developments, the technology is still incomplete because of these limitations. Today's world requires complete technology, and for this purpose we propose the wireless charging of batteries using microwaves.
In recent days some solutions to this problem have appeared using Witricity (wireless transmission of electricity). Nokia has launched the Nokia Lumia 920 smartphone, whose special feature is wireless charging. But this is possible only when the device is placed on the plate provided for wireless charging, so it is still somewhat difficult to travel with those charging plates, and there is a chance of forgetting them. We therefore need something that can charge our electronic gadgets whenever they are used.
The proposed method gives a solution to this problem. Think how it would be if your electronic gadget were charged while you use it; the label would then read "CHARGE ON USE". This wireless charging method works on the principle of the microwave oven: just as things placed in a microwave oven get heated, these batteries should charge using microwaves, which have long been the medium of communication. We receive our network in the form of microwaves, and it has been shown that not all of the radiation from cellular mobile communication is used; the remaining radiation creates hazards for human beings. So here we work on the idea of using that remaining radiation to charge our batteries, which would also be a good way to reduce the effect of the radiation.

1. INTRODUCTION

The principle of wireless charging has been around for over a century, but only now are we beginning to recognize its true potential. First, we need to be careful about how liberally we use "wireless" as a term; such a word implies that you can just walk around the house or office and be greeted by waves of energy beamed straight to your phone. We are referring, largely, to inductive charging: the ability to manipulate an electromagnetic field in order to transfer energy over a very short distance between two objects (a transmitter and a receiver). It is limited to distances of just a few millimetres for the moment, but even with this limitation, such a concept will allow us to power up phones, laptops, keyboards, kitchen appliances, and power tools in a large number of places: in our homes, our cars, and even the mall.

There are three types of wireless charging:
1. Inductive charging
2. Radio charging
3. Resonance charging

1. Inductive charging: Inductive charging charges electrical batteries using electromagnetic induction. A charging station sends energy through inductive coupling to an electrical device, which stores the energy in its batteries.
2. Radio charging: Radio charging is only effective for small devices. The battery of a laptop computer, for example, requires more power than radio waves can deliver. The range also limits the effectiveness of radio charging.
3. Resonance charging: Resonance charging is used for items that require large amounts of power, such as an electric car, robot, vacuum cleaner or laptop computer. In resonance charging, a copper coil attached to a power source is the sending unit.

2. APPROACH OVERVIEW

Microwave Region
Microwaves are good for transmitting information from one place to another because microwave energy can penetrate fog, light rain and snow, clouds, and smoke.

Microwaves are radio waves with wavelengths ranging from 1 mm to 1 metre and frequencies from 300 MHz to 300 GHz, so their wavelengths can be measured in centimetres. The longer microwaves, those closer to a foot in length, are the waves which heat our food in a microwave oven.

Microwaves are used in the communication industry and in the kitchen as a way to cook food. Microwave radiation is associated with energy levels that are usually considered harmless, except for people with pacemakers.

Microwave region of electromagnetic spectrum


Table 1.1 Microwave spectrum

Designation    Frequency range
L Band         1 to 2 GHz
S Band         2 to 4 GHz
C Band         4 to 8 GHz
Ku Band        12 to 18 GHz
K Band         18 to 24 GHz
Ka Band        26 to 40 GHz
Q Band         30 to 50 GHz
U Band         40 to 60 GHz

3. WORKING PROCEDURE

The overall procedure is classified into two sections:
1. Transmitter section
2. Receiver section

3.1 Transmitter Section
The transmitter section consists of two parts:
(i) Magnetron
(ii) Slotted waveguide antenna

Magnetron
A magnetron is the combination of a simple diode vacuum tube with built-in cavity resonators and an extremely powerful permanent magnet. The anode is usually made of copper and is connected to a high-voltage positive direct current. In the center of the anode, called the interaction chamber, is a circular cathode.

A magnetron, therefore, is an oscillator, not an amplifier. A takeoff loop in one cavity provides the output.

Magnetron

Slotted Waveguide Antenna
The slotted waveguide is used in an omni-directional role. It is one of the simplest ways to get a real 10 dB gain over 360 degrees of beam width. The slotted waveguide antenna is a horizontally polarized antenna, light in weight and weatherproof. Tuning screws are provided for tweaking the SWR and can be used to adjust the centre frequency downwards from a nominal 2320 MHz to about 2300 MHz. This antenna is available for different frequencies.

This antenna, called a slotted waveguide, is a very low-loss transmission line. It allows propagating signals to a number of smaller antennas (slots). The signal is coupled into the waveguide with a simple coaxial probe, and as it travels along the guide, it traverses the slots.

Each of these slots allows a little of the energy to radiate. The slots are in a linear array pattern. The waveguide antenna transmits almost all of its energy at the horizon, usually exactly where we want it to go. Its exceptional directivity in the elevation plane gives it quite high power gain. Additionally, unlike vertical collinear antennas, the slotted waveguide transmits its energy using horizontal polarization, the best type for distance transmission.

Slotted waveguide antenna

3.2 Receiver Section
The basic addition to the mobile phone is going to be the rectenna. A rectenna is a rectifying antenna, a special type of antenna that is used to directly convert microwave energy into DC electricity. Its elements are usually arranged in a mesh pattern, giving it a distinct appearance from most antennas. A simple rectenna can be constructed from a Schottky diode placed between antenna dipoles. The diode rectifies the current induced in the antenna by the microwaves. Rectennas are highly efficient at converting microwave energy to electricity.
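As a rough feasibility check (a back-of-the-envelope sketch; every figure below, including the transmit power, antenna gains, distance and rectification efficiency, is an assumption of this example and not taken from the article), the free-space Friis equation gives an idea of how much DC power a rectenna could recover from a nearby base station:

# Illustrative estimate of rectenna DC output using the Friis free-space equation.
import math

def received_power_w(p_tx_w, gain_tx, gain_rx, freq_hz, distance_m):
    """Friis free-space received power: Pr = Pt * Gt * Gr * (lambda / (4*pi*d))^2."""
    wavelength = 3e8 / freq_hz
    return p_tx_w * gain_tx * gain_rx * (wavelength / (4 * math.pi * distance_m)) ** 2

p_rx = received_power_w(p_tx_w=20.0,     # assumed 20 W carrier at the tower
                        gain_tx=50.0,    # assumed ~17 dBi sector antenna
                        gain_rx=2.0,     # assumed small rectenna array on the phone
                        freq_hz=900e6,   # GSM 900 MHz band mentioned in this article
                        distance_m=100.0)
dc_power = 0.5 * p_rx                    # assumed 50% rectification efficiency
print(f"received = {p_rx * 1e3:.2f} mW, usable DC = {dc_power * 1e3:.2f} mW")

Even under these favourable assumptions the recovered power is a small fraction of a watt, which is consistent with the modest charging rate listed among the disadvantages later in this article.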

Block Diagram

Sensor Circuit
The sensor circuitry is a simple circuit which detects whether the mobile phone is receiving any message signal. This is required, as the phone has to be charged as long as the user is talking. Thus a simple F-to-V (frequency-to-voltage) converter serves our purpose. In India the operating frequency of the mobile phone operators is generally 900 MHz or 1800 MHz for the GSM system for mobile communication. Thus simple F-to-V converters can act as switches to trigger the rectenna circuit on.

Block diagram for the LM2907

LM2907 IC

4. PROCESS OF RECTIFICATION

This study consists in designing and simulating a single-diode power rectifier in "hybrid technology" with improved sensitivity at low power levels. We achieved this with a Schottky diode.

Rectification

Rectenna Array

5. ADVANTAGES
• Charging of the mobile phone is done wirelessly.
• Time is saved in charging mobiles.
• Wastage of power is less.
• It is better than Witricity, as the distance Witricity can cover is about 20 metres, whereas in this technology the base station used for transmission can cover a much larger area.
• The mobile gets charged as we make a call, even during a long journey.
• Only one microwave transmitter can serve all the service providers in that area.

6. DISADVANTAGES
• Radiation problems may occur.
• Network traffic may cause problems in charging.
• Charging depends on network coverage.
• The rate of charging may be very small.
• The process is of high cost.

7. CONCLUSION

This proposed method demonstrates a novel way of using the power of microwaves to charge mobile phones without the use of wired chargers. It gives mobile phone users the great advantage of being able to carry their phones anywhere, even to places devoid of charging facilities. Its effect on human beings is similar to that from cell phones at present. The use of a rectenna and a sensor in the mobile phone could provide a new dimension in the revolution of mobile power.

REFERENCES
[1] Lin, J.C., 2013. Wireless Power Transfer for Mobile Applications, and Health Effects. IEEE, Chicago, pp. 2164-2173.
[2] S. Sheik Mohammed, K. Ramasamy, T. Shanmuganantham, 2010. Wireless Power Transmission – A Next Generation Power Transmission System. International Journal of Computer Applications (0975-8887).
[3] Computerworld.com
[4] en.wikipedia.org

Anuj Kumar Dwivedi is an Assistant Professor in the Department of MCA, AKGEC, Ghaziabad. He holds a Master's degree in IT and is pursuing a PhD from Jodhpur National University, Jodhpur.