Souvenir CIS 2021
Transcript of Souvenir CIS 2021
CIS 2021
Jointly Organized by
CHRIST (Deemed to be
University), Bangalore
and
Soft Computing Research
Society
September 04-05, 2021
Congress on Intelligent Systems (CIS 2021)
Souvenir
TABLE OF CONTENTS
Chief Patron ..................................................................................................................... 1
Patrons .............................................................................................................................. 1
Honorary Chair................................................................................................................. 1
General Chairs .................................................................................................................. 2
Organising Chairs ............................................................................................................ 3
Program Chairs ................................................................................................................ 3
Publicity Committee ........................................................................................................ 4
Publication Committee ..................................................................................................... 5
Session Management Committee ..................................................................................... 5
Organizing Committee ..................................................................................................... 6
Advisory Board ................................................................................................................ 8
Abstract of Accepted Papers .......................................................................................... 12
Implementation of Morphological Gradient Algorithm For Edge Detection ................ 13
Pythagorean Fuzzy Information Measure with Application to Multicriteria Decision
Making ........................................................................................................................... 13
Leaf Disease Identification in Rice Plants Using CNN Model ...................................... 14
Twitter Sentiment Analysis Based on Neural Network Techniques .............................. 14
Support Vector Machine Performance Improvements by Using Sine Cosine Algorithm
........................................................................................................................................ 15
Enhanced Stock Market Prediction using Hybrid LSTM Ensemble ............................. 16
Centrist Traffic Management Protocol within the Opportunist Network ...................... 16
Impact of Business Intelligence on Organizational Decision-Making and Growth
Analysis .......................................................................................................................... 17
CONCISE: An Algorithm for Mining Positive and Negative Non-Redundant Association
Rules ............................................................................................................................... 17
Developing an Improved Software Architecture Framework for Smart Manufacturing
........................................................................................................................................ 18
Intelligent Water Drops Algorithm Implementation using Mathematical Function ...... 18
French Covid-19 Tweets Classification Using FlauBERT Layers ................................ 19
A Deliberation on the Stages of Artificial Intelligence .................................................. 19
A Novel Weighted Extreme Learning Machine for Highly Imbalanced Multiclass
Classification .................................................................................................................. 20
Prediction and Analysis of Recurrent Depression Disorder: Deep Learning Approach 20
Energy Efficient ACO-DA Routing Protocol Based on IoEABC-PSO Clustering in WSN
........................................................................................................................................ 21
An Enhanced Pixel Intensity Range based Reversible Data Hiding Scheme for
Interpolated Images ........................................................................................................ 21
Modelling Critical Success Factors for Smart Grid Development in India ................... 22
Analysing a Raga Based Bollywood Song: A Statistical Approach .............................. 22
Stability Analysis of Emerged Seaside Perforated Quarter Circle Breakwater using ANN,
SVM and AdaBoost Models .......................................................................................... 23
Advanced Spam Detection using NLP & Deep Learning .............................................. 23
A Risk-Budgeted Portfolio Selection Strategy Using Novel Metaheuristic Optimization
Approach ........................................................................................................................ 24
An Optimization Reconfiguration Reactive Power Distribution Network based on
Improved Bat Algorithm ................................................................................................ 24
Security Prioritized Heterogeneous Earliest Finish Time Workflow Allocation
Algorithm for Cloud Computing .................................................................................... 25
An Approach for Enhancing Security of Data over Cloud Using Multilevel Algorithm
........................................................................................................................................ 25
Dropout-VGG based Convolutional Neural Network for Traffic Sign Categorization . 26
A Systematic Literature Review on Image Pre-Processing and Feature Extraction
Techniques in Precision Agriculture .............................................................................. 26
Assessment of the Spatial Variability of Air Pollutant Concentrations at Industrial
Background Stations in Malaysia Using Self-organizing Map (SOM) ......................... 27
A Comprehensive Study on Computer Aided Cataract Detection, Classification and
Management using Artificial Intelligence ...................................................................... 27
Attention Based Ensemble Deep Learning Technique for Prediction of Sea Surface
Temperature ................................................................................................................... 28
Ordered Ensemble Classifier Chain for Image and Emotion Classification .................. 28
Improving Black Hole Algorithm Performance by Coupling with Genetic Algorithm for
Feature Selection ............................................................................................................ 29
A Real-Time Traffic Jam Detection and Notification System Using Deep Learning
Convolutional Networks ................................................................................................ 29
A Novel Deep Learning SFR Model for FR-SSPP at Varied Capturing Conditions and
Illumination Invariant..................................................................................................... 30
Design of a Robotic Flexible Actuator Based on Layer Jamming ................................. 30
UAV Collaboration for Autonomous Target Capture .................................................... 31
Attention Based Ensemble Deep Learning Technique for Prediction of Sea Surface
Temperature ................................................................................................................... 31
Women’s Shield ............................................................................................................. 32
Sentiment Analysis on Diabetes Diagnosis Health Care using Machine Learning
Technique ....................................................................................................................... 32
Predicting the Health of the System based on the Sounds ............................................. 33
Fake News Detection Using Machine Learning Technique .......................................... 33
A Model Based on Convolutional Neural Network (CNN) for Vehicle Classification . 34
Study of Impact of COVID-19 on Students Education .................................................. 34
A Transfer Learning Approach for Face Recognition using Average Pooling and
MobileNetV2 ................................................................................................................. 35
A Deep Learning Approach for Splicing Detection in Digital Audios .......................... 35
Classifying Microarray Gene Expression Cancer Data using Statistical Feature Selection
and Machine Learning Methods..................................................................................... 36
Ontology Formation and Comparison for Syllabus Structure Using NLP .................... 36
A Leaf Image based Automated Disease Detection Model ........................................... 37
An Optimized Active Learning TCM-KNN Algorithm Based on Intrusion Detection
System ............................................................................................................................ 37
A framework for analyzing crime dataset in R using Unsupervised Optimized K-means
Clustering Technique ..................................................................................................... 38
Grading, classification, and sorting of South Indian Mango Varieties based on the stage
of Ripeness ..................................................................................................................... 38
Multi-Criteria Decision Theory based Cyber Foraging Peer Selection for Content
Streaming ....................................................................................................................... 39
Multi Agent Co-operative Framework for Autonomous Wall Construction ................. 39
An Efficient Comparison on Machine Learning and Deep Neural Networks in Epileptic
Seizure Prediction .......................................................................................................... 40
Seed Set Selection in Social Networks using Community Detection and Neighborhood
Distinctness .................................................................................................................... 40
Ensemble Model of Machine Learning for Integrating Risk in Software Effort Estimation
........................................................................................................................................ 41
Analysis of Remote Sensing Satellite Imagery for Crop Yield Mapping using Machine
Learning Techniques ...................................................................................................... 41
Construction of a Convex Polyhedron from a Lemniscatic Torus ................................. 42
An Ant System Algorithm based on Dynamic Pheromone Evaporation Rate for Solving
0/1 Knapsack Problem ................................................................................................... 42
Deducing Water Quality Index (WQI) by Comparative Supervised Machine Learning
Regression Techniques for India Region ....................................................................... 43
Artificial Ecosystem-based Optimization for Optimal Location and Sizing of Solar
Photovoltaic Distribution Generation in Agriculture Feeders ....................................... 43
Optimized Segmentation Technique for Detecting PCOS in Ultrasound Images ......... 44
Framework for Estimating Software Cost using Improved Machine Learning Approach
........................................................................................................................................ 44
A Questionnaire-based Analysis of Network Forensic Tools ........................................ 45
The Extraction of Automated Vehicles Traffic Accident Factors and Scenarios using
Real-World Data ............................................................................................................ 45
Analysis of Lung Cancer Prediction at an Early Stage: A Systematic Review ............. 46
Sentimental Analysis of Code-Mixed Hindi Language Tweets ..................................... 46
A Comprehensive Survey on Machine Reading Comprehension: Models, Benchmarked
Datasets, Evaluation Metrics and Trends ....................................................................... 47
Cognitive Computing and its Relationship to Computing Methods and Advanced
Computing from a Human-Centric Functional Modeling Perspective .......................... 47
A Novel Feature Descriptor: Color Texture Description with Diagonal Local Binary
Patterns Using New Distance Metric for Image Retrieval ............................................. 48
OntoINT: A Framework for Ontology Integration based on Entity Linking from
Heterogeneous Knowledge Sources ............................................................................... 48
Digital Building Blocks using Perceptrons in Neural Networks ................................... 49
KnowCommerce: A Semantic Web Compliant Knowledge-Driven Paradigm for Product
Recommendation in E-Commerce ................................................................................. 49
Ant System Algorithm with Output-Validation for Solving 0/1 Knapsack Problem .... 50
Removal of Occlusion in Face Images Using PIX2PIX Technique for Face Recognition
........................................................................................................................................ 50
Pandemic Simulation and Contact Tracing: Identifying Superspreaders ...................... 50
Age, Gender and Emotion Estimation Using Deep Learning ........................................ 51
Assessment of Attribution in Cyber Deterrence: A Fuzzy Entropy Approach .............. 51
Predictive Maintenance of Bearing Machinery using MATLAB .................................. 52
Application of Data Mining and Temporal Data Mining Techniques: A Case Study of
Medicine Classification .................................................................................................. 52
Fuzzy Keyword Search over Encrypted Data in Cloud Computing: An Extensive
Analysis .......................................................................................................................... 53
A Deep Learning Approach for Plagiarism Detection System using BERT ................. 53
Enhanced Security Layer for Hardening Image Steganography .................................... 54
Machine Learning Techniques on Disease Detection and Prediction Using the Hepatic
and Lipid Profile Panel Data: A Decade Review ........................................................... 54
Matrix Games with Linguistic Distribution Assessment Payoffs .................................. 55
Performance Analysis of Machine Learning Algorithms for Website Anti-phishing ... 55
Analytical Analysis of Two Ware-House Inventory Model Using Particle Swarm ...... 56
Towards an Enhanced Framework to Facilitate Data Security in Cloud Computing .... 56
Political Optimizer Based Optimal Integration of Soft Open Points and Renewable
Sources for Improving Resilience in Radial Distribution System ................................. 57
Kinematics and Control of a 3 DOF Industrial Manipulator Robot............................... 57
Enhanced Energy Efficiency in Wireless Sensor Networks .......................................... 58
Social Structure to Artificial Implementation: Honeybees ............................................ 58
Depth and Breadth of Artificial Bee Colony Optimization ............................................ 58
Lifetime Aware Secure Data Aggregation Through Integrated Incentive-based
Mechanism in IoT based WSN Environment ................................................................ 59
A Multi-attribute Decision Approach in Triangular Fuzzy Environment under TOPSIS
Method for All-rounder Cricket Player Selection .......................................................... 59
Multi-Temporal Analysis of LST-NDBI Relationship with Respect to Land Use-Land
Cover Change for Jaipur City, India .............................................................................. 60
Analysis and Performance of JADE on Interoperability Issues Between Two Platform
Languages ...................................................................................................................... 60
Interval-valued Fermatean Fuzzy TOPSIS Method and its Application to Sustainable
Development Program ................................................................................................... 61
A TAM Based Study on the ICT Usage by the Academicians in Higher Educational
Institutions of Delhi NCR .............................................................................................. 61
An Empirical Study of Signal Transformation Techniques on Epileptic Seizures Using
EEG Data ....................................................................................................................... 62
An Investigation on Impact of Gender in Image based Kinship Verification................ 62
Classification of Covid-19 Chest CT images using Optimized Deep Convolutional
Generative Adversarial Network and deep CNN ........................................................... 63
Intelligent Fractional Control System of a Gas Diesel Engine ...................................... 63
Diabetes Prediction using Logistic Regression & K-Nearest Neighbor ........................ 64
Linear Regression for Car Sales Prediction in Indian Automobile Industry ................. 64
Load Balancing Algorithms in Cloud Computing Environment – An Effective Survey
........................................................................................................................................ 65
Agent driven Traffic Light Sequencing System using Deep Q Learning ...................... 65
Rainfall Estimation and Prediction using Artificial Intelligence: A Survey .................. 66
System Partitioning with Virtualization for Federated and Distributed Machine Learning
on Critical IoT Edge Systems ........................................................................................ 66
A Review on Preprocessing Techniques for Noise Reduction in PET-CT Images for Lung
Cancer ............................................................................................................................ 67
Analysis on Advanced Encryption Standard with Different Image Steganography
Algorithms: An Experimental Study .............................................................................. 67
Optimal DG Planning and Operation for Enhancing Cost Effectiveness of Reactive
Power Purchase .............................................................................................................. 68
Image Classification using CNN to Diagnose Diabetic Retinopathy ............................ 68
Real-Time Segregation of Encrypted Data Using Entropy ............................................ 69
Performance Analysis of Different Deep Neural Architectures for Automated Metastases
Detection of Lymph Node Sections in Hematoxylin and Eosin-stained Whole-slide
images ............................................................................................................................ 69
Model Order Reduction of Continuous Time Multi Input Multi Output System Using
Sine Cosine Algorithm ................................................................................................... 70
Smart e-waste Management in China: a Review ........................................................... 70
A Study of Decision Tree Classifier to Predict Learner’s Progression .......................... 71
Prediction of User’s Behavior on the Social Media Using XGBRegressor ................... 71
Artificial Intelligence Framework for Content Based Image Retrieval: Performance
Analysis .......................................................................................................................... 72
Comparing the Pathfinding Algorithms A*, Dijkstra’s, Bellman-Ford, Floyd-Warshall,
and Best First Search for the Paparazzi Problem ........................................................... 72
Optimizing an Inventory Routing Problem Using a Modified Tabu Search ................. 73
Handwritten Digit Recognition Using Very Deep Convolutional Neural Network ...... 73
Classification of Breast Cancer Histopathological Images Using Pretrained CNN Models
........................................................................................................................................ 74
The Necessity to Adopt Bigdata Technologies for Efficient Performance Evaluation in
the Modern Era ............................................................................................................... 74
Forecasting Stock Market Indexes Through Machine Learning using Technical Analysis
Indicators and DWT ....................................................................................................... 75
Slotted Coplanar Waveguide-Fed Monopole Antenna for Biomedical Imaging
Applications ................................................................................................................... 75
Artificial Intelligence in E-commerce: A Literature Review ......................................... 76
CoFFiTT-Covid-19 Fake News Detection using Fine-Tuned Transfer Learning
Approaches ..................................................................................................................... 76
Improved Telugu Scene Text Recognition with Thin Plate Spline Transform .............. 77
On the Industrial Clustering: A View From an Agent- based Version of Krugman Model
........................................................................................................................................ 77
Linguistic Classification Using Instance-Based Learning ............................................. 78
A Framework for Enhancing Classification in Brain-Computer Interface .................... 78
Measuring the Accuracy of Machine Learning Algorithms when Implemented on
Astronomical Data ......................................................................................................... 79
Modified Non-Local Means Model for Speckle Noise Reduction in Ultrasound Images
........................................................................................................................................ 79
Improved Color Normalization Method for Histopathological Images ......................... 80
Analyzing Voice Patterns to Determine Emotion .......................................................... 80
Face and Emotion Recognition from Real-Time Facial Expressions using Deep Learning
Algorithms ..................................................................................................................... 81
Internet Based Healthcare Things Driven Deep Learning Algorithm for Detection and
Classification of Cervical Cells ...................................................................................... 81
Review on Novel Coronavirus Disease COVID-19....................................................... 82
Brain Tumor Analysis and Reconstruction Using Machine Learning ........................... 82
Development of Multiple Regression Model for Rainfall Prediction ............................ 83
Qualitative Classification of Wheat Grains using Supervised Learning ........................ 83
Fitness based PSO for Large Scale Job Shop Scheduling Problem ............................... 84
An Overview of Blockchain and IoT in e-Healthcare System ....................................... 84
Priority Based Replication Management for HDFS....................................................... 85
Limacon Inspired PSO for LSSMTWTS Problem ........................................................ 85
Visualizing Missing Data ............................................................................................... 86
Chief Patron
Fr Dr Abraham VM
Vice-Chancellor, CHRIST (Deemed to be University),
Bangalore, India
Patrons
Fr Dr Benny Thomas
Director, School of Engineering and Technology,
CHRIST (Deemed to be University), Bangalore, India
Fr Joseph Varghese
Dean Research, CHRIST (Deemed to be University),
Bangalore, India
Honorary Chair
Prof. Iven Jose
Dean, School of Engineering and Technology, CHRIST
(Deemed to be University), Bangalore, India
General Chairs
Prof. Balachandran K
CHRIST (Deemed to be University), Bangalore, India
Prof. Joong Hoon Kim
Korea University, South Korea
Prof. Jagdish Chand Bansal
South Asian University Delhi, India
Dr. Harish Sharma
Rajasthan Technical University, Kota, India
Dr. Mukesh Saraswat
Jaypee Institute of Information Technology, Noida,
India
Organising Chairs
Sandeep Kumar, CHRIST (Deemed to be University), Bangalore, India
Ramesh Vatambeti, CHRIST (Deemed to be University), Bangalore,
India
Kusum Kumari Bharti, Indian Institute of Information Technology,
Design and Manufacturing, Jabalpur, India
Program Chairs
Addapalli V N Krishna, CHRIST (Deemed to be University), Bangalore,
India
Ajit Danti, CHRIST (Deemed to be University), Bangalore, India
Balamurugan M, CHRIST (Deemed to be University), Bangalore, India
Daniel D, CHRIST (Deemed to be University), Bangalore, India
Harish V. Gorewar, RTM Nagpur University, Nagpur
Manohar M, CHRIST (Deemed to be University), Bangalore, India
Raju G, CHRIST (Deemed to be University), Bangalore, India
Ravindra N. Jogekar, RTM Nagpur University, Nagpur
Shantanu A. Lohi, Government College of Engineering, Amravati
Snehal A. Lohi-Bode, Sarvepalli RadhaKrishnan University, Bhopal
Publicity Committee
Diana Jeba Jingle I, CHRIST (Deemed to be University), Bangalore,
India
Debarka Mukhopadhyay, CHRIST (Deemed to be University),
Bangalore, India
Debasish Mukherjee, CHRIST (Deemed to be University), Bangalore,
India
Aruna S K, CHRIST (Deemed to be University), Bangalore, India
Bejoy B J, CHRIST (Deemed to be University), Bangalore, India
Bijeesh T V, CHRIST (Deemed to be University), Bangalore, India
BR Prathap, CHRIST (Deemed to be University), Bangalore, India
Chinthakunta Manjunath, CHRIST (Deemed to be University),
Bangalore, India
Alok Kumar Pani, CHRIST (Deemed to be University), Bangalore, India
Anirban Das, University of Engineering & Management, Kolkata, India
C. Rani, VIT Vellore, India
Neha, National Institute of Technology, Hamirpur, India
D. L. Suthar, Wollo University, Ethiopia
Faruk Ucar, Marmara University
Ponnambalam P, VIT Vellore, India
Ramesh C. Poonia, CHRIST (Deemed to be University), Bangalore, India
V. K. Vyas, Sur University College, Oman
Publication Committee
Mukesh Saraswat, Jaypee Institute of Information Technology, India
Harish Sharma, Rajasthan Technical University, Kota, India
Balachandran K, CHRIST (Deemed to be University), Bangalore, India
Joong Hoon Kim, Korea University, South Korea
Jagdish Chand Bansal, South Asian University Delhi, India
Ganesh Kumar R, CHRIST (Deemed to be University), Bangalore, India
Gnana Prakasi O S, CHRIST (Deemed to be University), Bangalore,
India
Session Management Committee
PS Rana, Thapar Institute of Engineering & Technology, India
Sumit Kumar, Amity University, Noida
Raju Pal, Jaypee Institute of Information Technology, Noida, India
Ajay Sharma, Government Engineering College Jhalawar, India
Himanshu Mittal, Jaypee Institute of Information Technology, India
Praveen Naik, CHRIST (Deemed to be University), Bangalore, India
Raghavendra S, CHRIST (Deemed to be University), Bangalore, India
Merin Thomas, CHRIST (Deemed to be University), Bangalore, India
Michael Moses T, CHRIST (Deemed to be University), Bangalore, India
Mithun B N, CHRIST (Deemed to be University), Bangalore, India
Natarajan K, CHRIST (Deemed to be University), Bangalore, India
Naveen J, CHRIST (Deemed to be University), Bangalore, India
Praveen Kulkarni, CHRIST (Deemed to be University), Bangalore, India
Mary Anitha EA, CHRIST (Deemed to be University), Bangalore, India
Mausumi Goswami, CHRIST (Deemed to be University), Bangalore,
India
Rekha V, CHRIST (Deemed to be University), Bangalore, India
Organizing Committee
Gokulapriya R, CHRIST (Deemed to be University), Bangalore, India
Gurudas V R, CHRIST (Deemed to be University), Bangalore, India
Jayapandian N, CHRIST (Deemed to be University), Bangalore, India
Sathish P K, CHRIST (Deemed to be University), Bangalore, India
Savitha S, CHRIST (Deemed to be University), Bangalore, India
Sujatha A K, CHRIST (Deemed to be University), Bangalore, India
Sumitha V S, CHRIST (Deemed to be University), Bangalore, India
Sundara Pandiyan S, CHRIST (Deemed to be University), Bangalore,
India
Vandana Reddy, CHRIST (Deemed to be University), Bangalore, India
Vinai George Biju, CHRIST (Deemed to be University), Bangalore, India
Cherukuri Ravindranath Chowdary, CHRIST (Deemed to be University),
Bangalore, India
Xavier C, CHRIST (Deemed to be University), Bangalore, India
Kukatlapalli Pradeep Kumar, CHRIST (Deemed to be University),
Bangalore, India
Sathish Kumar R, CHRIST (Deemed to be University), Bangalore, India
Jyothi Thomas, CHRIST (Deemed to be University), Bangalore, India
Kanmani P, CHRIST (Deemed to be University), Bangalore, India
Karthikeyan H, CHRIST (Deemed to be University), Bangalore, India
Julian Benadit P, CHRIST (Deemed to be University), Bangalore, India
Jyothi Mandala, CHRIST (Deemed to be University), Bangalore, India
Joy Paulose, CHRIST (Deemed to be University), Bangalore, India
Samiksha Shukla, CHRIST (Deemed to be University), India
Sumitra Binu, CHRIST (Deemed to be University), India
J Chandra, CHRIST (Deemed to be University), Bangalore, India
Dhiraj Sangwan, Sr. Scientist, CSIR-CEERI, Pilani, India
K G Sharma, Government Engineering College Ajmer
Satya Narayan Tazi, Government Engineering College Ajmer, India
Ravindra N. Jogekar, RTM Nagpur University, Nagpur
Harish V. Gorewar, RTM Nagpur University, Nagpur
Shantanu A. Lohi, SGB Amravati University, Amravati
Advisory Board
A. K. Singh, Motilal Nehru National Institute of Technology Allahabad
(MNNIT), Allahabad, India
A K Verma, Western Norway University of Applied Sciences,
Haugesund, Norway
Abdel Salam Gomaa, Head of Student Data Management Section,
Department of Mathematics, Statistics and Physics, College of Art and
Sciences, Qatar University, Doha
Aboul Ella Hassanien, Cairo University, Egypt
Adarsh Kumar, UPES, Dehradun, India
Ajay Vikram Singh, AIIT, Amity University Uttar Pradesh
Akhil Ranjan Garg, MBM Engg. College, Jodhpur, India
Ali A. Al-Jarrah, Sur University College, Oman
Ali Mirjalili, Torrens University Australia
Alok Kanti Deb, Indian Institute of Technology Kharagpur, India
Anand Nayyar, Scientist, Graduate School, Duy Tan University, Da
Nang, Viet Nam
Anand Paul, Kyungpook National University, South Korea
Anuradha Ranasinghe, Liverpool Hope University, UK
Anurag Jain, GGSIP University, Delhi, India
Aruna Tiwari, Indian Institute of Technology Indore, India
Arun Solanki, Gautam Buddha University, Greater Noida, India
Ashish Kr. Luhach, The PNG University of Technology, PNG
Ashvini Chaturvedi, NIT Surathkal, India
Atulya K. Nagar, Liverpool Hope University, UK
Ayush Dogra, CSIR NPDF, CSIR-CSIO Research Lab, India
B. Padmaja Rani, JNTU Hyderabad
Basant Agarwal, IIIT Kota, Rajasthan India
Carlos A Coello Coello, Investigador CINVESTAV 3F (Professor with
Distinction)
D.L. Suthar, Wollo University, Ethiopia
Dan Simon, Cleveland State University USA
Debasish Ghose, IISc Bangalore, India
Deepak Garg, Bennett University, India
Dhirendra Mathur, RTU Kota, India
Dinesh Goyal, Poornima Institute of Engineering & Technology, Jaipur
Dumitru Baleanu, Cankaya University
Faruk Ucar, Marmara University
Garima Mittal, IIM Lucknow, India
Gonçalo Marques, University of Beira Interior, Portugal
Hanaa Hachimi, Ibn Tofail University, Morocco
J. Senthilnath, Scientist, Machine Intellection, Institute for Infocomm
Research (I²R) | Agency for Science, Technology and Research
(A*STAR), Singapore
Janmenjoy Nayak, Aditya Institute of Technology and Management
(AITAM), Andhra Pradesh-532201, India
Janos Arpad Kosa, Neumann Janos University, Hungary
K. S. Nisar, Prince Sattam bin Abdulaziz University, Riyadh, Saudi
Arabia
Kapil Sharma, Head Department of IT, DTU, India
Kedar Nath Das, National Institute of Technology Silchar, India
Kusum Deep, Indian Institute of Technology, Roorkee, India
Lipo Wang, NTU Singapore
Mahesh Bundele, Poornima College of Engineering, Jaipur
Manju, JIIT, Noida
Manoj Thakur, IIT Mandi
Mario José Diván, Data Science Research Group, Universidad Nacional
de La Pampa, Coronel Gil 353, Primer Piso - Santa Rosa (CP 6300), La
Pampa, Argentina
Maurice Clerc, Independent Consultant, France
Mohammad S Khan, Director of Network Science and Analysis Lab
(NSAL), Department of Computing, East Tennessee State University
Johnson City, TN 37614-1266, USA
N. R. Pal, Indian Statistical Institute, Kolkata, India
Neil Buckley, Liverpool Hope University, UK
Nilanjan Dey, Techno India College of Technology, India
Nishchal K. Verma, Indian Institute of Technology Kanpur, India
Noor Zaman, Taylor's University, Malaysia
P. Vijaykumar, University College of Engineering Tindivanam, India
Pankaj Srivastava, MNNIT, Prayagraj, India
Prashant Jamwal, Nazarbayev University, Kazakhstan
R. C. Mittal, Jaypee Institute of Information Technology, India
Ravinder Rena, NWU School of Business, North West University,
Mafikeng Campus, South Africa
Ravi Raj Choudhary, Central University of Rajasthan, India
S. Sundaram, IISc Bangalore, India
Said Salhi, Kent Business School | University of Kent
Sarbani Roy, Jadavpur University, Kolkata, India
Satish Chand, Jawaharlal Nehru University, India
Sanjeevikumar Padmanaban, Department of Energy Technology
Aalborg University, Esbjerg, Denmark
Sudeep Tanwar, NIRMA University, Gujarat, India
Sunita Agrawal, Motilal Nehru National Institute of Technology
Allahabad, India
Suresh Satapathy, KIIT Deemed to be University, Bhubaneswar, India
Swagatam Das, Indian Statistical Institute, Kolkata, India
T. V. Vijay Kumar, Jawaharlal Nehru University, India
V. K. Vyas, Sur University College, Oman
Vivek Jaglan, Dean Research, GEHU, Dehradun, India
Implementation of Morphological Gradient Algorithm
For Edge Detection
Mirupala Aarthi Vardhan Rao, *Debasish Mukherjee,
Savitha S
Department of Computer Science and Engineering, School of
Engineering, CHRIST (Deemed-to-be-University), Bangalore, India
Abstract. This paper presents the implementation of the morphological gradient in
MATLAB and on the Colab platform, analysing the time consumed for different sizes of
grayscale images and structuring elements. The morphological gradient is an edge
detection technique derived as the difference of two morphological operations,
dilation and erosion. Before the morphological operations are applied to an image,
padding is carried out, inserting 0 for the dilation operation and 255 for erosion.
The number of padded rows and columns depends on the size of the structuring element.
Dilation and erosion are then applied to the image to obtain the morphological
gradient. Since a central processing unit (CPU) implementation computes sequentially,
time consumption increases significantly with image size. To analyse the time
consumption and verify performance across platforms, the morphological gradient
algorithm is implemented in both MATLAB and Colab. The results demonstrate that the
Colab implementation is 10 times faster than MATLAB when a constant structuring
element with varying image size is used, and 5 times faster when a constant image
size with varying structuring-element size is used.
Pythagorean Fuzzy Information Measure with
Application to Multicriteria Decision Making
Anjali Munde
Amity University Uttar Pradesh, Noida, India
Abstract. The theory of Pythagorean fuzzy sets presents a unique technique for
modelling ambiguity and imprecision with higher accuracy and correctness than
intuitionistic fuzzy sets. The notion was constructed to characterize ambiguity and
imprecision mathematically and to provide a validated tool for applying fuzziness to
real issues. In this paper, a Pythagorean fuzzy information measure is proposed. The
axiomatic definitions and properties of the Pythagorean fuzzy information measure of
order α and type β are established. In contrast to some prior measures, the proposed
measure is uncomplicated, closer to standard statistical measures, and exhibits
enhanced fuzzy properties. The monotonic behaviour of the proposed Pythagorean fuzzy
information measure is examined by assigning distinct values to α and β. Further, a
numerical example elucidating a multicriteria decision-making problem with the
support of the proposed information measure is demonstrated.
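For context, the "higher accuracy" claim rests on the defining constraint of Pythagorean fuzzy sets: a membership/non-membership pair (μ, ν) need only satisfy μ² + ν² ≤ 1, a strictly larger admissible region than the intuitionistic condition μ + ν ≤ 1. A minimal sketch of the two validity checks (function names are ours, for illustration only):

```python
def is_intuitionistic(mu, nu):
    """Intuitionistic fuzzy pair: membership + non-membership <= 1."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu + nu <= 1

def is_pythagorean(mu, nu):
    """Pythagorean fuzzy pair: squares sum to at most 1 -- a strictly
    larger admissible region than the intuitionistic one."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu**2 + nu**2 <= 1

# (0.8, 0.6) is not a valid intuitionistic pair (0.8 + 0.6 > 1),
# but it is a valid Pythagorean pair (0.64 + 0.36 = 1).
```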
Leaf Disease Identification in Rice Plants Using CNN
Model
Allam Sushanth Reddy, Jyothi Thomas
CHRIST (Deemed-to-be-University), Bangalore, India
Abstract. Rice is a staple food crop in more than 10 countries, and its high
consumption demands better crop yield. Fungal, bacterial and viral diseases damage
rice crops, lowering the yield in both quality and quantity. Some of the most common
diseases affecting the plant are fungal blast, fungal brown spot, fungal sheath
blight, bacterial blight and viral tungro. A deep learning CNN model with the
ResNet50V2 architecture was used in this paper to identify disease on paddy leaves.
The mobile application proposed in this paper will help farmers detect disease on
the leaves during their regular visits. Images captured using this application were
tested with the trained deep learning model embedded in the mobile application,
which predicts and displays the input images along with the probability for each
disease. The mobile application also provides the necessary remedies for the
identified disease through a hyperlink available in the application. The achieved
probability that the model truly classifies the input image in this project was
97.67%, and the obtained validation accuracy was 98.86%. A solution with which
farmers can identify diseases in rice leaves and take the necessary actions for
better crop yield has been demonstrated in this paper.
Twitter Sentiment Analysis Based on Neural Network
Techniques
Ashutosh Singal and Michael Moses Thiruthuvanathan
CHRIST (Deemed-to-be-University), Bangalore, India
Abstract. Our world is changing every day with the present pace of innovation. One
such innovation, the Internet, has become a vital part of our lives and is utilized
everywhere. With the increasing demand to stay connected and relevant, there has been
a rapid rise in the number of social networking sites where people shape and voice
their opinions on daily issues. Aggregating and analysing these opinions, regarding
products and services, news, and so on, is vital for today's businesses. Sentiment
analysis, otherwise called opinion mining, is the task of detecting the sentiment
behind an opinion. Analysing the sentiment around topics such as products, services,
movies and daily social issues has become very important for businesses, as it helps
them understand their users. Twitter is the most popular microblogging platform where
users voice their opinions, and sentiment analysis of Twitter data has gained a lot
of interest over the past decade. It requires breaking up "tweets" to detect the
sentiment of the user. This paper delves into various classification techniques to
analyse Twitter data and extract sentiment. Features such as unigrams and bigrams
are extracted to compare the accuracies of the techniques. Additionally, features
are represented in dense and sparse vector representations, with the sparse
representation divided into presence and frequency feature types. The paper compares
the accuracies of Naïve Bayes, decision tree, SVM, multilayer perceptron (MLP),
recurrent neural network (RNN) and convolutional neural network (CNN) classifiers,
with validation accuracies ranging from 67.88% to 84.06% across the classification
and neural network techniques.
Support Vector Machine Performance Improvements by
Using Sine Cosine Algorithm
Miodrag Zivkovic1, Nikola Vukobrat1, Amit Chhabra2,
Tarik A. Rashid3, K. Venkatachalam4, and Nebojsa
Bacanin1
1Singidunum University, Danijelova 32, 11000 Belgrade, Serbia
2Guru Nanak Dev University, Amritsar, India
3Computer Science and Engineering Department, University of
Kurdistan Hewler, Erbil, KRG, Iraq
4Department of Computer Science and Engineering, CHRIST (Deemed
to be University), Bangalore India
Abstract. Parameter optimization has a crucial influence on the solution efficacy
and accuracy of the support vector machine (SVM) in the machine learning domain.
Typical approaches for determining the parameters of the SVM include the grid search
approach (GS) and representative swarm intelligence metaheuristics. On the other
hand, most SVM implementations take into consideration only the margin while
ignoring the radius. In this paper, a novel radius-margin SVM approach is
implemented that incorporates an enhanced sine cosine algorithm (eSCA). The proposed
eSCA-SVM method takes into account both maximizing the margin and minimizing the
radius. The eSCA is used to optimize the penalty and RBF parameters of the SVM. The
proposed eSCA-SVM method has been evaluated on four binary UCI datasets and compared
to seven other algorithms. The experimental results suggest that the proposed
eSCA-SVM approach achieves superior average classification accuracy compared with
the other methods included in the comparative analysis.
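For readers unfamiliar with the base metaheuristic, a plain (non-enhanced) sine cosine algorithm can be sketched as follows. This is a generic illustration of the standard SCA position update, not the paper's eSCA; the function name and all parameter values are illustrative.

```python
import math
import random

def sca_minimize(f, dim, bounds, n_agents=20, iters=200, a=2.0, seed=1):
    """Minimal sine cosine algorithm sketch: agents oscillate around the
    best solution found so far using sin/cos moves whose amplitude r1
    shrinks linearly from a to 0 (exploration -> exploitation)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    best = min(pop, key=f)[:]
    for t in range(iters):
        r1 = a - t * a / iters                       # shrinking amplitude
        for x in pop:
            for d in range(dim):
                r2 = rng.uniform(0, 2 * math.pi)
                r3 = rng.uniform(0, 2)
                step = abs(r3 * best[d] - x[d])      # distance to the leader
                if rng.random() < 0.5:
                    x[d] += r1 * math.sin(r2) * step
                else:
                    x[d] += r1 * math.cos(r2) * step
                x[d] = max(lo, min(hi, x[d]))        # clamp to bounds
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

# Sphere function: optimum at the origin.
best = sca_minimize(lambda v: sum(c * c for c in v), dim=2, bounds=(-5, 5))
```

In the eSCA-SVM setting, the decision vector would hold the SVM penalty and RBF parameters, and f would be a cross-validated loss.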
Enhanced Stock Market Prediction using Hybrid LSTM
Ensemble
Reuben Philip Roy and Michael Moses Thiruthuvanathan
CHRIST (Deemed-to-be-University), Bangalore, India
Abstract. Stock market value prediction is the activity of predicting future market
values so as to increase gain and profit. It aids important financial decisions,
helping investors make smart and informed investments. The challenge in stock market
prediction lies in the high volatility of the market, driven by current and past
performance: the slightest variation in news, trend or performance can impact the
market drastically. Existing models fall short in computation cost and time, making
them less reliable for large datasets on a real-time basis. Studies have shown that
a hybrid model performs better than a standalone model, and ensemble models tend to
give improved results in terms of accuracy and computational efficiency. This study
focuses on creating a better-yielding model for stock market value prediction using
technical analysis, by building an ensemble of Long Short-Term Memory (LSTM) models.
It analyses the results of individual LSTM models in predicting stock prices and
creates an ensemble in an effort to improve the overall performance of the
prediction. The proposed model is evaluated on real-world data of four companies
from Yahoo Finance. The study shows that the ensemble performed better than the
stacked LSTM model by the following percentages: 21.86% for the TESLA dataset,
22.87% for the AMAZON dataset, 4.09% for NIFTYBANK and 20.94% for the TATA dataset.
These results justify the model's implementation.
Centrist Traffic Management Protocol within the
Opportunist Network
Shivani Sharma and Nanhey Singh
Netaji Subhas University of Technology, University of Delhi, New Delhi, India
Abstract. Opportunistic networks (Oppnets) deliver messages by first storing them,
then carrying them, and finally forwarding them. This method increases the chances
of message delivery but is demanding, as messages are held in a node's buffer until
the next appropriate hop is found, which can lead to high latency and buffer
overload, causing congestion. To this end, a scheme based on the size and parameters
of the messages, called the Centrality based Congestion Controlled Routing Protocol
(CCCRP), is suggested in this paper. It allows the recipient node to accept the
message of the most central sender node, whose message carries more "value" than
those of the other neighbouring sender nodes. CCCRP is compared with the Epidemic
protocol with respect to dropped-message and overhead-ratio parameters. The results
obtained indicate that CCCRP outperforms the Epidemic protocol on these measures.
Impact of Business Intelligence on Organizational
Decision-Making and Growth Analysis
Piyush Sharma1, Rajat Mohan2* and Nisha Wadhawan1
1Jagannath Institute of Management Sciences, New Delhi, India
2Guru Gobind Singh Indraprastha University, New Delhi, India
Abstract. Business intelligence supports managers by increasing the effectiveness of
the decision-making process in service-providing companies. OLAP (Online Analytical
Processing) software for business intelligence acts as an exceptionally valuable
tool: organizations invest in it to predict their future, and it gives companies a
competitive edge by providing specialized information that fulfils the company's
requirements. In the construction industry, business intelligence supports a lean
construction process that minimizes waste while maximizing the profitability of the
firm. To satisfy the business requirements of the construction and infrastructure
industries, firms may require business intelligence software to handle resources,
finance and budget, milestones, operations, team management and project schedules.
The main aim of this paper is to analyse the utilization of business intelligence by
various firms (construction, supply chain, operations, software and IT project
management) to facilitate their decision-making and improve company performance. As
industries grow at a rapid pace, their increased workload has become difficult to
handle; BI has helped companies solve complex problems and turn them into strategic
decisions. This paper analyses the usage of BI components in various companies and
its impact on their decision making and productivity.
CONCISE: An Algorithm for Mining Positive and
Negative Non-Redundant Association Rules
BEMARISIKA Parfait, TOTOHASINA André
Laboratoire de Mathématiques et Informatique de l’ENSET, Université
d’Antsiranana, Madagascar
Abstract. One challenging problem in association rule mining is the huge size of the extracted
rule set, many of whose rules are uninteresting and redundant. In this paper, we propose an
efficient algorithm, Concise, for generating all non-redundant positive and negative association
rules. We introduce the GC2M algorithm for simultaneously enumerating all frequent generator
itemsets, frequent closed itemsets, frequent maximal itemsets, and infrequent minimal itemsets.
We then define four new bases representing non-redundant association rules. We prove that these
bases significantly reduce the number of extracted rules, and we show the efficiency of our
algorithm through computational experiments comparing it with existing algorithms.
Developing an Improved Software Architecture
Framework for Smart Manufacturing
Gareth A. Gericke, Rangith B. Kuriakose and Herman J.
Vermaak
Center for Sustainable Smart Cities, Central University of Technology,
Free State, South Africa
Abstract. Software architectures have long been touted as a major requirement to
accurately recreate software and network set-ups that line up with best practices,
proper functioning of protocols and coding structures by software developers. The
burst of expansion in Industry 4.0 has resulted in many new technologies and therefore
requires a re-evaluation of current software architectures. This paper looks at software
architectures which are currently used within Smart Manufacturing and analytically
compares them to each other. The aim of the paper is to outline the shortcomings of
the existing software architectures with respect to their ability to be incorporated
into Industry 4.0 Smart Manufacturing communication. This paper goes on to propose a
new software architecture which addresses some key concerns and concludes by
making a comparison of the proposed software architecture with the ones in use
currently. The experiments that garnered these results were conducted in a Smart
Manufacturing Lab, which has produced several key results in this research niche area.
Intelligent Water Drops Algorithm Implementation using
Mathematical Function
Sathish Kumar Ravichandran1, Archana Sasi2, Ramesh
Vatambeti1
1Department of Computer Science and Engineering, CHRIST (Deemed-
to-be-University), Bangalore, India
2School of Engineering, Department of Computer Science and
Engineering, Presidency University, Karnataka, India
Abstract. The Intelligent Water Drops (IWD) algorithm is based on the dynamics of
river systems and the actions and reactions that occur among water drops in a river.
The IWD algorithm is a constructive technique in which a group of individuals moves
in discrete stages from one node to the next until a complete population of
solutions is obtained. Imitated velocity and soil, two important features of a
natural water drop, are modified in the IWD algorithm over a sequence of transitions
corresponding to water-drop movement. To obtain the optimal values of numerical
functions, the IWD technique is supplemented with a mutation-based local search in
this paper. The experimental results are promising, which encourages more research
in this area.
French Covid-19 Tweets Classification Using FlauBERT
Layers
Sadouanouan Malo, Thierry Roger Bayala, and Zakaria
Kind
Department of Computer Science, Nazi BONI University, Bobo-
Dioulasso, Burkina Faso
Abstract. Late in 2019, Wuhan, a city in China, recorded its first case of
coronavirus. Over time, the virus spread to all continents, resulting in numerous
victims. Many techniques have been developed to contain the spread of the virus,
ranging from preventive to curative approaches. However, these solutions are still a
luxury for developing countries. In this work, we propose a framework based on
Twitter datasets that can allow them to follow the propagation of the virus in real
time. This low-cost solution could give them information on contamination trends and
consequently allow them to take measures to contain the virus. In this framework, we
cleaned the tweets using the pre-trained GloVe and FastText models. The text
coverage rate in the training dataset was 73.95% for the GloVe model and 73.35.8%
for FastText. We then trained a deep neural network using BERT layers with a batch
size of 32 and 12 hidden layers. This allowed us to obtain an accuracy of 0.98%.
A Deliberation on the Stages of Artificial Intelligence
Jiran Kurian, Rohini V
CHRIST (Deemed to be University), Bangalore, India
Abstract. Artificial intelligence (AI) is a technology that can be programmed to
mimic humans' natural intelligence, enabling machines to perform the tasks that a
human being can do. After a long period of research since 1955, researchers have
made remarkable achievements in this field, such as machine learning and deep
learning. Other areas, such as education, agriculture and medicine, are also
utilizing these technologies for their improvement. Yet all the achievements made in
this field are not even comparable to the actual depth of the technology: the depth
of artificial intelligence is yet to be measured, and there is a long way to go to
develop a fully functional AI. To identify the extent of its depth, firstly, the
path to AI's core should be visibly defined and, secondly, milestones should be
placed along it. There are some general stages and types of AI introduced by other
researchers, but they cannot be used for further research due to inconsistencies in
the information. Bringing standardized information to the stages of AI is therefore
as important as setting up a good base in this field. The paper proposes and defines
new stages of AI that could help set those milestones. The work also establishes a
general standard, brings more clarity, and eliminates the inconsistencies in the
stages of AI.
A Novel Weighted Extreme Learning Machine for Highly
Imbalanced Multiclass Classification
Siddhant Baldota1 and Deepti Aggarwal2
1Department of Computer Science and Engineering, SRM Institute of
Science and Technology, Kattankulathur, Tamil Nadu, India - 603203
2Department of Software Engineering, Delhi Technological University,
Bawana, Road, Delhi, India - 110042
Abstract. Imbalance of classes in data distributions has proved to be a hindrance to accurate
classification. Remedies for balancing classes, such as oversampling and undersampling, have
resulted in inaccurate delineation of data. A paradigm shift from data-level transformation to
cost-based learning intends to solve this issue. Classification models such as artificial neural
networks tend to have limited performance despite hyperparameter tuning. Extreme Learning
Machines, which do not rely on traditional training methods, have provided a solution to these
shortcomings, and Weighted Extreme Learning Machines (WELMs) have handled the issue of class
imbalance well. This study proposes two forms of a WELM using penalization and regularization
techniques. The proposed methods are compared and contrasted with existing ones on 45 multiclass
and binary datasets from machine learning repositories, evaluated using the AUC, precision,
recall and F-measure metrics. Friedman tests applied to each of these metrics show that the
proposed methods significantly exceed the performance of the existing WELMs, even for multiclass
datasets with extremely high imbalance ratios (> 850). Thus, the methods proposed in the study
serve well for imbalanced multiclass classification problems.
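The cost-sensitive idea behind a WELM can be illustrated with the common 1/(class size) sample-weighting scheme: a random hidden layer followed by a closed-form weighted ridge solution. This is a generic sketch of that scheme, not the two variants proposed in the paper; the function name, weighting, and all parameters are ours.

```python
import numpy as np

def welm_fit(X, y, n_hidden=50, C=10.0, seed=0):
    """Weighted ELM sketch: fixed random hidden layer, then a
    closed-form weighted ridge solve. Per-sample weights 1/(class size)
    keep minority classes from being swamped by the majority."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    T = np.where(y[:, None] == classes[None, :], 1.0, -1.0)  # targets in {-1,+1}
    Win = rng.standard_normal((X.shape[1], n_hidden))        # random input weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ Win + b)                                 # hidden-layer output
    w = 1.0 / np.array([np.sum(y == c) for c in y])          # cost-sensitive weights
    A = H.T @ (H * w[:, None]) + np.eye(n_hidden) / C        # regularized normal eqs.
    beta = np.linalg.solve(A, H.T @ (T * w[:, None]))        # output weights
    return lambda Xq: classes[np.argmax(np.tanh(Xq @ Win + b) @ beta, axis=1)]

# Tiny imbalanced demo: 40 majority vs 5 minority samples.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(4, 0.3, (5, 2))])
y = np.array([0] * 40 + [1] * 5)
predict = welm_fit(X, y)
acc = np.mean(predict(X) == y)
```

Because the hidden layer is random and fixed, training reduces to one linear solve, which is what makes ELM-family models fast to fit.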
Prediction and Analysis of Recurrent Depression
Disorder: Deep Learning Approach
Anagha Pasalkar, Dhananjay Kalbande
Sardar Patel Institute of Technology, Andheri (w), Mumbai, India
Abstract. Mental illness such as depression is rampant and has been shown to affect a person's
physical health. With the growth of artificial intelligence (AI), various methods have been
introduced to assist mental health care providers, including psychiatrists, in making proper
decisions based on a patient's history, drawing on sources such as medical records, behavioural
data and social media usage. Many researchers have devised strategies that apply machine
learning algorithms to the data analysis of depression, although there have been fewer previous
attempts to perform the same task without using pre-classified data and a word-embedding
optimization approach. For these reasons, this study aims to identify, among a few selected
structures, the deep neural network formation that best complements natural language processing
activities to analyse and predict depression.
Energy Efficient ACO-DA Routing Protocol Based on
IoEABC-PSO Clustering in WSN
Vasim Babu M1, Vinoth Kumar C N S2, Baranidharan B2,
Madhusudhan Reddy Machupalli1, Ramasamy R3
1KKR & KSR Institute of Technology and Sciences, Guntur, India
2SRM Institute of Science and Technology, Kattankulathur, Chennai
3Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology,
Chennai, India
Abstract. In recent years, clustering of sensor nodes in WSNs has been an effective approach to
designing routing algorithms, enhancing energy efficiency and network lifetime. During
clustering, key nodes and the Cluster Head (CH) must perform multiple tasks and therefore
require more energy. To overcome this issue, the proposed methodology selects the optimal CH
based on residual energy, node density and node location. Before that, clustering of the sensor
nodes in the network is achieved through the proposed Integration of Enhanced Artificial Bee
Colony with Particle Swarm Optimization (IoEABC-PSO) clustering algorithm, which enhances the
performance efficiency of the methodology. In addition, to overcome the clustering problem, the
proposed IoEABC-PSO algorithm uses the honey-source updating principles of the ABC approach
while electing the CH. Meanwhile, the CH gathers all information from member nodes for better
communication. After CH election, routing is performed to transmit the gathered information
between the elected CH and the base station using Ant Colony Optimization with the Dijkstra
algorithm, yielding better performance. Finally, a polling control mechanism is presented to
provide low energy consumption and a long network lifetime. Practical implications of the
findings and future investigations are discussed.
An Enhanced Pixel Intensity Range based Reversible Data
Hiding Scheme for Interpolated Images
Rama Singh and Ankita Vaish
Banaras Hindu University, Varanasi, (U.P.), India
Abstract. This paper presents an improved interpolation-based reversible data hiding (IRDH)
technique. The modified neighbour mean interpolation (MNMI) method is used for upsampling the
original cover media; it uses a weighted-average method to evaluate the estimated values of the
interpolated image. The proposed technique works on overlapping block division, taking advantage
of spatial correlation. The pixel intensity range is divided based on the concept that whenever
a number of bits of the estimated pixels are replaced by the secret information to be embedded,
the pixels still fall within the same range, easing extraction of the secret information and
recovery of the original cover media without any distortion. The proposed technique discusses
cases of secret information that have not yet been discussed in existing interpolation
techniques. The auxiliary data recording whether the stored secret information is less than,
greater than or equal to the estimated value is embedded in the first estimated pixel only, by
replacing the 4th or 5th bit according to the pixel intensity division. This removes the use of
a location map, which is an overhead. Accordingly, the proposed technique preserves the visual
perceptibility of the stego-image with improved embedding capacity. The experimental results
indicate that the proposed scheme attains a PSNR greater than 36 dB.
Modelling Critical Success Factors for Smart Grid
Development in India
Archana1*, Shveta Singh2
1Bharti School of Telecommunication Technology and Management, IIT Delhi, India
2Department of Management Studies, IIT Delhi, India
Abstract. In the last few decades, technological advancement in the energy sector has
accelerated the evolution of the smart grid, leading to the need for inter-disciplinary research in
power system and management. India, the third-largest country in the production and
consumption of electricity, is facing numerous challenges related to electricity like high
transmission and distribution loss, electricity theft, and pollution concerns. Due to these
challenges, the energy sector is looking to adopt new technologies to make the grid more
efficient, sustainable, and secure. In this regard, this research aims to identify factors that can
be considered enablers for developing smart grid technology in India. The present work has
explored a systematic and scientific approach that includes content analysis, exploratory factor
analysis, and total interpretive structural modelling. This paper primarily contributes to
developing a hierarchical model of the identified enabling factors, which will help industry
practitioners visualise the roadmap for implementing smart grid technology, especially in a
developing country like India.
Analysing a Raga Based Bollywood Song: A Statistical
Approach
Lopamudra Dutta and Soubhik Chakraborty
Department of Mathematics, Birla Institute of Technology, Mesra, Ranchi-
835215, Jharkhand, India
Abstract. Indian classical music, our national heritage and glorious tradition, has an
extensive reach, and Bollywood music has a significant role in promoting Indian classical music
among the common man. Since music creates patterns and statistics is a study of patterns in
numerical data, musical data can be subjected to statistical analysis. We are hence motivated
to perform a statistical analysis of a recording of a specific raga-based Bollywood song and
compare it with the characteristic features of the concerned raga: studying where it follows
the raga rules and where it deviates, which song melodies are statistically similar to the raga
melodies, whether the musical notes rendered in the recording follow a multinomial or a
quasi-multinomial distribution, whether the melody length is uniformly distributed over the
lines of the song, and studying rhythm through the IOI (inter-onset interval) graph and note
duration. Furthermore, the variation in melody and rhythm throughout the song at different
intervals is investigated using the statistical parameterization approach with the help of
Andrews's plot. The song used for our analysis is Aaoge Jab Tum from the movie Jab We Met
(2007), originally sung by Ustad Rashid Khan, an Indian classical musician in the tradition of
Hindustani music who was awarded the Padma Shri and the Sangeet Natak Akademi Award in 2006.
Stability Analysis of Emerged Seaside Perforated Quarter
Circle Breakwater using ANN, SVM and AdaBoost
Models
Sreelakshmy Madhusoodhanan and Subba Rao
Department of Water Resources and Ocean Engineering, National Institute of
Technology Karnataka, Surathkal, 575025, India
Abstract. Breakwaters are constructed to address a variety of coastal requirements ranging from
maintaining tranquility conditions for a port or harbor area to preventing coastal recession.
Quarter circle breakwater (QCB) is a composite structure, with a rubble mound foundation and
a super structure consisting of quarter circle surface facing incident waves, with a horizontal
bottom and a rear vertical wall. Be it any structure, it is essential that the design is economic,
safe and functional. Thus, the accurate estimation of minimum (critical) weight of the structure
required to resist the sliding is vital. Also, physical model studies can be laborious and time
consuming whereas numerical modelling can be complex. Therefore, under such circumstances
soft computing techniques prove to be handy if sufficient data is available. In this study the
dimensionless stability parameter (W/γHi 2) of an emerged seaside perforated QCB for varying
S/D ratios (spacing to perforation diameter ratio) are estimated using Artificial Neural Network
(ANN), Support Vector Machine (SVM) and AdaBoost models. Incident wave steepness (Hi
/gT2), relative water depth (d/hs) and perforation (p %) are chosen as input parameters with the
dimensionless stability parameter (W/γHi 2) as the output parameter. Further the obtained
results are compared using performance indicators such as Root Mean Square Error, Coefficient
of Determination and Mean Absolute Error following which the best model is selected. The data
that is used for the present study is collected from the laboratory investigation conducted in the
Marine Structures Lab of the Dept. of Water Resources and Ocean Engineering, National
Institute of Technology Karnataka, Surathkal.
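The model-comparison step described above can be sketched with the three performance indicators named in the abstract; the observed stability values and model predictions below are invented for illustration and are not taken from the laboratory data:

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Square Error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of Determination."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical predictions of W/γHi² from two models against observed values.
observed = [0.8, 1.1, 0.9, 1.4, 1.2]
model_a  = [0.9, 1.0, 1.0, 1.3, 1.1]
model_b  = [0.7, 1.3, 0.6, 1.6, 1.0]

# Retain the model with the lowest RMSE, as in the paper's selection step.
best = min([("A", model_a), ("B", model_b)], key=lambda m: rmse(observed, m[1]))[0]
```

The same ranking can be cross-checked against MAE and R² before the best model is selected.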
Advanced Spam Detection using NLP & Deep Learning
Aditya Anil, Ananya Sajwan, Lalitha Ramchandar and
Subhashini. N
Vellore Institute of Technology, Chennai, India
Abstract. Rapidly advancing technology is a double-edged sword as both friend and foe get
access to said technology. Spam has become more prevalent than ever with malicious actors
using advanced technology to create extremely convincing spam that can lead to major
cybersecurity breaches. It has become imperative that we use advanced techniques to combat
the proliferation of spam. The objective of our work is to present a systematic overview of the
effectiveness of different machine learning and deep learning models integrated with natural
language processing concepts. The paper analyses the different approaches that can be used to
identify spam accurately and identifies the most efficient techniques to achieve it. The
discussion utilizes a wide range of datasets, from emails and SMS messages to tweets, and implements
different algorithms like Naïve Bayes, XGBoost, Random Forest, and Convolutional Neural
Networks among others to perform a comprehensive analysis of the best-suited methods to
achieve high efficiency in identifying spam. It was observed that deep learning models displayed
the highest accuracies for spam detection in SMS and emails, while random forest was the most
accurate for detecting spam in tweets.
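One of the listed algorithms, Naïve Bayes, is compact enough to sketch end to end; the four-message corpus below is invented and merely stands in for the email, SMS and tweet datasets the paper surveys:

```python
import math
from collections import Counter

# Toy labelled corpus: 1 = spam, 0 = legitimate (ham).
train = [
    ("win free prize now", 1), ("free money win", 1),
    ("meeting at noon", 0), ("lunch at noon today", 0),
]

def fit(data):
    """Count token occurrences and class frequencies."""
    counts = {0: Counter(), 1: Counter()}
    totals = Counter()
    for text, label in data:
        counts[label].update(text.split())
        totals[label] += 1
    return counts, totals

def predict(text, counts, totals):
    """Pick the class with the highest log prior + Laplace-smoothed likelihood."""
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for label in (0, 1):
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for w in text.split():
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

counts, totals = fit(train)
label = predict("free prize money", counts, totals)  # classified as spam (1)
```

Real systems replace the whitespace split with proper NLP tokenization and compare this baseline against XGBoost, Random Forest and CNN models, as the abstract describes.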
A Risk-Budgeted Portfolio Selection Strategy Using Novel
Metaheuristic Optimization Approach
Mohammad Shahid1, Zubair Ashraf2, Mohd Shamim Ansari1, Faisal Ahmad3
1Dept. of Commerce, Aligarh Muslim University, Aligarh, India
2Dept. of Computer Science, Aligarh Muslim University, India
3Workday Inc, USA
Abstract. Portfolio construction by selecting the right combination of securities is the subject matter of portfolio optimization, which seeks to optimize expected return against risk. Investors are risk averse and always try to select those combinations of securities in the portfolio that minimize risk and maximize expected return. Thus, risk budgeting is a common practice in portfolio optimization. Over time, a number of mathematical techniques have been developed for risk-budgeted portfolio selection models owing to their complexity. In this paper, a Gradient-Based Optimization (GBO) approach, a newly established technique, is proposed for risk-budgeted portfolio optimization to maximize expected return. Experiments are conducted by applying GBO to a real data set collected from the S&P BSE Sensex of the Indian stock exchange (30 stocks) and comparing the results against a Genetic Algorithm. The study confirms the superior performance of the proposed approach over its considered peer.
An Optimization Reconfiguration Reactive Power
Distribution Network based on Improved Bat Algorithm
Thi-Kien Dao1, Trinh-Dong Nguyen2, Trong-The
Nguyen2, Jothiswaran Thandapani3
1Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fujian University of Technology, Fuzhou, China
2University of Information Technology, VNU-HCM, Vietnam
3The Kavery Engineering College, Salem, Tamil Nadu, India
Abstract. Reducing active power loss has been a significant concern for the safe and efficient functioning of distribution networks. This study proposes a solution that optimizes reconfiguration to overcome substantial network loss in local areas, based on an improved bat algorithm (IBA) for reactive power compensation optimization. The bat algorithm (BA) is adjusted with an adaptive inertia weighting factor and a stochastic operator to improve convergence speed and precision. An exponential fitness function is constructed by considering the topological structure of the distribution network. According to the experimental results of a 33-node case study, both the global optimization accuracy and the voltage quality of the regional network are improved, e.g., the active network loss of the distribution network dropped from 6.56 percent to 5.36 percent, and the voltage qualification rate increased from 80.61 percent to 92.86 percent. The compared results also demonstrate that the proposed scheme provides a better-optimized reconfiguration of reactive power than the others.
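The adaptive-inertia-weight adjustment can be illustrated on a toy problem. This is a generic sketch of a bat-style search with a decaying inertia weight and a stochastic local operator, minimizing a sphere function as a stand-in for the paper's exponential, topology-aware fitness:

```python
import random

random.seed(0)

def loss(x):
    # Surrogate objective; the paper's fitness is built from the
    # distribution-network topology, not this sphere function.
    return sum(v * v for v in x)

def improved_bat(dim=2, n_bats=10, iters=100):
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=loss)[:]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters              # adaptive inertia weight
        for i in range(n_bats):
            f = random.random()                # pulse frequency
            for d in range(dim):
                vel[i][d] = w * vel[i][d] + f * (best[d] - pos[i][d])
                pos[i][d] += vel[i][d]
            # stochastic local perturbation around the current best
            cand = [best[d] + 0.01 * random.gauss(0, 1) for d in range(dim)]
            if loss(cand) < loss(pos[i]):
                pos[i] = cand
            if loss(pos[i]) < loss(best):
                best = pos[i][:]
    return best

best = improved_bat()
```

The decaying weight trades early exploration for late exploitation, which is the convergence-speed improvement the abstract attributes to the IBA.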
Security Prioritized Heterogeneous Earliest Finish Time
Workflow Allocation Algorithm for Cloud Computing
Mahfooz Alam1, Mohammad Shahid2, Suhel Mustajab1
1Department of Computer Science, Aligarh Muslim University, Aligarh, India
2Department of Commerce, Aligarh Muslim University, Aligarh, India
Abstract. Cloud computing has rapidly become an essential platform for many scientific applications. In this domain, many existing works have been developed to optimize quality of service (QoS) parameters of the cloud system. However, effective workflow allocation with security requirements is emerging as a challenging issue in the cloud system. Consequently, satisfying security requirements also becomes essential when mapping high-priority workflow tasks onto virtual machines. Hence, secure and efficient execution of workflow tasks that respects their priorities is the need of the hour. In this chapter, a Security Prioritized Heterogeneous Earliest Finish Time (SPHEFT) algorithm is proposed to optimize the security overhead and guarantee ratio of workflow tasks in the cloud system. SPHEFT gives higher priority to tasks with greater security requirements, which are therefore assigned to the more reliable virtual machines. In the experimental evaluation, SPHEFT is compared with the standard HEFT algorithm for varying sets of tasks. The experimental results show that SPHEFT performs better on security overhead and is more effective in improving the task guarantee ratio.
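The priority idea can be sketched as a greedy assignment. The task demands, VM reliabilities and guarantee-ratio bookkeeping below are invented for illustration and are not the actual SPHEFT scheduling rule:

```python
# Hypothetical inputs: (name, security demand) and (name, reliability).
tasks = [("t1", 0.2), ("t2", 0.9), ("t3", 0.5)]
vms   = [("vm1", 0.6), ("vm2", 0.95), ("vm3", 0.8)]

def allocate(tasks, vms):
    """Most security-demanding task first, onto the most reliable free VM."""
    schedule = {}
    free = sorted(vms, key=lambda v: v[1], reverse=True)   # most reliable first
    for name, demand in sorted(tasks, key=lambda t: t[1], reverse=True):
        schedule[name] = free.pop(0)[0]
    return schedule

schedule = allocate(tasks, vms)

# Guarantee ratio: fraction of tasks whose VM reliability meets the demand.
vm_rel, demand = dict(vms), dict(tasks)
guarantee = sum(vm_rel[v] >= demand[t] for t, v in schedule.items()) / len(tasks)
```

In the full algorithm the priority also folds in HEFT's earliest-finish-time ranking rather than security demand alone.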
An Approach for Enhancing Security of Data over Cloud
Using Multilevel Algorithm
Binita Thakkar, Blessy Thankachan
School of Computer and Systems Sciences, Jaipur National University, Jaipur,
Rajasthan, India
Abstract. Today, users work with many types of data, whether text, audio, video or picture files. Storing such data is of crucial importance. The current trend that provides easy storage of and access to our data is the cloud. Cloud computing is a way to manage all such data in one place and access it as and when required. With this storage, it is important that the stored data be secured. Security can be provided by means of encryption and decryption. Many algorithms are used on the cloud to secure such data. In this paper, a multilevel approach using three levels of encryption is proposed. This is achieved by using a combined transposition technique at the first level, followed by DES at the second level and Blowfish at the third level. As the number of encryption levels increases, the security of the data also increases, since it takes more time to crack the algorithm. The proposed multilevel algorithm is analysed against DES and Blowfish in terms of encryption time, decryption time and memory utilization.
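The first encryption level can be sketched in a few lines. This is a plain columnar transposition with an invented key and plaintext; the DES and Blowfish levels would typically follow via a cryptography library such as PyCryptodome and are not shown:

```python
def transpose_encrypt(text, key):
    """Level 1: columnar transposition. Write the text row-wise into a grid
    of len(key) columns, then emit columns in key-sorted order."""
    cols = len(key)
    padded = text + "X" * (-len(text) % cols)        # pad to full rows
    rows = [padded[i:i + cols] for i in range(0, len(padded), cols)]
    order = sorted(range(cols), key=lambda c: key[c])
    return "".join(row[c] for c in order for row in rows)

def transpose_decrypt(cipher, key):
    """Invert the transposition by refilling columns in the same order."""
    cols = len(key)
    rows_n = len(cipher) // cols
    order = sorted(range(cols), key=lambda c: key[c])
    grid = [[""] * cols for _ in range(rows_n)]
    it = iter(cipher)
    for c in order:
        for r in range(rows_n):
            grid[r][c] = next(it)
    return "".join("".join(row) for row in grid)

ct = transpose_encrypt("CLOUDDATA", "KEY")   # level-1 ciphertext
pt = transpose_decrypt(ct, "KEY")
```

In the multilevel scheme, `ct` would then be fed to DES and its output to Blowfish, and decryption runs the three levels in reverse.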
Dropout-VGG based Convolutional Neural Network for
Traffic Sign Categorization
Inderpreet Singh, Sunil Kr. Singh, Sudhakar Kumar,
Kriti Aggarwal
Department of Computer Science and Engineering, Chandigarh College of
Engineering & Technology, P.U., Chandigarh, India
Abstract. In the modern era of motor vehicles, where the number of cars on the road is increasing exponentially, the safety of people driving or walking along the road is endangered. Traffic signs play the most important role in ensuring their safety. The signs provide the necessary warnings and information to help the driver drive in order and prevent any potential danger. With the rise of modern technology, the concept of self-driving cars is the new hot topic. To ensure the feasibility of such vehicles, autonomous traffic sign detection and classification needs to be implemented with maximum efficiency and accuracy in real time. Thus, over the past few years, researchers have shown keen interest in solving as well as optimizing the traffic sign classification problem. Numerous approaches have been proposed in the past to deal with this problem, yet there is still immense scope for performance optimization to meet the needs of real-time scenarios. Among all proposed solutions, Convolutional Neural Networks (CNN) have emerged as the most successful approach to classifying traffic signs. In this paper, we propose a novel CNN model termed dVGG. This technique is inspired by the Visual Geometry Group-16 (VGG-16) architecture and is based on a dropout regularization approach. Moreover, other data processing techniques like shuffling, normalization and gray scaling are applied, resulting in a more consistent dataset and faster model generalization. dVGG is able to perform better than the VGG-16 model implemented through transfer learning. We have applied the proposed model to the German Traffic Sign Recognition Benchmark (GTSRB) dataset, on which it gave an average accuracy of 98.44%.
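The dropout mechanism that distinguishes dVGG can be illustrated in isolation. This is standard inverted dropout on an invented activation vector, not the dVGG layer itself:

```python
import random

random.seed(1)

def dropout(activations, rate):
    """Inverted dropout: zero roughly a fraction `rate` of units during
    training and scale the survivors by 1/(1-rate), so the expected
    activation is unchanged at inference time."""
    keep = 1.0 - rate
    return [a / keep if random.random() < keep else 0.0 for a in activations]

acts = [0.5] * 1000                 # invented layer activations
out = dropout(acts, rate=0.4)
dropped = sum(1 for a in out if a == 0.0)
```

Inside a VGG-style network this would be applied between the fully connected layers, discouraging co-adaptation of units and reducing overfitting.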
A Systematic Literature Review on Image Pre-Processing
and Feature Extraction Techniques in Precision
Agriculture
Sharmila G and Kavitha Rajamohan
Department of Computer Science, CHRIST (Deemed to be University),
Bengaluru -560029, India
Abstract. Revolutions in information technology have been helping agriculturists increase the productivity of cultivation. Many techniques exist for farming, but PAg (Precision Agriculture) is one technique that has gained popularity and become a valuable tool for agriculture. Nowadays, farmers find it difficult to get expert advice regarding crops on time. As a solution, PAg applications embedding IPTs (Image Processing Techniques) have been developed to support farmers for the benefit of agriculture. In recent years, IPTs have contributed substantially to providing significant solutions in PAg. This systematic review provides an understanding of pre-processing and feature extraction in PAg applications, along with their limitations. Pre-processing and feature extraction are the major steps of any application using IPTs. This study gives an overall view of the different pre-processing, feature extraction, and classification methods proposed by researchers for PAg.
Assessment of the Spatial Variability of Air Pollutant
Concentrations at Industrial Background Stations in
Malaysia Using Self-organizing Map (SOM)
Loong Chuen Lee1,2 and Hukil Sino1
1Forensic Science Program, CODTIS, FSK, Universiti Kebangsaan Malaysia
2Institute IR4.0, Universiti Kebangsaan Malaysia, 40300, Bangi, Malaysia
Abstract. Air pollution is a crucial problem for both national and international regions. Understanding the spatial variability of air pollutants could contribute to the accurate prediction of air quality. Often, the concentrations of air pollutants are governed by the type of human activity in the local region. In this research paper, the spatial variability of air pollutants recorded at five industrial stations in Malaysia is studied using the self-organizing map (SOM) technique. Principal component analysis (PCA) was also performed to complement the results obtained from SOM. The spatial variability has been evaluated using yearly, monthly and daily profiles. Before statistical mapping, missing values in the data were treated with the mean-imputation method. The five industrial background stations showed significantly different pollutant concentrations based on the SOM and PCA results, and the meteorological parameters are weakly correlated with the air pollutant concentrations. In conclusion, the five stations could be classified into three classes, i.e., stations with low, moderate and slightly high pollution.
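The SOM technique can be sketched minimally. This is a generic 1-D self-organizing map trained on invented two-feature "station profiles", not the paper's pollutant data:

```python
import math
import random

random.seed(42)

def train_som(data, grid=4, iters=200, lr=0.5):
    """1-D SOM: find the best matching unit (BMU) for each sample and move
    it and its neighbours toward the sample, with decaying learning rate."""
    dim = len(data[0])
    nodes = [[random.random() for _ in range(dim)] for _ in range(grid)]
    for t in range(iters):
        x = random.choice(data)
        a = lr * (1 - t / iters)                     # decaying learning rate
        bmu = min(range(grid),
                  key=lambda i: sum((nodes[i][d] - x[d]) ** 2 for d in range(dim)))
        for i in range(grid):
            h = math.exp(-abs(i - bmu))              # neighbourhood kernel
            for d in range(dim):
                nodes[i][d] += a * h * (x[d] - nodes[i][d])
    return nodes

# Invented profiles: two "low-pollution" and two "high-pollution" stations.
data = [[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.9]]
nodes = train_som(data)
```

After training, stations mapping to distant nodes on the grid fall into different pollution classes, which is how the abstract's three-class grouping is read off the map.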
A Comprehensive Study on Computer Aided Cataract
Detection, Classification and Management using Artificial
Intelligence
Binju Saju1,2 and Rajesh R1
1Department of Computer Science, CHRIST (Deemed to be University), Bengaluru-560029, India
2Naipunnya College, Kerala, India
Abstract. The popularity of computer-aided detection in the medical field is increasing day by day. Cataract is a leading cause of blindness worldwide. Compared with other eye diseases, computer-aided development in the domain of cataract remains underexplored. Several previous studies have addressed automated cataract detection. Many study groups have proposed computer-aided systems for detecting cataract, classifying its type, identifying its stage and calculating lens power selection before cataract surgery. With advances in Artificial Intelligence and Machine Learning, future cataract-related developmental work can achieve very significant results. This paper studies various recent researches on cataract detection, classification and grading using various Artificial Intelligence techniques. Comparisons are made based on the methodology used, the type of dataset and the accuracy of the various methodologies. Based on the comparative study, a research gap is identified and a new method is proposed which can overcome the disadvantages and gaps of the studied work.
Attention Based Ensemble Deep Learning Technique for
Prediction of Sea Surface Temperature
Ashapurna Marndi1,2,*, G K Patra1,2
1Academy of Scientific and Innovative Research, Ghaziabad, UP, India
2Council of Scientific and Industrial Research-Fourth Paradigm Institute,
Bengaluru-560037, Karnataka, India
Abstract. The blue economy is slowly emerging as an integral part of the overall economic projection of a country. A significant portion of the world’s population relies on marine resources for their livelihood. Prediction of Sea Surface Temperature (SST) has many applications in forecasting ocean weather and climate, identifying fishing zones, monitoring over-exploitation of the ocean environment, and also in strategic sectors like defence. Over the years, many approaches based on dynamic models and statistical models have been attempted to predict Sea Surface Temperature. Generally, dynamic models are compute- and time-intensive. On the other hand, as statistical approaches are lightweight, they may sometimes fail to model complex problems. The recent considerable success of Artificial Intelligence in many applications, especially Deep Learning (DL) techniques, motivates us to apply the same to the prediction of Sea Surface Temperature. We have built an attention-based ensemble model over a set of basic models based on different DL techniques that consume uniquely prepared variant datasets to produce better predictions. Outcomes from this experiment and the comparative results with existing techniques justify the efficiency of the proposed methodology.
Ordered Ensemble Classifier Chain for Image and
Emotion Classification
Himthani Puneet, Gurbani Puneet, Raghuwanshi Kapil
Dev, Patidar Gopal and Mishra Nitin Kumar
Department of CSE, TIEIT, Bhopal, MP, India
Abstract. Ensemble techniques play a significant role in enhancing Machine Learning models; hence they are highly applicable in Multi-Label Classification, a more complex form of classification than binary or multi-class classification. The Classifier Chain (CC) is the oldest and most prevalent technique that utilizes correlation among labels for solving multi-label classification problems. The ordering of class labels plays a significant role in the performance of the classifier chain; however, deciding the order is a challenging task. A more recent method, the Ensemble of Classifier Chains (ECC), solves this problem by using multiple CCs, each with a different random order of labels, as the base classifier. However, it requires at least ten CCs and is computationally expensive. Improving the prediction accuracy with fewer than ten CCs is a challenging task that this paper addresses; it proposes a classifier chain ensemble model termed Ecc_Wt_Rase, which uses a weighted ensemble of only four classifier chains. The performance of Ecc_Wt_Rase is compared with the traditional CC and ECC over three standard multi-label datasets, belonging to the image and emotion (music) domains, using four performance parameters. On the one hand, Ecc_Wt_Rase reduces the computational cost; on the other hand, it improves the classification accuracy. The improvement in Hamming Loss is approximately 6%, which is exceptional for multi-label classification; the training time is also reduced by approximately 40%, as the number of CCs in the proposed model is four, compared to ten in the traditional ECC.
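Two quantities central to the evaluation, a weighted vote over several classifier chains and the Hamming loss, can be sketched directly; the four chains, weights and ground truth below are invented:

```python
def hamming_loss(y_true, y_pred):
    """Fraction of label slots predicted incorrectly, over all samples."""
    errors = sum(t != p for row_t, row_p in zip(y_true, y_pred)
                 for t, p in zip(row_t, row_p))
    return errors / (len(y_true) * len(y_true[0]))

def weighted_vote(chain_preds, weights, threshold=0.5):
    """Combine the label predictions of several classifier chains by a
    weighted vote, as in an ensemble of classifier chains."""
    total = sum(weights)
    out = []
    for sample in zip(*chain_preds):          # predictions per sample
        labels = []
        for lab in zip(*sample):              # votes per label
            score = sum(w * v for w, v in zip(weights, lab)) / total
            labels.append(1 if score >= threshold else 0)
        out.append(labels)
    return out

# Four hypothetical chains predicting 3 labels for 2 samples.
chains = [
    [[1, 0, 1], [0, 1, 0]],
    [[1, 0, 0], [0, 1, 0]],
    [[1, 1, 1], [0, 0, 0]],
    [[0, 0, 1], [1, 1, 0]],
]
pred = weighted_vote(chains, weights=[0.4, 0.3, 0.2, 0.1])
truth = [[1, 0, 1], [0, 1, 0]]
loss = hamming_loss(truth, pred)
```

How the four chain weights are derived in Ecc_Wt_Rase is the paper's contribution; equal or accuracy-proportional weights are the usual baseline.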
Improving Black Hole Algorithm Performance by
Coupling with Genetic Algorithm for Feature Selection
Hrushikesh Bhosale1, Prasad Ovhal2, Aamod Sane1
and Jayaraman K Valadi1*
1Flame University, Pune-412115, India
2Centre for Modeling and Simulation, Savitribai Phule Pune University, Pune-411007, India
Abstract. Feature selection is a very important pre-processing step in machine learning tasks. Selecting the most informative features provides several advantages, such as removing redundancy, picking up important domain features, and improving algorithm performance. Recently, the Black Hole algorithm, which mimics the real-life behaviour of stars and black holes, was proposed in the literature for solving several optimization tasks, including feature selection. In this feature selection algorithm, each star represents a distinct subset and the black hole represents the subset with the best fitness. The iterative movement of stars towards the black hole facilitates discovering the best subset. In this work, we present a hybrid feature selection algorithm coupling the existing binary Black Hole algorithm with an existing binary Genetic Algorithm. In the new algorithm, control switches between the Black Hole and Genetic Algorithms; we introduce a switching probability parameter to facilitate this switching. Our hybrid algorithm, optimally tuned in terms of the switching probability, improves performance considerably. We have compared the results of the new algorithm with existing algorithms on nine publicly available benchmark datasets. The results indicate that the synergistic coupling, apart from improving accuracy, selects smaller subsets. The coupled algorithm has also been found to have a smaller variance in accuracies.
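The switching-probability idea can be illustrated on a toy binary feature-selection problem. The surrogate fitness, operators and parameters below are all invented stand-ins, not the paper's tuned algorithm:

```python
import random

random.seed(3)

def fitness(subset):
    # Surrogate: features 0 and 2 are "informative"; subset size is
    # penalized. Real fitness would be cross-validated classifier accuracy.
    return 2 * subset[0] + 2 * subset[2] - 0.1 * sum(subset)

def bh_step(stars, best):
    """Black Hole step: pull each star's bits toward the black hole."""
    return [[best[d] if random.random() < 0.5 else s[d] for d in range(len(s))]
            for s in stars]

def ga_step(stars):
    """GA step: one-point crossover of random pairs plus bit-flip mutation."""
    out = []
    for _ in stars:
        a, b = random.sample(stars, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]
        child[random.randrange(len(child))] ^= 1    # mutation
        out.append(child)
    return out

def hybrid(dim=6, n=8, iters=40, p_switch=0.3):
    """Each iteration, hand control to the GA with probability p_switch,
    otherwise to the Black Hole update; keep the best subset seen."""
    stars = [[random.randint(0, 1) for _ in range(dim)] for _ in range(n)]
    best = max(stars, key=fitness)
    for _ in range(iters):
        stars = ga_step(stars) if random.random() < p_switch else bh_step(stars, best)
        cand = max(stars, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand[:]
    return best

best = hybrid()
```

Tuning `p_switch` trades the Black Hole's fast convergence against the GA's recombination-driven exploration, which is the coupling the abstract describes.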
A Real-Time Traffic Jam Detection and Notification
System Using Deep Learning Convolutional Networks
Sedish Seegolam and Sameerchand Pudaruth
ICT Department, University of Mauritius, Mauritius
Abstract. Mauritius faces traffic jams regularly, which is counterproductive for the country. With an increase in the number of vehicles in recent years, the country faces heavy congestion at peak hours, which leads to wasted fuel and time as well as accidents and environmental issues. To tackle this problem, we have proposed a system that detects and tracks vehicles. The system also informs users once a traffic jam has been detected, using popular communication services such as SMS, WhatsApp, phone calls and emails. For traffic jam detection, the time a vehicle spends in the camera view is used: when several vehicles are present at a specified location for more than a specified number of seconds, a traffic jam is deemed to have occurred. The system has an average recognition accuracy of 93.3% and operates at an average of 14 frames per second. Experimental results show that the proposed system can accurately detect a traffic jam in real time. Once a traffic jam is detected, the system dispatches notifications immediately, and all notifications are delivered within 15 seconds. Compared to more traditional methods of reporting traffic jams in Mauritius, our proposed system offers a more economical solution and can be scaled to the whole island.
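The dwell-time rule can be sketched in a few lines; the thresholds and per-vehicle dwell times below are invented for illustration:

```python
def detect_jam(dwell_times, max_dwell=30, min_vehicles=5):
    """Flag a jam when at least `min_vehicles` tracked vehicles have stayed
    in the camera view longer than `max_dwell` seconds."""
    stuck = [t for t in dwell_times if t > max_dwell]
    return len(stuck) >= min_vehicles

free_flow = [2.1, 3.5, 1.8, 4.0, 2.7, 3.3]           # seconds in view
congested = [45.0, 52.3, 38.9, 61.0, 47.5, 33.2]

jam_a = detect_jam(free_flow)    # vehicles pass through quickly: no jam
jam_b = detect_jam(congested)    # six vehicles exceed 30 s in view: jam
```

On a positive result, the system would then fan out the SMS, WhatsApp, call and email notifications described above.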
A Novel Deep Learning SFR Model for FR-SSPP at
Varied Capturing Conditions and Illumination Invariant
Bhuvaneshwari R, Geetha P, Karthika Devi M S, Karthik
S, Shravan G A and Surenthernath J
Dept. of Computer Science and Engg., College of Engineering, Guindy, Anna
University, Chennai-25, India
Abstract. Face recognition systems attempt to identify individuals of interest as they appear through a network of cameras. Applications like immigration management, fugitive tracing and video surveillance are dominated by the technique of Face Recognition with a Single Sample Per Person (FR-SSPP), which has become an important research topic in academia. The face recognition problem can be divided into two groups. The first is recognition of faces with Multiple Samples Per Person, also known as conventional face recognition. The second is to recognise faces using only one sample per person (SSPP). However, in SSPP, since there is only one training sample, it is difficult to predict facial variations such as illumination, disguise, etc. Pose, illumination, low resolution, and blurriness are considered the main challenges a face recognition system encounters. All these problems related to face recognition with a single sample per person are dealt with by the proposed Synthesized Face Recognition (SFR) model. The SFR model initially pre-processes the input facial image, followed by techniques like the 4X Generative Adversarial Network (4XGAN) to enhance resolution and the Sharp Generative Adversarial Network (SharpGAN) to sharpen the images. In image formation, 3D virtual synthetic images are generated with various poses, and the Position Regression Map Network (PRN) technique provides dense alignment of the generated face images. Finally, with face detection and deep feature extraction using a convolutional neural network, the proposed SFR model provides a better solution to the problems involved in recognizing faces from a single sample per person. A triplet loss function helps recognize aged faces and overcome changes in facial features, which is important for a well-functioning face recognition system. The model will be assessed in terms of accuracy and size, with the aim of providing a detailed evaluation that covers as many environmental conditions and application requirements as possible.
Design of a Robotic Flexible Actuator Based on Layer
Jamming
Kristian Kowalski and Emanuele Lindo Secco
Robotics Lab, School of Mathematics, Computer Science and Engineering,
Liverpool Hope University, Liverpool, UK
Abstract. This research paper provides an insight into one of the most promising fields of
robotics, which brings together two main elements: the traditional or rigid robotics and the soft
robotics. A branch of soft-rigid robots can perform and modulate soft and rigid configurations
by means of an approach called jamming. Here we explore how to use layer jamming, namely
a set of layers within a flexible membrane, in order to design soft robotics. The paper introduces
a quick overview of the history of soft robotics, then it presents the design of a functional
prototype of soft-rigid robotic arm with the results of preliminary trials and discussion of future
advances where we show the capability of the system in order to lift up possible loads.
UAV Collaboration for Autonomous Target Capture
Lima Agnel Tony1, Shuvrangshu Jana1, Varun V. P.2,
Shantam Shorewala2, Vidyadhara B. V.1, Mohitvishnu S.
Gadde1, Abhishek Kashyap1, Rahul Ravichandran1,
Raghu Krishnapuram2, and Debasish Ghose1
1Guidance Control and Decision Systems Laboratory (GCDSL), Department of
Aerospace Engineering, Indian Institute of Science, Bangalore-12, India
2Robert Bosch Center for Cyber Physical Systems (RBCCPS), Indian Institute of
Science, Bangalore-12, India
Abstract. Capturing moving objects using Unmanned Aerial Vehicles (UAVs) is a challenging task. Many UAV applications require the capture of dynamic aerial targets. Successful interception requires accurate detection of the object, continuous tracking, and safe engagement without damaging any of the vehicles involved. This work presents the algorithmic details and hardware implementation for capturing a moving ball in a collaborative framework in an outdoor environment. The tracking and grabbing algorithms are developed using image-based guidance from the information of a monocular camera. The target image is detected and tracked using an ML-based algorithm and a Kalman filter. Finally, the proposed framework is simulated in ROS-Gazebo to evaluate the performance of the individual algorithms and further implemented on hardware to validate the system's real-time performance. The proposed system could be utilised for several applications such as counter-UAV systems and fruit picking, among many others.
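The Kalman-filter tracking step can be sketched for a single image coordinate. This is a generic constant-velocity filter with invented noise parameters and a noise-free synthetic track, not the paper's implementation:

```python
def kalman_track(measurements, dt=1.0, q=0.01, r=0.25):
    """Constant-velocity Kalman filter for one image coordinate of the
    target; the state is (position, velocity)."""
    x, v = measurements[0], 0.0
    pxx, pxv, pvv = 1.0, 0.0, 1.0          # covariance entries
    for z in measurements[1:]:
        # predict: x' = x + v*dt, covariance through F = [[1, dt], [0, 1]]
        x += v * dt
        pxx += dt * (2 * pxv + dt * pvv) + q
        pxv += dt * pvv
        pvv += q
        # update with the measured position z (measurement matrix H = [1, 0])
        s = pxx + r                        # innovation variance
        k_x, k_v = pxx / s, pxv / s        # Kalman gains
        y = z - x                          # innovation
        x += k_x * y
        v += k_v * y
        pxx, pxv, pvv = (1 - k_x) * pxx, (1 - k_x) * pxv, pvv - k_v * pxv
    return x, v

# A target drifting across the frame at one pixel per frame (noise-free).
pos, vel = kalman_track([float(i) for i in range(10)])
```

In the full system, one such filter per image axis smooths the ML detector's bounding-box centers and bridges frames where detection drops out.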
Women’s Shield
Shuchi Dave, Aman Jain, Deepak Sajnani and Saksham
Soni
Poornima College of Engineering, Jaipur, Rajasthan, India
Abstract. Women’s dignity deserves respect, yet the cruelty inflicted on women and the dangers they face continue to degrade our society. Many women are sexually and mentally harassed, some are beaten to death, and many cases are never even registered. Often, when a woman is attacked, she must fight alone; no one comes to save her because no one saw the incident, and if she goes to the police, she has no proof. So, for the safety of women, we set out to design a device through which a woman can get help from the police department and her family. The device contains components such as GSM, GPS, an ESP32 camera module and a battery, and the whole design is coded on the ATmega328P IC; later, the design will be printed on a PCB. The device takes the very small form of a “Borla”, a piece of traditional Rajasthani jewellery. The device maintains direct contact with the wearer along with live tracking, which also helps in tracking delinquents and putting them behind bars with adequate proof. With the help of GSM and GPS, the nearest police department and her family members receive the woman’s location via mobile message, and the police handle the rest. It gives a complete shield to the woman with just a single touch. In this paper, we also reduce the cost of the women’s shield with the help of value analysis and optimization.
Sentiment Analysis on Diabetes Diagnosis Health Care
using Machine Learning Technique
P. Nagaraj1, P. Deepalakshmi1, V. Muneeswaran2, K.
Muthamil Sudar1
1Department of Computer Science and Engineering, Kalasalingam Academy of
Research and Education Krishnankoil, Virudhunagar, India
2Department of Electronics and Communication Engineering, Kalasalingam
Academy of Research and Education Krishnankoil, Virudhunagar, India
Abstract. Sentiment analysis is a natural language processing technique that extracts information from text to identify the positive or negative polarity of the information. This work aims at analysing sentiments in health care information on diabetes. This automatic analysis assists in better understanding the patient’s health condition. A machine learning based sentiment analysis is proposed in this work, which uses an SVM classifier to classify the sentiments based on the medical opinion for diagnosis. The probability of getting diabetes is estimated using a Gaussian distribution based on the health condition of patients. Experimental evaluation shows that the SVM classifier achieves high performance with great accuracy.
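The Gaussian probability estimate can be sketched with Bayes' rule over two normal densities. The population means and standard deviations below are invented for illustration and are not clinical values:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution, used to score how a patient's
    reading sits relative to each population."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Invented population statistics for fasting glucose (mg/dL).
mu_healthy, sd_healthy = 90.0, 10.0
mu_diabetic, sd_diabetic = 150.0, 25.0

def p_diabetes(glucose):
    """Posterior probability of diabetes under equal priors (Bayes' rule)."""
    h = gaussian_pdf(glucose, mu_healthy, sd_healthy)
    d = gaussian_pdf(glucose, mu_diabetic, sd_diabetic)
    return d / (d + h)

low = p_diabetes(85.0)     # near the healthy mean: small probability
high = p_diabetes(160.0)   # near the diabetic mean: large probability
```

The SVM handles the sentiment classification of the medical opinions; a Gaussian estimate like this one supplies the accompanying risk probability.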
Predicting the Health of the System based on the Sounds
Manisha Pai and Annapurna P Patil
M. S. Ramaiah Institute of Technology, Bengaluru, India
Abstract. A fundamental challenge in artificial intelligence is to predict a system's state by detecting anomalies generated by faults in the system. Sound data that deviates significantly from the default sounds generated by the system is referred to as anomalous sound. Predicting anomalous sounds has gained importance in various applications, as it helps in maintaining and monitoring machine condition. The goal of anomaly detection is to train the system to distinguish default sounds from abnormal sounds. As self-supervised learning helps improve representations when labelled data are unavailable, it is employed here, where only the normal sounds are collected and used. A support vector machine is a linear classifier defined by the largest margin in the feature space. We propose a self-supervised support vector machine (SVM) to develop a health prediction model that helps in understanding the current status of machinery activities and their maintenance, enhancing the accuracy and efficiency of system health prediction. This work uses subsets of the MIMII and ToyADMOS datasets. The implemented system is evaluated by obtaining the training accuracy, validation accuracy, testing accuracy, and overall mean accuracy. The proposed work would benefit from faster prediction and better accuracy.
Fake News Detection Using Machine Learning Technique
Dammavalam Srinivasa Rao1, M.Koteswara Rao1,
N.Rajasekhar2, D. Sowmya1, D. Archana1, T. Hareesha1,
S. Sravya1
1Department of IT, VNR Vignana Jyothi Institute of Engineering & Technology
Hyderabad, India
2Gokaraju Rangaraju Institute of Engineering & Technology, India
Abstract. People have learned about the world from newspapers and, today, from digital media. From 1605 to 2021, the landscape of news has evolved immensely. People have moved on from newspapers and become habituated to digital devices, through which they can view news anytime and anywhere; news soon became a crucial asset for people. Over the past few years, fake news has also evolved, and people often believe the fake news shared by fake profiles on digital media. There are several methods for detecting fake news with neural networks using one-directional models. We propose BERT (Bidirectional Encoder Representations from Transformers), a bidirectional model that uses the left and right context of each word to pre-train two-way representations from unlabelled text. It showed excellent results when dealing with fake news, attaining 99% accuracy and outperforming logistic regression and K-Nearest Neighbours. This method is crucial in dealing with fake news, as it improves categorization and reduces computation time. Through this proposal, we aim to build a model to spot fake news present across various sites. The motivation behind this work is to help people improve their consumption of legitimate news while discarding misleading information on social media. The classification accuracy of fake news may be further improved by utilizing machine learning ensemble methods.
A Model Based on Convolutional Neural Network (CNN)
for Vehicle Classification
F. M. Javed Mehedi Shamrat1, Sovon Chakraborty2,
Saima Afrin3, Md. Shakil Moharram3, Mahdia Amina4,
Tonmoy Roy5
1Department of Software Engineering, Daffodil International University, Bangladesh
2Department of Computer Science and Engineering, European University of Bangladesh
3Department of Computer Science and Engineering, Daffodil International University, Bangladesh
4Department of Computer Science and Engineering, University of Liberal Arts Bangladesh
5Department of Computer Science and Engineering, North South University, Bangladesh
Abstract. The Convolutional Neural Network (CNN) is a form of artificial neural network that
has become very popular in computer vision. In this paper, we propose a convolutional neural network for classifying common types of vehicles in our country. Vehicle classification is essential in many applications, including surveillance protection systems and traffic control systems. Motivated by these concerns, we set a goal of helping to reduce traffic-related road accidents. A challenging aspect of computer vision is achieving effective outcomes despite variations in object shapes and colors. We used three learning methods to identify vehicles: MobileNetV2, DenseNet, and VGG 19, and measured each method's detection accuracy; all three are implemented as convolutional neural networks. The system performs impressively on a standard real-world dataset, the Nepal dataset, which contains 4800 photographs of vehicles. DenseNet has a training accuracy of 94.32% and a validation accuracy of 95.37%. VGG 19 has a training accuracy of 91.94% and a validation accuracy of 92.68%. The MobileNetV2 architecture has the best accuracy, with a training accuracy of 97.01% and a validation accuracy of 98.10%.
Study of Impact of COVID-19 on Students Education
Deepali A. Mahajan and C. Namrata Mahender
Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, India
Abstract. The COVID-19 pandemic adversely affected all sectors throughout the world, including the economy, education, and sports. Among these, the most affected sector is education: not only in India but all over the world, education systems collapsed during the COVID-19 period. We conducted a survey to study the effect of these conditions on students' academics. Online questionnaires were prepared and distributed to students to collect their responses. We collected 181 responses and found that more preference is given to classroom study instead of online study. In higher education, online education may be beneficial because the students are mature, but at school level it becomes quite difficult to understand the concepts and attend lectures continuously.
A Transfer Learning Approach for Face Recognition
using Average Pooling and MobileNetV2
F. M. Javed Mehedi Shamrat1, Sovon Chakraborty2, Md.
Shakil Moharram3, Tonmoy Roy4, Masudur Rahman3,
Biraj Saha Aronya1
1Dept. of Software Engineering, Daffodil International University, Bangladesh
2Dept. of Computer Science and Engineering, European University of Bangladesh, Bangladesh
3Dept. of Computer Science and Engineering, Daffodil International University, Bangladesh
4Department of Computer Science and Engineering, North South University, Bangladesh
Abstract. Facial recognition is a fundamental method in facial-related science such as face
detection, authentication, monitoring, and a crucial phase in computer vision and pattern
recognition. Face recognition technology aids in crime prevention by storing the captured image
in a database, which can then be used in various ways, including identifying a person. With just
a few faces in the frame, most facial recognition systems function adequately when tested under artificial illumination with accurate facial poses and non-blurry images. In this work, a face recognition system is proposed using average pooling and MobileNetV2. The classifiers are applied after a set of pre-processing steps on the retrieved image data. To determine which model is more effective, a performance test is conducted. The study shows that MobileNetV2 outperforms average pooling, with accuracy rates of 98.89% and 99.01% on training and test data, respectively.
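Average pooling itself is a simple operation: each output cell is the mean of a window of input pixels. A minimal sketch on a toy 4x4 grayscale grid (the pixel values are illustrative):

```python
def average_pool(matrix, size=2, stride=2):
    """2-D average pooling over an image given as a list of rows.
    Each output cell is the mean of a size x size window."""
    rows, cols = len(matrix), len(matrix[0])
    out = []
    for r in range(0, rows - size + 1, stride):
        row = []
        for c in range(0, cols - size + 1, stride):
            window = [matrix[r + i][c + j]
                      for i in range(size) for j in range(size)]
            row.append(sum(window) / len(window))
        out.append(row)
    return out

img = [[1, 3, 2, 4],
       [5, 7, 6, 8],
       [1, 1, 2, 2],
       [3, 3, 4, 4]]
pooled = average_pool(img)   # 4x4 input -> 2x2 output
```

Each 2x2 block of the input collapses to its mean, halving both spatial dimensions.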
A Deep Learning Approach for Splicing Detection in
Digital Audios
Akanksha Chuchra, Mandeep Kaur, Savita Gupta
University Institute of Engineering and Technology, Panjab University,
Chandigarh, 160014, India
Abstract. The authenticity of digital audios plays a crucial role when they are presented as evidence in a court of law or in forensic investigations. Fake or doctored audios are commonly used to manipulate facts and cause false implications. To facilitate passive-blind detection of forgery, the current paper presents a deep learning approach for detecting splicing in digital audios. It aims to eliminate the process of hand-crafted feature extraction from the digital audios by taking the deep learning route to expose forgery. A customized dataset of 4200 spliced audios is created for the purpose, using the publicly available Free Spoken Digit Dataset (FSDD). Unlike other related approaches, the splicing is carried out at a random location in an audio clip that spans 1 s to 3 s. Spectrograms corresponding to the audios are used to train a deep Convolutional Neural Network that classifies the audios as original or forged. Experimental results show that the model classifies the audios correctly with 93.05% classification accuracy. Moreover, the proposed deep learning approach also overcomes the drawbacks of feature engineering and reduces manual intervention significantly.
Classifying Microarray Gene Expression Cancer Data
using Statistical Feature Selection and Machine Learning
Methods
S. Alagukumar1 and T. Kathirvalavakumar2
1Department of Computer Applications, Ayya Nadar Janaki Ammal College,
Sivakasi 626124, Tamil Nadu, India
2Research Centre in Computer Science, V.H.N. Senthikumara Nadar College,
Virudhunagar 626001, Tamil Nadu, India.
Abstract. A breast microarray dataset is a repository of thousands of gene expression values with different strengths for each cancer cell. It is necessary to detect the genes that are responsible for cancer growth. The proposed work aims to identify a statistical test for extracting the differentially expressed genes from microarray gene expression data and a suitable classifier for labelling genes as diseased or control. Cancerous genes are identified by six statistical tests, namely the Welch test, analysis of variance (ANOVA), the Wilcoxon signed rank sum test, the Kruskal–Wallis test, the Linear Model for Microarray (LIMMA), and the F-test, using their p-values. The identified cancer genes are used to classify cancer patients using seven classifiers, namely linear discriminant analysis (LDA), K-Nearest Neighbour (KNN), Naïve Bayes, linear support vector machine, support vector machine with radial basis function, C5.0, and C5.0 with boosting. Performance is evaluated using accuracy, sensitivity, and specificity. A microarray breast cancer dataset of 32 cancer patients and 28 non-cancer patients is considered in the experiment; the microarray contains 25,575 genes for each patient. When the LIMMA test is used to extract differentially expressed cancer genes and KNN is used for classification, a maximum classification accuracy of 100% is obtained.
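Of the statistical tests listed, the Welch test is the easiest to sketch. The example below computes the Welch t statistic for two small hypothetical expression samples and approximates the two-sided p-value with a normal distribution; the real test uses a t-distribution with Welch–Satterthwaite degrees of freedom, so this approximation is only reasonable for larger samples.

```python
from statistics import mean, variance, NormalDist

def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

def approx_p_value(t):
    """Two-sided p-value via a normal approximation to the t-distribution."""
    return 2 * (1 - NormalDist().cdf(abs(t)))

tumour  = [8.1, 7.9, 8.4, 8.0, 8.2]   # hypothetical expression levels
control = [5.0, 5.3, 4.9, 5.1, 5.2]
t = welch_t(tumour, control)
p = approx_p_value(t)
```

A clearly shifted expression level yields a large |t| and a p-value near zero, flagging the gene as differentially expressed.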
Ontology Formation and Comparison for Syllabus
Structure Using NLP
Masoom Raza, Aditee Patil, Mangesh Bedekar, Rashmi
Phalnikar, Bhavana Tiple
School of Computer Science and Engineering, MIT World Peace University, Pune, India
Abstract. Ontologies are largely responsible for the creation of a framework or taxonomy for a particular domain, representing shared knowledge, its concepts, and how those concepts are related to each other. This paper shows the use of ontology for comparing the syllabus structures of universities. This is done by extracting the syllabus, creating an ontology representing it, then parsing the ontology and applying natural language processing to remove unwanted information. After obtaining the appropriate ontologies, a comparative study is made of them. For convenience, the extracted syllabi are restricted to the subject "Software Engineering". This depicts the collection and management of ontology knowledge and its processing in the right manner to obtain the desired insights.
A Leaf Image based Automated Disease Detection Model
Aditi Ghosh1 and Parthajit Roy2
1The Department of Master of Computer Applications, Techno India Hooghly,
Chinsurah, West Bengal, 712101, India
2The Department of Computer Science, The University of Burdwan, Purba
Bardhaman, West Bengal, 713104, India
Abstract. Detection of plant diseases is an important aspect of agriculture. Traditional disease detection methods are time consuming and require domain knowledge. Like many other tasks, plant diseases can also be recognized automatically with an intelligent model. Disease may appear on the leaf, stem, fruit, or any other part of the plant. The present study proposes an automated disease detection model based on images of affected leaves. The apple leaf images of the Plant Village dataset have been used in this research work. Three types of diseases found in apple leaves (Scab, Black Rot, and Cedar Apple Rust) have been categorized along with healthy leaves. An artificial neural network is used as the classifier, with an overall classification accuracy of 94.79%. Our model has been compared with other existing models.
An Optimized Active Learning TCM-KNN Algorithm
Based on Intrusion Detection System
Reenu Batra1, Manish Mahajan1 and Amit Goel2
1SGT University, Gurugram, Haryana, India
2Galgotias University, Greater-Noida, Uttar Pradesh, India
Abstract. Software Defined Networking (SDN) is a new network structure designed to optimize network flow management. Many network technologies have moved from traditional networks to SDN because of the static architecture and decentralized nature of the former. Efficient network management coupled with network monitoring can be achieved with the help of an SDN-based network structure, and overall network performance can be increased by configuring the network programmatically. Many applications rely on SDN-based network structures because SDN isolates the packet-forwarding mechanism from the routing task of the network; this reduces the load on any single module and yields an efficient network. With the rapid growth of internet technology, the flow rate of data over the network is also increasing, which results in a rapid increase in Distributed Denial of Service (DDoS) attacks. As a result, network performance may degrade because resources become unavailable to the intended users. DDoS attacks consume network bandwidth and resources, disrupting network service to internet-connected devices. Machine learning and data mining techniques can be used to detect attacks over the network. Simulation of OpenFlow switches, RYU controllers, and other modules over SDN can result in better network management and detection of attacks over the network.
A framework for analyzing crime dataset in R using
Unsupervised Optimized K-means Clustering Technique
K. Vignesh, P. Nagaraj, V. Muneeswaran, S. Selva
Birunda, S. Ishwarya Lakshmi, R. Aishwarya
Kalasalingam Academy of Research and Education, Krishnankoil,
Virudhunagar, India
Abstract. At present, criminals are becoming more and more sophisticated in committing any sort of crime. Nowadays, intelligence and law enforcement agencies and police departments face issues in analysing large volumes of data and classifying crimes separately. Analysis of crime is very important so that we can identify patterns or trends in the crimes committed. For this purpose, we can use an unsupervised data mining technique known as K-means clustering. Data mining is the process of extracting unknown knowledge from a small or huge dataset, data warehouse, or repository. Clustering is a process in which data items are grouped based on specific attributes; K-means clustering groups items around the means of the data. This paper explains K-means clustering and the procedure and implementation of the clustering method. The system can be used for analysing crimes and understanding trends and the most crime-prone places.
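The K-means procedure described above (assign each point to its nearest centroid, then recompute each centroid as the mean of its cluster) can be sketched in plain Python; the crime-incident coordinates here are hypothetical:

```python
def kmeans(points, k, iters=50):
    """Plain k-means on 2-D points (e.g. crime incident coordinates).
    Initial centroids are the first k points, so the demo is deterministic."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign p to the centroid with smallest squared distance
            d = [sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute centroid as the mean of its members
                centroids[i] = [sum(x) / len(cl) for x in zip(*cl)]
    return centroids, clusters

# Two obvious hot-spots (hypothetical coordinates)
pts = [(1, 1), (1.2, 0.8), (0.9, 1.1), (8, 8), (8.2, 7.9), (7.8, 8.1)]
centroids, clusters = kmeans(pts, 2)
```

After a couple of iterations the centroids settle onto the two hot-spots, which is exactly the "crime-prone place" signal the abstract describes.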
Grading, classification, and sorting of South Indian
Mango Varieties based on the stage of Ripeness
Michael Sadgun Rao Kona, V. Sreeja Priyaraj, G.
Sowmya Manasa, K. Gowtham
Department of Information Technology, Lakireddy Bali Reddy College of
Engineering (Autonomous), Andhra Pradesh, India
Abstract. Mango (Mangifera indica L.) is a crucial tropical fruit with huge demand in the global market. At present, grading is done manually, which is inconsistent and prone to human error. The objective of this investigation is to build a framework that can sort mangoes automatically based on their ripening stage. First, images are taken of different Andhra Pradesh mango varieties such as Chinnarassam, Peddarasam, Cherukurasam, Kobbari Mamidi, and so on. Algorithms are proposed and implemented using OpenCV in Python. Two methods are used in this paper to predict the stage: RGB analysis and masking. In the RGB technique the ripening stage is distinguished from the channel values, whereas in the HSV technique the hue-saturation-value map is analysed for intensity. Based on the calculations from the RGB and masking methods, the mango is assigned one of three stages, namely High Ripeness, Medium Ripeness, or Low Ripeness, and the fruit is then sorted accordingly. The mango is then graded according to its defective area into three classes, namely Grade1, Grade2, and Damaged. Finally, classification methods are applied to determine the accuracy. Among all classification methods, SVM gives the highest accuracy: 100% for the RGB method and 70% for masking. This framework can improve the consistency of mango grading considerably, reduce manual labour and cost, and increase productivity, providing an average accuracy rate of up to 100%.
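The staging-and-grading logic can be sketched as two simple rules. The thresholds below are illustrative placeholders, not the paper's calibrated values:

```python
def ripeness_stage(mean_r, mean_g):
    """Stage from mean RGB channel values: ripe mangoes skew red/yellow,
    unripe ones skew green (thresholds are illustrative)."""
    if mean_r > 1.3 * mean_g:
        return "High Ripeness"
    if mean_r > mean_g:
        return "Medium Ripeness"
    return "Low Ripeness"

def grade(defect_area_ratio):
    """Grade from the fraction of the fruit surface that is defective
    (cut-offs are invented for the sketch)."""
    if defect_area_ratio < 0.05:
        return "Grade1"
    if defect_area_ratio < 0.20:
        return "Grade2"
    return "Damaged"
```

In a real pipeline the channel means and defect ratio would come from the OpenCV segmentation step; here they are passed in directly.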
Multi-Criteria Decision Theory based Cyber Foraging
Peer Selection for Content Streaming
Parisa Tabassum, Abdullah Umar Nasib and Md. Golam
Rabiul Alam
BRAC University, 66 Mohakhali, Dhaka, Bangladesh
Abstract. COVID-19 has made it necessary for educational institutes to make their materials
available online. Having access to these vast amounts of knowledge and learning materials can
benefit students outside of these institutes greatly. With that in mind, this paper proposes a cyber foraging system: a peer-to-peer streaming system for educational institute content that selects the best peers based on eight decision criteria. Judgments from experts are used to assign relative weights to these criteria using the Fuzzy Analytical Hierarchy Process method. Finally, the criteria are ranked based on the assigned relative weights to determine their importance in the peer selection decision-making process.
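The weighting step can be illustrated with the crisp core of AHP: deriving priority weights from a pairwise comparison matrix via row geometric means. The paper's Fuzzy AHP additionally represents expert judgments as fuzzy numbers; the 3x3 matrix below is a made-up example, not the paper's eight criteria.

```python
from math import prod

def ahp_weights(pairwise):
    """Approximate priority weights from a pairwise comparison matrix
    using the row geometric-mean method."""
    n = len(pairwise)
    gmeans = [prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical judgments: criterion 1 moderately beats 2, strongly beats 3
matrix = [[1.0, 3.0, 5.0],
          [1 / 3, 1.0, 3.0],
          [1 / 5, 1 / 3, 1.0]]
weights = ahp_weights(matrix)
```

The weights sum to 1 and preserve the expert ordering, which is what the ranking step in the abstract relies on.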
Multi Agent Co-operative Framework for Autonomous
Wall Construction
Kumar Ankit, Lima Agnel Tony, Shuvrangshu Jana, and
Debasish Ghosey
Guidance Control and Decision Systems Laboratory (GCDSL), Department of
Aerospace Engineering, Indian Institute of Science (IISc), Bangalore-12, India
Abstract. Unmanned Aerial Vehicle (UAV) applications with pick-and-place operations are plentiful, and the same is true in the Unmanned Ground Vehicle (UGV) domain. However, the low payload capacity of a UAV and the limited sensing capability of a UGV prevent either from automating heavy-duty, large-scale construction alone. The complementary nature of these agents can be exploited to meet the needs of long-term autonomous construction. We therefore propose a software framework, with its algorithmic details, for multi-vehicle collaboration in autonomous pick-and-place operations. Three UAVs and a UGV coordinate among themselves to pick bricks of different sizes and place them at a specific location in a predetermined orientation. At the core of the decision-making process, a distance-based optimization generates the route plan for the agents. The generated route plan is then sent to the agents via a scheduler, which keeps their operations in check and, in case of failures, helps them recover autonomously. The framework provides end-to-end details of multi-vehicle pick-and-place operation while guarding against collisions and failures. The software is developed in the ROS and Gazebo environment and is ready to be implemented on hardware. The modeling approach makes it easy to modify and deploy for applications such as warehouse stock management, package delivery, and others.
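The distance-based route generation can be caricatured with a greedy assignment loop: repeatedly hand the globally closest remaining brick to its nearest agent, moving that agent to the brick's location. This is a toy heuristic standing in for the paper's optimizer, and the agent names and coordinates are invented:

```python
def assign_bricks(agents, bricks):
    """Greedy distance-based assignment of bricks to mobile agents.
    agents: list of (name, (x, y)) start positions; bricks: list of (x, y)."""
    remaining = list(bricks)
    plan = {name: [] for name, _ in agents}
    pos = dict(agents)                      # current position of each agent
    while remaining:
        # pick the closest (agent, brick) pair over all combinations
        name, brick = min(
            ((n, b) for n in plan for b in remaining),
            key=lambda nb: (pos[nb[0]][0] - nb[1][0]) ** 2
                         + (pos[nb[0]][1] - nb[1][1]) ** 2)
        plan[name].append(brick)
        pos[name] = brick                   # agent moves to the brick
        remaining.remove(brick)
    return plan

agents = [("uav1", (0, 0)), ("uav2", (10, 0))]
bricks = [(1, 0), (9, 0), (5, 0)]
plan = assign_bricks(agents, bricks)
```

Each agent ends up with the bricks on its side of the workspace, which is the intuition behind minimizing total travel distance in the route plan.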
An Efficient Comparison on Machine Learning and Deep
Neural Networks in Epileptic Seizure Prediction
R. Roseline Mary1, B. S. E Zoraida1, B. Ramamurthy2
1Bharathidasan University, Tiruchirappalli, Tamilnadu, India
2CHRIST (Deemed to be University), Bangalore, Karnataka, India
Abstract. Electroencephalography (EEG) signals have been widely used in cognitive neuroscience to identify the brain's activity and behaviour. Signals retrieved from the brain are most commonly used in detecting neurological disorders. Epilepsy is a neurological impairment in which the brain's activity becomes abnormal, causing seizures or unusual behaviour. Methods: The benchmark BONN dataset is used to compare and assess the models. The investigations were conducted using traditional machine learning algorithms such as KNN, Naive Bayes, Decision Tree, and Random Forest alongside Deep Neural Networks (DNNs) to exhibit the DNN model's efficiency in epileptic seizure detection. Findings: Experiments and results show that the deep neural network model outperforms the traditional machine learning algorithms, with an accuracy of 97% and an Area Under the Curve value of 0.994. Novelty: This research focuses on the efficiency of deep neural network techniques compared with traditional machine learning algorithms, to help clinicians make intelligent decisions when predicting whether a patient is affected by epileptic seizures. The paper thus helps the research community explore the opportunities for innovation in Deep Neural Networks. This work compares machine learning and deep neural network models, which supports clinical practitioners in the diagnosis and early treatment of epileptic seizure patients.
Seed Set Selection in Social Networks using Community
Detection and Neighborhood Distinctness
Sanjeev Sharma, Sanjay Kumar
Department of Computer Science and Engineering, Delhi Technological
University, India
Abstract. In recent years, the analysis of social networks has evolved considerably. A particular piece of information can be passed from one user to another, and as there are many links between the nodes of the network, the same information can be received by a large number of users simply through the ongoing transmission of information between adjacent nodes of the social network. But a social network can have millions or even billions of nodes, so sending a particular message to all users directly would be very time consuming and inefficient. It is better to initially choose a small set of nodes, called the seed set, and let them pass the information to the major part of the remaining network. These selected nodes, also called spreader nodes, must be chosen from a large number of candidates. An approach using community detection and the local structure of nodes has been proposed to find the seed set.
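A simplified version of community-based spreader selection can be sketched as follows: within each detected community, pick the node(s) with the highest degree as seeds. The graph and community split below are invented, and degree is only one possible local-structure measure:

```python
def select_seeds(adj, communities, per_community=1):
    """Pick the highest-degree node(s) from each community as seeds.
    adj: adjacency dict {node: [neighbours]}; communities: list of node lists."""
    seeds = []
    for com in communities:
        ranked = sorted(com, key=lambda n: len(adj[n]), reverse=True)
        seeds.extend(ranked[:per_community])
    return seeds

# Toy network with two communities bridged by the d-e edge
adj = {
    "a": ["b", "c", "d"], "b": ["a"], "c": ["a"], "d": ["a", "e"],
    "e": ["d", "f", "g", "h"], "f": ["e"], "g": ["e"], "h": ["e"],
}
seeds = select_seeds(adj, [["a", "b", "c", "d"], ["e", "f", "g", "h"]])
```

Choosing one well-connected spreader per community keeps the seed set small while covering every region of the network.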
Ensemble Model of Machine Learning for Integrating
Risk in Software Effort Estimation
Ramakrishnan Natarajan1 and Balachandran Krishnan2
1School of Business and Management, CHRIST (Deemed to be University),
Bangalore, India
2Computer Science and Engineering, School of Engineering and Technology,
CHRIST (Deemed to be University), Bangalore, India
Abstract. The development of software involves expending a significant quantum of time, effort, cost, and other resources, and effort estimation is an important aspect. Though there are many software estimation models, risks are not adequately considered in the estimation process, leading to a wide gap between estimated and actual effort. The higher the accuracy of the estimated effort, the better the compliance of the software project in terms of completion within budget and schedule. This study was undertaken to integrate risk into the effort estimation process so as to minimize the gap between estimated and actual effort. This is achieved by treating the risk score as an effort driver in the computation of effort estimates and formulating a machine-learning model. The risk score is found to reveal feature importance, and the predictive model that integrates the risk score into the effort estimates shows an enhanced fit.
Analysis of Remote Sensing Satellite Imagery for Crop
Yield Mapping using Machine Learning Techniques
M. Sarith Divakar1, M. Sudheep Elayidom2 and R.
Rajesh3
1School of Engineering, Cochin University of Science and Technology
(CUSAT), Kochi, India
2Division of Computer Engineering, School of Engineering, Cochin University
of Science and Technology (CUSAT), Kochi, India
3Naval Physical and Oceanographic Laboratory (NPOL), Kochi, India
Abstract. Crop yield prediction is essential in agriculture for assessing seasonal crop production
to take strategic decisions to ensure food security. The existing approaches based on manual
inspection of the fields or by deploying multiple sensors in different parts of the agriculture field
are expensive and not scalable. Yield prediction technique using remote sensing satellite
imagery provides a better alternative as it is globally available. Surface Spectral Reflectance and
Land Surface Temperature bands from the Terra Satellite’s MODIS are used for crop yield
forecasting in this work. Correlation analysis showed that features extracted from multispectral satellite images are highly informative with respect to the yield data. Machine learning approaches were
used to build yield prediction models from the multispectral satellite images with an overall
improvement in prediction performance compared to crop simulation models. Results show that
Random Forest regression outperforms other models. The performance of the model is further
improved by hyper-parameter tuning.
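The correlation analysis step can be sketched with a plain Pearson correlation between a band-derived feature and yield; the NDVI and yield values below are invented for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation between a band-derived feature and yield."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ndvi = [0.42, 0.55, 0.61, 0.70, 0.75]   # hypothetical seasonal NDVI means
yield_t = [2.1, 2.9, 3.2, 3.8, 4.0]     # hypothetical yield, tonnes/hectare
r = pearson(ndvi, yield_t)
```

A correlation close to 1 is what "highly informative with respect to the yield data" means in practice, and it justifies feeding such features into the regression models.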
Construction of a Convex Polyhedron from a Lemniscatic
Torus
Ricardo Velezmoro-Leon, Robert Ipanaque-Chero,
Felicita M. Velasquez-Fernandez, and Jorge Jimenez
Gomez
Departamento de Matematica, Universidad Nacional de Piura, Urb. Miraflores s/n
Castilla, Piura, Peru.
Abstract. We see polyhedra immersed in nature and in human creations such as art, architectural structures, science, and technology. There is much interest in the analysis of the stability and properties of polyhedral structures due to their morphogeometry. Faced with this situation, the following research question is formulated: can a new polyhedral structure be generated from another mathematical object such as a lemniscatic torus? To answer this question, the analysis revealed infinitely many possibilities for generating convex irregular polyhedra from lemniscatic curves, whose vertices are constructed from points belonging to the curve on the lemniscatic torus. Emphasis was placed on the construction of a convex polyhedron with 182 edges, 70 vertices, and 114 faces, using the scientific software Mathematica 11.2. Among its faces it has 68 triangles and 2 tetradecagons; likewise, cross sections parallel to the two tetradecagons and passing through certain vertices are also tetradecagons. The total area was determined to be about 12.2521R² and the volume about 3.301584R³. It is believed that the polyhedron has the peculiarity of being inscribed in a sphere of radius R; its opposite faces are not parallel, and the entire polyhedron can be constructed from 8 faces by isometric transformations.
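The quoted counts can be sanity-checked against Euler's polyhedron formula, V - E + F = 2, which every convex polyhedron must satisfy:

```python
def euler_characteristic(v, e, f):
    """Euler's polyhedron formula: for a convex polyhedron, V - E + F = 2."""
    return v - e + f

# Counts quoted in the abstract: 70 vertices, 182 edges, 114 faces
chi = euler_characteristic(70, 182, 114)
```

The counts pass the check, which is consistent with the object being a valid convex polyhedron.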
An Ant System Algorithm based on Dynamic Pheromone
Evaporation Rate for Solving 0/1 Knapsack Problem
Ruchi Chauhan, Nirmala Sharma, and Harish Sharma
Rajasthan Technical University, Kota, Rajasthan, India
Abstract. In this research paper, a meta-heuristic search technique, the ant system algorithm based on a dynamic pheromone evaporation rate (ASA-DPER), is introduced for solving the 0/1 knapsack problem (0/1 KP). In the ASA-DPER algorithm, the pheromone evaporation rate depends on the per-iteration knapsack profit produced by the algorithm. If the present iteration's knapsack profit is higher than the previous iteration's, the pheromone evaporation rate is "ER 1", and if the present iteration's profit is equal to the previous iteration's, the evaporation rate is "ER 2". The value of ER 1 is always greater than the value of ER 2. To validate the efficiency of the ASA-DPER algorithm, experiments are performed on thirty small-scale 0/1 KP instances, and the results show that ASA-DPER improves search quality and reaches feasible results in fewer iterations than the base meta-heuristic, the ant system algorithm based on a static pheromone evaporation rate (ASA-SPER).
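A minimal sketch of the dynamic-evaporation idea in plain Python, assuming invented parameter values (ER 1 = 0.5, ER 2 = 0.1, and a simple pheromone-biased inclusion rule) rather than the paper's exact update equations:

```python
import random

def asa_dper(values, weights, capacity, n_ants=20, iters=60, seed=1):
    """Ant-system sketch for 0/1 knapsack with a dynamic evaporation rate:
    evaporate faster (ER 1) after an improving iteration, slower (ER 2)
    otherwise. All parameter values are illustrative."""
    rng = random.Random(seed)
    n = len(values)
    tau = [1.0] * n                     # pheromone per item
    er1, er2 = 0.5, 0.1                 # ER 1 > ER 2, per the paper's rule
    best, best_items, last_profit = 0, [], 0
    for _ in range(iters):
        iter_best, iter_items = 0, []
        for _ in range(n_ants):
            items = list(range(n))
            rng.shuffle(items)
            load, profit, chosen = 0, 0, []
            for i in items:
                # probabilistic inclusion biased by pheromone, capacity permitting
                if load + weights[i] <= capacity and rng.random() < tau[i] / (tau[i] + 1):
                    load += weights[i]; profit += values[i]; chosen.append(i)
            if profit > iter_best:
                iter_best, iter_items = profit, chosen
        rho = er1 if iter_best > last_profit else er2   # dynamic evaporation
        tau = [(1 - rho) * t for t in tau]
        for i in iter_items:
            tau[i] += iter_best / (best + 1)            # deposit on good items
        last_profit = iter_best
        if iter_best > best:
            best, best_items = iter_best, iter_items
    return best, sorted(best_items)

# Tiny instance: optimum is items 1 and 2 with profit 220
profit, chosen_items = asa_dper([60, 100, 120], [10, 20, 30], 50)
```

On this three-item instance the colony reliably converges to the optimal subset within the iteration budget.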
Deducing Water Quality Index (WQI) by Comparative
Supervised Machine Learning Regression Techniques for
India Region
Sujatha Arun Kokatnoor, Vandana Reddy and
Balachandran Krishnan
Department of Computer Science and Engineering, School of Engineering and
Technology, CHRIST (Deemed to be University), Bangalore, Karnataka, India
Abstract. Water quality is of paramount importance for the well-being of society at large and plays a very important role in maintaining the health of living beings. Several attributes, such as Biological Oxygen Demand (BOD), power of Hydrogen (pH), Dissolved Oxygen (DO) content, and Nitrate content (NC), help identify the suitability of water for different purposes. In this research study, the focus is to deduce the Water Quality Index (WQI) by means of Artificial Intelligence (AI) based Machine Learning (ML) models. Six parameters, namely BOD, DO, pH, NC, Total Coliform (CO), and Electrical Conductivity (EC), are used to measure, analyse, and predict WQI using nine supervised regression machine learning techniques. Bayesian Ridge Regression (BRR) and Automatic Relevance Determination Regression (ARD Regression) yielded low Mean Squared Error (MSE) values when compared to the other regression techniques. ARD Regression treats the model parameters as independent a priori, which favours sparse solutions in which only the relevant coefficients remain non-zero. In the estimation process, BRR includes regularization parameters that are not fixed in advance but are adapted to the data at hand. For these reasons, the ARD Regression and BRR models performed better.
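Separately from the regression models, a WQI itself is typically a weighted aggregate of per-parameter quality ratings. The sketch below uses a simplified weighted-arithmetic form with invented weights and ideal values; the standard formulation rates each parameter against permissible and ideal limits, so treat this purely as an illustration of the aggregation step:

```python
def wqi(params, weights, ideals):
    """Simplified weighted-arithmetic WQI: each parameter contributes a
    quality rating (observed/ideal, as a percentage) scaled by its weight."""
    total_w = sum(weights.values())
    score = 0.0
    for name, observed in params.items():
        q = 100.0 * observed / ideals[name]      # quality rating for parameter
        score += weights[name] * q
    return score / total_w

# Invented observations, ideal limits, and relative weights
params = {"BOD": 3.0, "DO": 6.0, "pH": 7.5}
ideals = {"BOD": 5.0, "DO": 10.0, "pH": 8.5}
weights = {"BOD": 0.4, "DO": 0.4, "pH": 0.2}
index = wqi(params, weights, ideals)
```

In the study this index is the regression target: the six measured parameters are the features, and models such as BRR and ARD Regression learn to predict it.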
Artificial Ecosystem-based Optimization for Optimal
Location and Sizing of Solar Photovoltaic Distribution
Generation in Agriculture Feeders
U Kamal Kumar1,2 and Varaprasad Janamala1
1CHRIST (Deemed to be University), Bangalore – 560 074, KA, India
2Sree Vidyanikethan Engineering College, Tirupati, 517102, AP, India
Abstract. In this paper, an efficient nature-inspired meta-heuristic algorithm called artificial ecosystem-based optimization (AEO) is proposed for solving the problem of optimal location and sizing of solar photovoltaic (SPV) systems in a radial distribution system (RDS), towards minimizing grid dependency and greenhouse gas (GHG) emissions. With loss minimization as the main objective function, the location and size of the SPV systems are optimized using the AEO algorithm. The results on a practical Indian 22-bus agriculture feeder and a 28-bus rural feeder highlight the need for optimally distributed SPV systems to maintain minimal grid dependency and reduce GHG emissions from conventional energy (CE) sources. Moreover, the results of AEO have been compared with different heuristic approaches, highlighting its superiority in terms of convergence characteristics and redundancy features in solving complex, non-linear, multi-variable optimization problems in real time.
Optimized Segmentation Technique for Detecting PCOS
in Ultrasound Images
S. Jeevitha and N. Priya
Department of Computer Science, Shrimathi Devkunvar Nanalal Bhatt Vaishnav
College for Women, University of Madras, Chennai, India
Abstract. Polycystic Ovary Syndrome (PCOS) is a prominent endocrine disorder occurring in the female reproductive system. Ovulation issues are frequently created by PCOS, which can extend to infertility and endometrial cancers. Infertility has recently become a major issue for women: according to a survey, 10 to 15 percent of married women are affected by infertility, which is identified by examining the follicles in the ovary, including their count, size, and position, along with hormonal secretions. Automatic detection of follicles is quite a challenging task in predicting Polycystic Ovary (PCO), and detection is often inaccurate because of the heavy noise and low contrast of ultrasound images. To overcome this, an optimized segmentation algorithm has been proposed along with suitable pre-processing techniques, namely morphological operations and filtering. The proposed segmentation technique fixes an accurate bounding box for selecting the area in which to detect follicles in the ovary images. The algorithm has been tested with 50 ovary images of different types, including normal cyst, ovarian cyst, and PCOS, detecting the follicles in the ovaries to address PCOS accurately.
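Of the pre-processing steps named, morphological erosion is easy to sketch: on a binary mask, a pixel survives only if its whole neighbourhood is foreground, which strips speckle and thin noise at region borders. The toy mask below stands in for a thresholded ultrasound image:

```python
def erode(img, ksize=3):
    """Binary morphological erosion with a square structuring element:
    a pixel stays 1 only if every pixel in its k x k neighbourhood is 1."""
    r = ksize // 2
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            if all(img[i + di][j + dj]
                   for di in range(-r, r + 1) for dj in range(-r, r + 1)):
                out[i][j] = 1
    return out

# A 3x3 foreground blob: only its centre survives one erosion pass
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
eroded = erode(mask)
```

In a real pipeline erosion is usually paired with dilation (an opening) so that genuine follicle regions regain their size while the noise stays removed.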
Framework for Estimating Software Cost using Improved
Machine Learning Approach
Sangeetha Govinda
Department of Computer Science and Applications, Christ Academy Institute for
Advanced Studies, Bengaluru, India
Abstract. Software cost estimation is one of the integral parts of project management in every software development organization; it deals with accounting for all the measurable effort required to develop software. This topic in software engineering has been investigated consistently over the last decade, with intermittent publication of research papers. After reviewing existing approaches, it is found that the problem is still open. Therefore, this paper introduces a machine learning-based approach in which a project manager computes the software cost based on standard inputs. The project manager's estimated cost is then fed to neural network models subjected to multiple learning algorithms to perform accurate software cost prediction, considering practical project management scenarios. The comparative study outcome shows extensively better accuracy across three stages of evaluation in the presence of multiple learning approaches.
A Questionnaire-based Analysis of Network Forensic
Tools
Rachana Yogesh Patil, Manjiri Ranjanikar
Pimpri Chinchwad College of Engineering, Savitribai Phule Pune University,
Pune, Maharashtra, India
Abstract. Digital forensics is concerned with collecting evidence from digital devices for the
purpose of investigating cybercrimes and security breaches. The aim of digital forensics is to
bring authentic digital evidence before a court of law. It is essential to collect forensically sound
digital evidence of criminal activity in order to convict those responsible for fraudulent
activities. The network is a backbone for all types of cyber-attacks, and collecting evidence for
cybercrime by analysing network artifacts is the most crucial step in network forensics. An
up-to-date analysis is required before a new network forensic tool can be developed. In this
paper, the existing tools and techniques of network forensics are reviewed. To gain detailed
insight into the current practices followed by digital investigators working on live cybercrime
cases, we circulated a questionnaire. Analysis of the collected responses shows that among the
biggest challenges in network forensic investigation are the lack of support from ISPs and the
unavailability of a tool to identify the true source of a cybercrime.
The Extraction of Automated Vehicles Traffic Accident
Factors and Scenarios using Real-World Data
MinHee Kang, Jaein Song and Keeyeon Hwang
Hongik University, Seoul 04066, Republic of Korea
Abstract. As Automated Vehicles (AVs) approach commercialization, it is uncontroversial that
attention is increasingly concentrated on the safety problem. Accordingly, scenario research that
can ensure safety and is related to vehicle safety assessment is essential. In this paper, based on
the report of traffic collisions involving AVs provided by the California DMV (Department of
Motor Vehicles), we extract the major factors for identifying AV traffic accidents and derive
basic AV traffic accident scenarios by employing Random Forest, one of the machine learning
methods. As a result, we have found the importance of the pre-collision movement of vehicles
neighbouring the AV and inferred that it is related to time-to-collision (TTC). Based on these
factors, we derived scenarios and confirmed that rear-end collisions of neighbouring vehicles
with AVs usually occur when the AV is ahead in passing, lane-changing, and merging situations.
While most accident determinants and scenarios are expected to be similar to those between
human-driven vehicles (HVs), AVs are expected to reduce accident rates because 'AVs do not
cause accidents'.
Analysis of Lung Cancer Prediction at an Early Stage: A
Systematic Review
Shweta Agarwal and Chander Prabha
CSE, Chandigarh University, Mohali, Punjab, India
Abstract. Diseases such as cardiovascular diseases, cancers, chronic respiratory diseases,
diabetes, etc. are non-communicable diseases (NCD) that are the leading cause of death
worldwide. They kill approximately 41 million people every year, equivalent to 71% of annual
global deaths. After cardiovascular disease, cancer is the leading cause of death worldwide,
with lung cancer the most frequently diagnosed cancer. One of the major factors behind this
mortality is late diagnosis; hence, mortality can be reduced if cases are detected and treated
early. The disease can be controlled or cured completely if it is detected at an early stage,
though early detection remains a major challenge for medical science. Late diagnosis leads to
incurable advanced stages of the disease, possibly precluding successful treatment. This review
paper presents a detailed study of the different algorithms implemented for predicting lung
cancer at its earliest stage, and the scope for improvements using medical imaging procedures
like CT scan imaging, X-rays, datasets, etc.
Sentimental Analysis of Code-Mixed Hindi Language
Tweets
Ratnavel Rajalakshmi, Preethi Reddy, Shreya Khare and
Vaishali Ganganwar
School of Computer Science and Engineering, Vellore Institute of Technology,
Chennai
Abstract. Sentiment Analysis is the task of identifying and classifying sentiments expressed in
texts. Sentiment analysis of code-mixed data is a huge challenge for the NLP community since
it is very different from the traditional structures of standard languages. Code mixing refers to
additions of linguistic units like phrases or words of one language to another. The mixing of
languages takes place not only on sentence level but also at the word level. It is important to
perform sentiment analysis on such code-mixed data for better understanding of the text and for
further classification. We have implemented various basic machine learning algorithms, viz.
decision tree, linear SVC, logistic regression, multinomial Naive Bayes and SGD classifier, for
sentiment analysis on a code-mixed Hinglish dataset. To address the issues of phonetic typing
and multilingual words, we have proposed an ensemble-based classifier to identify the
sentiment expressed in code-mixed Hinglish tweets. Based on extensive experimental analysis,
we observed that XGBoost performed well in comparison to the other machine learning
algorithms. With the XGBoost ensemble learning algorithm, we obtained an F1 score of
83.10%, which is significantly better than the existing state-of-the-art works on the Hinglish
dataset.
A Comprehensive Survey on Machine Reading
Comprehension: Models, Benchmarked Datasets,
Evaluation Metrics and Trends
Nisha Varghese and M Punithavalli
Department of Computer Applications, Bharathiar University, Coimbatore-
641046, India
Abstract. Machine Reading Comprehension (MRC) is a core process in question answering
systems. Question answering systems are capable of automatically extracting, from relevant
resources, answers to questions posed by humans, and machine reading comprehension brings
attention to textual understanding alongside question answering. Recent advancements in deep
learning and Natural Language Processing pave the way to improved accuracy in question
answering systems, with major developments in neural machine reading comprehension,
transfer learning, deep learning-based information retrieval, and knowledge-based information
extraction. This research includes a comprehensive analysis of MRC tasks, benchmarked
datasets, classic models, performance evaluation metrics, and modern trends and techniques in
MRC.
Cognitive Computing and its Relationship to Computing
Methods and Advanced Computing from a Human-
Centric Functional Modeling Perspective
Andy E. Williams
Nobeah Foundation, Kenya
Abstract. Recent advances in modeling human cognition have resulted in what is suggested to
be the first model of Artificial General Intelligence (AGI) with the potential capacity for human-
like general problem-solving ability, as well as a model for a General Collective Intelligence or
GCI, which has been described as software that organizes a group into a single collective
intelligence with the potential for vastly greater general problem-solving ability than any
individual in the group. Both this model for GCI and this model for AGI require functional
modeling of concepts that is complete in terms of meaning being self-contained in the model
and not requiring interpretation based on information outside the model. The combination of a
model of cognition to define an interpretation of meaning, and this functional modeling
technique to represent information that way together results in fully self-contained definitions
of meaning that are suggested to be the first complete implementation of semantic modeling.
With this semantic modeling, and with these models for AGI and GCI, cognitive computing and
its capacity for general problem-solving ability become far better defined. However, semantic
representation of problems and of the details of solutions, as well as general problem-solving
ability in navigating those problems and solutions, is not required in all cases. This paper
attempts to explore the cases in which it is, and how the various computing methods and
advanced computing paradigms are best utilized in each case from the perspective of cognitive
computing.
A Novel Feature Descriptor: Color Texture Description
with Diagonal Local Binary Patterns Using New Distance
Metric for Image Retrieval
Vijaylakshmi Sajwan, Rakesh Ranjan
Himgiri Zee University, Dehradun, Uttarakhand, India
Abstract. The growth of digital data exponentially accelerates with each passing day. A storage
media database usually contains large amounts of images and information content, which must
be located and retrieved with relative ease. A novel distance metric and a diagonal local binary
pattern (DLBPC) are introduced in this work for high-accuracy image retrieval. The
device-independent L*a*b* color space is used in the description. The system's effectiveness
has been tested using the Wang-1K dataset. The findings show that the recommended method
is as effective as the other systems that have been studied.
OntoINT: A Framework for Ontology Integration based
on Entity Linking from Heterogeneous Knowledge
Sources
1Manoj N, 2Gerard Deepak, 2Santhanavijayan A
1Department of Computer Science and Engineering, SRM Institute of Science
and Technology, Ramapuram, Chennai, India
2 Department of Computer Science and Engineering, National Institute of
Technology, Tiruchirappalli, India
Abstract. In Artificial Intelligence, knowledge representation is a crucial field of work,
particularly in the development of question answering systems. An ontology is used to describe
a particular domain of shared knowledge for a query answering framework, and ontology
integration is necessary in order to resolve the issue of blended information. In the proposed
OntoINT framework, the ontologies are subjected to spectral clustering, with the
ANOVA-Jaccard similarity index under sunflower optimization as the similarity measure for
ontology matching. The performance of the proposed OntoINT is evaluated and compared with
baseline models and other variations of OntoINT, and our approach is found to be superior in
terms of performance. The Precision, Recall, Accuracy, F-measure and Percentage of New and
Yet Relevant Concepts Discovered for the OntoINT network are 91.97%, 93.02%, 92.89%,
92.45% and 84.78% respectively, for both dataset 1 and dataset 2.
Digital Building Blocks using Perceptrons in Neural
Networks
Shilpa Mehta
ECE, SoE, Presidency University Bangalore, India
Abstract. Most microprocessors and microcontrollers are based on Digital Electronics building
Blocks. Digital Electronics gives us a number of combinational and sequential circuits for
various arithmetic and logical operations. These include adders, subtractors, encoders,
decoders, multiplexers, demultiplexers and flip-flops. These further combine into higher
configurations to perform advanced operations. These operations are done using logic circuits
in digital electronics. In this paper, however, we explore the human reasoning approach using
artificial neural networks. We look into neural implementations of logic gates built with the
SLP (single-layer perceptron) and MLP (multi-layer perceptron). We also look into recurrent
neural architectures for making basic memory elements, viz. flip-flops, which use feedback and
may involve one or more neuron layers.
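As an illustrative sketch (not code from the paper), the gate implementations described above can be written out directly: a single-layer perceptron computes step(w·x + b), and hand-picked weights realise AND, OR and NOT, while XOR, being non-linearly-separable, needs a second layer of composed neurons.

```python
def step(z):
    """Heaviside activation: the neuron fires (1) when the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

def perceptron(weights, bias):
    """Return a gate function computed by a single neuron."""
    return lambda *x: step(sum(w * xi for w, xi in zip(weights, x)) + bias)

# SLP gates: weights and bias chosen by hand.
AND = perceptron([1, 1], -1.5)   # fires only when both inputs are 1
OR  = perceptron([1, 1], -0.5)   # fires when at least one input is 1
NOT = perceptron([-1], 0.5)      # inverts a single input

# MLP gate: XOR is not linearly separable, so two layers are composed.
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))
```

Truth tables for all four gates follow by enumerating the inputs 0 and 1.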
KnowCommerce: A Semantic Web Compliant
Knowledge-Driven Paradigm for Product
Recommendation in E-Commerce
1Krishnan N, 2Gerard Deepak, 2Santhanavijayan A
1Department of Computer Science and Engineering, SRM Institute of Science
and Technology, Ramapuram, Chennai, India
2 Department of Computer Science and Engineering, National Institute of
Technology, Tiruchirappalli, India
Abstract. Product recommendation is changing how e-commerce websites function and how
products are advertised, maximizing profit by showing targeted products to the target audience
by making use of user queries and user activity on the website. This paper proposes a
semantically driven technique for product recommendation using knowledge engineering
combined with deep learning and optimization algorithms. The dataset used for training the
recommendation system consists of users' click data and user queries, combined into an item
configuration set that is later used to create an e-commerce ontology. The ontology's semantic
similarity is compared with the neural network's output, and using this similarity score,
products are recommended to the users. The efficiency of the architecture is analysed in
comparison to the baseline approaches, and it is shown that the suggested method outperforms
them, with an F-measure and FDR of 93.08% and 93.72% respectively.
Ant System Algorithm with Output-Validation for Solving
0/1 Knapsack Problem
Ruchi Chauhan, Nirmala Sharma, and Harish Sharma
Rajasthan Technical University, Kota, Rajasthan, India
Abstract. In this research paper, a meta-heuristic search technique of ant system algorithm with
output-validation (ASA-OV) is introduced for solving 0/1 knapsack problem (0/1 KP). The
ASA-OV overcomes two drawbacks of the ant system algorithm (ASA), namely invalid output
(i.e., knapsack profit equal to zero) and abnormal termination of the algorithm, i.e., termination
due to an exception. In the ASA-OV algorithm, the output is validated by adding a filter in the
code that prevents invalid values from entering the solution vector, and normal termination of
the algorithm is assured by handling the run-time exceptions. Experiments are performed on
thirty small-scale 0/1 KP instances to analyse the ASA-OV algorithm, and the results show that
ASA-OV is more stable than ASA.
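A minimal sketch of the output-validation idea (with hypothetical names, not the authors' code): before a candidate 0/1 selection enters the solution vector, a filter rejects selections that violate the capacity constraint or carry no items, so a zero-profit "solution" can never be reported.

```python
def is_valid(selection, weights, capacity):
    """Reject selections that overload the knapsack or select no items at all."""
    total_weight = sum(w for take, w in zip(selection, weights) if take)
    return total_weight <= capacity and any(selection)

def profit(selection, profits):
    return sum(p for take, p in zip(selection, profits) if take)

def validated_best(candidates, weights, profits, capacity):
    """Keep only valid candidates, then return the most profitable one.

    Returning None instead of raising mirrors the normal-termination goal:
    an empty candidate pool no longer aborts the run with an exception.
    """
    valid = [s for s in candidates if is_valid(s, weights, capacity)]
    if not valid:
        return None
    return max(valid, key=lambda s: profit(s, profits))

# Toy 0/1 KP instance: three items, capacity 9.
weights = [3, 4, 5]
profits = [30, 50, 60]
candidates = [(1, 1, 1), (0, 1, 1), (1, 0, 1), (0, 0, 0)]
best = validated_best(candidates, weights, profits, capacity=9)
```

Here (1, 1, 1) is filtered out for exceeding the capacity and (0, 0, 0) for zero profit, leaving (0, 1, 1) as the best valid selection.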
Removal of Occlusion in Face Images Using PIX2PIX
Technique for Face Recognition
Sincy John and Ajit Danti
Christ (Deemed to be) University, Bangalore, India
Abstract. Occlusion of face images is a serious problem encountered by researchers working
in different areas. An occluded face creates a hindrance in extracting features and thereby
defeats face recognition systems. The level of complexity increases with changing gestures,
different poses, and expressions. Occlusion of the face is one of the seldom-touched areas. In
this paper, an attempt is made to recover face images from occlusion using deep learning
techniques. Pix2pix, a conditional generative adversarial network, is used for image recovery.
This method performs image-to-image translation, converting an occluded image into a
non-occluded image. The Webface-OCC dataset is used for experimentation, and the efficacy
of the proposed method is demonstrated.
Pandemic Simulation and Contact Tracing: Identifying
Superspreaders
Aishwarya Sampath, Bhargavi Kumaran, Vidyacharan
Prabhakaran, Cinu C Kiliroor
SCOPE, Vellore Institute of Technology, Chennai, India
Abstract. In the context of infectious human-borne diseases, super spreaders are people who
can transmit diseases to a larger number of people than the average person. Medically, it is assumed
that one in every five people can be a super spreader. Using graph theory and social network
analysis, we have identified these super spreaders in Chennai, given a synthetic dataset with the
location history of a particular individual. We have also predicted the spread of the disease.
Network graphs have been used to visualise the spread. This aids visualization of the spread of
the pandemic and reduces the abstraction that accompanies statistical data.
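A minimal sketch (with synthetic records, not the authors' dataset) of the graph-based idea: build a contact graph from co-location history, then flag the highest-degree individuals as candidate super spreaders.

```python
from collections import defaultdict
from itertools import combinations

# (person, location) visit records -- hypothetical contact-tracing data.
visits = [
    ("asha", "market"), ("bala", "market"), ("chitra", "market"),
    ("asha", "temple"), ("dev", "temple"),
    ("asha", "bus"), ("esha", "bus"),
]

# Two people share an edge when they visited the same location.
by_location = defaultdict(set)
for person, place in visits:
    by_location[place].add(person)

contacts = defaultdict(set)
for people in by_location.values():
    for a, b in combinations(sorted(people), 2):
        contacts[a].add(b)
        contacts[b].add(a)

def superspreaders(contacts, top=1):
    """Rank people by degree centrality (number of distinct contacts)."""
    return sorted(contacts, key=lambda p: len(contacts[p]), reverse=True)[:top]
```

On this toy data, "asha" links all three locations and so tops the ranking; network analysis libraries offer richer centrality measures, but degree already captures the intuition.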
Age, Gender and Emotion Estimation Using Deep
Learning
Mala Saraswat, Praveen Gupta, Ravi Prakash Yadav,
Rahul Yadav, Sahil Sonkar
ABES Engineering College, Ghaziabad, UP, India
Abstract. Age, gender and emotion estimation plays a very important role in intelligent
applications such as human-computer interaction, access control, healthcare, marketing
intelligence, etc. A lot of research has been conducted on computer modelling of people's age,
gender and emotion; however, it is still a long way behind the human vision system. This paper
proposes and builds an automatic age, gender and emotion estimator for human faces. Such
estimation plays a significant part in computer vision and pattern recognition, and non-verbal
communication cues like facial appearance, eye variation and gestures are utilized in numerous
applications of human-computer interaction. This paper proposes a convolutional neural
network (CNN) based architecture for age, gender and emotion classification. The model is
trained to categorize input images into eight groups for age, two groups for gender and six
groups for emotion. Our approach shows better accuracy in age, gender and emotion
classification compared with other classifier-based methods. In computer modelling, the plan
is to predict human emotions using a deep CNN and observe the changes that occur in
emotional intensity. For extracting image features, a pre-processing algorithm known as the
Viola-Jones algorithm is used. Experiments conducted on the FER13 dataset using our
proposed approach provide an accuracy of 81% for emotion estimation, 79% for age and 75%
for gender.
Assessment of Attribution in Cyber Deterrence: A Fuzzy
Entropy Approach
Nisha T N and Prasenjit Sen
Symbiosis Centre for Information Technology, Pune, India
Abstract. Against the threat of cyber warfare and cyber-attacks involving cyber kinetics by
state and non-state elements, various kinds of defensive measures are traditionally in vogue.
However, cyber deterrence has entered the domain of cyber defense as a silent agent in the
repudiation of cyber-attacks. The concept of deterrence has evolved from ancient times, became
fully integrated with state policy in the era of nuclear deterrence and MAD, and is currently
involved in cyber warfare. In the operation of cyber deterrence against a known or invisible
attacker, one of the issues faced is that of attribution: not only detecting the cyber-attack but
also ascribing its motive and methods. Any misjudgement of attribution may lead to a
consequential and irreversible retaliation. For assessment of proper attribution, this paper
proposes a mathematical model and attempts to work out an Intuitionistic Fuzzy Entropy
approach.
Predictive Maintenance of Bearing Machinery using
MATLAB
Karan Gulati, Shubham Tiwari, Keshav Basandrai,
Pooja Kamat
Symbiosis Institute of Technology (SIT), Symbiosis International (Deemed
University), Pune, India
Abstract. In recent years, health monitoring of machines has become increasingly important in
the manufacturing and maintenance industry. Unexpected failures of machine equipment can
have disastrous effects, such as production interruptions and expensive equipment repair.
Rolling bearings, among the most fragile elements of rotating machinery, therefore demand
particular attention. Failure in machines is a natural phenomenon, so a strong maintenance
strategy has to be put in place so that interruptions and downtimes can be handled in advance. Predictive
maintenance is a technique that tracks equipment performance during regular service using
condition monitoring techniques in order to detect and fix possible faults before they cause
failure. Predictive maintenance has had a major impact on the manufacturing sector as it lets
you find sufficient time to plan ahead of the machine failure. This helps in reducing the time to
re-initiate the machine after it has been repaired. It also helps in pinpointing problems in our
machines and giving information on the parts which need to be repaired before they reach their
useful life. Therefore, using a predictive maintenance approach we not only reduce machine
downtime but also help in reducing repair cost. As a result, this method is adaptable and can be
used in a variety of situations, and is useful in the diagnosis of a large number of machines.
Signal processing and vibration analysis methods implemented in MATLAB can be effective
for understanding real-time machine status. By extracting time-domain features from machine
data using Spectral Kurtosis and Envelope Spectrum techniques, predictive machine
maintenance can be achieved, reducing the unplanned downtime and maintenance expenses
incurred when industrial machinery breaks down.
Application of Data Mining and Temporal Data Mining
Techniques: A Case Study of Medicine Classification
Shashi Bhushan
Gaya College Gaya, Bihar, India
Abstract. This research paper classifies drugs based on their properties using the K-means
clustering method, a very powerful tool for data mining and temporal data mining. In this study,
we have classified the drugs based on their properties with the help of K-means clustering,
with the weight of the drugs and their pH values considered as the attributes. Euclidean distance
has been used to measure the similarity between two drugs. Hence, this study is very useful for
classifying medicines according to their properties, and the proposed method is an automated
method for doing so.
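A minimal sketch of the approach (with synthetic values, not the paper's data): cluster medicines by (weight, pH) attributes with K-means, using Euclidean distance as the similarity measure between two drugs.

```python
import math
import random

def euclidean(a, b):
    """Euclidean distance between two attribute vectors (Python 3.8+)."""
    return math.dist(a, b)

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign to nearest centroid, recompute means."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: euclidean(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*members)) if members else centroids[i]
            for i, members in enumerate(clusters)
        ]
    return centroids, clusters

# (weight in mg, pH) -- two loose groups: light acidic drugs, heavy basic drugs.
drugs = [(100, 3.1), (110, 3.4), (105, 2.9), (480, 8.2), (500, 8.0), (495, 8.5)]
centroids, clusters = kmeans(drugs, k=2)
```

On this toy data the two recovered centroids sit near pH 3.1 and pH 8.2, matching the acidic and basic groups.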
Fuzzy Keyword Search over Encrypted Data in Cloud
Computing: An Extensive Analysis
Manya Smriti, Sameeksha Daruka, Khyati Gupta, Siva
Rama Krishnan S
School of Information Technology and Engineering, Vellore Institute of
Technology, Vellore, India
Abstract. With the ever-increasing rate of growth and flexibility of cloud computing daily, more
sensitive and insensitive data are being stored in the cloud. For the protection of sensitive
identity information, information must be encrypted before distribution. Traditional search
encryption schemes offer a variety of search methods for encrypted data but only support exact
keyword searches. Such searches are inappropriate for cloud storage systems because they do
not allow users to make spelling or similar formatting errors, which greatly reduces usability.
We analysed fuzzy keyword search, which tolerates typos when the user searches over files
kept in the cloud in encrypted form. We plan to use base64 for
encryption and decryption of files to be uploaded and AES encryption for N-gram keywords to
be searched. A keyword or keyword phrase will be associated with each file; its N-grams will
be encoded using AES. N-grams keywords generated using the user’s search input would be
used to retrieve the file from Cloud. File which has the highest relatable N-grams keywords
when compared to N-grams keywords generated by the user’s search input will be retrieved
using the concept of Jaccard Index.
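A minimal sketch (with a hypothetical index, not the paper's system) of the matching step: compare a possibly misspelled query keyword against stored keywords by the Jaccard index of their bigram sets. In the full scheme the n-grams would be AES-encrypted before upload; here they are left in plaintext for clarity.

```python
def ngrams(word, n=2):
    """Set of character n-grams of a keyword (bigrams by default)."""
    word = word.lower()
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def jaccard(a, b):
    """Jaccard index: |intersection| / |union| of two n-gram sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def best_match(query, keywords):
    """Return the stored keyword whose n-gram set is most similar to the query."""
    q = ngrams(query)
    return max(keywords, key=lambda k: jaccard(q, ngrams(k)))

index = ["invoice", "budget", "payroll"]
match = best_match("invocie", index)   # transposed letters are tolerated
```

The typo "invocie" still shares the bigrams "in", "nv" and "vo" with "invoice", so the intended file keyword is retrieved despite the spelling error.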
A Deep Learning Approach for Plagiarism Detection
System using BERT
Anjali Bohra and N C Barwar
MBM Engineering College, Jai Narayan Vyas University, Jodhpur, India
Abstract. Natural language processing has changed with the advent of deep learning
algorithms. Machine learning algorithms use numerical data for processing; therefore,
categorical data are converted into equivalent vectors for processing by machines. Word
embeddings are real-valued vector representations of words that store semantic information.
These embeddings are a significant tool of natural language processing, used in various tasks
like named-entity recognition and parsing. Authorship attribution is a major problem in natural
language processing. A framework for identification of authorship attribution has two layers
of processing: attribution (feature selection) and verification (classification). A solution to the
problem is to obtain a similarity score over the content. Similarity between contents is identified
by a plagiarism detection system that computes a plagiarism detection score for the given
documents. This paper proposes a plagiarism detection algorithm based on an explicit semantic
detection algorithm. The system obtains contextualized word embeddings using a pretrained
BERT model, fine-tuned on the STS-benchmark dataset. The proposed algorithm compares the
word embeddings of the suspicious content with a reference collection using a sentence
similarity function. The experiments have been performed using Python and the Keras deep
learning framework. The research has shown that the results obtained through experimentation
improve the efficiency of the proposed system compared to existing systems.
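The comparison step reduces to a sentence-similarity function over embeddings; cosine similarity is the usual choice. A minimal sketch with toy vectors standing in for BERT outputs (real embeddings would come from the fine-tuned pretrained model):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def flag_plagiarism(suspect_vec, reference_vecs, threshold=0.9):
    """Flag the suspicious sentence when any reference sentence is too similar."""
    scores = [cosine(suspect_vec, r) for r in reference_vecs]
    return max(scores) >= threshold, max(scores)

# Toy 4-dimensional "embeddings" standing in for BERT sentence vectors.
suspect = [0.9, 0.1, 0.0, 0.4]
references = [[0.2, 0.8, 0.5, 0.1],      # unrelated sentence
              [0.88, 0.12, 0.02, 0.41]]  # near-duplicate sentence
flagged, score = flag_plagiarism(suspect, references)
```

The 0.9 threshold is an illustrative assumption; in practice it would be tuned on the fine-tuning data.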
Enhanced Security Layer for Hardening Image
Steganography
Pregada Akshita and Amritha P. P.
TIFAC-CORE in Cyber Security, Amrita School of Engineering, Coimbatore,
Amrita Vishwa Vidyapeetham, India
Abstract. Steganography is a technique for securing data by hiding data inside data, or data
behind data. Modern steganography uses a variety of formats or types, including text, picture,
audio, video, and protocol, but digital images are the most extensively utilised owing to their
prevalence on the internet. For example, the binary code of secret information can be hidden in
an image's binary code, causing the image to be only slightly altered. Some approaches enable
invisibility of information, while others carry a large secret message that must be kept hidden.
This paper proposes a new method of image steganography based on LSB substitution, with
Base64 encoding added to make it more secure. We build an authentication application, which
adds one more layer of security to the image steganography. This legitimate email application
can be used for communication between the sender and the receiver.
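A minimal sketch of the combined scheme (not the authors' application): Base64-encode the secret first, then hide its bits in the least significant bit of each cover byte. A real system would operate on image pixel channels; a byte string stands in for them here.

```python
import base64

def embed(cover, secret):
    """Hide the Base64-encoded secret in the LSBs of the cover bytes."""
    bits = "".join(f"{byte:08b}" for byte in base64.b64encode(secret))
    assert len(bits) <= len(cover), "cover too small for this secret"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | int(bit)  # overwrite only the LSB
    return bytes(stego), len(bits)

def extract(stego, nbits):
    """Read the LSBs back, regroup into bytes, and Base64-decode."""
    bits = "".join(str(b & 1) for b in stego[:nbits])
    encoded = bytes(int(bits[i:i + 8], 2) for i in range(0, nbits, 8))
    return base64.b64decode(encoded)

cover = bytes(range(256)) * 4            # stand-in for pixel data
stego, nbits = embed(cover, b"meet at 9")
recovered = extract(stego, nbits)
```

Since only the LSB of each byte changes, every cover byte is altered by at most 1, which is what makes the embedding visually imperceptible in an image.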
Machine Learning Techniques on Disease Detection and
Prediction Using the Hepatic and Lipid Profile Panel
Data: A Decade Review
Ifra Altaf1, Muheet Ahmed Butt1, Majid Zaman2
1Department of Computer Sciences, University of Kashmir, Srinagar, J&K, India
2Directorate of IT&SS, University of Kashmir, Srinagar, J&K, India
Abstract. Owing to the high availability of data, machine learning has become an important
technique for emulating human processes in the medical field. Liver function test data and
lipid profile panel data comprise many parameters whose values provide specific
evidence for the existence of the disease. The objective of this research paper is to provide a
chronological, summarized review of literature with comparative results for the various machine
learning algorithms that have been used to detect and predict the diseases from at least one of
the attributes from liver function test data or lipid profile panel data. This review is intended to
highlight the significance of liver function and lipid level monitoring in patients with diabetes
mellitus. The association between LFT data and LPP data with diabetes is presented based on
the review of past findings. Data is definitely a challenge and region-specific medical data can
be helpful in terms of the aspects that they can reveal. This review paper can help to choose the
attributes required to collect the data and form an appropriate dataset.
Matrix Games with Linguistic Distribution Assessment
Payoffs
Parul Chauhan and Anjana Gupta
Delhi Technological University, Shahbad Daulatpur, Main Bawana Road,
Rohini, Delhi 110042, India
Abstract. In this paper, we propose a new concept of two-person constant sum matrix games
having payoffs in the form of linguistic distribution assessments. Such types of payoffs allow
the players to express their opinion in terms of a whole fuzzy set and thus, are not limited to just
one single term. To establish an equilibrium solution of these kinds of matrix games, we first
define the maxmin and minmax strategies, to be applied when the players play pure strategies.
For mixed strategies, we develop a Linguistic Distribution Linear Programming (LDLP)
approach to find the players' mixed strategies. The method is depicted as a generalization of that
traditionally used in the solution of a classical game. The applicability of defined LDLP is
illustrated with the help of an example.
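As a sketch of the pure-strategy step with ordinary scalar payoffs (the paper's payoffs are linguistic distribution assessments, whose comparison needs the ranking defined there): Player I picks the row maximising the row minimum, Player II picks the column minimising the column maximum, and equality of the two security levels signals a saddle point.

```python
def maxmin(payoff):
    """Player I's security level: the best worst-case row payoff."""
    return max(min(row) for row in payoff)

def minmax(payoff):
    """Player II's security level: the least best-case column payoff."""
    return min(max(col) for col in zip(*payoff))

# A 3x3 game whose maxmin and minmax coincide, so a saddle point exists.
A = [[3, 5, 1],
     [4, 6, 4],
     [2, 5, 3]]
has_saddle = maxmin(A) == minmax(A)   # both equal the game value 4
```

When the two values differ, no pure-strategy equilibrium exists and the players must resort to mixed strategies, which is where the LDLP formulation takes over.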
Performance Analysis of Machine Learning Algorithms
for Website Anti-phishing
Mohan Krishna Varma N1, Padmanabha Reddy YCA2,*,
Rajesh Kumar Reddy C1
1Department of CSE, Madanapalle Institute of Technology & Science,
Madanapalle, AP, India
2Department of CSE, B V Raju Institute of Technology, Narsapur, Telangana,
India
Abstract. Phishing has become the main hazard to most web users, and website phishing causes
people to lose millions of dollars every year. In today's world, most files are placed on the web,
and the security of these files is not guaranteed; in the same way, phishing makes it easier to
steal data. One simple approach is not sufficient to solve this problem. This paper provides an
overview of different anti-phishing techniques that use a machine learning approach to tackle
website phishing. Machine learning is the technique of learning from experience, and it has
different paradigms such as supervised, unsupervised, semi-supervised and reinforcement
learning. This paper follows the supervised learning approach, which is used in classification
and regression, to provide a solution to the website phishing problem. A comparison of the
accuracy levels of these anti-phishing techniques is discussed in this paper.
Analytical Analysis of Two Ware-House Inventory Model
Using Particle Swarm
Sunil Kumar and Rajendra Prasad Mahapatra
SRM IST, NCR Campus Modinagar, India
Abstract. An inventory model for deteriorating items with a two-level storage system and
time-dependent demand with partially backlogged shortages is developed in this research. Stock
is transferred from the rented warehouse (RW) to the own warehouse (OW) in bulk-release
fashion, and the cost of transportation is considered negligible. The deterioration rates in the
two warehouses are constant but different, owing to the different preservation strategies. Up to
a particular time the holding cost is considered constant, and after that it increases. Particle
swarm optimization with a varying population size is utilized to solve the model; in the given
PSO, a fraction of the better offspring is incorporated into the parent population for the next
generation, with the parent set and the offspring subset being of equal size. A numerical
example is introduced to validate the presented model, and sensitivity analysis is performed
separately for every parameter.
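A minimal sketch of canonical PSO on a toy cost function (the paper's variant additionally mixes a fraction of the better offspring into the parent population; that elitist step is omitted here, and the quadratic stands in for the actual inventory cost):

```python
import random

def pso(cost, dim, bounds, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimise `cost` over a box by canonical particle swarm optimization."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    gbest = min(pbest, key=cost)[:]              # swarm-wide best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy stand-in for the total inventory cost: minimum at x = (3, 7).
total_cost = lambda x: (x[0] - 3) ** 2 + (x[1] - 7) ** 2
best = pso(total_cost, dim=2, bounds=(0.0, 10.0))
```

The swarm converges near the known optimum of the toy function; for the inventory model, `total_cost` would be replaced by the two-warehouse cost expression.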
Towards an Enhanced Framework to Facilitate Data
Security in Cloud Computing
Sarah Monyaki1, John Andrew van der Poll2 and Elisha
Oketch Ochola1
1School of Computing, College of Science, Engineering and Technology,
University of South Africa (Unisa), SA
2Graduate School of Business Leadership (SBL), University of South Africa
(Unisa), SA
Abstract. Cloud- and associated edge computing are vital technologies for online sharing of
computing resources with respect to processing and storage. The SaaS provisioning of services
and applications on a pay-per-use basis removes the responsibility of managing resources from
organisations which in turn translates to cost savings by reducing capital expenditure for such
organisation. Naturally any online and distributed environment incurs security challenges, and
while ordinary users might not be concerned by the unknown whereabouts of their data in the
cloud, the opposite may hold for organisations or corporates. There are numerous interventions
that attempt to address the challenge of security on the cloud through various frameworks, yet
cloud security remains a challenge since the emergence of cloud technology. This research
highlights and critically analyses the structure of and mechanisms as-sociated with three
prominent cloud security frameworks in the literature to evaluate how each of them addresses
the challenges of cloud security. On the strength of a set of qualitative propositions defined from
the analyses, we develop a comprehensive cloud security framework that encompasses some
components of the studied frameworks, aimed at improving on data and information security in
the cloud.
Political Optimizer Based Optimal Integration of Soft
Open Points and Renewable Sources for Improving
Resilience in Radial Distribution System
Sreenivasulu Reddy D and Varaprasad Janamala
Dept. of Electrical and Electronics Engineering, School of Engineering and
Technology, Christ (Deemed to be University), Bangalore – 560074, KA, India
Abstract. In this paper, a new and simple nature-inspired meta-heuristic search algorithm, the
political optimizer (PO), is proposed for solving the optimal location and sizing of a solar
photovoltaic (SPV) system. An objective function for distribution loss minimization is
formulated and solved using the proposed PO. In the first stage, the computational efficiency of
PO in solving optimal allocation of the SPV system in a radial distribution system (RDS) is
compared with various other similar works, highlighting its superiority in terms of the global
solution. In the second stage, the interoperability requirement of the SPV system via soft open
points (SOPs) among multiple laterals is solved considering the radiality constraint. Various case
studies on the standard IEEE 69-bus system have shown the effectiveness of the proposed concept
of an interoperable-photovoltaic (I-PV) system in improving resilience and performance in terms
of reduced losses and an improved voltage profile.
Kinematics and Control of a 3 DOF Industrial
Manipulator Robot
M.I Claudia Reyes Rivas1,2, Dra. María Brox Jiménez1,
Andrés Gersnoviez Milla1, Héctor René Vega Carrillo2,
M.C Víctor Martín Hernández Dávila2, Francisco Eneldo
López Monteagudo2, Manuel Agustín Ortiz López1
1Universidad de Córdoba, España
2Universidad Autónoma de Zacatecas, México
Abstract. This article presents the analysis of the kinematics and dynamics of a manipulator
robot with three rotational degrees of freedom. The main objective is to obtain the direct and
inverse kinematic models of the robot, as well as the equations that describe the two joint
torques, τ1 and τ2, through the dynamic model and the development of the Lagrange equations.
For this reason, this document shows the mathematical analysis of both models. Once the
equations representing the robot have been described, the PD+ controller calculations are
described, as well as the results obtained by simulating the manipulator equations, using the
VisSim 6.0 software, with which the kinematic models were programmed. To observe the
importance of this analysis, a predefined linear trajectory was designed.
Enhanced Energy Efficiency in Wireless Sensor Networks
Neetu Mehta and Arvind Kumar
Department of CSE, SRM University, Delhi-NCR, Sonepat, Haryana-131029
Abstract. A wireless sensor network incorporates a range of sensor motes or nodes that normally
run on battery power with limited energy capacity; battery replacement is also a difficult job
because of the size of these networks. Energy efficiency is thus one of the main problems, and
the design of energy-efficient protocols is essential for lifetime extension. In this paper, we
discuss communication systems that may have a major effect on the total energy dissipation of
WSNs. Based on the observation that traditional mechanisms for route discovery, static
clustering, multi-hop routing, and minimum transmission are not ideal for heterogeneous sensor
network operations, we formulated CLENCH (Customized Low-Energy Network Clustering
Hierarchy), which uses random rotation of local cluster sink stations (cluster heads) for dynamic
distribution of energy between the sensor nodes within the network. Simulation showed that
CLENCH may reduce power consumption by a factor of up to eight compared to traditional
routing methods. CLENCH may also uniformly distribute energy among the sensor nodes, which
almost doubles the usable network lifetime for the model designed.
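Random rotation of cluster heads is the idea behind the classic LEACH election rule, which CLENCH builds on. As an illustration (not the authors' exact protocol; `p` and the node count below are arbitrary), a LEACH-style per-round election threshold can be sketched as:

```python
import random

def leach_threshold(p, r):
    """LEACH-style election threshold T(r) for round r, where p is the desired
    fraction of cluster heads; the threshold grows as a rotation epoch progresses
    so every node eventually serves as a head."""
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_heads(node_ids, p, r, seed=0):
    """Each node independently elects itself cluster head with probability T(r)."""
    rng = random.Random(seed)
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]
```

Because the threshold rises toward 1.0 over an epoch of `round(1/p)` rounds, the energy-expensive head role rotates across the whole network instead of draining a fixed subset of nodes.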
Social Structure to Artificial Implementation: Honeybees
Depth and Breadth of Artificial Bee Colony Optimization
Amit Singh
Department of Informatics, School of Computer Science, University of
Petroleum and Energy Studies, Dehradun, Uttarakhand-248007
Abstract. Swarms are individuals, known as agents, of a colony system that collectively perform
computationally complex real-world problems in a very efficient way. The collaborative effort
of these agents achieves a common goal in a distributed and self-organized manner. In nature,
bees in beehives, ants in a colony system, and birds in a flocking system are some examples from
a long list of swarms. Such an inspirational and efficient course of action in complex real-world
problems attracted researchers to study such optimization solutions. Bonabeau et al. transported
this natural swarm intelligence into the artificial domain. This paper presents an extensive review
of state-of-the-art Artificial Bee Colony optimization, inspired by the natural beehive system, in
various application domains. In addition to its performance in complex real-world engineering
problems, the paper also highlights the computational feasibility of its candidacy in related
domain areas. Overall, the application domains are categorized into various specialized domains
of computer science and robotics. Finally, the paper concludes with possible future research
trends of bee colony optimization.
Lifetime Aware Secure Data Aggregation Through
Integrated Incentive-based Mechanism in IoT based WSN
Environment
Nandini S1 and Kempanna M2
1Department of Computer Science & Engineering, Research Centre-Bangalore
Institute of Technology, Visvesvaraya Technological University, Belagavi,
Karnataka-590018, INDIA
2Department of AI&ML, BIT, Bangalore-560004, INDIA
Abstract. The Internet of Things has attracted considerable attention from researchers due to its
wide applicability in daily human life, such as healthcare, agriculture, and so on. A WSN operates
in a resource-restricted environment and also generates a huge amount of data, which in turn
causes data redundancy. Although data redundancy is efficiently addressed through various data
aggregation mechanisms, security remains a primary concern for adoption in real-time
environments. The Integrated Incentive-Based Mechanism (IIBM) comprises three parts: first,
this research work designs optimal and secure data aggregation; second, it formulates the correct
identification of deceptive data packets; and third, it discards deceptive nodes through a
conditional approach. The Integrated Incentive Mechanism is evaluated considering different
security parameters, such as identification of malicious nodes and misidentified malicious or
dishonest nodes; further, a comparison is carried out with an existing model to prove the model's
efficiency. Furthermore, other parameters such as energy utilization and the number of
functioning nodes are considered for the optimality evaluation of the model. Performance
evaluation shows an enhancement of nearly 7%, 14% and 15% for the three distinctive
proportions of deceptive nodes, i.e., 5%, 10% and 15%, respectively.
A Multi-attribute Decision Approach in Triangular Fuzzy
Environment under TOPSIS Method for All-rounder
Cricket Player Selection
H. D. Arora, Riju Chaudhary and Anjali Naithani
Department of Mathematics, Amity Institute of Applied Sciences, Amity
University Uttar Pradesh, Noida, India
Abstract. Of all the sports played across the globe, cricket is one of the most popular and
entertaining. The 20-over game, named T20 cricket, has recently been gaining popularity. The
Indian Premier League (IPL) has been critical in raising the profile of Twenty20 cricket. The goal
of this research is to analyse performance in order to select the best all-rounder cricket player
using a triangular fuzzy set approach through the TOPSIS method. To cope with imprecise and
ambiguous data, the suggested work uses five alternatives and four criteria in a multi-criteria
procedure within a fuzzy environment. The results suggest that the proposed model provides a
more realistic way to select the best all-rounder cricket player among others.
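The crisp core of the TOPSIS ranking used here can be sketched as follows (shown without the triangular fuzzy extension; the decision matrix, weights and benefit flags in the test of use are invented numbers, not the paper's data):

```python
def topsis(matrix, weights, benefit):
    """Classical TOPSIS: rank alternatives (rows) over criteria (columns).
    benefit[j] is True for criteria to maximize, False for cost criteria.
    Assumes no all-zero criterion column."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply the criterion weights
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # positive and negative ideal solutions per criterion
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = sum((x - b) ** 2 for x, b in zip(row, best)) ** 0.5
        d_worst = sum((x - w) ** 2 for x, w in zip(row, worst)) ** 0.5
        scores.append(d_worst / (d_best + d_worst))  # closeness coefficient in [0, 1]
    return scores
```

The alternative with the highest closeness coefficient (nearest the positive ideal, farthest from the negative ideal) is selected; the fuzzy variant replaces the crisp entries with triangular fuzzy numbers and a fuzzy distance measure.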
Multi-Temporal Analysis of LST-NDBI Relationship with
Respect to Land Use-Land Cover Change for Jaipur City,
India
Arpana Chaudhary1, Chetna Soni1, Uma Sharma2,
Nisheeth Joshi2, Chilka Sharma1
1School of Earth Science, Banasthali Vidyapith, Banasthali, 304022, India
2Department of Computer Science, Banasthali Vidyapith, Banasthali, 304022, India
Abstract. There have been multiple studies examining the Land Surface Temperature-
Normalized Difference Built-up Index (LST-NDBI) relationship, especially in urban areas;
however, many of these studies have lower accuracy when comparing LST and NDBI due to the
lower temporal availability of higher-resolution images, particularly those used for LST
derivation. The main reason behind this is the strong heterogeneity of Land Use Land Cover
(LULC) surfaces, due to which LST changes drastically in space as well as in time; hence it
requires measurements with thorough spatial and temporal sampling. In this study, a comparison
of the multi-temporal LST-NDBI relationship is made, and a further comparison is shown using
LULC. The results are in agreement with previous studies, which
show a strong and positive correlation across the years (r = 0.69, r = 0.64 and r = 0.59 for 2001,
2011 and 2020 respectively). In addition, the LST trend shows the reduction in daytime LST
over the years in the summer season which also reaffirms the findings of those very few studies
conducted in Semi-Arid regions. These results can help understand the effects of increasing
built-up areas and the interclass LULC change on LST in urban settings. However, multi-
seasonal comparisons with higher-resolution LST maps are recommended to provide a better
idea of the LST-NDBI relationship.
Analysis and Performance of JADE on Interoperability
Issues Between Two Platform Languages
Jaspreet Chawla1 and Anil Kr. Ahlawat2
1Department of Computer Science & Engineering, JSS Academy of Technical Education, Noida
(Affiliated to AKTU, Lucknow)
2Department of Computer Science & Engineering, KIET Group of Institutions, Ghaziabad
(Affiliated to AKTU, Lucknow)
Abstract. There are a large number of toolkits and frameworks for multi-agent systems available
on the market. These toolkits and frameworks help researchers build an architecture that
addresses interoperability issues of web services across different software languages. After
studying numerous multi-agent tools, we observed that JADE is a suitable multi-agent software
tool that acts as a bridge between inter-platform languages and works efficiently on a distributed
network. This paper shows the results and analysis of different interoperability issues of web
services between the two languages, Java and .NET, and demonstrates the quality and maturity
of JADE. The analysis focuses on interoperability issues such as precision issues of data types,
arrays with null values, unsigned numbers, complex data structures, and date-time formats
between Java and .NET, and shows how JADE acts as middleware, builds the agent handler, and
resolves the web service interoperability issues effectively.
Interval-valued Fermatean Fuzzy TOPSIS Method and its
Application to Sustainable Development Program
Utpal Mandal and Mijanur Rahaman Seikh
Department of Mathematics, Kazi Nazrul University, Asansol-713 340, India
Abstract. Interval-valued Fermatean fuzzy set is a generalization of Fermatean fuzzy set and
interval-valued fuzzy set. In this paper, we construct a multi-attribute decision-making
(MADM) approach under the interval-valued Fermatean fuzzy (IVFF) environment. At first, we
define some operational rules, score function, and accuracy function for the IVFF information.
Then, we propose a distance measure to calculate the distance between IVFF numbers. Later,
we extend the technique for order preference by similarity to the ideal solution (TOPSIS)
method to solve MADM problems under an IVFF environment. Also, we define a Fermatean
fuzzy entropy method to obtain attribute weights. Finally, to justify the accuracy and flexibility
of our proposed method, we solve a numerical problem of choosing the most suitable option for
a sustainable development program in India.
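For context, the scalar (non-interval) Fermatean fuzzy score and accuracy functions of Senapati and Yager rank a membership/non-membership pair (μ, ν) with μ³ + ν³ ≤ 1; the alternatives in the test of use are invented. A minimal sketch:

```python
def fermatean_score(mu, nu):
    """Score of a Fermatean fuzzy number (mu, nu), valid when mu**3 + nu**3 <= 1.
    Larger scores indicate better alternatives."""
    assert 0 <= mu <= 1 and 0 <= nu <= 1 and mu ** 3 + nu ** 3 <= 1
    return mu ** 3 - nu ** 3

def fermatean_accuracy(mu, nu):
    """Accuracy function mu**3 + nu**3, used to break ties between equal scores."""
    return mu ** 3 + nu ** 3

def rank(alternatives):
    """Rank alternatives given as {name: (mu, nu)}, best first."""
    return sorted(alternatives,
                  key=lambda a: (fermatean_score(*alternatives[a]),
                                 fermatean_accuracy(*alternatives[a])),
                  reverse=True)
```

The interval-valued extension in the paper replaces each μ and ν with an interval and defines corresponding interval score, accuracy and distance measures; the cubic exponent is what distinguishes Fermatean sets from the quadratic Pythagorean case.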
A TAM Based Study on the ICT Usage by the
Academicians in Higher Educational Institutions of Delhi
NCR
Palak Gupta and Shilpi Yadav
Jagannath International Management School, New Delhi, India
Abstract. Recent years have seen a massive upshift in Information and Communication
Technology (ICT) usage, where almost every sector has started using ICT to automate its
business processes. This has brought a shift from manual processes to semi- or fully automated
business operations, leading to improved efficiency, productivity, cost savings and timely results.
Even the education sector has seen a transformation from offline to online or hybrid models. ICT
has brought huge disruption in the methodology and ways of delivering education. New online
tools and the support of cloud platforms, Artificial Intelligence (AI) and Machine Learning
(ML), virtual interactions and flipped classrooms have revolutionized higher education. In this
paper, a primary survey has been conducted on ICT adoption and usage by academicians in their
teaching methodology, especially in higher educational institutions of Delhi NCR, using a
research framework based on the Technology Acceptance Model (TAM) to determine the
predictors of ICT adoption by academicians. Empirical analysis through Python, Jamovi and
IBM SPSS Statistics has been carried out to analyse how successful ICT adoption has been for
academicians in fulfilling teaching pedagogy and bringing better awareness and satisfaction
among students towards curriculum and industry practices.
An Empirical Study of Signal Transformation Techniques
on Epileptic Seizures Using EEG Data
Umme Salma M and Najmusseher
Department of Computer Science, CHRIST (Deemed to be University), Hosur
Road, Bangalore, India
Abstract. Signal processing is a mathematical approach to manipulating signals for varied
applications. A mathematical relation that changes a signal from one form to another is called a
transformation technique in signal processing. Digital processing of Electroencephalography
(EEG) signals plays a significant role in multiple applications, e.g., seizure detection, prediction,
and classification. In these applications, transformation techniques play an essential role. Signal
transformation techniques are used to improve transmission, storage efficiency, and subjective
quality, and also to emphasize or discover components of interest in a measured EEG signal.
The transformed signals result in better classification. This article provides a study of some of
the important techniques used for the transformation of EEG data. In this work, we have studied
six signal transformation techniques, namely Linear Regression, Logistic Regression, Discrete
Wavelet Transform, Wavelet Transform, Fast Fourier Transform, and Principal Component
Analysis with eigenvectors, to examine their impact on the classification of epileptic seizures.
Linear Regression, Logistic Regression and the Discrete Wavelet Transform provide a high
accuracy of 100%, and the Wavelet Transform produced an accuracy of 96.35%. The proposed
work is an empirical study whose main aim is to discuss some typical EEG signal transformation
methods, examine their performance for epileptic seizure prediction, and eventually recommend
the most acceptable technique for signal transformation based on that performance. This work
also highlights the advantages and disadvantages of all six transformation techniques, providing
a precise comparative analysis in conjunction with the accuracy.
An Investigation on Impact of Gender in Image based
Kinship Verification
Vijay Prakash Sharma1, Sunil Kumar2
1IT, SCIT, Manipal University Jaipur
2CCE, SCIT, Manipal University Jaipur
Abstract. The task of kinship verification is to establish a blood relationship between two
persons. Kinship verification using facial images provides an affordable solution as compared
to biological methods. KV has many applications like image annotation, child adoption, family
tree creation, photo album management, etc. However, the facial image verification process is
challenging because images do not have fixed parameters like resolution, background, age,
gender, etc. Many parameters affect the accuracy of these methods. One such parameter is
the gender difference in the kin relation. We have investigated the impact of the gender
difference in the kin relation on popular methods available in the literature. The investigation
suggests that gender difference affects kin detection accuracy.
Classification of Covid-19 Chest CT images using
Optimized Deep Convolutional Generative Adversarial
Network and deep CNN
Thangavel K. and Sasirekha K.
Department of Computer Science, Periyar University, Salem, Tamilnadu, India
Abstract. Coronavirus disease 2019 (COVID-19) pandemic has become a major threat to the
entire world and severely affects the health and economy of many people. It also causes a lot of
other diseases and side effects after treatment for COVID. Early detection and diagnosis will
reduce community spread as well as save lives. Even though clinical methods are available,
some imaging methods are being adopted to detect the disease. Recently, several
deep learning models have been developed for screening COVID-19 using Computed
Tomography (CT) images of the chest, which plays a potential role in diagnosing, detecting
complications, and prognosticating Coronavirus disease. However, the performances of the
models are highly affected by the limited availability of samples for training. Hence, in this
work, a Deep Convolutional Generative Adversarial Network (DCGAN) has been proposed and
implemented which automatically discovers and learns the regularities in the input data so that
the model can be used to generate the requisite samples. Further, the hyper-parameters of
DCGAN, such as the number of neurons, learning rate, momentum, alpha and dropout
probability, have been optimized using a Genetic Algorithm (GA). Finally, a Deep Convolutional
Neural Network (CNN) with various optimizers is implemented to classify COVID-19 from
non-COVID-19 images, which assists radiologists in increasing diagnostic accuracy. The
proposed deep CNN model with GA-optimized DCGAN exhibits an accuracy of 94.50%, which
is higher than that of pre-trained models such as AlexNet, VggNet, and ResNet.
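The GA-based hyper-parameter search can be illustrated in miniature with a generic real-coded genetic algorithm minimizing a stand-in objective (population size, mutation scale, and the test objective are placeholder choices, not the paper's settings):

```python
import random

def ga_minimize(fitness, bounds, pop_size=20, gens=40, seed=0):
    """Toy real-coded GA sketch: elitism on the best half, blend crossover of
    two random elites, Gaussian mutation, clipping to the search bounds."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness)[: pop_size // 2]   # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            # midpoint crossover plus small Gaussian mutation
            child = [(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)]
            child = [min(max(c, lo), hi) for c, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

In a real hyper-parameter search each gene would encode one DCGAN setting (learning rate, momentum, dropout probability, ...) and the fitness would be a validation metric of the trained model rather than an analytic function.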
Intelligent Fractional Control System of a Gas Diesel
Engine
Alexandr Avsievich1, Vladimir Avsievich1 and Anton
Ivaschenko2
1Samara State Transport University, 1 Bezymyanny, 16, Samara, Russia
2Samara State Technical University, Molodogvardeyskaya, 244, Samara, Russia
Abstract. The paper presents a new intelligent control system aimed at improving the operational
and technical characteristics of an internal combustion engine running on a mixture of diesel
fuel and natural gas. The proposed solution is intended for use in large power units which place
high demands on efficiency and reliability, for example, diesel locomotives and vessels. A new
digital computing algorithm is proposed for fractional proportional-integral-differential control
to improve the stability and quality of transient processes in a gas-diesel engine. The controller
coefficients are determined by an intelligent algorithm, with the integral link implemented as a
differintegral that takes the pre-history into account. The conclusions and results of the study
substantiate the advantages of implementing the proposed control algorithm in terms of the
duration of the transient process and the integral assessment of quality in comparison with the
classical algorithms. The developed control system makes it possible to reduce fuel consumption
and increase the safety of the gas-diesel internal combustion engine while reducing the time of
the transient process by implementing fractional control of the crankshaft rotation frequency.
Diabetes Prediction using Logistic Regression & K-
Nearest Neighbor
Ami Oza and Anuja Bokhare
Symbiosis Institute of Computer Studies and Research, Symbiosis International
(Deemed University), Pune-411016, Maharashtra, INDIA
Abstract. Diabetes is a long-term illness that has the ability to become a worldwide health-care
crisis. Diabetes mellitus, sometimes known as diabetes, is a metabolic disorder characterized by
an increase in blood sugar levels. It's one of the world's most lethal diseases, and it's on the rise.
Diabetes can be diagnosed using a variety of traditional approaches complemented by physical
and chemical tests. Methods of data science have the potential to benefit other scientific domains
by throwing new light on prevalent topics. Machine learning is a new scientific subject in data
science that deals with how machines learn from experience. Several data processing techniques
have been developed and utilized by academics to classify and predict symptoms in medical
data. The study employs well-known predictive techniques such as K-Nearest Neighbour (KNN)
and Logistic Regression. A predictive model is presented to improve and evaluate performance
and accuracy by comparing the considered machine learning techniques.
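The KNN side of such a study can be illustrated with a minimal classifier; the toy glucose/BMI-style feature vectors in the test of use are invented for illustration, not patient data:

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (squared Euclidean distance is enough for ranking neighbours)."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: sum((a - b) ** 2
                                       for a, b in zip(train_X[i], x)))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]
```

In practice features should be scaled to a common range before computing distances, since KNN otherwise lets the feature with the largest numeric range dominate the vote.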
Linear Regression for Car Sales Prediction in Indian
Automobile Industry
Rohan Kulkarni and Anuja Bokhare
Symbiosis Institute of Computer Studies and Research, Symbiosis International
(Deemed University), Pune-411016, Maharashtra, INDIA
Abstract. The automobile industry is one of the leading industries in our economy. A sudden rise
in the demand for automobile vehicles, together with growth in profits, is the leading factor in
this industry becoming one of the major and important ones. The industry also offers various
financial aids and schemes for the general population, which is why people are buying vehicles,
causing a ripple effect and maximizing profits and the growth of the industry. This industry has
been a great force and a contributor to our economy. That is why it is of great significance to
accurately predict automobile sales. Every industry or organization wants to predict results by
using its own past data and various machine learning algorithms. This helps them visualize past
data, determine their future goals and plan accordingly, making sales prediction the current trend
in the market. The current study helps to obtain a prediction of sales in the automobile industry
using machine learning techniques.
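The underlying fit is ordinary least squares; a minimal one-variable version can be sketched as follows (the period/sales pairs in the test of use are invented figures, not industry data):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept
```

A real sales model would use multiple regressors (price, fuel cost, financing schemes, seasonality) rather than a single variable, but the closed-form least-squares idea is the same.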
Load Balancing Algorithms in Cloud Computing
Environment – An Effective Survey
N. Priya and S. Shanmuga Priya
Research Department of Computer Science, Shrimathi Devkunvar Nanalal Bhatt
Vaishnav College for Women, Chennai, India
Abstract. In recent years, the usage of internet services and the number of users accessing cloud
systems have increased tremendously, since the cloud offers enormous services to users and
allows them to access those services from anywhere at any time in a flexible manner. Cloud
computing is an emerging technology with high-performance, high-throughput systems that can
handle multiple users' requests simultaneously. However, handling multiple user requests is a
major challenge as the number of requests increases day by day. It is very difficult for a server
to manage all these user requests at one time. Sometimes this may result in system breakdown
and overloading of servers, which causes load unbalancing. Load balancing is a technique in
cloud computing that solves the problem of load unbalancing by evenly distributing users'
requests among multiple servers in an optimized way. In this paper, we present an overview of
various load balancing algorithms proposed by several authors in recent years with respect to
the different load balancing metrics and tools used.
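Two of the simplest strategies such surveys cover, round-robin and least-connections, can be sketched as follows (server names are placeholders):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin dispatcher: each request goes to the next server in turn."""
    def __init__(self, servers):
        self._next = cycle(servers)

    def route(self):
        return next(self._next)

class LeastLoadedBalancer:
    """Route each request to the server with the fewest active requests."""
    def __init__(self, servers):
        self.load = {s: 0 for s in servers}

    def route(self):
        server = min(self.load, key=self.load.get)
        self.load[server] += 1
        return server

    def finish(self, server):
        """Call when a request completes so the server's count drops."""
        self.load[server] -= 1
```

Round-robin ignores how long requests take, while least-connections adapts to uneven request durations; most surveyed algorithms refine these ideas with weights, response-time estimates or migration of virtual machines.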
Agent driven Traffic Light Sequencing System using Deep
Q Learning
Palwai Thirumal Reddy, R. Shanmughasundaram
Department of Electrical and Electronics Engineering, Amrita School of
Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India
Abstract. Reinforcement learning (RL) is a machine learning technique where an agent
successively improves its control policies through feedback. It can address complex real-world
problems with minimum development effort as the agent understands the environment by itself.
One such complex scenario is to control the traffic flow in areas with high traffic density. This
work is to automate the sequencing of traffic lights providing less waiting time at an intersection.
The agent is a computer program that acts accordingly by observing traffic at an intersection
with the help of sensors. It learns over time based on its interactions with the environment. The
Deep Q Learning technique is chosen to build this agent because of its better performance. This
setup is implemented using python in the SUMO simulator environment. A comparison is drawn
between static traffic sequencing and RL traffic agent. The traffic agent performs better over
static traffic sequencing.
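The tabular core of Q-learning, which Deep Q-learning approximates with a neural network over the table, is the update Q(s,a) ← Q(s,a) + α[r + γ·max Q(s′,·) − Q(s,a)]. A minimal sketch (the state and action names below are invented stand-ins for an intersection's queue observations and phase choices):

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step on a nested dict Q[state][action].
    Unseen states/actions default to a value of 0.0."""
    best_next = max(Q[next_state].values()) if Q.get(next_state) else 0.0
    old = Q.setdefault(state, {}).setdefault(action, 0.0)
    Q[state][action] = old + alpha * (reward + gamma * best_next - old)
    return Q[state][action]
```

In the traffic setting the reward would typically be the negative total waiting time observed after switching (or holding) a phase, so repeated updates steer the agent toward sequences that keep queues short.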
Rainfall Estimation and Prediction using Artificial
Intelligence: A Survey
Vikas Bajpai, Anukriti Bansal, Ramit Agarwal, Shashwat
Kumar, Namit Bhandari, and Shivam Kejriwal
The LNM Institute of Information Technology, Jaipur, Rajasthan, India
Abstract. Rainfall is a major source for meeting water requirements in the majority of countries.
With an increasing population and diminishing natural resources, analysis and prediction of the
occurrence of rainfall have become very important to fulfil agricultural, industrial, and day-to-day
human needs. Several authors across the globe have made efforts in analysing the rainfall pattern
and its accurate prediction. These methods can be categorized as numerical, empirical,
statistical, and artificial intelligence-based. Recently, with the increase in computational power,
artificial intelligence-based methods are gaining more popularity over traditional numerical and
statistical approaches for accurate prediction of the occurrence and the intensity of rainfall. This
paper highlights some of the key contributions in rainfall prediction using artificial intelligence-
based methods across the globe, with a major focus on the Indian subcontinent.
System Partitioning with Virtualization for Federated and
Distributed Machine Learning on Critical IoT Edge
Systems
Vysakh P Pillai and Rajesh Kannan Megalingam
Department of Electronics and Communication Engineering, Amrita Vishwa
Vidyapeetham, Amritapuri, India
Abstract. Machine learning transforms the fledgling IoT landscape by making meaningful
business decisions utilizing data from a vast number of sensors. However, the scale of connected
devices puts a toll on system networks. Federated and distributed learning systems have been
introduced to offload the network stress onto edge and fog nodes. However, this approach
presents a new challenge in integrating and deploying machine learning algorithms into existing
systems. Due to the complex nature of machine learning algorithms and the associated data
interaction paradigms, most traditional edge node systems today require a total system re-
architecture to incorporate machine learning on the edge. This paper presents a novel
virtualization-based system partition approach to system design that enables the execution of
machine learning algorithms on edge nodes without modifications to existing software and
hardware in a system. In addition to easing the development process, this approach also prevents
inadvertently introduced errors by virtue of complete memory isolation of the learning systems
on the same hardware.
A Review on Preprocessing Techniques for Noise
Reduction in PET-CT Images for Lung Cancer
Kaushik Pratim Das and Chandra J
Department of Computer Science, CHRIST (Deemed to be University),
Postcode-560029, Hosur Road, Bangalore, India
Abstract. Cancer is one of the leading causes of death. According to the World Health Organization,
lung cancer is the most common cause of cancer deaths in 2020, with over 1.8 million deaths.
Therefore, lung cancer mortality can be reduced with early detection and treatment. The
components of early detection require screening and accurate detection of the tumor for staging
and treatment planning. Due to the advances in medicine, nuclear medicine has become the
forefront of precise lung cancer diagnosis; with PET/CT is the most preferred diagnostic
modality for lung cancer detection. However, variable results and noise in the imaging
modalities and the lung's complexity as an organ have made it challenging to identify lung
tumors from the clinical images. In addition, the factors such as respiration can cause blurry
images and introduce other artifacts in the images. Although nuclear medicine is at the forefront
of diagnosing, evaluating, and treating various diseases, it is highly dependent on image quality,
which has led to many approaches, such as the fusion of modalities to evaluate the disease. In
addition, the fusion of diagnostic modalities can be accurate when well-processed images are
acquired, which is challenging due to different diagnostic machines and external and internal
factors associated with lung cancer patients. Current works focus on single imaging modalities
for lung cancer detection, and no specific techniques have been identified individually for PET
and CT images for attaining effective and noise-free hybrid imaging for lung cancer detection.
Based on the survey, several image pre-processing filters have been
identified for various forms of noise, identifying the types of noise present in PET and CT
images and the techniques that perform well for both modalities without changing the essential
features of the tumor for lung cancer detection. The primary aim of the review is to identify
efficient pre-processing techniques for noise and artifact removal in the PET/CT images for lung
cancer diagnosis.
Analysis on Advanced Encryption Standard with
Different Image Steganography Algorithms: An
Experimental Study
Alicia Biju, Lavina Kunder and J. Angel Arul Jothi
Department of Computer Science, Birla Institute of Technology and Science
Pilani, Dubai Campus, DIAC, Dubai, United Arab Emirates
Abstract. In this ever-changing world of technology, data security is of utmost importance. This
research paper focuses on identifying the best combination of cryptography and steganography
algorithms for securing data. The proposed approach developed a complete end-to-end system
that encrypted a text message using the Advanced Encryption Standard algorithm. The encrypted
message was then embedded onto images using steganography techniques like Least Significant
Bit, Discrete Cosine Transform and Discrete Wavelet Transform. The message was later
decrypted and extracted. The performance of the algorithms was evaluated using various
metrics. The best performing combination of algorithms for each metric was then identified.
Optimal DG Planning and Operation for Enhancing Cost
Effectiveness of Reactive Power Purchase
Nirmala John, Varaprasad Janamala and Joseph
Rodrigues
Dept. of Electrical and Electronics Engineering, Faculty of Engineering, Christ
(Deemed to be University), Bangalore – 560 074, KA, India
Abstract. The demand for reactive power support from Distributed Generation (DG) sources has
become increasingly necessary due to the growing penetration of DG in the distribution
network. Photovoltaic (PV) systems, fuel cells, micro-turbines, and other inverter-based devices
can generate reactive power. While maximizing profits by selling as much electricity as possible
to the distribution companies (DisCos) is the main motive for the DG owners, technical
parameters like voltage stability, voltage profile and losses are of primary concern to the
DisCos. Local voltage regulation can reduce system losses, improve voltage stability and
thereby improve efficiency and reliability of the system. Participating in reactive power
compensation reduces the revenue generating active power from DG, thereby reducing DG
owner’s profits. Payment for reactive power is therefore being looked at as a possibility in recent
times. Optimal power factor (pf) of operation of DG becomes significant in this scenario. The
study in this paper is presented in two parts. The first part proposes a novel method for
determining optimal sizes and locations of distributed generation in a radial distribution
network. The method proposed is based on the recent optimization algorithm, Teaching
Learning Based Optimization with Learning Enthusiasm Mechanism (LebTLBO). The
effectiveness of the method has been compared with existing methods in literature. The second
part deals with the determination of optimal pf of operation of DG to obtain maximum benefit
derivation for the distribution company from the reactive power purchase. A new factor has
been proposed to evaluate the benefit derived. The approaches' effectiveness has been tested
with IEEE 33 and 69 bus radial distribution systems.
Image Classification using CNN to Diagnose Diabetic
Retinopathy
Arul Jothi S, Mani Sankar T, Rohith Chandra
Kandambeth, Siva Koushik Tadepalli, Arun Prasad P,
and Arunmozhi P
PSG College of Technology, Coimbatore, Tamil Nadu - 641004, India
Abstract. Diabetic retinopathy (DR) is a condition in which the blood vessels of the eye are
damaged, and it can result in permanent blindness. Ophthalmologists diagnose the condition by
observing the fundus images. In this paper, a CNN model with the ResNet50 architecture is used to
classify fundus images. Basic data augmentation and hyper-parameter tuning are performed.
The images are quite varied from one another, making for a rich dataset for the model to learn
from. When compared to existing models, the model constructed performed remarkably well
over the dataset, reaching a training accuracy of 91% and a validation accuracy of 80%. On the
test dataset, the model achieved a weighted average precision of 88%, a weighted average recall
of 86%, and a weighted average f1-score of 86%.
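The weighted averages reported above combine per-class scores in proportion to class support; a minimal sketch of that computation (illustrative only, not the authors' evaluation code):

```python
def weighted_f1(y_true, y_pred):
    """F1 per class, averaged with weights equal to each class's support."""
    n = len(y_true)
    total = 0.0
    for c in set(y_true):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += (y_true.count(c) / n) * f1  # weight by class support
    return total
```

Weighting by support matters for imbalanced medical datasets such as DR grades, where a plain macro average would overstate the influence of rare classes.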
Real-Time Segregation of Encrypted Data Using Entropy
Gowtham Akshaya Kumaran P and Amritha P.P.
TIFAC-CORE in Cyber Security, Amrita School of Engineering, Coimbatore,
Amrita Vishwa Vidyapeetham, India
Abstract. Encryption translates data into another form that can be read only with the keys.
Encrypted data is often known as ciphertext, whereas unencrypted data is known as plaintext.
Encryption protects files by encoding them with a key, making them accessible only to those who
have the keys to decrypt them. The main idea is to prevent unauthorized parties from accessing the
files. These days, one must protect information stored on computers or communicated over the
internet against cyberattacks. Cryptographic methods come in a variety of shapes and sizes.
Choosing a cryptographic process is mainly determined by application requirements such as
reaction speed, bandwidth, integrity and confidentiality. However, each cryptographic
algorithm has its own set of strengths and weaknesses. Here we have segregated the
encrypted data using entropy as a measure. The encryption algorithms taken for analysis are
3DES, AES, RC4 and Blowfish.
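Entropy-based segregation rests on the observation that well-encrypted bytes look nearly uniform; a minimal sketch (the `looks_encrypted` helper and the 7.5 bits-per-byte threshold are illustrative assumptions, not values from the paper):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: close to 8.0 for ciphertext, lower for plaintext."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag a buffer as likely ciphertext if its entropy exceeds a threshold."""
    return shannon_entropy(data) >= threshold
```

In practice the threshold must be tuned, since compressed files also exhibit high entropy.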
Performance Analysis of Different Deep Neural
Architectures for Automated Metastases Detection of
Lymph Node Sections in Hematoxylin and Eosin-stained
Whole-slide images
Naman Dangi and Khushali Deulkar
Dwarkadas Jivanlal Sanghvi College of Engineering, Mumbai, India
Abstract. In medical imaging, digital pathology is a rapidly growing field, where glass slides
containing tissue specimens are digitized using whole-slide scanners at very high resolutions.
Virtual microscopy, also known as whole slide imaging, aids digital pathology in the analysis,
assessment, and diagnosis of tissue slides. Lymph node metastases occur in most cancer types
like breast, colon, prostate, etc. Metastatic involvement of lymph nodes is a very important
variable in the prognosis of breast cancer, where the diagnostic procedure for the pathologists
is tedious, prone to misinterpretation, and requires large amounts of reading time from
pathologists. Automated disease detection has been a long-standing challenge for computer-aided diagnostic systems; however, within the past few years, the field has been moving towards
grand goals with strong potential diagnostic impact: fully automated analysis of whole-slide
images to detect or grade cancer, to predict prognosis or identify metastases. In this paper, we
focus on the detection of micro and macro-metastases in haematoxylin and eosin-stained whole-
slide images of lymph node sections, with the aim of improving the detection of cancer metastasis,
potentially reducing the workload of pathologists by a great amount while at the same time
reducing the subjectivity of diagnosis. This paper demonstrates a performance analysis of different
deep neural architectures deployed for automated metastases detection in whole slide images of
lymph node sections and draws analogies based on the recorded results.
Model Order Reduction of Continuous Time Multi Input
Multi Output System Using Sine Cosine Algorithm
Aditya Prasad Padhy1, Varsha Singh1, Vinay Pratap
Singh2
1Department of Electrical Engineering, NIT, Raipur, Chhattisgarh
2Department of Electrical Engineering, MNIT, Jaipur, Rajasthan
Abstract. This paper deals with a model order reduction (MOR) technique for the reduction of
higher order stable system (HOSS) into its corresponding reduced order stable model (ROSM).
The proposed reduction technique is a combination of sine cosine algorithm (SCA) and Routh
approximation (RA) method. In the proposed technique, numerator and denominator
coefficients of ROSM are computed by using SCA and RA method respectively. Further, it is
observed that ROSM retains the fundamental characteristics of HOSS. This proposed method
is validated by considering a standard multi-input multi-output (MIMO) test case. From the
simulated results, the performance accuracies of the corresponding ROSM are evaluated by
comparing its step response with those of other existing techniques.
Smart e-waste Management in China: a Review
Yafeng Han1,2, Tetiana Shevchenko1, Dongxu Qu1,2 and
Guohou Li2
1Sumy National Agrarian University, Sumy, Ukraine
2Henan Institute of Science and Technology, Henan, China
Abstract. To prevent the rapid increase in global e-waste generation from causing serious
environmental pollution and adverse effects on human health, proper e-waste management is
critical. In recent years, China has begun to pay more attention to e-waste management because
the informal recycling and disposal by unauthorised collectors have brought serious
environmental problems in some areas. However, it is an enormous challenge to achieve
efficient management of waste electronic products in a developing country like China, which
produces a large amount of e-waste every year but has a low recycling rate. The application of
intellectual technologies has given new opportunities for more effective e-waste management.
Many companies in China are developing smart e-waste collection and recycling systems by
applying the Internet of Things (IoT), Cloud Computing, Big Data and Artificial Intelligence
(AI), but they also face challenges in various aspects. In this line, to promote smart e-waste
recycling in China, this study analyses and summarizes the main obstacles and countermeasures
for smart e-waste management.
A Study of Decision Tree Classifier to Predict Learner’s
Progression
Savita Mohurle and Richa Pandey
MIT Arts, Commerce and Science College, Alandi, Pune, India
Abstract. Nowadays, education providers face a big challenge in creating a good workforce,
and even the education that a learner receives often does not meet international standards. The
development and the progression of learners differ as the delivery of education differs. Moreover,
the learners' inculcation and grasping power add to their overall performance level.
The decision tree classifier is a top-down, inductive inference algorithm.
ID3, ASSISTANT and C4.5 are decision tree classifier techniques implemented to solve real-life
problems. ID3 is a decision tree classifier that best handles problems such as
attribute-value pairs, discrete outputs, errors, and missing values. This paper studies the ID3
algorithm to select the best attribute from the learners' performance data to determine their
progression. The paper focuses on eight aspects, namely homework, class work, test marks,
activities participated in, project work, learning process, behaviour and attitude towards learning,
and questioning skills, to predict the overall performance of learners. The learner's performance is
predicted by finding the accuracy of the aspects considered for the study, thereby calculating
entropy and information gain. Further, the results show that the prediction accuracy of the
learner's model is 83.33%. Based on the predictions, the performance levels and hence the
progression are stated. The conclusion states that the predictions made by the ID3 classifier aid
in designing further strategies for the learner's progression.
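The entropy and information-gain computation at the heart of ID3's attribute selection can be sketched as follows (an illustrative Python sketch; the attribute and label names in the example are hypothetical, not the paper's data):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attribute, labels):
    """ID3 criterion: H(S) minus the support-weighted entropy of the
    partitions induced by the attribute's values."""
    n = len(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attribute], []).append(label)
    remainder = sum(len(part) / n * entropy(part)
                    for part in partitions.values())
    return entropy(labels) - remainder
```

ID3 grows the tree by splitting on the attribute with the highest information gain at each node.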
Prediction of User’s Behavior on the Social Media Using
XGBRegressor
Saba Tahseen and Ajit Danti
Department of Computer Science and Engineering, Christ (Deemed to be
University), Bangalore, India
Abstract. The previous decade has seen growth and advances in social media, which has rapidly
and immensely expanded to infiltrate every side of users' lives. In addition, mobile networks
empower clients to access Multimedia Social Networks (MSNs) at any time, anywhere, for any
purpose, including work and gatherings. Accordingly, the interaction practices between clients
and MSNs are becoming more complete and more complicated. The goal of this paper is to examine the
number of followers, likes and posts of Instagram users. The dataset yielded several
fundamental features, which were used to create the model. Natural Language Processing (NLP)
features were then added and, finally, features derived from a machine learning technique,
XGBRegressor with the TF-IDF technique, were incorporated. We use two performance indicators
to compare the different models: Root Mean Square Error (RMSE) and the R² value. The average
accuracy achieved using XGBRegressor is 82%.
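The TF-IDF weighting mentioned above can be sketched as follows (a plain TF-IDF sketch, not the authors' exact feature pipeline):

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document term weights: term frequency times the log inverse
    document frequency, so terms common to every document score zero."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency counts each doc once
    return [{term: (count / len(doc)) * math.log(n / df[term])
             for term, count in Counter(doc).items()}
            for doc in docs]
```

These sparse weights are what a regressor such as XGBRegressor would consume as text-derived features alongside numeric ones.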
Artificial Intelligence Framework for Content Based
Image Retrieval: Performance Analysis
Padmashree Desai1 and Jagadeesh Pujari2
1KLE Technological University, Karnataka, India
2SDM College of Engineering & Technology, Karnataka, India
Abstract. Feature extraction, representation, and similarity estimation are all essential to
measuring the performance of a Content Based Image Retrieval (CBIR) system, and they have
all been widely studied for decades. Although numerous solutions have been proposed, the
semantic gap remains one of the most challenging problems in the ongoing research on CBIR.
The semantic gap refers to the difference between how computers perceive the pixels of an image
and how humans perceive the image. In recent years, machine learning and deep learning approaches have
made considerable progress in addressing this issue. The proposed research work uses deep
architectures to model high-level abstractions in data. Deep learning is modelled as an intelligent
architecture that integrates data and information through various transformations and
representations. Deep learning techniques enable a computer to learn many complicated
functions that link pre-processed input data to output data without domain knowledge or human-
crafted features. We have used a multi-class weather data set and Wang’s data set to measure
the effectiveness of retrieval efficiency. The AlexNet and VGG16 are used for training and
testing. Developed systems are tested with a testing data set, and the results are compared with
state-of-the-art technology. VGG16 outperformed both category-wise and in terms of mean
average precision.
Comparing the Pathfinding Algorithms A*, Dijkstra’s,
Bellman-Ford, Floyd-Warshall, and Best First Search for
the Paparazzi Problem
Robert Johner, Antonino Lanaia, Rolf Dornberger, and
Thomas Hanne
Institute for Information Systems, University of Applied Sciences and Arts
Northwestern Switzerland, Basel/Olten, Switzerland
Abstract. This paper aims to compare the A*, Dijkstra's, Bellman-Ford, Floyd-Warshall and Best
First Search algorithms to solve a particular variant of the pathfinding problem based on the so-
called paparazzi problem. This problem consists of a grid with different non-moving obstacles
that lead to different traversing costs which are considered as minimization objective in a
specific model. The performance of the algorithms that solve the paparazzi problem is compared
in terms of computation time, the number of visited nodes, shortest path cost, and accuracy
of finding the shortest path. The comparison shows that the heuristic algorithms mostly provide the
optimal path while requiring a shorter computation time.
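On such a grid with per-cell traversal costs, the exact (non-heuristic) baseline, Dijkstra's algorithm, can be sketched as follows (an illustrative sketch, not the benchmark code used in the paper; `dijkstra_grid` is a hypothetical helper):

```python
import heapq

def dijkstra_grid(cost, start, goal):
    """Minimum total cost to reach goal, where cost[r][c] is the cost of
    entering cell (r, c); obstacles can be modelled with float('inf')."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

A* differs only in that the priority adds a heuristic estimate of the remaining cost, which is what lets it visit fewer nodes on the same grid.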
Optimizing an Inventory Routing Problem Using a
Modified Tabu Search
Marc Fink, Lawrence Morillo, Thomas Hanne and Rolf
Dornberger
University of Applied Sciences and Arts Northwestern Switzerland, Olten/Basel,
Switzerland
Abstract. Nature-inspired algorithms such as Artificial Bee Colony and Ant Colony
Optimization have been widely used for the Inventory Routing Problem (IRP) as well as for the
Vehicle Routing Problem. These optimization methods face the challenge of getting stuck in
a local minimum. Therefore, efforts have been made to improve the local search behaviour, for
example by using simulated annealing, which seeks to find the global optimum. We applied a
modified Tabu Search algorithm to avoid local minima. We consider the search space to be a
network in which a fleet of homogeneous vehicles delivers homogeneous items to meet the
customers' demands over a planning horizon. The vehicles as well as the depot have sufficient
capacity to cover the deterministic demand of each customer. Therefore, this paper
only focuses on minimizing the transportation cost. We have benchmarked the results of our
algorithm against a paper which uses the Artificial Bee Colony algorithm for the IRP.
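The core tabu mechanism, moving to the best non-tabu neighbour while remembering recently visited solutions, can be sketched generically (an illustrative sketch, not the authors' modified algorithm; the neighbourhood and cost functions are placeholders):

```python
from collections import deque

def tabu_search(start, neighbors, cost, tenure=5, iters=100):
    """Generic tabu search: always move to the cheapest non-tabu neighbour,
    keeping recently visited solutions in a fixed-length tabu list so the
    search can climb out of local minima instead of cycling."""
    current = best = start
    tabu = deque([start], maxlen=tenure)
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)  # may be uphill: that is the point
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best
```

The tabu tenure is the key tuning knob: too short and the search cycles, too long and promising moves are forbidden.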
Handwritten Digit Recognition Using Very Deep
Convolutional Neural Network
Dhilsath Fathima M1, R Hariharan1, M Seeni Syed
Raviyathu Ammal2
1Department of Information Technology, Vel Tech Rangarajan Dr. Sagunthala
R&D Institute of Science and Technology, Chennai
2Department of Information Technology, Mohamed Sathak Engineering College,
Kilakarai
Abstract. Automated image classification is an essential task of the computer vision field. The
tagging of images into a set of predefined groups is referred to as image classification. The
implementation of computer vision to automate image classification would be beneficial
because manual image evaluation and identification can be time-consuming, particularly when
there are many images of different classes. Deep learning approaches have been proven to
outperform existing machine learning techniques in a number of fields in recent years, and
computer vision is one of the most notable examples. The very deep convolutional neural network
(VDCNN) is a powerful deep learning model for image classification, and this paper examines it
briefly using the MNIST handwritten digit dataset. This dataset is used to prove the efficacy of
very deep neural networks over other deep learning models. An objective of the proposed work is
to understand how the very deep neural network architecture performs a handwritten digit
identification task. The feasibility of the proposed model is evaluated using mean accuracy,
validation accuracy, and standard deviation. The study results of the very deep neural network
model are compared to a convolutional neural network and a convolutional neural network with
batch normalization. According to the results of the comparison study, very deep neural networks
achieve a high accuracy of 99.1% on the handwritten digit dataset. The outcome of the proposed
work is used to interpret how well a very deep neural network performs when compared to the
other two deep neural network models. The proposed architecture may be used to automate the
classification of handwritten digit datasets.
Classification of Breast Cancer Histopathological Images
Using Pretrained CNN Models
Mirya Robin, Aswathy Ravikumar and Jisha John
Mar Baselios College of Engineering and Technology, Mar Ivanios Vidya
Nagar, Nalanchira, Thiruvananthapuram, Kerala-695015
Abstract. In the current scenario, the timely diagnosis of cancer helps to increase the survival
rate of patients. The most common cancer in women is breast cancer. Histopathological
images of the breast help in its diagnosis. In this work, stained histopathological
images are used to build pre-trained deep learning models for the prediction of breast
cancer. Major pre-trained models like InceptionV3, AlexNet, MobileNetV2, VGG16 and
ResNet are used for model building. For breast cancer segmentation of histopathological
images, segmented regions are obtained using both U-Net and R2U-Net models. For
classification, the pretrained models InceptionV3, MobileNet, AlexNet, VGGNet and ResNet
were used. For breast cancer classification using histopathological images, the highest accuracy,
89%, was achieved by ResNet and the lowest, 78%, by MobileNet.
The Necessity to Adopt Bigdata Technologies for Efficient
Performance Evaluation in the Modern Era
Sangeeta Gupta and Rupesh Mishra
CSE Department, Chaitanya Bharathi Institute of Technology, Hyderabad,
Telangana, India
Abstract. The latest technological advancements in the modern world have led to innovations
which, if handled properly, yield value-added outcomes, or which may result in disruptions if
mishandled. One such technology is NoSQL databases, which have evolved into hundreds of
systems. Though these support a wide range of features such as consistency, availability,
fault-tolerance, scalability and security, there is no single store that bundles them all together.
Particularly due to the drastic rise of data that amounts to big data, it has become essential to
trade off security while focusing on consistency, and vice versa. Another aspect to be considered
lies in the ability to handle streaming data, which requires a special kind of storage to process
data on the fly. This gains wide support if integrated with various learning platforms such as
deep learning, machine learning, etc., to yield an added outcome. However, the application of
pre-processing techniques and the identification of the training versus test data split, irrespective
of the dataset, is an essential activity to be carried out to infer better results. Also, the selection
of an appropriate algorithm to identify outliers in voluminous data is essential to quantify the
results. Towards this end, an efficient hybrid machine learning algorithm, PBS (Polynomial-
Bayesian-Support Vector Machine), is developed to overcome the aforementioned big-data-
analysis difficulties, and results are evaluated to make an inference about the effectiveness of
the proposed work.
Forecasting Stock Market Indexes Through Machine
Learning using Technical Analysis Indicators and DWT
Siddharth Patel1, Vijai Surya BD1, Chinthakunta
Manjunath1, Balamurugan Marimuthu1 and
Bikramaditya Ghosh2
1CHRIST (Deemed to be University), Bengaluru, Karnataka, India
2RV Institute of Management, Bengaluru, Karnataka, India
Abstract. In recent years, the stock market prices have become more volatile due to refinement
in technology and a rise in trading volume. As these seemingly unpredictable price trends
continue, the stock market investors and consumers refer to the security indices to assess these
financial markets. To maximise their return on investment, the investors could employ
appropriate methods to forecast the stock market trends, taking into account the non-linearity
and non-stationarity of the stock market data. This research aims to assess the predictive
capability of supervised machine-learning models for the stock market regression analysis. The
dataset utilised in this research includes the daily prices and additional technical indicator data
of S&P 500 Index of US stock exchange and Nifty50 Index of Indian stock exchange from
January 2008 to June 2016; both the indexes are weighted measurements of the top companies
listed on respective stock exchanges. The model proposed in this research combines the discrete
wavelet transform and Support Vector Regression (SVR) with various kernels such as Linear,
Poly and RBF (Radial Basis Function) of the Support Vector Machine. The results show
that, using the RBF kernel, the proposed model achieves the lowest MSE and RMSE errors
during testing: 0.0019 and 0.0431, respectively, on the Nifty 50 index data, and 0.0027 and
0.0523, respectively, on the S&P 500 index data.
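The one-level Haar transform, the simplest member of the discrete wavelet family, splits a price series into a trend part and a fluctuation part; a minimal sketch (illustrative only; the abstract does not state which wavelet the authors use):

```python
import math

def haar_dwt(series):
    """One-level Haar DWT: pairwise averages give the approximation (trend)
    coefficients, pairwise differences give the detail (fluctuation) ones."""
    approx = [(series[i] + series[i + 1]) / math.sqrt(2)
              for i in range(0, len(series) - 1, 2)]
    detail = [(series[i] - series[i + 1]) / math.sqrt(2)
              for i in range(0, len(series) - 1, 2)]
    return approx, detail
```

In a DWT-plus-SVR pipeline the smoother approximation coefficients are typically what the regressor is trained on, with the detail coefficients treated as noise.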
Slotted Coplanar Waveguide-Fed Monopole Antenna for
Biomedical Imaging Applications
Regidi Suneetha and P. V. Sridevi
Andhra University College of Engineering (A), Visakhapatnam 530003, India
Abstract. In this paper, two monopole antennas that operate in the 1-10 GHz frequency band
for biomedical applications like stroke imaging and tumor detection in various parts of the body
are presented. A double-band monopole antenna with dimensions of 50×56×1.6 mm³
and a coplanar waveguide (CPW) feed structure is designed and fabricated. Multi-band operation is
attained with slots induced in the shape of an 8 for the same antenna. Both antennas are
fabricated on FR4 substrate material with a dielectric constant of 4.4 and a loss tangent of 0.02
because of its easy availability and design flexibility. From the results obtained, it can be
concluded that the antenna with slots performs better and is more suitable for microwave
imaging applications than the antenna without slots. These antennas also find a wide range of
applications in Wi-Fi and WiMAX.
Artificial Intelligence in E-commerce: A Literature
Review
Richard Fedorko, Štefan Kráľ and Radovan Bačík
Faculty of Management, University of Presov, Konštantínova 16, 080 01 Prešov,
Slovakia
Abstract. With the development of information and communication technologies, artificial
intelligence is becoming increasingly popular. The main aim of companies in today's e-
commerce world is to influence customer behaviour in favour of certain products and brands.
The application of artificial intelligence as an innovative tool in the field of e-commerce may
seem a positive step forward. The paper focuses on the description of the essence of e-commerce
and artificial intelligence and their benefits. The aim is also to evaluate the
importance of artificial intelligence and its use in the context of e-commerce based on available
studies on this issue.
CoFFiTT-Covid-19 Fake News Detection using Fine-
Tuned Transfer Learning Approaches
B. Fazlourrahman, B. K. Aparna and H. L. Shashirekha
Department of Computer Science, Mangalore University, Mangalore - 574199,
India
Abstract. In view of the Covid-19 outbreak, the world is facing a lot of issues related to public
health. Online media and platforms, especially during the present pandemic, have increased the
popularity of many online applications and blogs. Some people are using this opportunity
for a good cause, whereas others are misusing social media to share fake news and false
information about the pandemic. The main idea behind sharing fake news may be to mislead
communities, individuals, countries, etc. for various reasons, political, economic or even for
fun. Such fake news and false information impact society negatively and can cause public
distrust. Detecting fake news and preventing its spread in social media poses a
big challenge. Even though researchers have explored several tools and techniques to address
big challenge. Even though researchers have explored several tools and techniques to address
fake news and hostile posts in various domains, it is still an open problem as there will always
be a new domain like Covid-19. In view of this, this paper describes two models based on
Transfer Learning (TL) approaches, namely: Extended Universal Language Model Fine-Tuning
(Ext-ULMFiT) and Fine-Tuned Bidirectional Encoder Representations from Transformers
(FiT-BERT). Both the models are fine-tuned on CORD-19 dataset to combat Covid-19 fake
news. The proposed models evaluated on Covid-19 Fake News Detection shared task dataset of
CONSTRAINT'21 workshop obtained 0.99 weighted average F1 score. However, FiT-BERT
outperformed Ext-ULMFiT in predicting fake news, and Ext-ULMFiT was more successful in
the prediction of real news. Further, the performances of the proposed models are very close to
the best performing team of Covid-19 Fake News Detection shared task in CONSTRAINT'21
workshop.
Improved Telugu Scene Text Recognition with Thin Plate
Spline Transform
Srinivasa Rao Nandam and Atul Negi
University of Hyderabad, Hyderabad Central University Rd, CUC, Gachibowli,
Hyderabad, Telangana 500046, India
Abstract. Scene text recognition is a difficult task because of complex backgrounds, different
text orientations, varying lighting conditions and noise introduced by devices used to capture
the images. The difficulty increases when the data used to train the model has very few samples,
as in the case of Telugu scene text recognition. This paper tries to address the issues caused by
complex text shapes and the lack of huge training data for Telugu Scene Text Recognition. We
apply a thin plate spline (TPS) transform as a pre-processor to the text recognizer to handle the
complexity caused by irregular text shapes. The text recognition model is based on the
Convolutional Recurrent Neural Network (CRNN), which has been used for various
traditional OCR and Telugu scene detection applications. It uses a ResNet-based feature
extractor, which is much more successful in extracting rich features than the VGG used in
traditional CRNN models. The features extracted by ResNet
are passed to a Bidirectional LSTM, the outputs of which are passed to a final prediction layer
which uses a softmax classifier. Connectionist Temporal Classification (CTC) loss is used as a
loss function. Instead of training from scratch the weights for training Telugu Text Recognition
models are loaded with weights trained on large English Scene Text Datasets (SynthText,
MJSynth) to give a good initialization for the model weights. We show that the above additions
increase the normalized edit distance of the network by a large margin and produce a better scene
text recognition framework for Telugu text. The recognizer is able to perform well under complex
text orientations and varying fonts, shapes and the highly varying characters present in the
Telugu text. We also show that the network achieves better normalized edit distance and faster
convergence when loaded with weights trained on English Scene Text datasets when they are
applied to Telugu text data. This emphasises the use of proper weight initialization and the
benefits of fine-tuning for producing a robust framework for Telugu scene text detection.
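Normalized edit distance, the metric used above, can be sketched as one minus the Levenshtein distance divided by the longer string's length (an illustrative sketch; the exact normalization the authors use may differ):

```python
def normalized_edit_distance(pred, target):
    """1 - Levenshtein(pred, target) / max(len); 1.0 means an exact match."""
    m, n = len(pred), len(target)
    if max(m, n) == 0:
        return 1.0
    prev = list(range(n + 1))  # DP row: distance from empty prefix
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,          # deletion
                         cur[j - 1] + 1,       # insertion
                         prev[j - 1] + (pred[i - 1] != target[j - 1]))
        prev = cur
    return 1.0 - prev[n] / max(m, n)
```

Character-level scores like this are more forgiving than exact-match accuracy, which matters for scripts such as Telugu with many visually similar characters.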
On the Industrial Clustering: A View From an Agent-
based Version of Krugman Model
Smicha Ait Amokhtar and Nadjia El Saadi
Higher National School of Statistics and Applied Economics, Algeria
Abstract. Industrial clustering can be considered as a result of two types of forces: the centripetal
force, which encourages the concentration of manufacturing activities, and the centrifugal force,
which acts in the opposite direction. To explain the agglomeration process, we consider the
core-periphery model of Krugman in which the economy is composed of two regions, two
factors of production, and two sectors (agricultural and manufacturing). We develop an agent-
based model in order to apprehend the main causes of why the economic activity is concentrated
within only a few regions, and through a variety of simulations, we determine the suitability of
our agent-based model in explaining real phenomena. Our article shows that reducing transport
costs can have drastic effects on the disparity of industries and that the limited capacity of a firm
to hire labor can slow down the migration process, which leads to a reduction in regional
disparity.
Linguistic Classification Using Instance-Based Learning
Rhythm Girdhar1, Priya S Nayak1 and Shreekanth M
Prabhu2
1PES University, Bengaluru, India
2CMR Institute of Technology, Bengaluru, India
Abstract. Traditionally, linguists have organized languages of the world as language families,
such as Indo-European, Dravidian, and Sino-Tibetan. Within the Indo-European family, they
have further organized the languages into sub-families such as Germanic, Celtic, and Indo-
Iranian. They do this by looking at similar-sounding words across languages and the
commonality of rules of word formation and sentence construction. In this work, we make use
of computational approaches that are more scalable. More importantly, we contest the tree-based
structure that Language Family models follow, which we feel is rather constraining and comes
in the way of the natural discovery of relationships between any two languages. For example,
the affinity Sanskrit has with Irish, Iranian or English, and with other languages across the
Indo-European family, is better illustrated using a network model. Similarly, Indian languages
have interrelationships that go beyond the confines of the Indo-Aryan and Dravidian divide. To enable
the discovery of inter-relationships between languages, in this paper we make use of instance-
based learning techniques to assign language labels to words. Our approach comprises building
a corpus of words and then applying clustering techniques to construct a training set. Following
this, the words are vocalized and classified by making use of a custom linguistic distance metric.
We have considered seven Indian languages, namely Kannada, Marathi, Punjabi, Hindi, Tamil,
Telugu, and Sanskrit. We believe our work has the potential to usher in a new era in linguistics
in India.
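The instance-based labelling described above can be sketched as a k-nearest-neighbour vote over labelled words (illustrative only; `set_distance` is a toy stand-in for the paper's custom linguistic distance metric, which the abstract does not define):

```python
from collections import Counter

def set_distance(w1, w2):
    """Toy word distance: size of the symmetric difference of character sets
    (a hypothetical stand-in for the authors' linguistic metric)."""
    return len(set(w1) ^ set(w2))

def knn_label(word, labelled_words, distance=set_distance, k=3):
    """Instance-based learning: label a word by majority vote among its
    k nearest labelled neighbours, with no model trained in advance."""
    nearest = sorted(labelled_words, key=lambda wl: distance(word, wl[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Because instance-based learners defer all work to query time, swapping in a different distance metric requires no retraining, which suits exploratory cross-language comparison.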
A Framework for Enhancing Classification in Brain-
Computer Interface
Sanoj Chakkithara Subramanian1,2 and D Daniel1
1Department of Computer Science and Engineering, Christ University,
Bengaluru, India
2Department of Computer Science and Engineering, Sri Venkateswara College
of Engineering, Sriperumbudur, Tamil Nadu, India
Abstract. Over the past twenty years, the merits of the Brain-Computer Interface (BCI) have garnered much recognition in industry and research institutions. An increase in the quality of life is the key benefit of BCI utilization. The majority of published works are associated with the examination and assessment of classification algorithms, owing to the ever-increasing interest in Electroencephalography (EEG) based BCIs. A further objective is to offer guidelines that aid the reader in picking the best-suited classification algorithm for a given BCI experiment. For a given BCI system, selecting the best-suited classifier essentially requires an understanding of the features to be utilized, their properties, and their practical uses. As a feature extraction method, the Common Spatial Pattern (CSP) projects multichannel EEG signals into a subspace that highlights the differences between the classes and minimizes the similarities. This work evaluates the efficacy of various classification algorithms, namely Naive Bayes, the K-Nearest Neighbor classifier, Classification and Regression Trees (CART), and AdaBoost, for the BCI framework. Furthermore, the work proposes channel selection with Recursive Feature Elimination.
79
Measuring the Accuracy of Machine Learning Algorithms
when Implemented on Astronomical Data
Shruthi Srinivasaprasad
Cerner Corporation, Bengaluru, India
Abstract. Astronomy as a field now faces gargantuan volumes of data as scientists continue to make terrestrial and space telescopes more and more powerful. These instruments range from optical, ultraviolet, and infrared to X-ray and gamma-ray, and they collect extremely detailed data, creating a need for astronomers to rely on statisticians and computer scientists to interpret it. The analysis of this data has created an unparalleled opportunity for AI and machine learning to sift through the noise. Classification algorithms in particular have been of paramount importance, as they help determine what celestial object the data represents. This has helped researchers understand the data in a more logical manner. The Sloan Digital Sky Survey released comprehensive astronomical data for general usage in DR-16, the fourth release of its fourth phase. This dataset contains three different celestial objects: stars, galaxies, and quasars (or quasi-stellar objects). This research compares the accuracy of two classification algorithms and examines how they differ in classifying the different celestial objects the data represents. This comparison in accuracy will help identify the simpler and better method for classifying astronomical data.
Modified Non-Local Means Model for Speckle Noise
Reduction in Ultrasound Images
Shereena V B1 and Raju G2
1School of Computer Sciences, Mahatma Gandhi University, Kerala, India
2Christ University, Bengaluru, India
Abstract. In the modern healthcare field, various medical imaging modalities play a vital role in diagnosis. Among these modalities, medical ultrasound imaging is the most popular and economical. But its vulnerability to multiplicative speckle noise is challenging, as the noise obscures accurate diagnosis. To reduce the influence of speckle noise, various noise filtering models have been proposed. But while filtering the noise, these filters exhibit limitations such as high computational complexity and loss of detailed structures and edges of organs. In this article, a novel non-local means (NLM) based model is proposed for the speckle reduction of ultrasound images. The design parameters of the NLM filter are obtained by applying Grey Wolf Optimization (GWO) to the input image. The optimized parameters and the noisy image are passed to the NLM filter to get the denoised image. The efficiency of the proposed method is evaluated with standard performance metrics. A comparative analysis with existing methods highlights the merit of the proposal.
80
Improved Color Normalization Method for
Histopathological Images
Surbhi Vijh1, Mukesh Saraswat2, Sumit Kumar1
1Amity University, Noida, Uttar Pradesh, India
2Jaypee Institute of Information Technology, Noida, Uttar Pradesh, India
Abstract. The exponentially growing demand for computer-aided systems has significantly increased the detection of cancerous cells from digital histopathology images. However, manual sectioning of cells and color variation inevitably create challenges and affect the performance of computer-assisted diagnosis (CAD) through misclassification. Therefore, color normalization of Hematoxylin and Eosin (H&E) images plays an important role in attaining promising outcomes. This paper proposes an improved color normalization method that incorporates a Gaussian function in the fuzzy modified Reinhard method to enhance the intensity and contrast of the image. To evaluate the improved fuzzy modified Reinhard (IFMR) algorithm, a comparative analysis is performed on several mathematical quality metrics. The observations show that the proposed algorithm provides better results and works efficiently in comparison to existing methods.
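The core of Reinhard-style normalization is matching the channel statistics of a source image to those of a target. Below is a minimal per-channel sketch; the lαβ colour-space conversion of the original Reinhard method and the paper's fuzzy/Gaussian modification are deliberately omitted:

```python
import statistics

def reinhard_normalize(source, target):
    """Match each channel of `source` (list of channels, each a flat list of
    pixel values) to the mean and standard deviation of the corresponding
    `target` channel: Reinhard-style statistics transfer."""
    out = []
    for s_chan, t_chan in zip(source, target):
        s_mu, s_sd = statistics.mean(s_chan), statistics.pstdev(s_chan)
        t_mu, t_sd = statistics.mean(t_chan), statistics.pstdev(t_chan)
        scale = t_sd / s_sd if s_sd else 1.0
        out.append([(v - s_mu) * scale + t_mu for v in s_chan])
    return out
```

After the transform, each normalized channel has the target channel's mean and spread, which is what makes slides from different labs comparable.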
Analyzing Voice Patterns to Determine Emotion
Amit Kumar Bairwa, Vijandra Singh, Sandeep Joshi and
Vineeta Soni
Manipal University Jaipur, Rajasthan, India
Abstract. The human voice is extremely flexible and conveys a huge range of feelings. Emotion in speech conveys additional understanding about human activities. Through further investigation, we can better comprehend the intentions of individuals, whether they are unhappy customers or cheering fans. Speech emotion analysis focuses on the nonverbal elements of speech and uses numerous approaches to evaluate vocal behaviour as a marker of affect (e.g., emotions, moods, and tension). The underlying premise is that there is a set of objectively quantifiable vocal characteristics that represent a person's current affective condition. In this study, we investigate the classification of emotions in our speech samples. We begin by defining the data that will be used. We then describe our technique, and through this analysis we look at the best methods for selecting characteristics that are important to emotion prediction. We also explore a variety of machine learning algorithms for classifying emotion.
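Two of the objectively quantifiable vocal characteristics such analyses commonly use are short-time energy and zero-crossing rate, computed per frame of the waveform. This is an illustrative sketch; the paper's actual feature set is not specified here:

```python
def frame_features(signal, frame_len=160):
    """Per-frame short-time energy and zero-crossing rate, two classic
    prosodic features used in speech emotion analysis.
    `signal` is a list of samples; returns one (energy, zcr) pair per frame."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats
```

Loud, agitated speech tends to show high energy, while zero-crossing rate tracks the noisiness/pitch character of the frame; vectors of such features are then fed to the classifiers.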
81
Face and Emotion Recognition from Real-Time Facial
Expressions using Deep Learning Algorithms
Shrinitha Monica and R. Roseline Mary
CHRIST (Deemed to be University), Bangalore, India
Abstract. Emotions are faster than words in the field of Human-Computer Interaction. Identifying human emotions can be performed by a multimodal approach that includes body language, gestures, speech and facial expressions. This paper throws light on emotion recognition via facial expressions, as the face is the basic index of expressing our emotions. Though emotions are universal, they vary slightly from one person to another. Hence, the proposed model first detects the face using Histogram of Oriented Gradients (HOG) features with a Linear Support Vector Machine (LSVM), and then the emotion of that person is detected through deep learning techniques to increase the accuracy percentage. The paper also highlights the data collection and pre-processing techniques. Images were collected using a simple Haar classifier program, resized, and pre-processed by removing noise using a mean filter. The model achieved accuracies of 97% for face recognition and 92% for emotion recognition.
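The HOG step above accumulates gradient magnitudes into orientation bins. A minimal single-cell sketch for a grayscale image given as a list of rows (real HOG pipelines add cell grids and block normalization, omitted here):

```python
import math

def hog_cell(img):
    """Histogram of oriented gradients for one cell of a grayscale image:
    9 orientation bins of 20 degrees each over [0, 180)."""
    bins = [0.0] * 9
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]       # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]       # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180
            bins[int(ang // 20) % 9] += mag          # vote weighted by magnitude
    return bins
```

The resulting 9-dimensional descriptors (concatenated over many cells) are what a linear SVM separates into face vs. non-face windows.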
Internet Based Healthcare Things Driven Deep Learning
Algorithm for Detection and Classification of Cervical
Cells
Shruti Suhas Kute1, Amit Kumar Tyagi1,2, Shaveta
Malik3, Atharva Deshmukh3
1School of Computing Science and Engineering, Vellore Institute of Technology, Chennai, 600127, Tamil Nadu, India
2Centre for Advanced Data Science, Vellore Institute of Technology, Chennai, 600127, Tamil Nadu, India
3Terna Engineering College, Navi Mumbai, India
Abstract. Cervical cancer has been one of the major health concerns, as it has increased the death rates caused by cancer among women. However, its early detection can definitely have a huge impact by reducing the possible complications and other mishaps. The Internet of Healthcare Things (IoHT) collectively refers to the unique set of healthcare devices interconnected over the internet so as to communicate and exchange data with each other. Deep learning, one of the major subsets of Artificial Intelligence (AI), offers a plethora of algorithms which can be extensively utilized for cell detection and classification of the extracted images. Convolutional Neural Network (CNN) models can be used to analyse and survey the features and attributes highlighted through the deep learning techniques. Despite the fact that cervical cancer is a highly preventable disease, the proportion of women who have been affected and adversely exposed to its consequences is extremely high. This paper discusses the techniques which can be used to detect the presence of cervical cancer by integrating IoHT and deep learning related algorithms.
82
Review on Novel Coronavirus Disease COVID-19
Aditi Rawat and Anamika Ahirwar
Jayoti Vidyapeeth Women's University, Jaipur, India
Abstract. The start of 2020 saw the rise of the coronavirus disruption brought about by a novel virus called SARS-CoV-2. As indicated by the World Health Organization (WHO), the coronavirus (COVID-19) pandemic is putting even the best medical management systems across the world under enormous pressure. The early detection of this kind of infection will help in relieving the burden on healthcare systems. The COVID-19 pandemic is causing a significant outbreak in more than 150 nations around the globe, severely affecting the health and life of numerous individuals worldwide. In such situations, innovative technologies such as blockchain and Artificial Intelligence (AI) have emerged as promising solutions for battling the coronavirus outbreak. Chest X-rays have been playing a critical role in the diagnosis of diseases like pneumonia. As COVID-19 is a respiratory illness, it is possible to diagnose it using this imaging method. With rapid advances in the areas of Machine Learning (ML) and deep learning, intelligent frameworks have been built to classify between pneumonia and normal patients. This paper proposes ML-based classification of deep features extracted using ResNet152 from chest X-ray images of COVID-19 and pneumonia patients. SMOTE is used for balancing the imbalanced data points of COVID-19 and normal patients. In light of COVID-19 radiographical changes in CT images, we hypothesized that Artificial Intelligence's deep learning techniques may be able to extract COVID-19's specific graphical features and provide a clinical diagnosis ahead of the pathogenic test, thereby saving crucial time for disease control.
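The class-balancing step can be illustrated with a SMOTE-style interpolation sketch. This is an assumption of how the abstract's balancing of imbalanced data could work, not the paper's exact procedure:

```python
import random

def smote_like(minority, n_new, k=3, seed=0):
    """Create n_new synthetic minority-class samples by interpolating between
    a random sample and one of its k nearest neighbours (SMOTE-style sketch;
    `minority` must contain at least 2 feature vectors)."""
    rng = random.Random(seed)
    synth = []
    for _ in range(n_new):
        a = rng.choice(minority)
        neigh = sorted((p for p in minority if p is not a),
                       key=lambda p: sum((u - v) ** 2 for u, v in zip(a, p)))[:k]
        b = rng.choice(neigh)
        t = rng.random()                       # interpolation factor in [0, 1)
        synth.append([u + t * (v - u) for u, v in zip(a, b)])
    return synth
```

Because each synthetic point lies on a segment between two real minority samples, the new points stay inside the minority class's region of feature space.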
Brain Tumor Analysis and Reconstruction Using Machine
Learning
Priyanka Sharma1, Dinesh Goyal2 and Neeraj Tiwari1
1Poornima University Jaipur, India
2Poornima Institute of Engineering and Technology, Jaipur, India
Abstract. The enormous success of image recognition machine learning algorithms in recent years coincides with a period when electronic medical records and diagnostic imaging have been used substantially. This article presents machine learning techniques for medical image analysis, concentrating on convolutional neural networks and highlighting clinical features. Due to its record-breaking performance, deep learning has lately become a solution for quantitative analysis. However, the examination of medical images is unique. Brain tumors are among the most prevalent and aggressive illnesses, leading at their highest grade to extremely short life expectancy. MRI images are utilized in this work to diagnose brain tumors. But the enormous amount of data produced by MRI scanning at a given moment thwarts manual tumor vs. non-tumor categorization. Automatic brain tumor identification applying CNN classification is suggested in this paper. The deeper design of the architecture is achieved with smaller kernels, and the weight of each neuron is kept small. Experimental findings reveal that the CNN achieves a 97.5 percent accuracy rate with little complexity compared to state-of-the-art methods.
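The small-kernel convolutions at the heart of such a CNN reduce, at each output position, to a sliding dot product between the kernel and an image window; a minimal sketch:

```python
def conv2d(img, kernel):
    """Valid-mode 2-D convolution (really cross-correlation, as in most CNN
    libraries): slide `kernel` over `img` and take dot products."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(img) - kh + 1):
        row = []
        for x in range(len(img[0]) - kw + 1):
            row.append(sum(img[y + i][x + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out
```

Stacking many such layers with small (e.g. 3x3) kernels gives the "deeper design with smaller kernels" the abstract refers to, at far fewer parameters than a few large kernels.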
83
Development of Multiple Regression Model for Rainfall
Prediction
Nusrat Jahan Prottasha1, Md. Jashim Uddin2, Boktiar
Ahmed Ahmed3, Rokeya Khatun Shorna1 and Md.
Kowsher2
1Daffodil International University, Bangladesh
2Noakhali Science and Technology University, Bangladesh
3Jhenaidah Polytechnic Institute, Bangladesh
Abstract. Rainfall forecasting is imperative, as overwhelming precipitation can lead to numerous catastrophes, and the prediction helps individuals take preventive measures. In addition, the prediction ought to be precise. Many nations in the world are agricultural nations, and much of the economy of any such nation depends upon agriculture. Rain plays an imperative part in agriculture, so the early prediction of rainfall plays a vital part in any agricultural economy. Overwhelming precipitation may well be a major disadvantage: it is a cause of natural disasters like floods and droughts that are experienced by people all over the world each year. Rainfall forecasting has been one of the foremost challenging issues around the world in recent years. Many techniques have been invented for predicting rainfall, but most of them are classification and clustering techniques. Predicting the quantity of rainfall is crucial for a country's people. In our work, we have proposed some regression analysis techniques which can be utilized for predicting the quantity of rainfall (the amount of rainfall recorded for the day, in mm) based on a historical weather conditions dataset. We have applied 10 supervised regressors (machine learning models) and some pre-processing methodology to the dataset. We have also analysed the results and compared the trained models using various statistical parameters to find the best-performing model, which can then be used for predicting the quantity of rainfall in different places. Finally, the Random Forest regressor yielded the best r2 score of 0.869904217, with a mean absolute error of 0.194459262, a mean squared error of 0.126358647, and a root mean squared error of 0.355469615.
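The four reported figures come from standard formulas; computing them together makes the comparison across the 10 regressors mechanical (an illustrative helper, not the paper's code):

```python
def regression_metrics(y_true, y_pred):
    """r2, MAE, MSE, and RMSE for a list of true vs. predicted values."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mse = ss_res / n
    return {"r2": 1 - ss_res / ss_tot, "mae": mae, "mse": mse, "rmse": mse ** 0.5}
```

Each trained regressor's predictions on a held-out set are passed through this once, and the model with the highest r2 (and lowest errors) is selected.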
Qualitative Classification of Wheat Grains using
Supervised Learning
N. Neelima N, Lohith K, Sarveswar Rao P and Satwik K
Amrita University, India
Abstract. Agriculture plays a significant part in the Indian economy. Wheat is India's second-most cultivated crop. Damage to the wheat grain is the main cause of the degradation of food quality. In addition, feeding on products from spoiled wheat grains over the long term induces diseases or leads to malnutrition. Hence, detecting damaged wheat grains is important. This work is aimed at the prediction of the quality of wheat grains. Initially, the wheat grain dataset is taken and pre-processing is performed, followed by segmentation. After this, feature extraction and classification are performed. Finally, the performance analysis is carried out. In two-class classification, an MLP classifies the grains as good grains or impurities. In five-class classification, the MLP classifies wheat as healthy, damaged, grain cover, broken grain, or foreign particles. The performance of the proposed system is analyzed in terms of test loss and accuracy, which shows an efficient outcome. A comparative analysis is also performed, and the results reveal that the proposed MLP achieves a classification accuracy of 90.19%, improving over existing methods.
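The MLP classifier above maps grain features through hidden units to class probabilities. A minimal forward-pass sketch with a five-way softmax output mirroring the five-class case (the weights here are assumed toy values, not the paper's trained network):

```python
import math

def mlp_forward(x, w1, b1, w2, b2):
    """One-hidden-layer MLP forward pass: ReLU hidden units, softmax output.
    w1: hidden x input weights, w2: output x hidden weights."""
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    logits = [sum(wi * hi for wi, hi in zip(row, hidden)) + b
              for row, b in zip(w2, b2)]
    mx = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(z - mx) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]
```

Training (backpropagation) adjusts `w1, b1, w2, b2` so the probability mass lands on the correct grain class; only the inference direction is shown here.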
Fitness based PSO for Large Scale Job Shop Scheduling
Problem
Kavita Sharma1 and P.C. Gupta2
1Government Polytechnic College, Kota, India
2University of Kota, Rajasthan, India
Abstract. The large-scale job-shop scheduling problem (LSJSSP) is among the most complex scheduling problems. Researchers are continuously working to tackle the LSJSSP by applying various probabilistic algorithms, including swarm intelligence based as well as evolutionary algorithms, yet without obtaining optimal results, so it remains an interesting area. Therefore, in this paper a recently developed non-deterministic algorithm, namely fitness-based particle swarm optimization (FitPSO), is applied to solve LSJSSP problem instances. In the proposed solution, a fitness-based solution update strategy is incorporated into the PSO strategy to get the desired results. The obtained outcomes are encouraging, and the results analysis gives confidence that the proposed FitPSO can be recommended for solving existing and new LSJSSP instances. A fair comparative analysis is also presented which further supports the proposed recommendation.
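A minimal continuous PSO with one fitness-based twist can sketch the idea. Here the twist is an assumption for illustration (particles with better personal-best fitness get lower inertia, so good particles search more cautiously); the paper's exact fitness-based update rule, and its encoding of schedules, are not reproduced:

```python
import random

def fitpso(f, dim, n=20, iters=200, seed=1):
    """Minimal PSO minimising f over [-5, 5]^dim with a fitness-based inertia
    weight (illustrative; not the paper's exact rule)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    gbest = min(pbest, key=f)[:]
    gbest_f = f(gbest)
    for _ in range(iters):
        worst = max(pbest_f) or 1.0
        for i in range(n):
            w = 0.4 + 0.5 * pbest_f[i] / worst   # fitter particle -> lower inertia
            for d in range(dim):
                v = (w * vel[i][d]
                     + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                     + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-2.0, min(2.0, v))   # clamp the velocity
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

For job-shop instances, the continuous positions would additionally be decoded into operation orderings (e.g. by ranking), which this sketch omits.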
An Overview of Blockchain and IoT in e-Healthcare
System
S.V. Vandana Somayajula and Ankur Goyal
KL (Deemed to be University), Hyderabad, India
Abstract. Blockchain technologies and the Internet of Things (IoT) are being applied tremendously in many fields, especially in e-Healthcare. For the safe, secure delivery of healthcare data management, applications of blockchain offer remarkable support. Patients' privacy and security are becoming a worry as the number of IoT devices in the healthcare system grows at an exponential rate. IoT devices can provide real-time sensory data such as clinical trials, device tracking, and patients' health insurance details for better tracking and pharmaceutical tracing. In IoT, the problem arises when there is manipulation or tampering of data, or any point of failure, especially in healthcare. A blockchain is a form of ledger consisting of distributed records that are unmodifiable and transparent through replication among public/private networks. Because of its competency and convenience for people's lifestyles, the mobile healthcare system is receiving a lot of attention. This paper examines an overview of blockchain-based authentication technologies, as well as answers to security concerns and developments in healthcare via blockchain and IoT integration. It also discusses the applications of blockchain in e-healthcare for designing decentralized IoT-based e-Healthcare systems, along with methods, statistics, and success cases. Further, this paper presents the challenges of blockchain-based smart healthcare systems.
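The tamper-evidence property that makes blockchain attractive for health records comes from hash chaining: each block stores the previous block's hash, so altering any earlier record invalidates every later hash. A minimal sketch (illustrative; real systems add consensus, signatures, and distribution):

```python
import hashlib
import json

def make_block(records, prev_hash):
    """A minimal block: payload plus the previous block's hash, sealed with
    a SHA-256 digest of the whole body."""
    body = {"records": records, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_valid(chain):
    """Recompute every digest and check each block points at its predecessor."""
    for i, blk in enumerate(chain):
        body = {"records": blk["records"], "prev": blk["prev"]}
        good = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if blk["hash"] != good:
            return False
        if i and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

Appending IoT sensor readings as block records and replicating the chain across hospital nodes is what gives the "unmodifiable and transparent" ledger the abstract describes.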
85
Priority Based Replication Management for HDFS
Dilip Rajput, Ankur Goyal and Abhishek Tripathi
KL (Deemed to be University), Hyderabad, India
Abstract. The Hadoop Distributed File System (HDFS) provides a fault-tolerant and reliable way of storing data in a distributed manner. First, data is divided into blocks, and then each block is assigned a data node by the Name node. As the cluster consists of commodity hardware, replication of blocks is done to offer fault tolerance. In the latest version of Hadoop, the default block size is 128 MB. Data is put into the cluster by the user; it is divided into blocks and placed on data nodes. After successful placement of a data block, an acknowledgment is sent to the master, and in this way the master forms metadata. This metadata is used when the user wishes to access the data again. To provide fault tolerance, Hadoop replicates each block of a file; by default, 3 copies are formed. The first copy is placed at the data node geographically nearest to the client, to decrease the access cost. Then the data node holding the original block replicates it to another data node, and that data node again replicates the block, resulting in 3 replicas. In this work, to further improve the scheduling algorithm, we have also modified the data replication approach. We have designed a priority-based replication scheme in which the second and third replicas are formed based on priorities: the second replica is formed at a data node having high priority, and the third at a data node having low priority and sufficient available space.
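The placement policy described above can be sketched as a simple selection over candidate data nodes. The node fields (`free`, `location`, `priority`) and a 1-D distance are assumptions made for illustration; the paper's priority computation itself is not reproduced:

```python
def place_replicas(nodes, block_size, client_location):
    """Pick 3 data nodes for a block: first the node nearest the client,
    then the highest-priority node, then a low-priority node. Only nodes
    with enough free space are considered."""
    usable = [n for n in nodes if n["free"] >= block_size]
    first = min(usable, key=lambda n: abs(n["location"] - client_location))
    rest = [n for n in usable if n is not first]
    second = max(rest, key=lambda n: n["priority"])      # high-priority node
    rest = [n for n in rest if n is not second]
    third = min(rest, key=lambda n: n["priority"])       # low-priority node
    return [first["name"], second["name"], third["name"]]
```

The first choice preserves HDFS's nearest-to-client rule, while the second and third choices implement the priority-based scheme instead of the default rack-aware replication.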
Limacon Inspired PSO for LSSMTWTS Problem
Shruti Gupta and Rajani Kumari
Career Point University, Kota, India
CHRIST (Deemed to be University), Bangalore, India
Abstract. The large-scale single machine total weighted tardiness scheduling problem (LSSMTWTSP) is a complex NP-hard problem in which a set of unrelated tasks with varying criteria must be scheduled on a single machine. The problem's goal is to find the lowest total weighted tardiness possible. For the last few decades, the Particle Swarm Optimization Algorithm (PSOA) has performed admirably in the field of optimization, and several new variants of PSOA are being created to solve complex optimization problems. In this article, an effective local search (LS) technique, designed by taking inspiration from the limaçon curve, is incorporated into PSOA, and the resulting strategy is named the Limacon inspired PSO (LimPSO) algorithm. The efficiency and accuracy of the designed LimPSO strategy is tested on the LSSMTWTS problem, which shows that LimPSO can be considered an effective method for solving combinatorial optimization problems.
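One plausible reading of a limaçon-inspired local search is to probe candidate points laid out on a limaçon curve r = a + b·cos(t) around the current best solution and keep the best probe. This 2-D continuous sketch is an assumption for illustration; the paper's actual LS operator and its schedule encoding are not reproduced:

```python
import math

def limacon_search(f, center, a=0.5, b=1.0, samples=36):
    """Local search sketch: evaluate f at points on a limaçon r = a + b*cos(t)
    centred on the current best 2-D solution; return the best point found."""
    best, best_f = list(center), f(center)
    for i in range(samples):
        t = 2 * math.pi * i / samples
        r = a + b * math.cos(t)                  # limaçon radius at angle t
        cand = [center[0] + r * math.cos(t), center[1] + r * math.sin(t)]
        fc = f(cand)
        if fc < best_f:
            best, best_f = cand, fc
    return best, best_f
```

Because the limaçon's radius varies with angle (including an inner loop when b > a), the probes mix short and long steps in different directions, which is the kind of asymmetric neighbourhood a PSO iteration can exploit between velocity updates.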
86
Visualizing Missing Data
Gajula Raja Gopal, Mandasu Bhargavi, Valiveti Akhil
Lakireddy Bali Reddy College of Engineering (LBRCE), India
Abstract. This paper is all about the visual representation of missing values and of the actual data, with the help of COVID-19 datasets. We take the COVID-19 datasets of three states (Andhra Pradesh, Telangana, Tamil Nadu). Initially, we visualize the actual datasets using Python programming. Thereafter, we apply missingness of different percentages to the actual dataset, obtaining different datasets with missing values. We then apply different types of imputation methods to the newly obtained datasets, yielding new datasets with predicted values in place of the missing values. Next, we use regression methods in order to decrease the margin between the highest and lowest values, giving a dataset with modified values, which is considered the final dataset. After getting the final dataset, we measure the accuracy of these techniques against the original values and determine the best technique for finding the missing values in the table. This dataset is then processed with different data visualization techniques in order to represent the data in different forms, such as bar charts, line charts, and scatter charts.
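The imputation-and-scoring loop can be illustrated with the simplest scheme, mean imputation, plus an error measure against the withheld original values. This is a sketch; the paper compares several imputation methods not shown here:

```python
import statistics

def mean_impute(column):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mu = statistics.mean(observed)
    return [mu if v is None else v for v in column]

def imputation_error(true_col, imputed_col, mask):
    """Mean absolute error, computed only on the positions that were missing
    (mask[i] is True where the value had been removed)."""
    errs = [abs(t - p) for t, p, m in zip(true_col, imputed_col, mask) if m]
    return sum(errs) / len(errs)
```

Running each imputation method through `imputation_error` on the same artificially-masked columns is what lets the best technique be identified before the final visualizations are drawn.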