IJCSS, Vol.4, No.1, 2012
ISSN: 1803-8328
© USAR Publications
S. No.  Paper Title (Author)  Page No.
1  Data Warehouse in Telecommunication Industry: Survey and Classification (Hoda A. Abdelhafez)  2–15
2  Comparison of Artificial Neural Network (ANN) and Explicit Equations for Estimation of the Friction Factor in Pipes (Farzin Salmasi)  16–24
3  Impact of Cloud Computing in Developing the Education Process (Ibrahiem M. M. El Emary)  25–33
Data Warehouse in Telecommunication Industry: Survey and classification
Hoda A. Abdelhafez
Information Systems & Decision Support Dept.,
Faculty of Computers & Informatics, Suez Canal University
Al-Shekh Zayid Street, Old campus, Ismailia,
Egypt
Abstract
A data warehouse for decision support is an advanced tool that telecommunications companies use to deal with huge amounts of data. It helps the telecommunications industry cope with competitive pressures and achieve higher profits. This paper focuses on data warehousing in telecommunications companies and on why they need new technology and data warehouse platforms. The results demonstrate that advanced data warehouse platforms are capable of handling large volumes of CDRs and providing daily reports for decision makers.
Keywords: data warehouse; telecommunications industry; data warehouse appliance; Teradata; SOA; cloud computing; Exadata.
1. Introduction
Data warehouse is defined as a subject oriented,
integrated, time variant, non volatile collection of
data in support of management's decision making
process (Hotchkiss, 2009). Data warehouse
includes historical data for many years. This stored
data is extracted, transformed, and loaded from
different data sources such as mainframe
applications, OLTP applications, or external
sources.
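The extract-transform-load (ETL) flow described above can be sketched in miniature. The sketch below is illustrative only: the column names, the toy CDR rows, and the minute-rounding rule are invented for the example, and an in-memory SQLite table stands in for the warehouse.

```python
import csv
import io
import sqlite3

# Raw CDR export, e.g. from an OLTP billing application (toy data, invented).
raw_cdrs = io.StringIO(
    "caller,callee,start,duration_sec\n"
    "201555000111,201555000222,2012-01-05 09:12:00,75\n"
    "201555000111,201555000333,2012-01-05 11:40:00,310\n"
)

def extract(source):
    """Extract: read raw records from the source system."""
    return list(csv.DictReader(source))

def transform(records):
    """Transform: cast types and derive a billed-minutes field."""
    out = []
    for r in records:
        duration = int(r["duration_sec"])
        out.append({
            "caller": r["caller"],
            "callee": r["callee"],
            "start": r["start"],
            "duration_sec": duration,
            # Round up to whole billed minutes (an assumed example rule).
            "billed_min": (duration + 59) // 60,
        })
    return out

def load(rows, conn):
    """Load: append the cleaned rows into the warehouse fact table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS cdr_fact "
        "(caller TEXT, callee TEXT, start TEXT, duration_sec INT, billed_min INT)"
    )
    conn.executemany(
        "INSERT INTO cdr_fact VALUES "
        "(:caller, :callee, :start, :duration_sec, :billed_min)",
        rows,
    )
    conn.commit()

warehouse = sqlite3.connect(":memory:")
load(transform(extract(raw_cdrs)), warehouse)
total = warehouse.execute("SELECT SUM(billed_min) FROM cdr_fact").fetchone()[0]
print(total)  # 75s rounds to 2 min, 310s to 6 min -> 8
```

In a real warehouse the load step appends to a historical fact table rather than a fresh in-memory database, but the three-stage shape is the same.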
Early data warehouses were built on database management systems oriented toward transaction processing, but most vendors specializing in database management systems (DBMSs) have since made the transition to data warehousing. These vendors include IBM (DB2), Oracle, Microsoft (SQL Server on NT), and Teradata. IBM's DB2 offered both SMP and MPP architectures; Oracle applied an SMP architecture; SQL Server started small and inexpensive. Meanwhile, Teradata specialized in handling huge amounts of data using a shared-nothing MPP solution (Inmon, 1995; Inmon, 2007).
The purpose of this paper is to focus on new data warehouse platforms through a survey of telecommunication companies and to illustrate how these companies cope with the increasing pressure of analyzing massive volumes of Call Detail Records (CDRs). The results show that the new DW platforms in the telecommunication industry can perform detailed analytics on large volumes of CDRs and can scale to hundreds of terabytes of data with excellent query performance.
This paper includes the following sections: traditional data warehouse, advanced data warehouse platforms, new data warehouse platforms in telecommunications companies, Oracle Exadata, and the benefits of advanced DW platforms compared with the traditional data warehouse.
2. Related Works
Within telecommunication companies, there are three infrastructure groups: business management, service management, and network management. Together, these groups might build and utilize a data warehouse for a decision support system. These companies are installing call detail record (CDR) based decision support systems, which contain a gold mine of information about customers, products, networks, and competitors, in order to maximize access to and use of this corporate information (Conine, 1998).
In the telecommunication industry, the first generation of data warehouse applications was based on a push approach, which loaded all call detail records (CDRs). The result was huge warehouses that were rich in data but poor in business intelligence. The second generation of data warehouse applications was based on a pull approach. This approach reduces storage requirements and ensures that decision makers can view the contents of the data warehouse in a meaningful and useful format (Frost, 2009).
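The contrast between the two generations can be illustrated with a small sketch. The toy data and the billable-usage aggregation are invented examples of a pull-style load, not the specific scheme Frost describes:

```python
# Toy CDR stream: (subscriber_id, duration_sec, is_billable) -- invented data.
cdrs = [
    ("A", 120, True),
    ("A", 30, False),   # e.g. a dropped call, not billable
    ("B", 600, True),
    ("B", 45, True),
    ("C", 15, False),
]

def push_load(records):
    """First generation: push every raw CDR into the warehouse unchanged."""
    return list(records)

def pull_load(records):
    """Second generation: pull only what decision makers need,
    e.g. billable usage aggregated per subscriber."""
    usage = {}
    for sub, dur, billable in records:
        if billable:
            usage[sub] = usage.get(sub, 0) + dur
    return usage

print(len(push_load(cdrs)))  # 5 raw rows stored
print(pull_load(cdrs))       # {'A': 120, 'B': 645}
```

The pull version stores two summary rows instead of five raw ones, which is the storage reduction (and the more meaningful format) the text describes.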
The aim of this paper is to compare the traditional data warehouse with new DW platforms in telecommunication companies and to describe how these companies apply the new data warehouse platforms. The main finding is that using new DW platforms can help telecom companies gain a competitive advantage in the market.
3. Traditional Data Warehouse
Telecommunications has become one of the most competitive business arenas in today's market because of three forces: technology development, user demand, and deregulation. Early on, long-distance service was the heart of telecommunication companies. Nowadays there are new services, such as cellular phones and wireless Internet, and users demand these services with ever higher quality (Krivda, 2008).
Telecommunication companies deal with large amounts of customer input; these inputs come from numerous channels, including call centers and a growing number of online transactions, and a data warehouse is the best way to integrate and manage these transactions. Moreover, each company needs a full history of customer transactions in its database in addition to the new transactions. Although these issues are similar to those faced by many companies applying data warehouses, the telecommunications industry does have some unique requirements. The sheer volume of data is one of them.
Huge amounts of information could be stored in traditional data warehouse architectures, but in the new millennium the massive growth of data volumes exposed their inability to provide the detailed analysis that businesses needed in a timely manner. On these traditional data warehouses, SQL queries took days or even weeks to return the required information, which by then was out of date, especially in this type of business (Lamont, 2000).
Since there is a difference between a data warehouse and the platform that manages it, the DW can be remodeled significantly to add value without replacing the platform. Some next-generation efforts involve tools that are tangential to the platform, such as solutions for data integration, data quality, master data, and reporting. Incremental additions to hardware are also common (adding more CPUs, memory, or storage), and these satisfy next-generation requirements (fast queries, in-memory databases, and scalability) by doing more with the current platform (Russom, 2009).
4. New Data Warehouse Platforms
The telecommunications industry has been a major user of traditional data warehousing technologies for many years. However, the cost and difficulty of scaling these platforms has limited their ability to support large-scale analysis of CDRs. As a result, data warehousing has generally been limited to supporting the billing cycle. This situation is getting worse as new sources of traffic such as VoIP push volumes beyond one billion records per day. Legislation and competitive pressures are forcing carriers to retain CDRs for up to 25 months. In addition, fraud detection and network traffic analysis require near-real-time access to data (Frost, 2007).
Advanced data warehouse platforms such as data warehouse appliances and software appliances, along with columnar databases, provide many more options and are attracting new interest today. Moreover, open-source Linux is also common in data warehousing. More recently, new platforms have included real-time integration between the data warehouse platform and operational applications, several types of advanced analytics, and reusable interfaces (Russom, 2009).
According to a survey conducted by The Data Warehousing Institute, real-time data warehousing has the greatest projected growth rate of the DW options surveyed. Vendors of database management systems (DBMSs) have added new features to their products and helped develop best practices for real-time data warehousing and similar DW options, such as Teradata's "active data warehousing" and IBM's "dynamic data warehousing." The prospects for data warehouse appliances (DWAs) are also very positive, based on the survey's indications of good growth. This balance of growth and commitment shows that the DWA has definitely "arrived" as a common DW platform.
Moreover, service-oriented architecture (SOA) is among the most anticipated data warehouse options; SOA brings services to data warehousing, which will lead to more real-time interfaces. There is also cloud computing, one of the newest platforms, which enables customers to leverage platforms and software that are more scalable and cost-effective. It fully utilizes server resources with less administrative work compared to traditional data center approaches (Russom, 2009).
5. Applied New Data Warehouse
Platforms in Telecommunication
Industry
This section discusses real applications of the new DW platforms in large telecommunication companies. These platforms include Data Warehouse Appliances, the Active Enterprise Data Warehouse (Active EDW), Service-Oriented Architecture (SOA), and Cloud Computing.
Data Warehouse Appliances
A. Orange UK
Orange UK is a mobile phone service provider whose network covers 99% of the UK population. Orange UK had been using Business Intelligence (BI) systems to manage and quickly leverage vast amounts of corporate data, but the growing volume of data caused problems such as low performance, significant data latency, and a strained infrastructure (Sawkins, 2009).
In 2003, Orange applied data warehouse appliance technology from Netezza to analyze billions of Call Detail Records (CDRs), becoming the first organization in Europe to use such a data warehouse system. The DW appliance provides Orange with several significant benefits in data quality, performance, and data center space. In the previous BI system, the number of queries that could be performed was very low because of poor performance; queries would take 12–24 hours. With the new system, Orange is able to perform an average of 1,800 complex queries per week. Using the Netezza DW appliance, the average query runs in up to 90 seconds, and almost all queries queue for less than 3 seconds. Orange found that improving query performance increased the quality of output and business decision making.
Decisions had been made based on data that was eight weeks old; now they are made based on daily reports.
B. Reliance Communications
Reliance Communications is India's second-largest telecommunications company, with a digital network covering over 14,000 towns and 400,000 villages. It is also the telecommunications arm of the Reliance Group, the country's largest private-sector company (Greenplum, 2008).
Traditional database systems at Reliance Communications could no longer support the business because the increasing demand for its services was producing explosive growth in its systems and infrastructure, and the need to provide accurate, timely analytics to all parts of the business was becoming more acute. Reliance's rapid growth exposed the inability of traditional database systems to scale and perform; a request for records took multiple days to deliver. Reliance applied the S1004 model of the Sun Data Warehouse Appliance, which integrates the Greenplum Database with four Sun Fire X4500 systems and a Sun Fire X4200 system. As a result of implementing the S1004, a request for detailed call records takes a few hours instead of multiple days. Compared to Reliance's previous database system, the Greenplum system reduced the average time to load a day's worth of data by over 90 percent, from 2 hours to less than 10 minutes.
Active Enterprise Data Warehouse (Active EDW)
A. KDDI
KDDI, the second-largest telecommunications company in Japan, has implemented a Teradata Active Enterprise Data Warehouse (EDW). Teradata's Active EDW platform is designed for fast processing and scalability for complex data analysis by many concurrent users. The Active EDW serves as the data retrieval and analysis platform that KDDI uses to perform multi-faceted analyses of customer and other related data from its mission-critical sales support and planning system. Testing of the Teradata Active EDW platform demonstrated a seven-fold increase in average performance over the current data warehouse system, without requiring tuning (Hotchkiss, 2009).
Teradata enables KDDI to integrate customer data and other critical information from KDDI service operations, and to analyze that information to further improve customer service. Using the Teradata Active EDW platform for extensive analysis of KDDI's business operations will help the company find ways to improve its customer service. Given these improvements over the previous system, and the lower cost of operating Teradata, KDDI will begin to realize a return on this investment.
B. Bouygues Telecom
Bouygues Telecom is a wireless telecommunications company, the third largest in France. Bouygues Telecom has applied Teradata for a real-time enterprise data warehouse (EDW). This system supports sales, finance, marketing, fraud, and revenue assurance processes by transforming customer information into enterprise intelligence (Baudet and O'Sullivan, 2008).
The data warehouse at Bouygues Telecom provides, first, the real-time business information required to support daily operations and, second, the high-level business information needed to support strategic decision-making. Implementation of the Teradata active data warehouse began with the consolidation of huge amounts of customer data from numerous data marts into a central data repository. The centralized EDW serves various departments with greater speed and consistency. It also provides users with faster access to information and an integrated view of customer relationship intelligence, increasing their insight for better decisions.
C. Vodafone New Zealand
In 2004 Vodafone became a leading telecommunications company in New Zealand. It is a subsidiary of the United Kingdom-based global telecommunications company. A competitive and tougher market led Vodafone executives to recognize that faster decision making required real-time knowledge of current conditions (Krivda, 2008).
Vodafone's legacy system was a Red Brick data warehouse, which could not support the company's needs in modeling processes, transactional analysis, or research. In 2004 Vodafone selected an enterprise data warehouse (EDW) from Teradata, along with the Teradata Communications Logical Data Model. Vodafone was able to use the advanced features of the EDW to create advanced analytics, research, and competitive intelligence. The EDW provides many benefits: multi-variant analysis, pre-designed models, online analytical processing, query and model performance, hot staging of reports, an integrated database that supports reports, fixed downstream feeds, comprehensive ad hoc analysis, and customer segmentation. The EDW is also used to enhance Vodafone's customer life cycle program and to build a scientific basis for optimal communication with customers. It gets campaigns to customers more efficiently and effectively by determining whom the company should campaign to and what is relevant to each customer. As a result, decision makers receive more accurate information faster than ever before.
D. Xinjiang Telecom
Xinjiang Telecom, part of the China Telecom
group, is a leader in the Xinjiang
telecommunication market. The number of
telephone users in Xinjiang is the largest among all
12 of China's western provinces. In order to
increase the efficiency and quality of its decision-
making process and to improve its targeted
marketing capability, Xinjiang Telecom decided to
build a data warehouse system in 2004. Xinjiang
Telecom selected Teradata's data warehouse and
data mining technology to build a single data
platform. The centralized data platform provides
powerful support to Xinjiang Telecom's decision-
making and marketing re-engineering initiative
(Gale, 2006).
Service Oriented Architecture (SOA)
A. ChungHwa Telecom Company
ChungHwa Telecom, the largest telecommunication company in Taiwan and the 14th largest in the world, sought to upgrade its existing billing system to one that was both NGOSS-compliant and SOA-based. The new system consists of an operations support system using NGOSS (Next Generation Operations Systems and Software) and implements a service-oriented architecture (SOA) that relies on an enhanced enterprise service bus (ESB). The enhanced ESB makes it possible to change business rules at runtime, thus avoiding costly shutdowns of the billing application. Implementing this system provides complete support to the billing application; as a result, the billing process cycle time has been reduced from 10–16 days to 3–4 days (Chen, Ni and Lin, 2008).
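The idea behind runtime-changeable business rules can be sketched with a minimal rule registry. This is not ChungHwa's actual ESB; the rule names, rates, and the RatingBus class are invented for illustration:

```python
# Hypothetical rating rules; the function names and rates are invented.
def flat_rate(minutes):
    return minutes * 0.10

def off_peak_discount(minutes):
    return minutes * 0.10 * 0.8

class RatingBus:
    """Routes rating requests to whichever rule is currently registered,
    so a rule can be replaced while the application keeps running."""

    def __init__(self):
        self.rules = {}

    def register(self, name, fn):
        # Swapping a rule takes effect on the next request; no restart needed.
        self.rules[name] = fn

    def rate(self, name, minutes):
        return self.rules[name](minutes)

bus = RatingBus()
bus.register("standard", flat_rate)
before = bus.rate("standard", 100)           # charged at the flat rate
bus.register("standard", off_peak_discount)  # hot-swap the rule at runtime
after = bus.rate("standard", 100)            # charged at the discounted rate
print(before, after)
```

The point of the indirection is exactly the one the paragraph makes: callers address a rule by name through the bus, so changing the rule behind the name does not require shutting down the billing application.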
B. Alestra in Mexico
Alestra is the third-largest telecommunications
provider in Mexico. It offers broadband, long
distance, and high-touch integrated
communications services to corporate and
residential customers. Alestra is using TIBCO's SOA and BPM (business process management) platform to facilitate the upgrade and integration of applications for CRM, billing, inventory management, activation, and more.
The integration was for 14 platforms and almost
100 services. Alestra’s successful approach
leverages TIBCO’s SOA platform to simplify
connectivity with both existing and new systems
(Ancira, 2009).
TIBCO's BPM and SOA software helps the company's sales force, customer care representatives, and operations team manage and fulfill customer requests throughout the entire product lifecycle and reduces the time required to complete service requests.
C. British Telecom
The British Telecom (BT) Group is one of the largest telecommunications services companies in the world, with operations in more than 170 countries. BT wants to allow customers to manage their own subscribed services online through a Web-based interface. It also needs to reduce the time and cost of integration and become much more agile. BT decided to go with Microsoft's Connected Services Framework (CSF), a SOA-based service-delivery platform that functions as an extension of Microsoft BizTalk Server, SQL Server, and Windows Server 2003. SOA enables BT to combine new offerings with those of third parties and integrate them quickly with its internal billing, provisioning, and other support systems (Erlanger, 2005).
Cloud Computing
A. AT&T
AT&T is the largest provider of mobile telephony service in the United States, with over 95.5 million wireless customers and more than 210 million total customers (AT&T report). AT&T began to use cloud computing in 2006; its cloud service, dubbed AT&T Synaptic Computer Services, delivers on-demand computing-as-a-service (CaaS). The AT&T cloud is built on the Sun Open Cloud Platform and utilizes Sun Cloud APIs in conjunction with a VMware virtual environment. The AT&T cloud is much like Amazon's: it offers a self-service portal that enables customers to add computing power or storage space on the fly, and AT&T is closely following the Amazon blueprint to duplicate that success (Bradley, 2009).
B. China Mobile
China Mobile is one of the three big integrated telecom operators in China. The company wants to capitalize on the coming cloud computing boom in the telecom industry, and it has been developing a cloud computing platform, BigCloud, to meet diversifying demand in the 3G era. In its new platform, China Mobile adopted applications and high-efficiency cloud computing management software such as parallel data mining algorithms, cloud storage, large-capacity databases, and a search engine. China Mobile's goal is to provide featured mobile Internet information services for individual and corporate users. Via its BigCloud platform, the company provides various mobile Internet services such as instant information, content storage, mobile maps, mobile mailboxes, information search, integrated communication, music downloads, and mobile directions (C114, 2009).
6. Oracle Exadata
Oracle's Exadata transformed the data warehousing industry, with customers reporting performance improvements of 10x or more. With Exadata V2, Oracle provides faster performance and the flexibility to consolidate data warehousing as well as OLTP workloads on the same machine (Russom, 2009).
The Sun Oracle Database Machine combines industry-standard hardware from Sun, Oracle Database 11g Release 2, and Oracle Exadata Storage Server Software to create a faster, more versatile database machine. It is a fully scalable and fault-tolerant package for data warehousing and transaction processing applications. The Sun Oracle Database Machine also includes Sun's FlashFire technology to cache "hot" data for dramatically improved transaction response times and throughput.
A. China Mobile Group (Liaoning Mobile)
China Mobile Group's provider in Liaoning Province (Liaoning Mobile) is the largest mobile communications provider in the province, serving 27 million customers and generating annual revenue of US$2.3 billion. The company has 14 branch offices and 56 county offices in the province and is responsible for communications network construction, maintenance, and service. Liaoning Mobile is a subsidiary of China Mobile Group, China's leading mobile communications provider.
Due to rapid growth in its mobile communications
business, Liaoning Mobile’s business operations
systems were struggling to process data in real
time, and to support an increasing number of users.
The company had to find a more efficient way to
manage and use the database resources needed to
run these systems. Therefore, Liaoning Mobile
implemented Exadata Database Machine X2-2 to
build a database cloud architecture that improved
performance and system resource utilization and
cut deployment time for new applications.
By implementing the Oracle Exadata Database Machine, Liaoning Mobile achieved a more than six-fold improvement in the performance of its business operations systems. Staff satisfaction and productivity increased, as improved system response speeds reduced wait times for processing queries and transactions. Liaoning Mobile has also eliminated the need for continuous adjustments to databases, servers, and storage systems as data volumes grow, making database expansion easier (Oracle Customer Success Stories, 2011).
B. Turkcell
Turkcell İletişim Hizmetleri A.Ş. is Turkey's leading GSM (Global System for Mobile Communications) operator. It has more than 34 million subscribers and ranks third in Europe and 16th in the world by number of subscribers. Turkcell covers approximately 83% of the Turkish population through its 3G network and 99% through its 2G network. The company manages 250 terabytes of data in an enterprise data warehouse, including more than 500 Oracle databases, with more than 150 new databases under development.
Running reports required analysis of up to 1.5 billion call data records generated daily by the company's customers. To overcome these challenges, the company applied the Oracle Exadata Database Machine, reducing the size of its data warehouse to 25 terabytes with hybrid columnar compression and simplifying the system architecture from ten storage cabinets to one full rack. Turkcell replaced 10 data-storage cabinets and a Sun M9000 server with a single full-rack, high-performance Exadata Database Machine. Oracle Exadata reduced the mean time to produce a report tenfold, from 27 minutes to just 3 minutes, and doubled reporting speed from 45% to 90% of all reports (Oracle Customer Snapshot, 2010; Oracle Customer Success Stories, 2011).
C. SK Telecom
SK Telecom is the largest mobile communications service provider in Korea. The company specializes in data-driven applications and advanced multimedia services delivered to cell phones, personal digital assistants, and MP3 players. It provides music and streaming video services such as movies, video clips, animation, games, sports, and television programs, as well as real-time financial information (stock trades, investments, and insurance policies) (Oracle customer case study, 2010). To ensure it could handle the growth in data volumes and to improve billing verification and analysis to prevent errors, SK Telecom decided to implement the Oracle Database Machine with Exadata as its new database storage platform.
SK Telecom's current billing system manages data and information linked to more than 210 wired and wireless Internet service systems. These systems process an average of 500 to 600 million billing transactions daily.

The Oracle Database Machine is linked to the billing system to enable the reliable collection, storage, and analysis of complex billing data in a timely manner. It analyzes the billing data and highlights any inconsistencies so that errors can be fixed before bills are sent to customers. Since moving to the Oracle Database Machine, data warehouse query performance has improved ten-fold. The platform can analyze 24 TB of usable data in a 50-day window, equivalent to about 1.2 billion transactions.
D. SoftBank
SoftBank Mobile Corporation, established in 1986, is a leading mobile telecommunications service provider based in Tokyo, Japan. It offers a range of mobile services that run on Wideband Code Division Multiple Access (W-CDMA) and Universal Mobile Telecommunications System (UMTS) 3G networks. SoftBank Mobile has achieved the highest growth in Japan's mobile phone market over the past two years. An increase in new subscribers, from a previous average of 50,000 per month to more than 200,000 per month, strained the company's data warehouse. In 2009, SoftBank decided to replace 36 Teradata racks with just three Oracle Exadata racks after testing the Oracle Exadata Database Machine; during this test, the company's data warehouse performance improved by up to eight times. The new data warehouse, running on Oracle Exadata, is connected to the customer care and billing system, which runs on an Oracle database. It can store up to 150 TB of data, an increase in capacity of 150% over the previous Teradata solution, while reducing database running costs by 50% and operational costs by more than half (Oracle customer case study, 2011).
7. Advantages of the New DW
Platforms Compared with
Traditional Data Warehouse
Many critical telecommunications functions rely on fast, complex analysis of CDR data, including billing, revenue assurance, customer relationship management (CRM), and network performance. Applying a single complex BI query against billions of records using traditional DW systems takes hours or days, which results in incomplete information for decision-making (Business Intelligence Guide, 2006). Moreover, the terabytes of dynamic customer data will continue to expand as carriers add new services and as IP-based traffic increases. This expanding volume of data is straining the performance capabilities of the relational databases, servers, and storage systems that provide the foundation for BI (Business Intelligence Guide, 2006).

Advanced data warehouse platforms can meet these challenges in telecommunication companies by handling real-time analysis of scalable databases at the CDR level. DW platforms such as real-time data warehouses and data warehouse appliances can provide decision makers with daily reports, with queries taking only a few hours. Advanced data warehouses can therefore effectively support CRM, revenue assurance, fraud prevention, and network traffic analysis (NETEZZA, 2004). Moreover, Oracle Exadata transformed the data warehousing industry; one telecommunications company saved 33 hours of batch processing time and is able to analyze more data faster.
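As a rough illustration of the kind of daily report such platforms produce at the CDR level, the sketch below runs an aggregate query over a toy CDR fact table in SQLite; the schema, column names, and figures are invented for the example:

```python
import sqlite3

# In-memory stand-in for a warehouse fact table of CDRs (toy data, invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cdr (call_date TEXT, cell_id TEXT, duration_sec INT)")
conn.executemany("INSERT INTO cdr VALUES (?, ?, ?)", [
    ("2012-01-05", "cell-1", 120),
    ("2012-01-05", "cell-1", 300),
    ("2012-01-05", "cell-2", 60),
    ("2012-01-06", "cell-1", 240),
])

# Daily report: per-cell call counts and total minutes for one day.
report = conn.execute(
    """
    SELECT cell_id, COUNT(*) AS calls, SUM(duration_sec) / 60.0 AS minutes
    FROM cdr
    WHERE call_date = '2012-01-05'
    GROUP BY cell_id
    ORDER BY cell_id
    """
).fetchall()
for row in report:
    print(row)
# ('cell-1', 2, 7.0)
# ('cell-2', 1, 1.0)
```

On a traditional DW, a query of this shape over billions of rows is what took hours or days; the appliance and real-time platforms surveyed above make the same GROUP BY aggregation feasible on a daily cycle.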
8. Conclusion
The challenges of the telecommunications industry, including legislation, competitive pressures, the difficulty of scaling to large data volumes, network traffic analysis, fraud detection, and others, demonstrate the limitations of the traditional data warehouse. The new categories of data warehouse platforms and new technologies such as Exadata are able to overcome these limitations. These platforms and technologies can help the telecommunications industry handle complex queries against multi-terabyte data sets. Decision makers are also more confident in their decisions because they receive daily reports instead of reports based on old data.
9. References
1. Ancira R. (2009) “TIBCO Integrates
Alestra´s Telecom Infrastructure with
SOA/BPM Platform”, TIBCO Software
Inc. http://www.tibco.com/multimedia/ss-
alestra_tcm8-9650.pdf
2. Baudet C. and O'Sullivan M. (2008), "Teradata Selected by French Telecom Leader Bouygues for Real-Time Enterprise Data Warehouse", http://www.teradata.com/t/newsrelease.aspx?id=5977.
3. Bradley T. (2009), “IBM and AT&T
Unveil Cloud Computing Services”,
PCWorld Business Center,
http://www.pcworld.com/businesscenter/ar
ticle/182238/ibm_and_atandt_unveil_clou
d_computing_services.html.
4. C114 Online Media (2009) “China Mobile
Brews Cloud Computing Platform”,
http://www.cn-
c114.net/576/a416006.html.
5. Chen I., Ni G. and Lin C (2008), “A
runtime-adaptable service bus design for
telecom operations support systems”, IBM
Systems Journal, Vol. 47 No. 3, PP. 445-
456.
6. Conine R. (1998), “The data warehouse in
the telecommunications industry”, IEEE,
vol.1, PP. 205 - 209.
7. Erlanger L. (2005) “British Telecom dials
into SOA”, InfoWorld,
http://www.infoworld.com/d/developer-
world/british-telecom-dials-soa-898.
8. Frost S. (2007), “Saving
Telecommunications Data Warehousing
with DATAllegro”, White Paper, Version
(1), January,
http://www.datallegro.com/pdf/white_p
apers/wp_telcoms.pdf
9. Gale T. (2006), “Xinjiang Telecom
implements Teradata warehouse for its
business analysis system”, China Telecom
Magazine, Vol.13, No. 4, Page: 19(2).
10. Gomez J. (1998), "Data Warehousing for the Telecom Industry", Information Management Magazine, December, http://www.information-management.com/issues/19981201/260-1.html
11. Greenplum (2008), Reliance
Communications Case Study, February,
http://www.greenplum.com/studies/?reg=1
12. Hotchkiss D. (2009), “ Japan’s Second
Largest Communications Carrier Chooses
Teradata for Enterprise Information
System”, Teradata news release,
http://www.teradata.com/t/newsrelease.
aspx?id=11507
13. Inmon W. (1995), “What is a Data
Warehouse?”, Prism Solutions, Vol. 1,
No. 1.
14. Inmon W. (2007), “A Brief History of
Data Warehousing: From the Vendors
Perspective”, EIMI Archives, Volume (1)
Issue 3 May,
http://www.eiminstitute.org/library/eimi-
archives/volume-1-issue-3-May-2007-
edition/a-brief-history-of-data-
warehousing-from-the-vendors-
perspective-part-i.
15. Krivda C. (2008), “ Dialing up growth in
a mature market: Vodafone New Zealand
Ltd. combines Teradata and powerful
analytics to optimize customer
communications and improve retention”,
Teradata Magazine-March,
http://www.teradata.com/tdmo/v08n01/
pdf/AR5549.pdf.
16. Lamont J. (2000), “Data warehouse in
telecommunication industry”, KMWorld
magazine.
http://www.kmworld.com/Articles/Editoria
l/Feature/Data-warehousing-in-the-
telecommunications-industry-9153.aspx
17. NETEZZA (2004), “Transforming
Telecommunications Business
intelligence: Real-Time, comprehensive
Analyses for Proactive Business
Decisions”, White Paper,1.866. Netezza,
www.netezza.com.
18. Oracle customer case study (2010), “SK
Telecom Builds Database Infrastructure
That Analyzes up to 1.2 Billion
Transactions Daily”, January,
http://www.oracle.com/customers/snapsho
ts/sk-telecom-rac-case-study.pdf.
19. Oracle Customer Snapshot (2010),
“Turkcell İletişim Hizmetleri A.Ş.
Reduces Mean Reporting Time Tenfold
for More Than 50,000 Reports”,
http://www.oracle.com/us/corporate/custo
mers/turkcell-exadata-snapshot-
189469.pdf.
20. Oracle Customer Success Stories (2011),
“Information for success: Customers
IJCSS, Vol.4, No.1, 2012
ISSN: 1803-8328
© USAR Publications
15
Achieve Extreme Performance at Lowest
Cost with Oracle Exadata Database
Machine”,
http://www.oracle.com/us/products/databa
se/exadata-reference-booklet-400018.pdf.
21. Oracle Customer Snapshot (2011),
“Softbank Mobile improves database
query performance by up to eight times’,
http://www.oracle.com/us/corporate/custo
mers/softbank-mobile-corp-7-exadata-cs-
214491.pdf
22. Russom P. (2009) “Next Generation Data
Warehouse Platforms”, TDWI Best
Practice Report,
http://www.oracle.com/database/docs/t
dwi-nextgen-platforms.pdf.
23. Sawkins S. (2009), “Orange and Netezza:
Dealing with the Business End of BI”,
http://www.netezza.com/documents/ora
nge_case_study.pdf.
24. Sun Microsystems (2006), “Business
intelligence and data warehousing Sun
Microsystems”,
http://businessintelligence.ittoolbox.com/bro
wse.asp?c=BIWhite+Papers&r=http%3A%2
F%2Fwww%2Esun%2Ecom%2Fstorage%2
Fwhite%2Dpapers%2Fbidw%2Epdf
25. The Business Intelligence Guide (2006),
“Real-Time Analysis of Telco Data, The
Business Intelligence Guide”,
http://www.thebusinessintelligenceguide.c
om/industry_solutions/Telco/Telco_Data_
Manqgement/Real_Time/CDR_Analysis.p
hp.
Comparison of artificial neural network (ANN) and explicit equations for estimation of the friction factor in pipes
Farzin Salmasi, Assistant Prof., Department of Water Engineering, Faculty of Agriculture, Tabriz University, Tabriz, Iran. Email: [email protected] Phone: +98 4113392786
ABSTRACT
A non-iterative procedure was developed, using an artificial neural network (ANN), for calculating the friction factor (f) in the Darcy-Weisbach equation when estimating head losses due to friction in closed pipes. The successive substitution method was used as an implicit solution procedure to estimate the f values for a range of Reynolds numbers, Re, and relative roughness ε/D values. In developing the ANN model, two configurations were evaluated: (i) the input parameters Re and ε/D were initially taken on a linear scale; (ii) the input parameters Re and ε/D were transformed to a logarithmic scale. Configuration (ii) yielded an optimal ANN model with one hidden layer containing 5 neurons. This configuration has R² and RMS values of 0.995 and 0.0218 respectively, and its predictions of f in the Darcy-Weisbach equation were in close agreement with those obtained using the numerical technique. In addition, a comparison between some previous empirical equations for calculating f and the numerical method was performed.
Keywords: Neural network modeling; Hydraulics of pipe flow; Darcy-Weisbach equation.
1. Introduction
The energy loss due to friction undergone by a Newtonian liquid flowing in a pipe is usually calculated through the Darcy–Weisbach equation:

h_f = f (L/D) (V^2 / 2g)    (1)

In this equation f is the so-called Moody or Darcy friction factor which, from the above equation, is calculated as follows:

f = (D/L) (2g·h_f / V^2) = ΔP / [(L/D) (ρV^2 / 2)]    (2)
The friction factor depends on the Reynolds number (Re), and on the relative roughness of the pipe, ε/D. For laminar flow (Re < 2100), the friction factor is calculated from the Hagen–Poiseuille equation (Romeo, et al., 2002):

f = 64/Re = 64ν/(VD)    (3)
For turbulent flow, the friction factor is estimated through the equation developed by Colebrook and White (Colebrook and White, 1937):

1/√f = −2 log[ε/(3.7D) + 2.523/(Re√f)]    (4)

The Colebrook–White equation is valid for Re ranging from 4000 to 10^8 and values of relative roughness ranging from 0 to 0.05. The formula is often used in pipe network simulation software. It has an implicit form in which the value of f appears on both sides of the equation. Obtaining an accurate solution for f can be very time consuming, requiring many iterations. An approximate equation for f that does not require iteration can be used to improve the speed of simulation software.
This equation covers the limit cases of smooth pipes, ε = 0, and fully developed turbulent flow. For smooth pipes, Equation (4) turns into the Prandtl–von Karman equation (Colebrook, 1939):

1/√f = 2 log(Re√f) − 0.8 = 2 log(Re√f / 2.52)    (5)

If the flow is fully developed, it is verified that (ε/D)·Re√f > 200. In this case, the friction factor depends only on the relative roughness and can be calculated through the equation deduced by von Karman (Colebrook, 1939):

1/√f = 1.14 − 2 log(ε/D) = 2 log(3.71 D/ε)    (6)

Unless the Karman number, Re√f, is previously known, i.e. the pressure drop of the fluid in the pipe is known, Equations (4) and (5) are implicit with respect to the value of f, and are solved using numerical methods. Thus, if the auxiliary variable F is defined as 1/√f, the Colebrook–White equation (Equation (4)) can be re-written to be solved by a method of successive substitution:

F_{n+1} = −2 log[ε/(3.7D) + 2.523·F_n / Re]    (7)
Equation (7) converges very rapidly, especially if there is a good initial estimate of the friction factor. For this, the graph produced by Moody (1947) or any of the explicit equations available in the literature can be used.
An alternative solution to the iterative
methods is the direct use of an explicit
equation which is precise enough to
calculate the value of f directly. In the case
of smooth pipes, in which f depends only
on Re, Gulyani (1999) provides a revision
and discussion of the correlations more
commonly used to estimate the friction
factor. In the general case of rough tubes,
numerous equations have been proposed
since the 1940s. In this work, in addition to applying an ANN, a revision of those most frequently used is presented.
More (2006) obtained an analytical
solution of the Colebrook and White
equation for the friction factor, using the
Lambert W function. Romeo, et al. (2002)
reviewed the most common correlations
for calculating the friction
factor in rough and smooth pipes. From
these correlations, a series of more general
equations has been developed making
possible a very accurate estimation of the
friction factor without carrying out
iterative calculus. In recent years, artificial
neural network (ANN) models have
attracted researchers in many disciplines of
science and engineering, since they are
capable of correlating large and complex
datasets without any prior knowledge of
the relationships among them. ANNs were
applied by Yuhong and Wenxin (2009) to
predict the friction factor of open channel
flow, by Zahiri and Dehghani (2009) to
determine flow discharge in straight
compound channels, by Ozgur Kisi (2004)
to predict mean monthly stream flow, by
Nakhaei (2005) for estimating the
saturated hydraulic conductivity of
granular material and by Landeras et al.
(2009) for forecasting weekly
evapotranspiration.
The overall objective of the present study
was to devise and evaluate a non-iterative
scheme for estimating the friction factor, f,
in the Darcy-Weisbach equation using an
ANN as a means to avoid the need for a
time-consuming, iterative solution of the
Colebrook equation. Also, comparisons between numerical solutions of the Colebrook equation and some empirical equations were performed.
1.1. Review of previous equations for
calculation of the friction factor
The most widely used equations postulated
since the end of the 1940s are stated below
in the order of publication.
(a) Moody (1947) proposed the following empirical equation:

f = 0.0055 [1 + (2×10^4 ε/D + 10^6/Re)^(1/3)]    (8)

According to the author, this equation is valid for Re ranging from 4000 to 10^8 and values of ε/D ranging from 0 to 0.01.
(b) Later, Wood (1966) proposed the following correlation:

f = a + b·Re^(−c),
a = 0.094 (ε/D)^0.225 + 0.53 (ε/D), b = 88 (ε/D)^0.44, c = 1.62 (ε/D)^0.134    (9)

This equation is recommended for Re between 4000 and 10^7 and values of ε/D ranging from 0.00001 to 0.04.
(c) Churchill (1973), using the transport model, deduced the following expression:

1/√f = −2 log[ε/(3.7D) + (7/Re)^0.9] = −0.869 ln[ε/(3.7D) + (7/Re)^0.9]    (10)
(d) Churchill (1977) again proposed the following equation, valid for the whole range of Re (laminar, transition and turbulent):

f = 8 [(8/Re)^12 + 1/(A + B)^(3/2)]^(1/12),
A = {−2 log[ε/(3.7D) + (7/Re)^0.9]}^16, B = (37530/Re)^16    (11)
(e) Chen (1979) proposed the following equation:

1/√f = −2 log{ε/(3.7065D) − (5.0452/Re) log[(ε/D)^1.1098/2.8257 + 5.8506/Re^0.8981]}    (12)

This method involves carrying out two iterations of the Colebrook–White equation. The accuracy of the results obtained from this equation is high due to the fact that the initial estimate is good. The equation proposed by Chen is valid for Re ranging from 4000 to 4×10^8 and values of ε/D between 0.0000005 and 0.05.
(f) Barr (1981), by a method analogous to that used by Chen (1979), proposed the following expression:

1/√f = −2 log{ε/(3.7D) + 4.518 log(Re/7) / [Re (1 + Re^0.52 (ε/D)^0.7 / 29)]}    (13)
(g) Zigrang and Sylvester (1982) also followed the same method as that used by Chen (1979), but carried out three internal iterations. They proposed the following equation:

1/√f = −2 log{ε/(3.7D) − (5.02/Re) log[ε/(3.7D) − (5.02/Re) log(ε/(3.7D) + 13/Re)]}    (14)
2. Material and methods
2.1. Methodology
The development of any ANN model
involves three basic steps: the generation
of data required for training, the training of
the ANN model, and the evaluation of the
ANN configuration leading to the selection
of an optimal configuration. The ANN
software program employed was
Qnet2000. The procedure used for the
development of our ANN model is
outlined below:
(a) An iterative solution scheme was first prepared to solve the Colebrook equation at predefined values of Re and ε/D. The parameters used for preparing the input data file for the iterative solution scheme included a combination of 74 Re and 28 ε/D values, resulting in a total of 2072 input data points.
(b) Several ANN models were then trained and tested with each Re and ε/D pair as inputs and the corresponding generated value of f as the output.
(c) The trained ANN models were then used to predict the values of f based on known Re and ε/D values.
(d) The optimum ANN model, which produces the best results based on some preset measures, was then selected and validated using a larger dataset.
The values of f in Equation (4) must be determined either by trial-and-error or after implementing an implicit solution procedure. In this study, a successive substitution procedure was used.
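Step (a) can be sketched as follows (illustrative Python; the exact Re and ε/D spacings are not stated in the paper, so uniform spacing in log10 is an assumption here):

```python
import math

def log_spaced(lo, hi, n):
    """n values spaced uniformly in log10 between lo and hi (spacing is an assumption)."""
    step = (math.log10(hi) - math.log10(lo)) / (n - 1)
    return [10 ** (math.log10(lo) + i * step) for i in range(n)]

re_values = log_spaced(2000.0, 1e8, 74)   # 74 Reynolds numbers
rr_values = log_spaced(1e-6, 0.05, 28)    # 28 relative roughness values

# Cartesian product: one (Re, eps/D) pair per input data point
grid = [(re, rr) for re in re_values for rr in rr_values]
assert len(grid) == 74 * 28  # 2072 points, as in the paper
```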
2.2. Training dataset
The data for training the ANN model were generated using the numerical procedure described above. A dataset consisting of a total of 2072 points (74 values of Re ranging from 2000 to 10^8 and 28 values of ε/D ranging from 10^-6 to 0.05), resulting from the combination of Re and ε/D as inputs and f as output, was used for training the ANN model. 30% of the total input data (622 data points) was selected as test data.
The optimal ANN configuration was selected from amongst various ANN configurations based on their predictive performance. The two error measures used to compare the performance of the various ANN configurations were the coefficient of determination (R²) and the root mean square error (RMS).
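These two error measures can be written out as follows (a plain-Python sketch using the standard definitions; the paper does not state its exact formulas):

```python
def rms_error(y_true, y_pred):
    """Root mean square error between observed and predicted values."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```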
3. Results
The ANN configurations employed an input layer having two neurons, one corresponding to each of the input parameters (Re and ε/D) in some form, and an output layer consisting of one neuron representing the output parameter (f). Various transfer functions were tried. In order to find the optimal network, several configurations were evaluated in which the number of hidden layers varied from one to two and the number of neurons within each hidden layer varied from two to 10 (Table 1).
Table 1. Prediction errors for the training and testing dataset of the friction factor associated with different ANN configurations without transformations of the input parameters

Transfer function  | Hidden layers | Neurons/layer | Training RMS | Training R² | Test RMS | Test R²
Sigmoid            | 1             | 2             | 0.0384       | 0.978       | 0.0422   | 0.968
Sigmoid            | 1             | 3             | 0.0375       | 0.978       | 0.0406   | 0.974
Sigmoid            | 1             | 4             | 0.0383       | 0.977       | 0.0386   | 0.977
Sigmoid            | 1             | 5             | 0.0379       | 0.977       | 0.0396   | 0.976
Sigmoid            | 1             | 6             | 0.0388       | 0.977       | 0.0382   | 0.976
Sigmoid            | 1             | 8             | 0.0369       | 0.977       | 0.0380   | 0.977
Sigmoid            | 1             | 10            | 0.0399       | 0.976       | 0.0380   | 0.974
Hyperbolic Tangent | 1             | 5             | 0.0372       | 0.978       | 0.0388   | 0.976
Gaussian           | 1             | 5             | 0.0404       | 0.974       | 0.0374   | 0.979
Sigmoid            | 2             | 2,2           | 0.0385       | 0.977       | 0.0411   | 0.973
Sigmoid            | 2             | 2,3           | 0.0401       | 0.974       | 0.0374   | 0.979
Based on Table 1, an ANN with one hidden layer has sufficient accuracy, and the 2,5,1 architecture (an input layer with two neurons, one hidden layer with 5 neurons, and one output neuron) has the minimum RMS and maximum R². Architecture 2,5,1 with a sigmoid transfer function could therefore be selected in this case. Although the model with the optimum configuration predicted the friction factor reasonably well for Re and
ε/D values at the upper end of the range of input data, the overall performance of this ANN configuration included errors that could be deemed unacceptable for solving many problems in closed pipe flow.
As is clear from Equation (4), the parameter f is a logarithmic function of both input parameters (i.e. Re and ε/D). For this reason, a second attempt was made to improve the performance of the ANN model by transforming the input data parameters. Thus the Re and ε/D parameters were transformed using a base-10 logarithm. Repetition of the analysis outlined earlier produced an optimum ANN configuration which markedly improved the overall predictions of the model. The error measures associated with the different ANN configurations for this case are presented in Table 2.
Table 2. Prediction errors for the training and testing dataset of the friction factor associated with different ANN configurations with transformations of the input parameters

Transfer function | Hidden layers | Neurons/layer | Training RMS | Training R² | Test RMS | Test R²
Sigmoid           | 1             | 2             | 0.0325       | 0.981       | 0.0409   | 0.970
Sigmoid           | 1             | 3             | 0.0262       | 0.988       | 0.0266   | 0.987
Sigmoid           | 1             | 4             | 0.0353       | 0.989       | 0.0258   | 0.988
Sigmoid           | 1             | 5             | 0.0218       | 0.995       | 0.0234   | 0.990
Sigmoid           | 1             | 6             | 0.0220       | 0.992       | 0.0230   | 0.991
These results demonstrate the importance of choosing the right transformation of the input data parameters and the significant impact this may have on the overall performance of the ANN model. In addition, Figure 2 shows a plot of f values as predicted by some empirical equations and by the numerical solution of the Colebrook-White equation. It is clear that the f values predicted by the equations of Moody (1947), Wood (1966) and Churchill (1977) have low R² with respect to the others.
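The log-transform experiment can be sketched with scikit-learn's MLPRegressor standing in for Qnet2000 (an assumption; the grid, scaling and training settings below are illustrative and not the paper's):

```python
import math
from sklearn.neural_network import MLPRegressor  # stand-in for Qnet2000 (assumption)

def colebrook_f(Re, rr, n_iter=50):
    """Iterative Colebrook-White solution used to generate training targets."""
    F = 2.0
    for _ in range(n_iter):
        F = -2.0 * math.log10(rr / 3.7 + 2.523 * F / Re)
    return 1.0 / F**2

# Small illustrative grid in log10(Re) x log10(eps/D) space (not the paper's 74 x 28 grid)
X, y = [], []
for lre in [3.4 + 0.2 * i for i in range(23)]:        # log10(Re): 3.4 .. 7.8
    for lrr in [-6.0 + 0.25 * j for j in range(19)]:  # log10(eps/D): -6.0 .. -1.5
        X.append([lre / 8.0, (lrr + 6.0) / 5.0])      # crude scaling of log inputs to [0, 1]
        y.append(100.0 * colebrook_f(10**lre, 10**lrr))  # f scaled x100 to help the optimizer

# 2,5,1 architecture with a sigmoid (logistic) transfer function, as in Table 2
net = MLPRegressor(hidden_layer_sizes=(5,), activation='logistic',
                   solver='lbfgs', max_iter=5000, tol=1e-7, random_state=0)
net.fit(X, y)
r2 = net.score(X, y)  # training R^2; the paper reports 0.995 with Qnet2000
```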
Figure 2. Plot of f values as predicted by some empirical equations and the numerical solution of the Colebrook-White equation.
Figure 3 shows a plot of f values as predicted by the ANN against the numerical solution of the Colebrook-White equation. The improved ANN configuration with log(Re) and log(ε/D) inputs and the 2,5,1 setup is used in this figure. This ANN configuration has R² and RMS values of 0.995 and 0.0218 respectively (Table 2). The ANN model's predictions of f in the Darcy-Weisbach equation were in close agreement with those obtained using the numerical technique.
Figure 3. Plot of f values as predicted by the ANN versus the numerical solution of the Colebrook-White equation (fitted line y = 0.9985x + 4E-05, R² = 0.9949; axes: Numerical vs. Simulated by ANN, both 0.00-0.10).
4. Conclusions
From the correlations in the literature, a series of more general equations has been developed, making possible a very accurate estimation of the friction factor without trial and error. An optimum ANN model was developed for calculating the friction factor in the Darcy-Weisbach equation as applied to the turbulent flow regime in closed pipes. The model involves a neural network with one hidden layer and 5 neurons in that layer.
Following logarithmic transformations of
the input data parameters, the trained
network was able to predict the response
with R2 and RMS 0.995 and 0.0218
respectively (Table 2). This model allows
for an explicit solution of f without the
need to employ a time-consuming iterative
or trial-and-error solution scheme, an
approach that is usually associated with the
solution of the Colebrook equation in the
turbulent flow regime of closed pipes. For
these reasons, the model is useful for flow
problems that involve repetitive
calculations of the friction factor such as
those encountered in the solution of pipe
network problems as well as the hydraulic
analysis of pressurized irrigation systems.
References
1) Barr, D. I. H. (1981), Solutions of the Colebrook-White function for resistance to uniform turbulent flow, Proceedings Inst. Civil Engineers, Part 2, 529-536.
2) Chen, N. H. (1979), An explicit equation for friction factor in pipe, Ind. Eng. Chem. Fundam., 18(3):296.
3) Churchill, S. W. (1973), Empirical expressions for the shear stress in turbulent flow in commercial pipe, AIChE Journal, 19(2):375.
4) Churchill, S. W. (1977), Friction factor equation spans all fluid-flow regimes, Chemical Engineering, 84(24):91.
5) Colebrook, C. F. and White, C. M. (1937), Experiments with fluid-friction roughened pipes, Proc. R. Soc. Ser. A, 161:367.
6) Colebrook, C. F. (1939), Turbulent flow in pipes, with particular reference to the transition region between the smooth and rough pipe laws, J. Inst. Civil Engrs. (London), 11:133.
7) Gulyani, B. B. (1999), Simple equations for pipe flow analysis, Hydrocarbon Processing, (8):67-78.
8) Landeras, G., Ortiz-Barredo, A. and López, J. J. (2009), Forecasting weekly evapotranspiration with ARIMA and Artificial Neural Network models, Journal of Irrigation and Drainage Engineering, 135(3):323-334.
9) Moody, M. L. (1947), An approximate formula for pipe friction factors, Trans. ASME, 69:1005.
10) More, Ajinkya A. (2006), Analytical solutions for the Colebrook and White equation and for pressure drop in ideal gas flow in pipes, Chemical Engineering Science, 61:5515-5519.
11) Nakhaei, M. (2005), Estimating the saturated hydraulic conductivity of granular material, using artificial neural network, based on grain size distribution curve, Journal of Sciences, Islamic Republic of Iran, University of Tehran, 16(1):55-62.
12) Ozgur, K. (2004), River flow modeling using artificial neural networks, Journal of Hydrologic Engineering, 9(1):60-63.
13) Romeo, E., Royo, C. and Monzon, A. (2002), Improved explicit equations for estimation of the friction factor in rough and smooth pipes, Chemical Engineering Journal, 86:369-374.
14) Wood, D. J. (1966), An explicit friction factor relationship, Civil Engineering, ASCE.
15) Yuhong, Z. and Wenxin, H. (2009), Application of artificial neural network to predict the friction factor of open channel flow, Commun. Nonlinear Sci. Numer. Simulat., 14:2373-2378.
16) Zahiri, A. and Dehghani, A. A. (2009), Flow discharge determination in straight compound channels using ANNs, World Academy of Science, Engineering and Technology, 58:12-15.
17) Zigrang, D. J. and Sylvester, N. D. (1982), Explicit approximations to the Colebrook's friction factor, AIChE J., 28(3):514.
Impact of Cloud Computing in Developing the Education
Process
Ibrahiem M. M. El Emary, Ph.D
Information Technology Deanship, King Abdulaziz University
Jeddah, Saudi Arabia
E-mail: [email protected]
ABSTRACT: At the moment, the cloud plays a central role in a wide range of applications. While it may not be interactive in the physical sense, it has a strong potential for social interaction. One of the main applications benefiting very effectively from the cloud is e-learning, since e-learning systems usually require many hardware and software resources. Many educational institutions cannot afford such investments, so cloud computing represents the best solution for them. Implementing cloud computing in an e-learning system has its own characteristics and requires a specific approach. Therefore, the main objective of this paper is to address and discuss how to tap the potential of cloud computing to promote the much-needed practice of cooperation between educators, as well as to discuss the positive impact of using cloud computing on e-learning development solution architectures.
Key Words: HE, IT, Cloud, Web Browser, Microsoft, Google, Amazon, PDA and PAU
1. Introduction
The concept of computing in the cloud
can be defined as the delivery of IT
services that run in a web browser; the
type of services range from adaptations
of familiar tools such as email and
personal finance to new offerings
such as virtual worlds and social
networks. Storage of digital data is an
important service among these. Cloud
computing is a computing platform
that resides in a service provider’s
large data center and is able to
dynamically provide servers the ability
to address a wide range of needs of
clients. The cloud is a metaphor for the
internet. Some people call it the World
Wide Computer. Technically, it is a
computing paradigm in which tasks are
assigned to a combination of
connections, software and services
accessed over a network. This network
of servers and connections is
collectively known as the cloud.
Physically, the resource may sit on a
bunch of servers at different data
centers or even span across continents.
Actually, it is designed to work like a
whole computer in the cloud and
aimed at a wider audience, including
those who can’t afford their own
computer. Computing at the scale of
the cloud allows users to access
supercomputer-level power. Instead
of operating their own data centers,
firms might rent computing power
and storage capacity from a service
provider, paying only for what they
use, as they do with electricity or
water. This paradigm has also been
referred to as “utility computing,” in
which computing capacity is treated
like any other metered utility service—
one pays only for what one uses.
Users can reach into the cloud for
resources as they need from anywhere
at any time. For this reason, cloud
computing has also been described as
"on-demand computing."
Many existing technologies are supported by cloud computing, which is one of the most talked-about subjects in the business world today. For example, most mobile applications are hosted in the cloud, and the cloud is one of the most effective solutions for data back-up and storage. While it was difficult to
find examples of cloud computing in
the learning organization just a few
years ago, it is clear that it is firmly in
place and will impact every level of
training in the coming years. The higher
education (HE) landscape around the
world is in a constant state of flux
and evolution, mainly as a result of
significant challenges arising from
efforts in adopting new and
emerging technologies and
pedagogies in their teaching and
learning environments. This is mainly
as a result of a new genre of students
with learning needs vastly different
from their predecessors, and it is
increasingly recognized that using
technology effectively in higher
education is essential to providing
high quality education and preparing
students for the challenges of the
21st century [1].
However, an unresolved challenge
to the effective use of technology
in education is the continued
dominance of traditional didactic
pedagogy despite the critical need for
a paradigm shift from the passive
teacher-centered approach (transmission of information and skills) to student-centered constructivist approaches whereby
students construct knowledge
through interaction and collaboration
with peers as well as teachers. The
bulk of today’s eLearning systems still
consist of simple conversion of
classroom-based content to an
electronic format while still retaining
its traditional distinctive knowledge-
centric nature [1]. Although the new
technologies have the potential to play
an important role in the development
and emergence of new pedagogies,
where control can shift from the
teacher to an increasingly more
autonomous learner, and to rescue the
HE from this appalling situation, the
change is very slow or not forthcoming
at all for various reasons. This is
mainly because both teachers and
learners require a number of
specific skills for technology-
supported constructivist approaches
that is, online tutor skills, and online
learning skills; learners get limited
support to develop such skills from
their teachers who often lack these
same skills themselves. There’s no
doubt learning executives recognize
the advantages of cloud computing;
many have already integrated it into
their learning organizations [1]. The
major advantages of cloud computing
in the field of learning are:-
• Learning content is more
readily available and
accessible in the cloud
since it is an open
system and resides outside a
company’s firewall.
• Content can be managed and
sourced from anywhere within
the cloud and is easily scalable
to meet users’ needs.
• The cloud provides rapid program implementation and revision. Content can be revised and published faster and more easily.
Adopting cloud computing technology
in HE faces some challenges as well.
First is integration; how does the cloud
fit into a company’s existing learning
management system? There are
concerns about privacy and security;
particularly how to protect proprietary
content within the cloud. Others are
concerned with the initial investment.
Will the cloud deliver strategic value
in addition to measurable cost-savings?
2. CLOUD COMPUTING STRUCTURE, CHARACTERISTICS AND FACILITIES IN EDUCATION
The term “Cloud Computing” has frequently been overloaded and flexed by hardware and software vendors to fit their marketing strategies, resulting in general confusion about what cloud computing truly is. However, there are some common traits that could loosely define the characteristics of Cloud Computing [7]:
Virtualized, which means that at any
given point the consumers are unaware
where the application software or virtual
machine lives. As a matter of fact the
appealing aspect of it is that the users
don't have to be concerned about that
aspect as long as the computing
resources are available to them.
Autonomic and Elastic, which
simplifies and reduces the cost of
management. Unlike current systems,
the cloud computing infrastructure can
re-size itself based on the computing
demand. This is accomplished by
adding or removing dynamically
resources such as CPU, memory and
disk.
Multi-tenant or shared facility, which
means that there has to be a mechanism
of sharing, separating and securing the
resources of many users who access and
use the system simultaneously.
Service Oriented, which means that it is only offered as a service. Nowadays there are several distinct groups of services, offered separately or in combination:
− Infrastructure as a Service
(IaaS)
− Platform as a Service (PaaS)
− Software as a Service (SaaS)
Accessible from anywhere, regardless of the user's geographical location. To accomplish this, HTTP has been adopted as a common transport protocol. To increase security it can be combined with XML/SOAP. For IaaS, common Ethernet-based system access protocols such as SSH, RDP and VNC can be used.
Measurable and Billable: There must be entitlement and control of consumed resources, along with ways to capture the actual consumption.
Other desired but optional
characteristics are integration with
various private and public cloud
computing infrastructures, which
should result in service mobility and
utilization of combined resources from
many dispersed clouds. The advantages that come with cloud computing (which can help resolve some of the common challenges one might have while supporting an educational institution [2, 3, 4]) are listed as follows:
Cost: One can choose a subscription or, in some cases, a pay-as-you-go plan, whichever works best with the organization's business model.
Flexibility: Infrastructure can be
scaled to maximize investments. Cloud
computing allows dynamic scalability
as demands fluctuate.
Accessibility: This helps make data and services publicly available without exposing sensitive information.
Some would resort to a cloud
computing vendor because of the lack
of resources while others have the
resources to build their cloud
computing applications, platforms and
hardware. But either way, components
have to be implemented with the
expectation of optimal performance
when mobile terminals are used [5]
are: - The Client – The End User;
everything ends with the client
(mobile) [see Fig.1].
Fig. 1 Various components of Cloud
computing [2]
The hardware components, the
application and everything else
developed for cloud computing will be
used in the client. Without the client,
nothing will be possible. The client
could come in two forms: the hardware
component or the combination of
software and hardware components.
Although it’s a common conception that cloud computing relies solely on the cloud (internet), there are certain systems that require pre-installed applications to ensure a smooth transition. In this work, all the pre-installed applications can be viewed by mobile devices through the cloud. The
hardware on the other hand will be the
platform where everything has to be
launched. Optimization is based on
two fronts: the local hardware capacity
and the software security. Through
optimized hardware with security, the
application will launch seamlessly with
mobile devices [5]. Cloud computing always has a purpose. One of the main reasons cloud computing became popular is its adoption by businesses as an easier way to implement business processes. Cloud computing is all about processes, and the services launched through mobile cloud computing always have to deal with processes with an expected output.
Regarding the services that exist in cloud computing, they are divided into the following:
Infrastructure as a Service: One can
get on-demand computing and storage
to host, scale, and manage
applications and services. Using
Microsoft data centers means one can
scale with ease and speed to meet the
infrastructure needs of that entire
organization or individual
departments within it, globally or
locally [6].
Platform as a Service: The Windows Azure cloud platform as a service consists of an operating system, a fully relational database, a message-based service bus, and a claims access controller providing security-enhanced connectivity and federated access for on-premises applications. As a family of on-demand services, the Windows Azure platform offers organizations a familiar development experience, on-demand scalability, and reduced time to market for applications.
Software as a Service: Microsoft hosts online services that provide faculty, staff, and students with a consistent experience across multiple devices. Microsoft Live@edu provides students, staff, faculty, and alumni with long-term, primary e-mail addresses and other applications that they can use to collaborate and communicate online, all at no cost to the educational institution. Exchange Hosted Services offers online tools to help organizations protect themselves from spam and malware, satisfy retention requirements for e-discovery and compliance, encrypt data to preserve confidentiality, and maintain access to e-mail during and after emergency situations. Microsoft Dynamics CRM Online provides management solutions deployed through Microsoft Office Outlook or an Internet browser to help customers efficiently automate workflows and centralize information. Office Web Apps provide on-demand access to the Web-based versions of the Microsoft Office suite of applications, including Office Word, Office Excel, and Office PowerPoint.
With respect to cloud computing usage, the cloud plays the main role in business and is the only elastic data center that wraps various new technologies together. According to our survey, the technology is used more in business-oriented scenarios than in service-motivated organizations. In the survey we conducted in October 2010, based on a questionnaire we prepared, a major part of the survey group knew about cloud computing: 69% knew that the cloud is used in business, 12% knew it is used in education, 88% agreed with implementing the cloud in the education sector, 94% believed that cloud technology can reduce the cost of a high-quality education system, and most were unaware that the cloud is also offered at low cost.
The requirements for the cloud can be stated as follows. In the previous generation of information technology, the data sharing that paved the way for knowledge sharing was not used by users globally; in the present generation, the various streams have knowledge of e-learning and mobile-based learning. In this context, using a central data center is an easy process for the education system, but the cost of implementation and maintenance of the data storage space, the load capability, and software licensing depends on the real-time usage of these systems. Business streams can make revenue out of those expenses, whereas educational institutions that really want to motivate learners and offer quality education at an affordable cost would otherwise have to spend a large amount. This can be overcome by the present cloud computing model of "Pay as Use" (PAU).
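The "Pay as Use" argument above can be made concrete with a small back-of-the-envelope comparison (a minimal sketch; all prices and usage figures below are illustrative assumptions, not vendor quotes):

```python
# Compare an upfront, owned data center against pay-as-you-use cloud
# pricing for an e-learning deployment. All figures are hypothetical.

def owned_cost(hardware, licenses, annual_maintenance, years):
    """Total cost of buying and maintaining local infrastructure."""
    return hardware + licenses + annual_maintenance * years

def pau_cost(hourly_rate, hours_per_week, weeks_per_year, years):
    """Pay-as-you-use: billed only for the hours actually consumed."""
    return hourly_rate * hours_per_week * weeks_per_year * years

# An institution running courses ~40 hours/week over a 30-week
# academic year pays nothing for the idle remainder of the year.
owned = owned_cost(hardware=50_000, licenses=10_000,
                   annual_maintenance=8_000, years=3)
cloud = pau_cost(hourly_rate=2.50, hours_per_week=40,
                 weeks_per_year=30, years=3)

print(owned)  # 84000
print(cloud)  # 9000.0
```

Under these assumed figures the pay-as-use model is an order of magnitude cheaper, precisely because the institution is not billed for idle capacity.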
As shown above, we can summarize as
follows: Cloud-based services can be
categorized into three models: (i)
Software as a Service (SaaS), (ii)
Infrastructure as a Service (IaaS), and
(iii) Platform as a Service (PaaS). In a
SaaS infrastructure, service providers make applications available for personal and business use, such as MS Exchange and QuickBooks. IaaS, on
the other hand, offers hardware
services which may include virtual and
physical servers. And lastly, PaaS
provides a framework and tools for
developers to build their own
applications. Online content
management systems and website
building services are examples of this
infrastructure. Cloud computing offers
several technical and economic
benefits. In terms of technical
advantage, it is possible to use the
processing power of the cloud to do
things that traditional productivity
applications cannot do. For instance,
users can instantly search over GBs of
e-mail online, which is practically
impossible to do on a desktop. One of
the greatest advantages is that the user
is no longer tied to a traditional
computer to use an application, or has
to buy a version specifically
configured for a phone, PDA or other
device. Any device that can access the
Internet will be able to run a cloud-
based application. Regardless of the
device being used, there may be fewer
maintenance issues. Users will not
have to worry about storage capacity,
compatibility or other matters. Cloud
computing infrastructure allows
enterprises to achieve more efficient
use of their IT hardware and software
investments: it increases profitability
by improving resource utilization.
Pooling resources into large clouds
cuts costs and increases utilization by
delivering resources only for as long as
those resources are needed.
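The claim that pooling resources increases utilization can be illustrated numerically (a sketch with invented workloads; real demand curves are more complex):

```python
# Dedicated servers must each be sized for their own peak demand;
# a pooled cloud only needs capacity for the combined peak, which is
# usually lower than the sum of individual peaks because the peaks
# rarely coincide. Demand values below are hypothetical hourly loads.

demands = [
    [10, 80, 20, 10],   # department A: morning peak
    [10, 20, 70, 10],   # department B: afternoon peak
    [60, 10, 10, 10],   # department C: early peak
]

# Dedicated: each department provisions for its own peak.
dedicated_capacity = sum(max(d) for d in demands)

# Pooled: provision only for the peak of the summed load.
hourly_totals = [sum(hour) for hour in zip(*demands)]
pooled_capacity = max(hourly_totals)

print(dedicated_capacity)  # 210
print(pooled_capacity)     # 110
```

In this invented example, pooling nearly halves the capacity that must be provisioned, which is exactly the utilization gain the passage describes.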
2. MAJOR CAPABILITIES AND LIMITATIONS OF CLOUD COMPUTING IN E-LEARNING
The term cloud computing was derived from the way the Internet is often represented in network diagrams. Because it involves data centers that are able to provide services, the cloud can be seen as a unique access point for all the requests coming from clients spread worldwide (see fig. 2). Cloud computing comprises three layers [5]:
Infrastructure as a service (IaaS)
Platform as a service (PaaS)
Software as a service (SaaS)
Depending on the requirements, the
customers can choose one or more
services provided. Hardware devices
(such as regular PCs, notebooks,
mobile phones, PDAs or any other
similar equipment) or software
applications (like web browsers, for
example Google Chrome) can
successfully play the role of a cloud
client (see figure 2). The customers are
renting or simply accessing the needed
processing capacity from the data
center using the above mentioned
client applications. The quality of the
service becomes a crucial factor of the
cloud computing success.
Fig. 2 Cloud computing
Cloud computing is by no means identical to grid computing. The latter tries to create a virtual processor by joining together a cluster of computers.
of computers. The aim of a grid
computing architecture is to solve large
tasks by using the advantage of
concurrency and parallelism, while the
cloud is focused on collaboration.
Fig. 3 Cloud computing clients
Cloud computing becomes very
popular because it moves the
processing efforts from the local
devices to the data center facilities.
Therefore, any device, like an Internet
connected phone, could be able to
solve complex equations by simply
passing the specific arguments to a
service running at the data center level
that will be capable to give back the
results in a very short time. In these
conditions, the security of data and
applications becomes a major issue. Cloud computing is widely accepted today due to its key capabilities:
• The cost is low or even free in
some cases. Also, there are no
costs (or very small ones) for
hardware upgrades;
• For some applications (like
spreadsheets) it can be used even
in the offline mode, so when the
client goes back online a
synchronization process is
refreshing the data;
• The strong connection that exists today between users and their personal computers can be completely broken, because a customer can reach the same result by using any Internet-connected device with minimum software requirements;
• Devices with minimal hardware requirements (mobile phones, for example) could be successfully used as cloud clients;
• In order to become part of the
cloud, there is no need to
download or install specific
software, only the Internet
connection is required;
• The cost of licensing different
software packages is moved to
the data center level, so there is
no need to upgrade the local
system when new service packs
or patches are released;
• Crash recovery is nearly unneeded. If the client computer crashes, almost no data are lost because everything is stored in the cloud.
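The offline-then-synchronize behaviour noted above (e.g. for spreadsheets) can be sketched as a last-write-wins merge; the record format and conflict policy here are our own simplifying assumptions, not those of any specific product:

```python
# When a client comes back online, each locally edited record is
# compared with the cloud copy and the newer timestamp wins.
# Records are (value, timestamp) pairs; timestamps are integers.

def synchronize(cloud_store, local_edits):
    """Merge offline edits into the cloud store, last write wins."""
    merged = dict(cloud_store)
    for key, (value, ts) in local_edits.items():
        if key not in merged or ts > merged[key][1]:
            merged[key] = (value, ts)
    return merged

cloud_store = {"A1": ("10", 100), "B1": ("x", 100)}
local_edits = {"A1": ("42", 250),    # edited offline after cloud copy
               "C1": ("new", 200)}   # created offline
merged = synchronize(cloud_store, local_edits)
print(merged["A1"])  # ('42', 250)  offline edit wins
print(merged["B1"])  # ('x', 100)   untouched record kept
print(merged["C1"])  # ('new', 200) offline creation added
```

Real synchronization protocols handle concurrent edits and deletions more carefully, but the sketch captures the refresh-on-reconnect idea the bullet describes.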
Some of the cloud computing limitations and weak points are as follows:
• The Internet connection speed may affect the overall performance;
• On a long-term basis, the data center subscription fee may be more expensive than using one's own hardware;
• The service quality is crucial, and the need for backups is critical when speaking about data security.
The major players in the field of cloud
computing are Google, Microsoft,
Amazon, Yahoo and some legacy
hardware vendors like IBM and Intel.
Cloud Computing applications are
mainly intended to help companies and
individuals to stretch resources and
work smarter by moving everything to
the cloud. One of the biggest promoters of cloud computing is Google, which already owns a massive computer infrastructure (the cloud) to which millions of people connect. Today, the Google
cloud can be accessed by Google Apps
[6] intended to be software as a service
suite dedicated to information sharing
and security. Google Apps covers the
following three main areas: messaging
(Gmail, Calendar and Google Talk),
collaboration (Google Docs, Video and
Sites) and security (email security,
encryption and archiving). Microsoft is
developing a new Windows platform,
called Windows Azure, which will be
able to run cloud based applications
[7]. In 2006, Amazon extended its AWS (Amazon Web Services) suite with a new component called Amazon Elastic Compute Cloud (EC2), which allows users to rent processing power from Amazon to run their own applications. EC2 users rent virtual machines from Amazon that can be accessed remotely. The cloud is elastic precisely because the user can start, stop and create virtual machines through the web service. There are three predefined sizes for the virtual machines that can be rented: small, medium and large, depending on the physical hardware performance.
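The start/stop/create elasticity described for EC2 can be modelled with a tiny state machine (a pure-Python illustration of the idea, not the real AWS API; the instance sizes, states and capacity units are simplified assumptions):

```python
# Minimal model of an elastic cloud: the user creates, starts and
# stops virtual machines of predefined sizes through service calls,
# and only running machines consume (and are billed for) capacity.

SIZES = {"small": 1, "medium": 2, "large": 4}  # capacity units

class ElasticCloud:
    def __init__(self):
        self.vms = {}       # vm_id -> (size, state)
        self.next_id = 0

    def create(self, size):
        assert size in SIZES, "only small/medium/large can be rented"
        vm_id = self.next_id
        self.next_id += 1
        self.vms[vm_id] = (size, "stopped")
        return vm_id

    def start(self, vm_id):
        size, _ = self.vms[vm_id]
        self.vms[vm_id] = (size, "running")

    def stop(self, vm_id):
        size, _ = self.vms[vm_id]
        self.vms[vm_id] = (size, "stopped")

    def used_capacity(self):
        return sum(SIZES[s] for s, state in self.vms.values()
                   if state == "running")

cloud = ElasticCloud()
a = cloud.create("small")
b = cloud.create("large")
cloud.start(a); cloud.start(b)
print(cloud.used_capacity())  # 5
cloud.stop(b)                 # scale down when demand drops
print(cloud.used_capacity())  # 1
```

Elasticity in this sense simply means that used (and billed) capacity tracks the set of currently running machines, which the user adjusts at will.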
3. E-LEARNING SOLUTIONS THROUGH CLOUD COMPUTING
Many educational institutions do not have the resources and infrastructure needed to run a top e-learning solution. This is why Blackboard and Moodle, the biggest players in the field of e-learning software, now have cloud-oriented versions of their base applications. E-learning is widely used
today on different educational levels:
continuous education, company
trainings, academic courses, etc. There
are various e-learning solutions from
open source to commercial. There are
at least two entities involved in an e-
learning system: the students and the
trainers. The students' actions within
an e-learning platform are:
• Taking online course
• Taking exams
• Sending feedback
• Sending homework, projects.
The trainers' actions within e-learning solutions are:
• Dealing with content
management
• Preparing tests
• Assessing tests, homework,
projects taken by students
• Sending feedback
• Communicating with students
(forums).
Each of these actions requires a certain
degree of security, depending on the
importance and data sensitivity.
Fig. 4 E-learning system
Usually, e-learning systems are developed as distributed applications, but this is not necessarily so. The
architecture of a distributed e-learning
system includes software components,
like the client application, an
application server and a database
server (see figure 4) and the necessary
hardware components (client
computer, communication
infrastructure and servers). The client
hardware could be a mobile device or a
desktop computer. The client
application can be a simple web
browser or a dedicated application.
Even with the current hardware and software limitations, mobile devices support multimedia-based applications. Compared with desktop applications, however, today's mobile applications, especially multimedia-based ones, have serious limitations due to processing power and memory constraints. Because the data processing is on the server side, the use of mobile devices for learning is growing fast. Still, mobile applications need to be optimized for e-learning use.
The e-learning server will use cloud
computing, so all the required
resources will be adjusted as needed.
E-learning systems can benefit from cloud computing by using:
• Infrastructure: use an e-learning
solution on the provider's
infrastructure
• Platform: use and develop an e-
learning solution based on the
provider's development interface
• Services: use the e-learning
solution given by the provider.
A very big concern is related to the
data security because both the software
and the data are located on remote
servers that can crash or disappear
without any additional warning. Even if it may seem counter-intuitive, cloud computing provides some major security benefits for individuals and companies that use or develop e-learning solutions, such as the following:
• Improved obscurity: it is almost impossible for any interested person (a thief) to determine where the machine that stores some wanted data (tests, exam questions, results) is located, or to find out which physical component he needs to steal in order to get a digital asset;
• Virtualization: makes possible the rapid replacement of a compromised cloud-located server without major costs or damages. It is very easy to create a clone of a virtual machine, so cloud downtime is expected to be reduced substantially;
• Centralized data storage: losing a cloud client is no longer a major incident, because the main part of the applications and data is stored in the cloud, so a new client can be connected very quickly. Imagine what happens today if a laptop that stores examination questions is stolen;
• Monitoring of data access becomes easier, because only one place has to be supervised rather than thousands of computers belonging to, for example, a university. Also, security changes can be easily tested and implemented, since the cloud represents a unique entry point for all the clients.
Another important benefit is related to
costs. If the e-learning services are
used for a relative short time (several
weeks, a quarter, a semester), the
savings are very important.
4. SUMMARY AND CONCLUSION
In the current decade, adopting cloud
computing for e-learning solutions
influences the way the e-learning
software projects are managed. There
are specific tasks that deal with finding
providers for cloud computing,
depending on the requirements
(infrastructure, platform or services).
Also, the cost and risk management
influences the way the e-learning
solutions based on cloud computing
are managed. Thus, cloud computing has significant scope to change the whole education system. Cloud-based education will help students, staff, trainers, institutions and learners to a very high extent; in particular, students from rural parts of the world will get the opportunity to access the knowledge shared by a professor in another part of the world.
REFERENCES
[1] Teo, C. B., Chang, S. C. A., & Leng, R. G. K. (2006). Pedagogy Considerations for E-learning. Retrieved 10 Oct 2008.
[2] N. Mallikharjuna Rao et al., "Cloud Computing Through Mobile-Learning", International Journal of Advanced Computer Science and Applications, Vol. 1, No. 6, December 2010.
[3] Uhlig, R., Neiger, G., Rodgers, D., Bennett, S. M., Kagi, A., Leung, F. H., Smith, L. (Intel Corp., USA): Intel Virtualization Technology. IEEE Computer Society, May 2005.
[4] Perez, R., van Doorn, L., Sailer, R. (IBM T. J. Watson Research Center, Yorktown Heights, NY): Virtualization and Hardware-Based Security. October 2008.
[5] Independent Cloud Computing and Security, http://cloudsecurity.org/forum/stats/, August 2010.
[6] Gregor Petri: The Data Center is dead; long live the Virtual Data Center? Join the Efficient Data Center, Nov 2010.
[7] SMI Report, Oskar Pienkos, June 2011.