Computational and Applied Mathematics Journal 2015; 1(3): 107-121
Published online April 30, 2015 (http://www.aascit.org/journal/camj)
Keywords: SGEMS, OpenFlow Load Balancer, Scalability, Portal, Energy Management, Demand Side Management, Cloud Data Center
Received: March 29, 2015
Revised: April 16, 2015
Accepted: April 17, 2015
DCCN: A Non-Recursively Built Data Center Architecture for Enterprise Energy Tracking Analytic Cloud Portal (EEATCP)
K. C. Okafor¹, G. N. Ezeh¹, I. E. Achumba¹, O. U. Oparaku², U. Diala³
¹Dept. of Electrical Electronic Engineering, Federal University of Technology, Owerri, Nigeria
²Dept. of Electronic Engineering, University of Nigeria, Nsukka, Nigeria
³Postgraduate Research Student, Dept. of Automatic Control and Systems Engineering, University of Sheffield, UK
Email address: [email protected] (K. C. Okafor), [email protected] (G. N. Ezeh), [email protected] (I. E. Achumba), [email protected] (O. U. Oparaku), [email protected] (U. Diala)
Citation: K. C. Okafor, G. N. Ezeh, I. E. Achumba, O. U. Oparaku, U. Diala. DCCN: A Non-Recursively Built Data Center Architecture for Enterprise Energy Tracking Analytic Cloud Portal (EEATCP). Computational and Applied Mathematics Journal. Vol. 1, No. 3, 2015, pp. 107-121.
Abstract: The Smart Green Energy Management System (SGEMS) proposed in a previous paper introduced an Integrated Service OpenFlow Load Balancer (ISOLB) in its two-tier design to achieve scalability, fault tolerance, and high network capacity for the remote users that access the Enterprise Energy Analytic Tracking Cloud Portal (EEATCP). Exponential growth and high bandwidth requirements are anticipated as users make use of the portal. Hence, this paper presents an improved, non-recursively defined network structure for the EEATCP datacenter and demonstrates its efficacy for DCCN through experimental results. Using Riverbed Modeller version 17.5, the obtained results were compared with those from DCell and BCube, the most closely related DCN architectures. An OpenFlow security comparison is also presented for the proposed DCCN, showing improved overall performance.
1. Introduction
From a study conducted in [1], a web-based portal for monitoring energy consumption trends and enforcing Demand Side Management (DSM) needs to be deployed on a well-designed datacenter network. The heart of the proposed DCCN is its cloud DCN. Until now, cloud-based DCNs have presented some interesting challenges to datacenter experts, a consequence of the growth in datacenter models supporting Internet-based services. For instance, load centers running cloud services can store millions of data records captured from end users. Existing DCNs hold myriads of data, which makes the existing tree-based structures run into bandwidth bottlenecks at the top-of-rack and core switches. To sustain a tree structure, more expensive, higher-capacity switches are needed as the number of servers grows exponentially. Tree-based structures also inherently have a single point of failure at the core switch level. These are the research motivations for our DCCN, which is recommended to support the proposed EEATCP and other cloud-based services. Thus, the performance and dependability characteristics of DCCN will have a significant impact on the scalability of the architectural design. In particular, the DCCN needs to be agile and reconfigurable in order to respond
quickly to ever-changing application demands and service requirements. Significant research work has been done on designing datacenter network topologies to improve their performance, but using QoS metrics gives a clearer picture of the performance of such an energy datacenter network.
This paper is organized as follows: Section II focuses on the literature review. Section III outlines the system model. Section IV discusses the DCCN validation model from the experimental setup. Section V concludes the work, followed by references.
2. Literature Review
In [2][3], a review of several DCN architectures was carried out. However, the networks most closely related to the proposed DCCN are DCell and BCube, presented next.
2.1. DCell DCN Architecture
The authors in [4] present DCell, a novel network structure that has many desirable features for data center networking. DCell is a recursively defined structure, in which a high-level DCell is constructed from many low-level DCells, and DCells at the same level are fully connected with one another. DCell scales doubly exponentially as the node degree increases. DCell is fault tolerant since it has no single point of failure, and its distributed fault-tolerant routing
protocol performs near shortest-path routing even in the
presence of severe link or node failures. DCell also provides
higher network capacity than the traditional tree-based
structure for various types of services. Furthermore, DCell
can be incrementally expanded and a partial DCell provides
the same appealing features.
Figure 1. Architecture of a DCell [4]
The authors established that there are three design goals
for DCN. Firstly, the network infrastructure must be scalable
to a large number of servers and allow for incremental
expansion. Secondly, DCN must be fault tolerant against
various types of server failures, link outages, or server-rack
failures. Thirdly, DCN must be able to provide high network
capacity to better support bandwidth-hungry services. Figure 1 shows the DCell1 network when n = 4; it is composed of 5 DCell0 networks. This work will now present the advantages and disadvantages of the related architectures. Afterwards, the problems of existing network architectures will be enumerated.
Advantages of DCell DCN
i. Doubly exponential scaling
ii. High network capacity
iii. Large bisection width
iv. Small diameter
v. Fault-tolerance
vi. Requires only commodity network components
vii. Supports an efficient and scalable routing algorithm
Limitations of DCell DCN
i. Load is not evenly balanced among the links in all-to-all communication
ii. Large latency owing to its recursive structure
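The doubly exponential scaling above can be made concrete with a short sketch (a hypothetical helper, not code from [4], that simply iterates the DCell recurrence t_k = t_{k-1}(t_{k-1} + 1) with t_0 = n servers per DCell0):

```python
# Sketch: DCell server count grows doubly exponentially with level k.
# Recurrence from [4]: a DCell_k is built from (t_{k-1} + 1) DCell_{k-1}s,
# so t_k = t_{k-1} * (t_{k-1} + 1), with t_0 = n servers per DCell_0.
def dcell_servers(n: int, k: int) -> int:
    t = n
    for _ in range(k):
        t = t * (t + 1)
    return t

if __name__ == "__main__":
    # With n = 4 (as in Figure 1): DCell1 has 20 servers
    # (5 DCell0s of 4 servers each); DCell2 already has 420.
    for k in range(4):
        print(k, dcell_servers(4, k))
```

For n = 4 this matches Figure 1: five DCell0s of four servers each give 20 servers at level 1.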
2.2. BCube DCN
In [5], the authors presented BCube, a new network architecture specifically designed for shipping-container-based, modular data centers (MDCs). At the core of the BCube architecture is its server-centric network structure, in which servers with multiple network ports connect to multiple layers of COTS (commodity off-the-shelf) mini-switches. Servers act not only as end hosts, but also as relay nodes for each other. BCube supports various bandwidth-intensive applications by speeding up one-to-one, one-to-several, and one-to-all traffic patterns, and by providing high network
capacity for all-to-all traffic. BCube exhibits graceful
performance degradation as the server and/or switch failure
rate increases. This property is of special importance for
shipping-container datacenters, since once the container is
sealed and operational, it becomes very difficult to repair or
replace its components. Implementation experience shows that BCube can be seamlessly integrated with the TCP/IP protocol stack, and its packet forwarding procedure can be efficiently implemented in both hardware and software. Experiments on the authors' testbed demonstrate that BCube offers fault tolerance and load balancing and significantly improves bandwidth-intensive applications [5]. Similarly, these characteristics are also found in DCell.
Figure 2 shows a typical BCube DCN architecture.
Figure 2. BCube DCN Architecture [5]
Advantages of BCube DCN Architecture
i. BCube is fault tolerant
ii. Supports load balancing
iii. Significantly accelerates representative bandwidth-intensive applications
Limitations of BCube DCN Architecture
i. Performance degrades (albeit gracefully) as the server and/or switch failure rate increases.
ii. Fabricating the MDC is rather difficult, if not impossible.
iii. Servicing an MDC once it is deployed in the field is difficult due to operational and space constraints.
Therefore, it is extremely important that the design of the
proposed network architecture be fault tolerant and does not
degrade performance in the presence of continuous
component failures.
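For comparison with the DCell figures, BCube's scale can be summarized the same way (a hypothetical helper based on the construction in [5]: a BCube_k built from n-port switches holds n^(k+1) servers, each server using k+1 ports):

```python
# Sketch: BCube_k built from n-port COTS mini-switches (per [5]).
# A BCube_0 is n servers on one n-port switch; a BCube_k is built
# from n BCube_{k-1}s plus an extra layer of switches. Total: n^(k+1) servers.
def bcube_servers(n: int, k: int) -> int:
    return n ** (k + 1)

def bcube_ports_per_server(k: int) -> int:
    # Each server needs one port per level 0..k.
    return k + 1

if __name__ == "__main__":
    # An 8-port, k = 3 BCube suits a container of 4096 servers,
    # each server contributing 4 ports.
    print(bcube_servers(8, 3), bcube_ports_per_server(3))
```

This is why BCube targets container-scale deployments of only a few thousand servers, as noted above.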
Again, it is clear that the traditional tree structure
employed for connecting servers in datacenters will no longer
be sufficient for future cloud computing and distributed
computing applications. There is, therefore, an immediate
need to design new network topologies that can meet these
rapid expansion requirements. Current network topologies
that have been studied for large datacenters include fat-tree
[6], BCube [5], and FiConn [7]. These three address different
issues: For large datacenters, fat-tree requires the use of
expensive high-end switches to overcome bottlenecks, and is
therefore more useful for smaller datacenters. BCube is
meant for container-based datacenter networks, which are of
the order of only a few thousand servers. FiConn is designed
to utilize currently unused backup ports in already existing
datacenter networks.
Figure 3. FiConn2 Recursive Architecture with n = 4
From Figure 3, the FiConn2 is composed of 4 FiConn1s,
and each FiConn1 is composed of 3 FiConn0s. A level-0 link
connects one server port (the original operation port) to a
switch, denoted by dot-dashed line. A Level-1 or Level-2 link
connects the other port (the original backup port) of two
servers, denoted by solid line and dashed line respectively.
The overall challenges of existing DCNs such as DCell [4], FAT-tree [6], VL2 [8], Monsoon [9], DPillar [10], FiConn [11], DRWeb [12], SVLAN [13], Scafida [14] and MDCube [15] are discussed next.
2.3. Problems with Existing Network
Architectures
1. Most of the DCN architectures lack agility for distributed cloud-based applications, i.e. the ability to dynamically grow and shrink resources to meet demand and to draw those resources from the most optimal location (e.g. the tree-based architectures FAT-tree, VL2 and Monsoon).
2. Size constraints and scalability issues: most of them are not scalable, as servers are dedicated to the applications in conventional DCs (FAT-tree, VL2, Monsoon). Cabling complexity can be a practical barrier to scalability; for instance, in an MDC design the long cables between the containers become an issue as the number of containers increases.
Again, in the context of scalability and physical constraints, scaling a datacenter out means adding components that are cheap, whereas in scaling up more expensive components are upgraded and replaced to keep up with demand. Cost becomes an issue in this case.
3. Most architectures have huge power requirements, as they lack software management platforms for service aggregation such as OpenFlow Software Defined Networking. Physical constraints such as high-density configurations in racks can lead to very high power densities at room level, so an efficient cooling solution is important for datacenter reliability and uptime. In addition, air handling systems and rack layouts affect datacenter energy consumption.
4. Most of the architectures have a high cost economy. The cost of end switches for three-layer designs, cooling systems, etc. can lead to high OPEX.
5. Server/switch interconnection limitations: in most of the architectures there is poor server-to-server connectivity, leading to a high degree of resource oversubscription and fragmentation in the DCN.
6. In most systems, limited server-to-server capacity limits datacenter performance and fragments the server pool. Limited server-to-server capacity can lead to clustering the servers near each other in the hierarchy, because distance in the hierarchy affects the performance and cost of communication. As a consequence of these dependencies, resources are fragmented and isolated.
7. Most datacenter architectures suffer from reliability, utilization and fault-tolerance challenges. If some component of the data center fails, downtimes are high.
8. Security is a fundamental challenge in these architectures.
The problems of existing datacenter networks can be addressed in a cloud-based environment only when there is an efficient virtualization scheme and efficient load balancing, with excellent resource scheduling and allocation on the server pool. For an efficient energy management application, these criteria must be satisfied by the DCN.
3. System Model
3.1. DCCN Full Scale Virtualization
In a non-recursive DCCN, full virtualization refers to the creation of virtual resources such as the server operating system, network I/O, CPU, memory and storage. The main goal is to manage workloads by radically transforming traditional computing to make it more scalable. Virtualization can be applied at a wide range of system layers, including operating-system-level virtualization, hardware-level virtualization and server virtualization. At the physical server, this work considered operating-system-level virtualization, in which multiple operating systems run on one physical machine.
With virtualization on the server cluster, it is possible to separate the physical hardware and software by emulating hardware using software. When a different OS operates on top of the primary OS by means of virtualization, it is referred to as a virtual machine (VM). The VM is a data file on a physical machine that can be moved and copied to another computer, just like a normal data file. The servers in the virtual environment use two types of file structures: one defining the hardware and the other defining the hard drive. The virtualization software, or hypervisor, offers a caching service that is used to cache changes to the virtual hardware or the virtual hard disk for writing at a later time.
This technique enables an administrator to discard the changes made to the operating system, allowing it to boot from a known state. As shown in Figure 4, virtualization offers many benefits, including low- or no-cost deployment, full resource utilization, operational cost savings and power savings. The trade-off is its requirement for careful planning and skilled technical expertise. Again, since the virtual machines share the same resources, performance can suffer.
Figure 4. Full Virtualization used for Server Consolidation in Non-Recursive DCCN
As shown in Figure 4, the Virtual Machine Monitor (VMM) is an intermediate software layer between the OS and the VMs which provides virtualization. It presents each VM with a virtualized view of the real hardware and is therefore responsible for managing VMs' I/O access requests and passing them to the host OS to be executed. As a limitation, this operation introduces overhead which has a noteworthy impact on server performance. This virtualization technique offers the best isolation and security for virtual machines and can support any number of different OS versions and distributions. As such, EETACP performance is greatly enhanced at the server level.
3.2. DCCN Datacenter Scalability and
Redundant Replication
i. DCell Scalability Adaptation
This work modified the DCell scalability model presented in [4] by introducing the Laguerre function Ln(x) for scalable replication in case of disaster recovery. This mathematical characterization explains in detail the scalability and site redundancy behavior. This is vital because, for every energy user connecting to the DCCN, the processing servers must support scalability so as to allow myriads of remote connections for job/task processing, especially when accessing the EETACP. In this regard, this work seeks to derive a scalability model for the CEM users that connect to the DCCN. Analytical models of the server setup in DCCN have been presented in previous studies.
Now, let DCCN_s denote a high-level DCCN cluster subnet and DCCN_{s-1} denote a low-level server cluster connected together, where s is a subnet factor (s ranges from 1 to N_k). Hence, if a high-level subnet cluster is subnet N, a low-level subnet is subnet N-1, in a descending order of redundant integration. In this case, a high-level DCCN_s is built from low-level DCCN_{s-1} clusters.
Let ℓ_s denote the number of links in DCCN_s and ℓ_{s-1} the number of links in DCCN_{s-1}. Then the maximum number of DCCN_{s-1} clusters that will be used to build a DCCN_s is given by

    g = ℓ_{s-1} + 1                                  (1)

In this regard, let the number of DCCN_{s-1} clusters connected with DCCN_s in a subnet cluster be denoted by g_s, such that

    g_s = ℓ_{s-1} + 1                                (2)

Also let the number of servers in a DCCN_s subnet cluster be denoted by t_s, such that

    t_s = ℓ_{s-1} * (ℓ_{s-1} + 1)                    (3)

Hence,

    t_s = g_s * ℓ_{s-1}                              (4)

Equ (4) shows that the total number of servers in a subnet cluster (DCCN_s) is the product of the maximum number of DCCN_{s-1} clusters used to build it and the number of links in each cluster.
From (4), for s > 0,

    t_s = g_s * ℓ_{s-1} = (ℓ_{s-1} + 1) * ℓ_{s-1} = (ℓ_{s-1})^2 + ℓ_{s-1}

Expanding the term (ℓ_{s-1} + 1/2)^2 yields (ℓ_{s-1})^2 + ℓ_{s-1} + 1/4. Therefore t_s = (ℓ_{s-1} + 1/2)^2 - 1/4 > (ℓ_{s-1} + 1/2)^2 - 1/2, such that it becomes obvious that

    t_s + 1/2 > (ℓ_{s-1} + 1/2)^2                    (5)

Similarly, expanding (ℓ_{s-1} + 1)^2 yields (ℓ_{s-1})^2 + 2ℓ_{s-1} + 1. Therefore t_s = (ℓ_{s-1} + 1)^2 - ℓ_{s-1} - 1 < (ℓ_{s-1} + 1)^2 - 1. Hence,

    t_s + 1 < (ℓ_{s-1} + 1)^2                        (6)

Applying Equs (5) and (6) recursively and replacing ℓ_{s-1} with n, the number of links in DCCN_lb, yields t_k + 1/2 > (n + 1/2)^(2^k) and t_k + 1 < (n + 1)^(2^k) respectively for k > 0, where k is the scalability factor. Since t_k + 1/2 > (n + 1/2)^(2^k) is equivalent to t_k > (n + 1/2)^(2^k) - 1/2, and t_k + 1 < (n + 1)^(2^k) is equivalent to t_k < (n + 1)^(2^k) - 1, hence

    (n + 1/2)^(2^k) - 1/2 < t_k < (n + 1)^(2^k) - 1  (7)

Therefore, for a DCCN with n links, the number of servers is bounded as shown in Equ (7). The equation shows that the number of servers in a DCCN scales doubly exponentially as the node degree increases; hence, highly scalable support is guaranteed for myriads of remote energy meter users.
But from Equ (7), scalability without scalable replication in case of disaster recovery is of little use; hence, introducing a Laguerre function L_n(x) into the model addresses this issue:

    (n + 1/2)^(2^k) - 1/2 < t_k < (n + 1)^(2^k) - 1 + L_n(x)   (8)

Laguerre's differential equation for the distributed site backups is

    x y'' + (1 - x) y' + λy = 0                      (9)

Here x = 0 is a regular singular point of Equ (9), so it can be resolved by the series (Frobenius) solution method, with a_r given as the coefficients of the scalable site replication for the DCCN. Now, let

    y = Σ_{r=0}^∞ a_r x^(m+r) = a_0 x^m + a_1 x^(m+1) + a_2 x^(m+2) + ... + a_r x^(m+r) + ...   (10)

From Equ (10), this yields Equs (11) and (12):

    y'  = Σ_{r=0}^∞ (m + r) a_r x^(m+r-1)            (11)

    y'' = Σ_{r=0}^∞ (m + r)(m + r - 1) a_r x^(m+r-2) (12)

By substituting Equs (10), (11) and (12) into Equ (9), this yields

    Σ (m + r)(m + r - 1) a_r x^(m+r-1) + (1 - x) Σ (m + r) a_r x^(m+r-1) + λ Σ a_r x^(m+r) = 0

This implies that

    Σ (m + r)(m + r - 1) a_r x^(m+r-1) + Σ (m + r) a_r x^(m+r-1) - Σ (m + r) a_r x^(m+r) + λ Σ a_r x^(m+r) = 0

By simple arithmetic simplification, this now becomes

    Σ a_r [(m + r)(m + r - 1) + (m + r)] x^(m+r-1) + Σ a_r [λ - (m + r)] x^(m+r) = 0

which gives

    Σ a_r (m + r)^2 x^(m+r-1) + Σ a_r [λ - (m + r)] x^(m+r) = 0   (13)

The indicial equation is obtained by equating to zero the coefficient of the lowest-degree term x^(m-1), found by putting r = 0 in the first summation of Equ (13); it is not feasible to put r = -1 in the second summation to obtain x^(m-1), since r is always non-negative. The indicial equation a_0 m^2 = 0 with a_0 ≠ 0 gives the double root m = 0.
Also, equating to zero the coefficient of x^(m+r) [note that to obtain x^(m+r), let r → r + 1 in the first summation and keep r in the second summation of Equ (13)] gives

    (m + r + 1)^2 a_{r+1} + [λ - (m + r)] a_r = 0

Hence,

    a_{r+1} = [(m + r) - λ] / (m + r + 1)^2 * a_r    (14)

For m = 0 in Equ (14), this yields

    a_{r+1} = (r - λ) / (r + 1)^2 * a_r

If r = 0, we now have a_1 = -λ a_0.
If r = 1, we now have a_2 = (1 - λ)/(2)^2 * a_1 = λ(λ - 1)/(2!)^2 * a_0.
If r = 2, we now have a_3 = -λ(λ - 1)(λ - 2)/(3!)^2 * a_0.
Hence, a_k = (-1)^k λ(λ - 1)(λ - 2) ... (λ - k + 1)/(k!)^2 * a_0, where k is the mapping number.
By substituting the site replication coefficients a_1, a_2, a_3, ..., a_k and m = 0 into Equ (10), we then have

    y = a_0 [1 - λx + λ(λ - 1)/(2!)^2 x^2 - λ(λ - 1)(λ - 2)/(3!)^2 x^3 + ... + (-1)^k λ(λ - 1)(λ - 2) ... (λ - k + 1)/(k!)^2 x^k + ...]
      = a_0 Σ_{k=0}^{λ} (-1)^k λ! / [(k!)^2 (λ - k)!] x^k

where λ is a positive integer, so the series terminates at k = λ. If we take a_0 = λ!, the solution of Equ (9) becomes the site replication Laguerre polynomial function given by Equ (15):

    y = λ! [1 - λx + λ(λ - 1)/(2!)^2 x^2 - λ(λ - 1)(λ - 2)/(3!)^2 x^3 + ... + (-1)^λ λ(λ - 1)(λ - 2) ... (1)/(λ!)^2 x^λ]   (15)

For n ≥ 1 (writing λ = n), Equ (15) now yields Equ (16):

    L_n(x) = n! Σ_{j=0}^{n} (-1)^j n! x^j / [(j!)^2 (n - j)!]   (16)

By substituting Equ (16) into Equ (8), we now obtain Equ (17):

    (n + 1/2)^(2^k) - 1/2 < t_k < (n + 1)^(2^k) - 1 + n! Σ_{j=0}^{n} (-1)^j n! x^j / [(j!)^2 (n - j)!]   (17)

Hence, Equ (17) shows the energy cloud datacenter scalability and redundant replication model.
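A small sketch can exercise the model above (the function names are illustrative; the code simply evaluates the server-count bounds of Equ (7) and the site-replication Laguerre term of Equ (16) as given in this section):

```python
from math import factorial

# Sketch of the scalability/replication model: n links per DCCN_lb,
# scalability factor k, and the site-replication term
# L_n(x) = n! * sum_j (-1)^j n! x^j / ((j!)^2 (n-j)!)   (Equ 16).
def server_bounds(n: int, k: int) -> tuple[float, float]:
    lower = (n + 0.5) ** (2 ** k) - 0.5   # Equ (7), lower bound on t_k
    upper = (n + 1.0) ** (2 ** k) - 1.0   # Equ (7), upper bound on t_k
    return lower, upper

def laguerre_term(n: int, x: float) -> float:
    # Equ (16): replication headroom added to the upper bound in Equ (17)
    return factorial(n) * sum(
        (-1) ** j * factorial(n) * x ** j / (factorial(j) ** 2 * factorial(n - j))
        for j in range(n + 1)
    )

if __name__ == "__main__":
    lo_, hi_ = server_bounds(4, 2)        # 4 links, scalability factor 2
    print(lo_, hi_)                       # doubly exponential growth in k
    print(hi_ + laguerre_term(2, 0.5))    # Equ (17) upper bound with replication
```

With n = 4, the bounds jump from roughly 20-24 servers at k = 1 to about 410-624 at k = 2, illustrating the doubly exponential scaling claimed after Equ (7).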
3.3. DCN Recursive Algorithm
A recursive algorithm is one in which objects are defined in terms of other objects of the same type. This is typical of DCell and BCube, as previously explained. Such an algorithm generates the even numbers used for server mapping on a switch in the DCell and BCube DCN architectures, as demonstrated in Table 1. The merits and demerits are outlined next.
Table 1. A Simple Recursive Algorithm
The advantages include:
- Simplicity of code
- Easy to understand
The disadvantages include:
- Memory
- Speed
- Possibly redundant work
In general, recursive computer programs require more
memory and computation compared with iterative algorithms,
but they are simpler and in many cases depict a natural way
of thinking about the problem.
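The memory/speed trade-off above can be illustrated with a small example (hypothetical code, mirroring the even-number routine of Algorithm 1a below in both recursive and iterative form):

```python
import sys

# Recursive definition: the k-th even natural number (first even being 0).
# Simple to read, but each call adds a stack frame (memory + call overhead).
def even_recursive(k: int) -> int:
    if k == 1:
        return 0
    return even_recursive(k - 1) + 2

# Iterative equivalent: same result in constant memory.
def even_iterative(k: int) -> int:
    even = 0
    for _ in range(k - 1):
        even += 2
    return even

if __name__ == "__main__":
    print(even_recursive(5), even_iterative(5))  # both give 8
    # Deep recursion is capped by the interpreter; iteration is not.
    print(sys.getrecursionlimit())
```

Both versions agree, but the recursive one fails for very large k once the call stack limit is reached, which is exactly the memory disadvantage listed above.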
3.4. Non Recursive DCCN
In the SGEMS, a non-recursive DCCN routine was used to implement Figure 5. The intent is to improve the quality-of-service metrics of the energy network. In the DCCN architecture, an Integrated Service OpenFlow Load Balancer (ISOLB) supports an embedded VLAN service which offers excellent computing characteristics such as flexibility, security, broadcast regulation and congestion control. This iterative, or non-recursive, algorithm has two sections, as detailed in Table 2. The first section checks whether the DCCNs server subnet cluster is constructed; if so, it connects all the server nodes n to a corresponding ISOLB port and ends the routine.
The second section interconnects the servers to the corresponding switch port, with each server connected by one link. Each server in the subnet cluster DCCNlb is connected with 40 Gb/s links for all OpenFlow VLAN ids. The DCCN testbed in Figure 5 has its OpenFlow VLAN logical segmentation shown in Figures 6a and 6b; its linear construction algorithm is detailed in Table 2 below. In the DCCN logical structure, the servers in one subnet are connected to one another through one of the ISOLB ports dedicated to that subnet. Each server in one subnet is also linked to a server of the same order in every other subnet. As such, each server has two links: with one it connects to other servers in the same subnet (intra-server connection), and with the other it connects to the servers of the same order in all other subnets (inter-server connection).
Apart from the communication that goes on simultaneously in the various subnets, the inter-server connection is actually an OpenFlow VLAN connection. This OpenFlow VLAN segmentation logically isolates the servers for security and improved network performance and, together with the other server virtualization schemes, ultimately improves network bandwidth and speed. The OpenFlow VLAN segmentation gives each DCCNs (subnet) the capacity to efficiently support enterprise web applications (EETACP, web portals, and cloud applications such as software as a service) running on server virtualization in each port, thereby lowering traffic density.
Table 2. A Robust Non-Recursive Algorithm
Algorithm 1: DCCN OpenFlow VLAN Construction Algorithm.
/* l stands for the level of DCCNs subnet links, n is the number of nodes in a cluster DCCNlb,
pref is the network prefix of DCCNlb, and s is the number of servers in a DCCNlb cluster */
Build DCCNs (l, n, s)
Section I: /* build DCCNs */
If (l == 0) do
For (int i = 0; i < n; i++) /* where n = 4 */
Connect node [pref, i] to its switch;
Return;
Section II: /*build DCCNs servers*/
For (int i = 0; i < s; i++)
Build DCCNs ([pref, i], s)
Connect DCCNs (s) to its switch;
Return;
End
Algorithm 1a: Even(positive integer k)
Input: k, a positive integer
Output: k-th even natural number (the first even being 0)
Algorithm:
int i, even;
i := 1;
even := 0;
while( i < k ) {
even := even + 2;
i := i + 1;
}
return even.
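The construction pseudocode in Table 2 can be sketched as runnable code (a minimal, hypothetical model: the function and port names are illustrative, and links are recorded as plain endpoint pairs rather than a faithful reproduction of the ISOLB):

```python
# Minimal sketch of Algorithm 1: connect n nodes of a DCCN_lb cluster to
# ISOLB ports (Section I), then attach s servers with one link each
# (Section II). Links are recorded as (endpoint, switch_port) pairs.
def build_dccn(prefix: str, n: int, s: int) -> list[tuple[str, str]]:
    links = []
    # Section I: connect each node [pref, i] to its ISOLB port.
    for i in range(n):
        links.append((f"{prefix}.node{i}", f"{prefix}.isolb_port{i}"))
    # Section II: connect each server with a single link to its switch port.
    for i in range(s):
        links.append((f"{prefix}.server{i}", f"{prefix}.switch_port{i}"))
    return links

if __name__ == "__main__":
    topo = build_dccn("dccn0", n=4, s=2)
    for endpoint, port in topo:
        print(endpoint, "<->", port)
```

Because the routine is a pair of flat loops rather than a self-call, its cost grows linearly with n + s, which is the point of the non-recursive design.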
Figure 5. Non-Recursive DCCN Validation Testbed (Scenario)
Figure 6a. Logical Isolation of EETACP Servers
Figure 6b. OpenFlow Mapping
4. DCCN Model Validations
4.1. SGEMS Datacenter Experimental Setup
For validation of the energy application DCCN, this work
used the design parameters obtained from selected
production testbeds (UNN DCN and galaxy plc) to develop a
generic template in Riverbed Modeller version 17.5. Based
on the experimental measurement carried out on the testbed,
the work obtained the metrics used for performance
validations of DCCN in comparison with two closely related
datacenter network architectures reviewed previously, DCell
and BCube. On the generic Riverbed Modeller template
shown in figure 5 above, three scenarios were created, one
for DCCN, one for DCell, and one for BCube. For each of
the scenarios, the attributes of the three architectures were
configured on the template and the simulation was run.
Afterwards, the Riverbed Modeller engine generated the respective data for each of the QoS metrics investigated in this work.
Essentially, DCCN, BCube and DCell share several similar
design principles, such as providing high capacity between
all servers and fully accommodating the end-systems. DCCN
uses only a low-end OpenFlow load balancer and provides
better one-to-x support at the cost of multi-ports per-server.
Also, it is able to execute job dispatch to the server clusters.
Besides, the proposed DCCN uses its discrete process
algorithms, logical isolations, ISOLB, server virtualization,
and cluster server convergence to enhance its performance.
On the other hand, BCube uses active probing for load balancing. DCell employed neighbor maintenance, link-state management, prototyping, and fault tolerance schemes in its design approach.
4.2. Result Analysis
Three experiments were carried out to study parameters such as throughput, fault tolerance, network capacity, utilization, service availability, resource availability, delay, and the scalability effects of DCCN. The DCCN responses with
respect to these parameters were compared against BCube
and DCell datacenter architectures.
Before the validation run, link consistency tests were carried out randomly to ascertain any possibility of failure in all cases. In this context, both the path failure ratio and the average path length of the found paths were investigated, and all the links were found satisfactory. This work
enabled the essential attributes for each of the three scenarios
(DCCN, DCell, and BCube). The validation run completed
successfully and the results were collated in the global and
object statistics reports. The validation simulation plots of the
three DCN architectures under study (DCCN, DCell and
BCube) are shown in the plots of figures 7 to 12.
From Figure 7, it was observed that the DCCN had a relatively better throughput response (40%) compared with BCube (26.67%) and DCell (33.33%). This is because the two-tier architecture of DCCN, with its OpenFlow load balancer module (interface segmentation) and virtualization, offers better job delivery services compared with the other two architectures. Again, the recursive construction of the other two architectures introduces throughput impedance and delays for a mission-critical cloud service environment such as the EETACP solution.
Figure 7. Average throughput Response Comparison (DCCN, BCube and DCell)
Figure 8 shows the average scalability response comparison. In this case, it was observed that the proposed DCCN had a fairly good scalability response, thereby supporting fault tolerance up to an optimal degree. Recall from the literature that DCell and BCube were established as fault-tolerant as well as scalable networks; but with the introduction of the OpenFlow load balancer and full-scale virtualization, the DCCN gave a 34.85% scalability response, relatively better than the respective scalability responses of DCell (33.33%) and BCube (31.82%). It is known that perfect scalability can only be achieved if the network system provides identical response times for twice the amount of given work, twice the amount of bandwidth and twice the amount of hardware. Perfect scalability cannot be achieved in real life, but attempts can be made to approach it. For example, DNS server behavior has limits beyond a system's control: theoretically, it is not possible to serve more requests than the DNS servers can process. This is the upper bound for any DCN, even for Google and the Facebook social network.
Figure 8. Average Scalability Response Comparison (DCCN, BCube and DCell)
Figure 9 shows the average resource availability response
comparison. Network availability depends on the access
technology and the service types. Resource availability in a
DCCN is complex to gather when UDP and TCP services co-
exist. But, virtual machine migration enables load balancing,
hot spot mitigation and server consolidation in virtualized
environments. Generally, live VM migration is of two types -
adaptive, in which the rate of page transfer adapts to virtual
machine behavior and non-adaptive, in which the VM pages
are transferred at a maximum possible network rate. In either
method, migration requires a significant amount of CPU and
network resources, which can seriously impact the
performance of both the VM being migrated as well as other
VMs. This is the major constraint of DCCN.
In the default non-migration mode, however, virtualization has
a positive influence on resource availability. It was observed
that DCCN had 38.46% resource availability, DCell had 33.33%
and BCube had 28.21%. The implication is that for very scalable
services, DCCN will guarantee a better deployment platform for
the energy application (EETACP).
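The three availability figures sum to 100%, which suggests they are expressed as percentage shares of the combined score across the three architectures. A small sketch of that normalization (the raw scores are hypothetical values, chosen only to reproduce the reported shares):

```python
def normalized_shares(values):
    """Express raw per-architecture scores as percentage shares
    of their total, the form used in the comparison figures."""
    total = sum(values)
    return [round(100 * v / total, 2) for v in values]

# Hypothetical raw availability scores for DCCN, DCell, BCube.
print(normalized_shares([1.5, 1.3, 1.1]))
```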
Figure 9. Average Resource Availability Response Comparison (DCCN, BCube and DCell)
Another key metric used in the comparison of the DCCN, DCell
and BCube architectures is the system delay, used
interchangeably with system latency. This delay is a measure of
the time it takes for data to travel from source to
destination: the higher the value, the longer the journey. High
latency can be detrimental to network-sensitive applications,
such as real-time video and EETACP, and can also introduce
delays noticeable to users. The reduction of the DCCN into a
two-tier network yields a lower-latency network than DCell and
BCube, whose architectures are recursively built. The issue
with this recursive structure is high wait states or delay
times.
From Figure 10, the delay response of DCCN is 25%, that of
DCell is 41.67%, while that of BCube is 33.33%. In all cases, a
recursive structure, though good for server redundancy, creates
issues for mission-critical applications in terms of system
delay response times.
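The latency argument above largely reduces to hop counts: each extra recursion level adds forwarding hops, and each hop contributes its own wait state. A minimal sketch (the hop counts and per-hop times are illustrative assumptions, not measured values):

```python
def path_delay(hops, per_hop_ms, wait_ms=0.0):
    """End-to-end delay as hop count times per-hop latency,
    plus any per-hop wait state the structure introduces."""
    return hops * (per_hop_ms + wait_ms)

# Hypothetical figures: a flat two-tier DCCN path crosses 2
# switch hops; a recursively built level-2 path may cross 5,
# each adding a small forwarding wait state.
print(path_delay(hops=2, per_hop_ms=0.5))               # two-tier
print(path_delay(hops=5, per_hop_ms=0.5, wait_ms=0.25)) # recursive
```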
Computational and Applied Mathematics Journal 2015; 1(3): 107-121 119
Figure 10. Average Delay Response Comparison (DCCN, BCube and DCell)
Also, with lower firewall security delay time as well as lower
utilization, relatively improved resource availability
behaviour was observed in the proposed DCCN, as shown in
Figures 11 and 12.
Now, Figure 11 shows the influence of DCCN security on link
utilization under active usage. In all cases, the energy
application on the server was emulated with a heavy HTTP
service in the Modeller environment. As observed in the
OpenFlow firewall design, the low utilization (5%) of the link
is a result of the activity of the OpenFlow firewall, which
blocks unauthorized activities on the DCCN and thereby keeps
bandwidth utilization on the link low, as shown above. This is
not the case when the Pix firewall was isolated: in the
scenario without the OpenFlow firewall, utilization was very
high (95%). This is a result of unauthorized user activities
going unmonitored and unfiltered, constituting a major drain on
the network link and driving the system to a very high
utilization response. As such, for improved performance of the
DCCN, a Pix firewall is indispensable.
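The 5% versus 95% contrast can be framed as the fraction of offered traffic the firewall admits onto the link. A sketch under assumed numbers (the 95 Mb/s offered load, 100 Mb/s capacity and admit fraction are hypothetical, chosen only to reproduce the reported utilizations):

```python
def link_utilization(offered_bps, allowed_fraction, capacity_bps):
    """Percentage utilization of a link behind a filtering
    firewall: only traffic matching the allow rules reaches it."""
    return 100 * offered_bps * allowed_fraction / capacity_bps

# Without filtering, every flow is forwarded (95% utilization);
# with the firewall admitting roughly 1 in 19 flows, it falls to 5%.
print(link_utilization(95e6, 1.0, 100e6))     # no firewall
print(link_utilization(95e6, 1 / 19, 100e6))  # OpenFlow firewall active
```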
Figure 11. Plot of DCCN_Pix Firewall Point to Point utilization of WAN link
Figure 12 shows the influence of DCCN security on query
response time. As expected, the presence of the Pix firewall
improved the DB response time by 12.4%. This demonstrates that
with lower link utilization, the average delay on the link is
also lower. With the firewall device isolated, a response time
delay of 87.06% was observed, which is very high. This implies
that malicious activities as well as unnecessary transactions
on the network increase the system delay and adversely affect
network performance.
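The link between utilization and response time follows familiar queueing behaviour: as utilization approaches saturation, response time grows sharply. A rough M/M/1-style illustration (the 10 ms service time is a hypothetical figure, and M/M/1 is an illustrative model, not the one used in the study):

```python
def mm1_response_time(service_ms, utilization):
    """M/M/1 estimate: mean response time grows sharply as
    utilization approaches 1, the effect seen in Figure 12."""
    return service_ms / (1 - utilization)

# Hypothetical 10 ms base service time.
print(mm1_response_time(10.0, 0.05))  # firewall on: ~10.5 ms
print(mm1_response_time(10.0, 0.95))  # firewall off: ~200 ms
```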
Figure 12. Plot of DCCN Pix Firewall Response Times
From the DCCN security analysis, it was deduced that low link
resource utilization results in high resource conservation
(processor, memory, I/O, disk, etc.) while high link resource
utilization results in low resource conservation. The
discrete-event methodology used in evaluating the DCCN firewall
throughput metric shows that such a network can remain stable
at all times. Consequently, the DCCN with its OpenFlow virtual
appliance (Pix firewall) is shown to be an effective security
strategy for the proposed DCCN. Figures 11 and 12 sufficiently
demonstrate the roles of the OpenFlow Virtual Appliance (OFVA)
established in a previous study on cloud-based forensic
security.
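The discrete-event approach mentioned above can be sketched as a single-server FIFO model of the firewall: as long as service capacity exceeds the (filtered) arrival rate, queueing delay stays bounded and the system is stable. The rates below are hypothetical, not taken from the study:

```python
import random

def simulate_firewall(n_packets, arrival_rate, service_rate, seed=1):
    """Minimal discrete-event model of a firewall as a single
    FIFO server: packets arrive with exponential inter-arrival
    times, queue, and are processed in order.  Returns the mean
    sojourn (queue + service) time; with service_rate above
    arrival_rate, the queue stays stable."""
    rng = random.Random(seed)
    t_arrive, free_at, total = 0.0, 0.0, 0.0
    for _ in range(n_packets):
        t_arrive += rng.expovariate(arrival_rate)   # next arrival event
        start = max(t_arrive, free_at)              # wait if server busy
        free_at = start + rng.expovariate(service_rate)
        total += free_at - t_arrive                 # sojourn of this packet
    return total / n_packets

# Hypothetical rates: 50 pkt/s of admitted traffic against
# 1000 pkt/s of firewall capacity (the ~5% utilization regime).
print(simulate_firewall(10_000, arrival_rate=50.0, service_rate=1000.0))
```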
Tables 3a and 3b show the summarized results of the study.
Table 3a. Result Summary of the EETACP DCCN
S/N Validation Metrics DCCN DCell BCube
1. Avg. Throughput (Bytes/Sec) 40% 33.33% 26.67%
2. Avg. Scalability Response 34.85% 33.33% 31.82%
3. Avg. Resource Availability 38.46% 33.33% 28.21%
4. Avg. Delay Response (Secs) 25.00% 41.67% 33.33%
Table 3b. Result Summary of DCCN OpenFlow Security
S/N Validation Metrics DCCN with Open Flow Firewall DCCN with Non-Open Flow Firewall
1. Avg. Link Utilization 5.00% 95.00%
2. Avg. Query Response 12.4% 87.06%
5. Conclusions
This paper has presented a non-recursively built datacenter
network subsystem of an earlier proposed Smart Green
Energy Management System (SGEMS). This network
primarily introduced an Integrated Service OpenFlow Load
Balancer (ISOLB) in its two-tier design to achieve scalability,
fault tolerance, and high network capacity for remote users
that will access the Enterprise Energy Analytic Tracking
Cloud Portal (EEATCP). A review of DCell and BCube outlined
their merits and demerits while articulating the problems
existing datacenter networks pose for hosting web-based energy
applications such as EETACP. Full virtualization was used for
server consolidation in the Non-Recursive DCCN. Mathematical
models for the DCCN datacenter scalability and redundant
replication were established. The non-recursive DCN algorithm
was then employed in the EETACP DCCN design. Using Riverbed
Modeller, the network was simulated in comparison with DCell
and BCube, the most closely related DCN architectures.
The results showed that the DCCN offered reliable performance
in terms of throughput (40%), scalability response (34.85%),
resource availability (38.46%), delay response (25.00%), lower
link utilization (5.00%) and lower query response (12.4%). This
implies that energy policy makers can leverage such a platform
to deploy their enterprise energy applications. Future work
will focus on the power system design for the DCCN considering
a zero-downtime scenario. In this case, a PV microgrid hybrid
system with other utilities will be leveraged. To achieve this,
an FPGA changeover system referred to as CloudDPI will be
investigated for the DCCN.
[15] H. Wu, G. Lu, D. Li, C. Guo, and Y. Zhang. “MDcube: a high performance network structure for modular datacenter interconnection”, In Proc. of the 5th international conference on Emerging networking experiments and technologies,( in CoNEXT ’09), Pp. 25–36, New York, NY, USA, 2009. ACM