Graphs are at the Heart of the Cloud
Graphs are at the heart of the cloud
Alejandro Erickson
This work is supported by the EPSRC grant “INPUT: Interconnection Networks, Practice Unites with Theory”, with Iain Stewart, Javier Navaridas, and Abbas Kiasari.
April 28, 2015, ACiD Seminar
School of Engineering and Computer Science, Durham University
Data Centre Networks
What are they? A collection of servers and networking hardware (switches, load balancers, etc.) connected together at one physical location.
What are they used for?
I Video streaming
I Data analysis and scientific applications
I Indexing the web
I Cloud services. The global cloud computing market will soon exceed $100 billion.
Huge and growing
Google, Amazon, and Microsoft combined house an estimated total of over 3 million servers, and these numbers have been growing exponentially. Microsoft Live, in Chicago, covers over 700,000 ft². In 2010, data centres used between 1.1% and 1.5% of total global electricity, and the US EPA reported that data centre power usage doubled from 2000 to 2006.
“Layered” (Single Tier) Data Centre Architecture
Aggregation Layer: “Server-to-server multi-tier traffic flows through the aggregation layer and can use services, such as firewall and server load balancing, to optimize and secure applications ... services, such as content switching, firewall, SSL offload, intrusion detection, network analysis, and more.”
Access Layer: “Where the servers physically attach to the network...” 1
1 http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/DC_Infra2_5/DCInfra_1.html
Drawbacks. Poor:
I Scalability
I Fault tolerance
I Energy efficiency
I Hardware cost
I Agility
This is widely considered by the research community to be a legacy architecture.
What is an Architecture?
Architecture, for “us”: a list of constraints (formal and informal) on graphs. This can be complex and difficult to determine!
For example:
I nodes can be of different types, e.g. switch-nodes and server-nodes, with appropriate degree constraints.
I edges can be of different types, and certain edges may be forbidden.
I constraints on routing algorithms, diameter, embedding in space, etc.
The traditional data centre is topologically “uninteresting”: most of the functionality is in the “enterprise-level” aggregation switches, and the network topology is limited.
FrameWork vs NowWork
Overview of Architectures
I 3-layer architecture; single-tier or multi-tier, a.k.a. hierarchical.
I Switch-centric, commodity off-the-shelf (indirect networks): fat-tree (Clos-based multi-rooted tree), Al-Fares, Loukissas, and Vahdat, SIGCOMM 2008; Jellyfish (random regular graphs), Singla et al., USENIX 2012.
I Hybrid-optical switch: send certain traffic to an optical switch. Helios, Farrington et al., SIGCOMM 2010.
I Server-centric networks: DCell, Chuanxiong Guo et al., SIGCOMM 2008; BCube, Chuanxiong Guo et al., SIGCOMM 2009; and many others.
I Use wavelength division multiplexers to combine links into a single optical cable. Quartz, Yupeng James Liu et al., SIGCOMM 2014.
I Free-space optics on steerable controllers for each rack of servers. Firefly, Hamedazimi et al., SIGCOMM 2014.
Switch-centric commodity data centre
A network of commodity (Ethernet) switches, typically homogeneous. Servers (or server racks) attach at certain points, linked by cables. (Many details omitted!)
Graph theoretic constraints: a connected graph on server-nodes of degree 1 and switch-nodes of degree at most 100 (degree 48 is typical).
Figures: (left) fat-tree Clos (from Al-Fares et al. 2008); (right) Jellyfish random regular graph, from Godfrey’s blog.
Hybrid-optical switch
Replace some switches in a switch-centric network with optical switches. This provides very low latency connections, with a small setup time (to position mirrors, for example).
Graph theoretic constraints: routing algorithms and 3D embedding (wiring) must account for heterogeneous links and switch-nodes (as regards capability and cost).1
1 Figure: Helios topology (from Farrington et al. 2010)
Quartz Ring
Completely interconnect small sets of servers and/or switches in an existing network using an optical cable (cycle) and wavelength division multiplexers to combine signals.
Graph theoretic constraints: edit a graph G into a new network by adding small cliques. Routing algorithms, fault tolerance, wiring, and cost must be reconsidered.2
2 Figure: Quartz (from Liu et al. 2014)
Free space optics (FSO)
Each rack of servers has a Top-of-Rack switch, connected to controllable FSOs that communicate with other racks by bouncing an optical signal off a ceiling mirror (radical...).
Graph theoretic constraints: switch-node degree at most 100, links have a maximum embedded length, and the topology is reconfigurable ... 3
3 Figures: Firefly (from Hamedazimi et al. 2014)
Server-centric architecture
A direct network of commodity servers with unintelligent, crossbar, commodity switches. Routing intelligence is programmed into the servers.
Graph theoretic constraints: switch-node degree at most 100; server-node degree small, perhaps 1–4. No switch-node-to-switch-node links. Switch-nodes with their (server-node) neighbours may sometimes be abstracted as cliques. The programmability of servers as networking devices makes this architecture very flexible; many topologies are possible.
Figure: Commodore 64 image from Wikipedia
Only scratched the surface...
More sophisticated constraints
I Table lookups are limited by the number of server-nodes.
I Data centre components need to be embedded in 3-space in a practical way (i.e. packaging/wiring).
I “Uplinks” to the outside world, etc.
We also want good networking properties... These do not always translate easily into graph theoretic properties. Furthermore, they depend on how the data centre will be used, and networks researchers do not always agree on what they are.
Desirable properties: Real / in Graphs
I High throughput: e.g. high bisection width, efficient routing, load balancing
I Low latency: low diameter, efficient routing
I Low equipment/power cost: few switches and ports, short wires
I Easy wiring: ...!?
I Scalability: e.g. useful families
I Efficiently computable sub-structures for special communications, data centre management, and virtualisation: e.g. spanning trees, symmetry (?)
I Fault tolerance, graceful degradation: e.g. internally disjoint paths, efficient routing
It is difficult to find graph theoretic properties that are necessary and sufficient conditions for real world properties.
How data centre design happens
Engineers: propose a holistic package based on engineering experience, including basic routing, fault tolerance, load balancing, operating system integration, etc. Validation is by simulation (or Microsoft’s data centre test bed!).
Theoreticians: come in later to improve routing, fault tolerance, throughput, etc., for sometimes simplified (blue-sky) abstractions of the data centre. Validation is by proving theorems.
A new approach (for theoreticians): stay close to current technology, embrace simulation, and collaborate with engineers!
Generalized DCell (2008)
31 copies of a level-1 DCell
DCell’s k-level links
DCell’s k-level links
Let t_k be the number of server-nodes in D_{k,n}. Let [a, b] be the b-th server in the a-th copy of D_{0,n}. Below is the β-DCell_{k,n} connection rule:
I Build D_{1,n}: connect [a, b] ↔ [a + b + 1 (mod t_0 + 1), t_0 − 1 − b]
I Label the servers in D_{1,n} by a·t_0 + b. Reuse the connection rule to make D_{2,n}: [a, b] ↔ [a + b + 1 (mod t_{k−1} + 1), t_{k−1} − 1 − b]
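The level-1 connection rule above can be sketched in Python (a hedged illustration, not the authors' code; D_{1,n} joins t_0 + 1 copies of D_{0,n}, each holding t_0 = n servers):

```python
# Hedged sketch of the beta-DCell level-1 connection rule quoted above.
# [a, b] denotes the b-th server in the a-th copy of D_{0,n}.
def beta_dcell_partner(a, b, t0):
    """Level-1 partner of server [a, b] under the rule on the slide."""
    return ((a + b + 1) % (t0 + 1), t0 - 1 - b)

def level1_links(n):
    """All level-1 links of D_{1,n} (t0 = n servers per level-0 cell)."""
    t0 = n
    links = set()
    for a in range(t0 + 1):          # t0 + 1 copies of D_{0,n}
        for b in range(t0):          # t0 servers per copy
            links.add(frozenset(((a, b), beta_dcell_partner(a, b, t0))))
    return links

# The rule is an involution (applying it twice returns the original server),
# so every server gets exactly one level-1 link: t0 * (t0 + 1) / 2 links total.
n = 4
assert all(beta_dcell_partner(*beta_dcell_partner(a, b, n), n) == (a, b)
           for a in range(n + 1) for b in range(n))
assert len(level1_links(n)) == n * (n + 1) // 2
```

The involution check is the reason the rule is well defined: each server is matched with exactly one partner in a different copy of D_{0,n}.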
DCell properties
n-port, k-level DCell_{k,n}
I Number of servers: N > (n + 1/2)^(2^k) − 1/2
I Number of switches: N/n
I Diameter: at most 2^(k+1) − 1 (exact value unknown)
I Number of internally disjoint paths: unknown (and how many i.d.p.s are there between two server-nodes that are connected to the same switch-node?)
I What about bisection width? ... (unknown)
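The server-count bound can be checked numerically against the standard DCell size recurrence (an assumption from the published construction, not stated on the slide): t_0 = n and t_k = t_{k−1}(t_{k−1} + 1), since a level-k cell joins t_{k−1} + 1 copies of a level-(k−1) cell. A minimal sketch:

```python
# Hedged sketch: the standard DCell size recurrence, checked against the
# slide's bound N > (n + 1/2)^(2^k) - 1/2 and the switch count N/n.
def dcell_servers(n, k):
    """Number of server-nodes t_k in DCell_{k,n}: t_0 = n, t_k = t_{k-1}(t_{k-1} + 1)."""
    t = n
    for _ in range(k):
        t = t * (t + 1)
    return t

for n in range(2, 7):
    for k in range(1, 4):
        N = dcell_servers(n, k)
        assert N > (n + 0.5) ** (2 ** k) - 0.5   # doubly exponential growth
        assert N % n == 0                        # switches: N / n, one per n servers
```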
DCell (2008)
I The first of its kind; tends to be a reference point.
I Poor load balancing, difficult to analyse.
BCube (2009)
I Based on generalized hypercubes, K_n^k; replace each n-clique with a switch-node connected to n server-nodes.
I Easier to analyse, low diameter, fault tolerant.
I Expensive, small scale, difficult wiring.
FiConn (2009)
I Similar to DCell, but server-nodes have degree 1 or 2.
I Idea: level-k edges use only half of the degree-1 server-nodes from FiConn_{k−1,n} to build FiConn_{k,n}.
DCell
“Star”-replaced networks
Let G be a graph, and let G* be the graph that is obtained by subdividing each edge of G twice. Vertices of G become switch-nodes W, and the new vertices are server-nodes S.
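A minimal sketch of the construction (illustrative only; the tuples ("s", u, v) are an arbitrary encoding, not from the talk, for the two server-nodes created on edge uv):

```python
# Hedged sketch of the star-replaced graph G*: subdivide every edge of G twice.
# Original vertices become switch-nodes; the new vertices are server-nodes.
def star_replace(edges):
    """Return (switch_nodes, server_nodes, edges) of G* for a graph G given as an edge list."""
    switches = {u for e in edges for u in e}
    servers, star_edges = [], []
    for u, v in edges:
        su, sv = ("s", u, v), ("s", v, u)           # two subdivision vertices on edge uv
        servers += [su, sv]
        star_edges += [(u, su), (su, sv), (sv, v)]  # edge uv becomes the path u-su-sv-v
    return switches, servers, star_edges

# Example: G = K3 (a triangle) gives 3 switch-nodes, 2|E(G)| = 6 server-nodes, 9 edges.
W, S, E = star_replace([(0, 1), (1, 2), (0, 2)])
assert (len(W), len(S), len(E)) == (3, 6, 9)
```

Note that every server-node ends up with degree 2: one link to a switch-node and one to another server-node, matching the dual-port setting discussed next.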
Relevance of G*
Every dual-port server-centric data centre network in which each server-node connects to one switch-node and one other server-node can be obtained by a star-replaced construction.
I Commodity servers tend to have 2 NIC (network interface controller) ports.
I More server-nodes per switch-node than 1 subdivision.
I Each server-node “belongs” to exactly 1 switch-node.
I Retains some aspects of the node/edge/arc symmetry of G.
I Many “networking” properties of G* can be derived from those of G.
Properties of G*
Certain properties of G* are a function of properties of G:
I Internally disjoint paths (between switch-nodes).
I The number of server-nodes is twice |E(G)|.
I The diameter of G*, measured in server-to-server hops, is about twice that of G.
I Routing algorithms in G translate directly to routing algorithms in G*.
I When G is regular, the bisection width of G* can be computed exactly from the solution to the edge-isoperimetric problem on G.
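Two of the listed properties can be checked by brute force on a small example. This sketch assumes G = C4 and uses the fact that each edge of G becomes a 3-edge path in G*, so switch-to-switch distances in G* are exactly three times the distances in G:

```python
# Hedged brute-force check on G = C4: server-node count is 2|E(G)|, and
# switch-to-switch distances in G* are 3x the corresponding distances in G.
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

g_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # G = C4
adj, servers = {}, []
for u, v in g_edges:                         # subdivide each edge twice
    su, sv = ("s", u, v), ("s", v, u)
    servers += [su, sv]
    for x, y in ((u, su), (su, sv), (sv, v)):
        adj.setdefault(x, set()).add(y)
        adj.setdefault(y, set()).add(x)

assert len(servers) == 2 * len(g_edges)      # server-nodes: twice |E(G)|
d = bfs_dist(adj, 0)
assert d[1] == 3 and d[2] == 6               # dist_G(0,1) = 1, dist_G(0,2) = 2, tripled
```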
Data Centre Network from (Kn^k)?
Let GQk,n = Kn^k = Kn × Kn × · · · × Kn (k times), the generalized hypercube; e.g., GQ?2,3 = (K3 × K3)?.
Each vertex has a label a0 a1 . . . ak−1 with 0 ≤ ai ≤ n − 1, and two vertices are joined by an edge if their labels differ in exactly one coordinate.
24/38
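A small illustration (my own code, with assumed names, not from the slides) of the adjacency rule just stated: build GQk,n directly from the labels and check that every vertex has degree k(n − 1):

```python
from itertools import product

def gq_edges(k, n):
    """Generalized hypercube GQ_{k,n}: vertices are length-k words
    over {0,...,n-1}, adjacent iff they differ in exactly one
    coordinate."""
    verts = list(product(range(n), repeat=k))
    edges = []
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            if sum(a != b for a, b in zip(u, v)) == 1:
                edges.append((u, v))
    return verts, edges

verts, edges = gq_edges(2, 3)          # GQ_{2,3} = K3 x K3
assert len(verts) == 3 ** 2            # n^k vertices
# Every vertex has degree k(n-1): n-1 neighbours per coordinate.
deg = {v: 0 for v in verts}
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
assert all(d == 2 * (3 - 1) for d in deg.values())
```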
Bisection width vs S-bisection width
Bisection width bw(G): the minimum number of edges whose removal partitions the vertices into two halves.
I Let G be an interconnection network on N nodes with bw(G) = β, under the random traffic pattern.
I Throughput of a link = 1/#(flows using the link).
I Throughput of G = min{ throughput of a link }.
I On average, half the flows use a cut-link, so at least one link is involved in at least (N/2)/β flows; hence the throughput is at most 2β/N.
I So, the higher the bisection width, the higher the (estimated) throughput.
25/38
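The estimate in the bullets can be sketched numerically; the function below is my own illustration of the 2β/N bound, not code from the talk:

```python
def throughput_upper_bound(n_nodes, bisection_width):
    """Estimated per-flow throughput bound under random traffic:
    about N/2 flows cross the bisection, so some cut link carries
    at least (N/2)/beta flows, giving throughput <= 2*beta/N."""
    flows_across_cut = n_nodes / 2            # half the flows, on average
    worst_link_load = flows_across_cut / bisection_width
    return 1 / worst_link_load                # = 2*beta/N

# e.g. the hypercube Q_10: N = 1024 nodes with bisection width 512
assert throughput_upper_bound(1024, 512) == 1.0
# halving the bisection width halves the estimated throughput
assert throughput_upper_bound(1024, 256) == 0.5
```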
Bisection width vs S-bisection width
Bisection width bw(G): the minimum number of edges whose removal partitions the vertices into two halves.
S-bisection width bwS(G?): the minimum number of edges whose removal partitions the nodes into two parts, each containing half of the server-nodes.
The standard text (Dally and Towles) insists on partitioning both switches and processors, but in practice S-bisection width is used. This seems to be the first time this has been formalised! 26/38
Can we characterise optimal partitions (below) of the switch-nodes? Can we compute these partitions efficiently?
26/38
A slight misfortune...
Lemma. Let G = (V, E) be a regular graph. It is always the case that bwS(G?) ≤ bw(G).
Lemma. For all n ≥ 2, we have that bwS(K?2n) < bw(K2n).
[Figure: the minimum edge-cut [R, T] of K?2n that bipartitions switches and servers, and a modified cut [R′, T′]; legend: server, switch.]
Proof. Let [R, T] be the (minimum) edge-cut that bipartitions switches and servers. The modified edge-cut [R′, T′] is smaller than [R, T] and it bipartitions the servers.
27/38
Edge-isoperimetric problems
Let G = (V, E), and let R ⊆ V.
I_G(R): the number of edges with both ends in R.
Θ_G(R): the number of edges with exactly one end in R.
Edge-isoperimetric subsets R:
1. Find I_G(r) = max_{R : |R| = r} I_G(R)
2. Find Θ_G(r) = min_{R : |R| = r} Θ_G(R)
The study of isoperimetric subsets has a long history (Bezrukov 1999), e.g., n-partite graphs; hypercubes; Cartesian products of cliques, bipartite graphs, the Petersen graph; grids, ... Muradyan, Piliposjan '80; Harper '64; Lindsey '64; Ahlswede, Cai '97; Bezrukov, Elsässer '98; Bollobás, Leader '91; Golovach '94; Mohar ...
Theorem (E., Kiasari, Navaridas, Stewart (2015)). Let G = (V, E) be a d-regular graph. The S-bisection width of G? is given by min{ w_r : 0 ≤ r ≤ |V| }, where
w_r = rd − 2⌊|E|/2⌋  if |E| ≤ 2 I_G(r),
w_r = Θ_G(r)         if 2 I_G(r) < |E| ≤ 2 I_G(r) + 2 Θ_G(r),
w_r = 2⌈|E|/2⌉ − rd  if 2 I_G(r) + 2 Θ_G(r) < |E|.
28/38
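For a small d-regular G the theorem can be checked by brute force. The sketch below is my own code, not the authors': it computes I_G(r) and Θ_G(r) by exhaustive search (only feasible for tiny graphs), evaluates w_r by the three-case formula, and takes the minimum:

```python
from itertools import combinations
from math import floor, ceil

def s_bisection_width(vertices, edges, d):
    """S-bisection width of G? via the three-case formula for w_r,
    with I_G(r) and Theta_G(r) found by brute force over subsets."""
    E = len(edges)
    def I(R):   # edges with both ends in R
        return sum(u in R and v in R for u, v in edges)
    def Th(R):  # edges with exactly one end in R
        return sum((u in R) != (v in R) for u, v in edges)
    best = None
    for r in range(len(vertices) + 1):
        Ir = max(I(set(R)) for R in combinations(vertices, r))
        Thr = min(Th(set(R)) for R in combinations(vertices, r))
        if E <= 2 * Ir:
            w = r * d - 2 * floor(E / 2)
        elif E <= 2 * Ir + 2 * Thr:
            w = Thr
        else:
            w = 2 * ceil(E / 2) - r * d
        best = w if best is None else min(best, w)
    return best

# C6 (the 6-cycle, 2-regular): C6? is a cycle of length 18, and its
# server-balanced bisection width is 2.
c6_edges = [(i, (i + 1) % 6) for i in range(6)]
assert s_bisection_width(list(range(6)), c6_edges, 2) == 2
```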
Proof idea
Let G = (V, E) and let G? = (S ∪ W, E?), with server-nodes S and switch-nodes W. Let (R_W, T_W) be a partition of W, and let B = [R_W ∪ R_S, T_W ∪ T_S] be a minimal S-bisection of G? (i.e., with |R_S| = |T_S| = |E|).
Types of 3-paths:
Type-R: both ends in R_W; contribute 0 or 2 edges to B.
Type-RT: one end in R_W, one end in T_W; contribute 1 edge to B.
Type-T: both ends in T_W; contribute 0 or 2 edges to B.
29/38
Proof idea
Claim. [R_W ∪ R_S, T_W ∪ T_S] is a minimal S-bisection (w.r.t. r) if R_W maximises I_G(R_W) (with |R_W| = r), of size
w_r = rd − 2⌊|E|/2⌋  if |E| ≤ 2 I_G(r),
w_r = Θ_G(r)         if 2 I_G(r) < |E| ≤ 2 I_G(r) + 2 Θ_G(r),
w_r = 2⌈|E|/2⌉ − rd  if 2 I_G(r) + 2 Θ_G(r) < |E|.
□ 30/38
S-bisection width of GQ?k,n
Theorem (Lindsey (1964), Nakano (1994)). For 1 ≤ t ≤ n^k, I_{GQk,n}(t) = Σ_{i=0}^{t−1} w_n(i), where w_n(i) is the sum of the k (base n) 'digits' of i. Nakano gives a formula.

k  n      servers  bw(GQk,n)  bwS(GQ?k,n)  difference
2  32      63,488      8,192        7,788         404
2  33      69,696      9,248        8,580         668
3  20     456,000     40,000       39,600         400
3  21     555,660     50,930       47,628       3,302
3  22     670,824     58,564       57,856         708
4   9     209,952     16,400       14,580       1,820
4  10     360,000     25,000       25,000           0
4  11     585,640     43,920       39,930       3,990
4  12     912,384     62,208       62,208           0
4  13   1,370,928     99,960       92,274       7,686
4  14   1,997,632    134,456      134,400          56
4  15   2,835,000    202,496      189,000      13,496
4  16   3,932,160    262,144      258,048       4,096
5   5      62,500      4,686        4,250         436
5   6     194,400     11,664       11,664           0
5   7     504,210     33,612       30,562       3,050
5   8   1,146,880     65,536       65,536           0
5   9   2,361,960    147,620      131,220      16,400
6   4      73,728      4,096        4,096           0
6   5     375,000     23,436       21,632       1,804
6   6   1,399,680     69,984       69,984           0
31/38
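The digit-sum formula is easy to test against brute force on a tiny instance; this sketch (my own code, with assumed helper names) cross-checks it on GQ_{2,3}:

```python
from itertools import product, combinations

def digit_sum(i, n):
    """Sum of the digits of i written in base n."""
    s = 0
    while i:
        s += i % n
        i //= n
    return s

def I_gq(t, k, n):
    """Lindsey/Nakano: max edges induced by t vertices of GQ_{k,n}."""
    return sum(digit_sum(i, n) for i in range(t))

# Cross-check against brute force on GQ_{2,3} = K3 x K3.
verts = list(product(range(3), repeat=2))
adj = lambda u, v: sum(a != b for a, b in zip(u, v)) == 1

def brute_I(t):
    return max(sum(adj(u, v) for u, v in combinations(R, 2))
               for R in combinations(verts, t))

for t in range(1, 10):
    assert I_gq(t, 2, 3) == brute_I(t)
```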
S-bisection width of GQ?k,2: the hypercube Q?n
Theorem (E., Kiasari, Navaridas, Stewart (2015)). bwS(Q?n) = 2^{n−1} = bw(Qn).
Proof sketch. Let [R_W ∪ R_S, T_W ∪ T_S] be an S-bisection of Q?n. The case |R_W| = |T_W| is easy, so assume r := |R_W| < 2^{n−1} and suppose bwS(Q?n) < 2^{n−1}.
We have 2^{n−1} > bwS(Q?n) ≥ Θ_{Qn}(R_W) = rn − 2I(R_W) ≥ rn − 2I(r), and we use properties of I(r) (e.g., I(2^{n−2} + a) = I(2^{n−2}) + a + I(a), for a < 2^{n−2}) to show that r < 2^{n−2}.
Use the previously mentioned Type-R, RT, T paths to show that |R_S| = n·2^{n−1} ≤ 2(I(R_W) + Θ(R_W) + b), where b counts servers from Type-T paths, and plug in I(r) for another contradiction.
32/38
Practical application: Compare with FiConn
[Plot: bisection width vs number of servers (log-log scale), comparing GQ? with diameters D = 5, 7, 9 against FiConn2,n and FiConn3,n.]

              GQ?3,10   GQ?4,6   FiConn2,24
server-nodes   27,000   25,920       24,648
switch-nodes    1,000    1,296        1,027
switch-radix       27       20           24
links          81,000   77,760       67,782
diameter            7        9            7
bwS             2,496    1,944        1,560
33/38
Results on DCell
Theorem (Wang, E., Fan, Jia, 2015). DCell is (almost always) Hamiltonian connected.
Re: DCell+ (E., Kiasari, Navaridas, Stewart (2015)). Simultaneously compute shorter (than best known) paths and balance load by doing an efficient "intelligent" search.
[Figure: percent mean hop-length savings over the best known routing.]
34/38
DPillarn,k: a dual-port SCDCN
Server-nodes are labelled (c, v_{k−1} v_{k−2} · · · v_0) with column c ∈ {0, 1, . . . , k − 1} and v_i ∈ {0, 1, . . . , n/2 − 1}.
Servers (c, v_{k−1} · · · v_{c+1} v_c v_{c−1} · · · v_0) and (c + 1, v_{k−1} · · · v_{c+1} ∗ v_{c−1} · · · v_0) connect to the same switch.
35/38
Shortest path routing on DPillar
In spite of considerable fanfare, no shortest-path routing algorithm was known for DPillar.
Theorem (E., Kiasari, Navaridas, Stewart (2015)). There is a routing algorithm that computes shortest paths in O(k) time.
Sketch. Let src and dst be server-nodes of DPillarn,k which differ at coordinate c. We need to route through (or near) column c to change coordinate c.
Map DPillar routing to visiting marked vertices on a k-cycle. Given vertices x and y in the cycle, we need to "cover" the marked vertex c in order to change the symbol v_c.
If such a walk is of minimum length, it changes direction at most twice. We need to perform up to k steps to find these changes of direction.
36/38
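The cycle model in the sketch can be illustrated with a small brute-force search. The BFS below is my own assumed toy model of the "cover the marked columns" walk, not the authors' O(k) algorithm: it finds the minimum length of a walk on a k-cycle that starts at column x, ends at column y, and visits every marked column:

```python
from collections import deque

def min_cover_walk(k, x, y, marks):
    """Shortest walk on the k-cycle from x to y visiting all of
    `marks`, found by BFS over (position, remaining-marks) states.
    Assumed model of DPillar routing, for illustration only."""
    start = (x, frozenset(m for m in marks if m != x))
    seen, q = {start}, deque([(start, 0)])
    while q:
        (pos, left), dist = q.popleft()
        if pos == y and not left:
            return dist
        for nxt in ((pos + 1) % k, (pos - 1) % k):
            state = (nxt, left - {nxt})
            if state not in seen:
                seen.add(state)
                q.append((state, dist + 1))

# One marked column: the best walk just passes through it.
assert min_cover_walk(8, 0, 2, {1}) == 2   # straight through column 1
assert min_cover_walk(8, 0, 1, {7}) == 3   # back up to 7, then on to 1
```

Note how the second example changes direction once; with more marked columns the optimal walk can change direction twice, matching the structure exploited in the sketch.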
There is a lot to do!
I Improve on recently proposed designs by finding new routing algorithms or discovering useful properties.
I Propose your own designs, soundly based in theory.
I Apply graph embedding algorithms to improve data centre virtualisation.
I Determine the set of relevant graph-theoretic properties for each data centre usage situation.
I Relate topology to energy usage.
I Consider the wiring problem.
I A lot of data centre research is effectively being gifted to theorists. Take hold of it!
37/38