Efficiency of small size tasks calculation in grid clusters using parallel processing
Olgerts Belmanis, Jānis Kūliņš
RTU ETF, Riga Technical University
Krakow, CGW'07, 15-16 October
RTU Cluster
■ Initially the RTU cluster started with five AMD Opteron 146 servers (1 TB).
■ Additionally, eight dual-core AMD Opteron 2210 M2 servers were installed.
■ There are now 9 working nodes with 21 CPU units.
■ The total amount of memory is 1.8 TB.
■ The RTU cluster has successfully completed many calculation tasks, including LHCb virtual organization orders.
Computing Algorithms
■ Serial algorithm
One task – one WN (working node);
Parts of the task are performed serially;
Task execution time depends on WN performance only!
■ Parallel algorithm
One task – several WNs; parts of the task are performed:
► consecutively on separate WNs;
► in parallel on a number of WNs, with results summarized.
Task execution time depends on:
► WN performance; ► network performance; ► bandwidth of shared data storage; ► type of coding.
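The factors above can be combined into a rough execution-time model. This is an illustrative sketch, not the authors' measured model; the function name and parameters are assumptions for the example:

```python
def parallel_time(t_serial, n_wn, serial_fraction,
                  msg_bytes, bandwidth_bps, n_messages):
    """Rough model of parallel task execution time.

    The serial part of the code runs on one WN, the rest is divided
    across n_wn working nodes, and inter-WN communication adds
    n_messages transfers of msg_bytes over a link of bandwidth_bps.
    """
    compute = t_serial * serial_fraction \
        + t_serial * (1.0 - serial_fraction) / n_wn
    comm = n_messages * msg_bytes / bandwidth_bps
    return compute + comm
```

With zero communication cost the model reduces to Amdahl-style scaling: the serial fraction limits the achievable speedup, and once communication dominates, adding WNs stops helping.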
Bottlenecks in a distributed computing system
Interconnections between CPU nodes
************************************************************
task 0 is on wn03.grid.etf.rtu.lv  partner= 2
task 1 is on wn10.grid.etf.rtu.lv  partner= 3
task 2 is on wn10.grid.etf.rtu.lv  partner= 0
task 3 is on wn10.grid.etf.rtu.lv  partner= 1
************************************************************
Message size: 1000000  *** best / avg / worst (MB/sec)
task pair: 0 - 2:  103.31 / 102.29 /  53.64
task pair: 1 - 3:  371.33 / 197.63 / 134.05
OVERALL AVERAGES:  237.32 / 149.96 /  93.84
...the use of multicore servers helps to achieve a higher data transmission rate in MPI applications!
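The best / avg / worst throughput figures above come from repeating a point-to-point transfer and converting each timing into a rate. A minimal sketch of that reduction (the function name is an assumption, not the actual benchmark code):

```python
def rate_stats(msg_bytes, times_sec):
    """Convert repeated point-to-point transfer timings into
    best / average / worst throughput in MB/s (1 MB = 1e6 bytes)."""
    rates = [msg_bytes / t / 1e6 for t in times_sec]
    return max(rates), sum(rates) / len(rates), min(rates)

# E.g. three transfers of a 1 000 000-byte message:
best, avg, worst = rate_stats(1_000_000, [0.01, 0.02, 0.004])
```

Task pairs placed on the same multicore server (e.g. pair 1 - 3, both on wn10) exchange messages through shared memory rather than the network, which explains their much higher rates.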
Local interconnection rate
CPU number | Low rate (Mb/s) | Medium rate (Mb/s) | Peak rate (Mb/s)
4          | 95              | 140                | 240
6          | 18              | 60                 | 90
8          | 3               | 60                 | 98

Transmission rate depends on the number of CPUs: the number of CPUs used by MPI influences the interconnection rate!
Parallel application execution time
Parallel speedup determination
■ In the experiment, multiplication of large matrices was performed.
■ The test created traffic of more than some 10 Mb between WNs and loaded the processors.
■ The main task of the experiment is to find the beginning of the horizontal part of the speed-up curve.
■ The experiment on 1 CPU in the RTU cluster takes 420 seconds.
2x WN ≠ H/2
...according to Amdahl's law, this speed-up corresponds to about 20% serial code in the algorithm!
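Amdahl's law makes the "2x WN ≠ H/2" observation concrete. A short sketch, assuming the 20% serial fraction and the 420-second single-CPU run reported above:

```python
def amdahl_speedup(n, serial_fraction):
    """Amdahl's law: speedup on n processors when a fixed
    fraction of the code must run serially."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

# With 20% serial code, doubling the WNs gives speedup 1/(0.2 + 0.8/2),
# so a 420 s run shrinks only to 420 / 1.67 ≈ 252 s, not 210 s.
t_two_wn = 420 / amdahl_speedup(2, 0.2)
```

The curve also explains the horizontal part sought in the experiment: as n grows, the speedup approaches 1/0.2 = 5 and no number of extra WNs can push it further.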
Possible solutions:
■ Internal connection improvement: Infiniband, Myrinet, … connections between WNs; multicore WN implementation (RTU ETF); abandonment of the NFS network file system.
■ Data transfer process optimization: use of multiple flows; replacement of standard TCP with Scalable TCP.
■ Parallel algorithm processing optimization: minimize transactions between WNs; reduce the sequential part of MPI code; optimize the number of MPI threads.
■ Optimization of requested resource management.
Thank you for your attention!