
APPLICATIONS OF PARALLEL PROCESS HIMAP FOR LARGE SCALE MULTIDISCIPLINARY PROBLEMS

Guru P. Guruswamy, Mark Potsdam and David Rodriguez

NAS Division, Information Systems Directorate, Ames Research Center, Moffett Field, CA 94035.

gguruswamy@mail.arc.nasa.gov

Since the initiation of the HPCC Project at NASA in the early 1990s, several computational tools have been developed under the Computational Aerosciences (CAS) area [1]. One such tool is HiMAP, a super-modular, three-level parallel, portable, high-fidelity, tightly coupled analysis process [2,3]. The process is designed using state-of-the-art information technology tools such as MPIRUN [4], an efficient protocol that allows groups of processors to communicate with other groups of processors for multidisciplinary computations. MPIRUN is based on the standard Message Passing Interface (MPI) [5], which is currently supported by most major computer vendors.
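MPIRUN itself is the loader described in Ref. 4; the group-to-group pattern it supports can be pictured with plain MPI calls. The sketch below is an illustration, not the MPIRUN API: it splits the world communicator into two hypothetical discipline groups and connects them with an inter-communicator so that their leaders can exchange interface data. The group sizes, the tag, and the exchanged values are arbitrary assumptions.

/* Minimal sketch (not the MPIRUN API): two process groups exchanging data
 * through an MPI inter-communicator. Run with at least two MPI ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_rank, world_size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* First half of the ranks act as a "fluids" group, the rest as "structures". */
    int color = (world_rank < world_size / 2) ? 0 : 1;
    MPI_Comm discipline_comm;                 /* intra-discipline communicator */
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &discipline_comm);

    /* Build an inter-communicator so one group can talk to the other;
     * the remote leader is the first world rank of the opposite group. */
    int remote_leader = (color == 0) ? world_size / 2 : 0;
    MPI_Comm inter_comm;
    MPI_Intercomm_create(discipline_comm, 0, MPI_COMM_WORLD, remote_leader,
                         99 /* tag */, &inter_comm);

    /* Group leaders exchange a single value (e.g., an interface quantity). */
    int local_rank;
    MPI_Comm_rank(discipline_comm, &local_rank);
    if (local_rank == 0) {
        double send = (color == 0) ? 1.0 : 2.0, recv;
        MPI_Sendrecv(&send, 1, MPI_DOUBLE, 0, 0,
                     &recv, 1, MPI_DOUBLE, 0, 0, inter_comm, MPI_STATUS_IGNORE);
        printf("group %d leader received %f\n", color, recv);
    }

    MPI_Comm_free(&inter_comm);
    MPI_Comm_free(&discipline_comm);
    MPI_Finalize();
    return 0;
}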

The modularity in HiMAP is based on the function of each discipline module. In general, an analysis process in HiMAP is divided into independent and dependent modules. Independent modules deal with the physics of a specific discipline and can be used in stand-alone mode as single-discipline codes; the fluids module, for example, is an independent module. Dependent modules are usually required for multidisciplinary computations and depend on more than one module for their function. A typical example of a dependent module is the thermal loads module in aero-thermoelastic computations. This module requires temperature data from the fluids module and structural data from the structures module to convert temperatures to thermal loads. Figure 1 shows a typical process chart for HiMAP.
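As a concrete illustration of a dependent module, the sketch below shows a hypothetical thermal-loads routine that combines temperature data supplied by the fluids module with element properties supplied by the structures module. The data layout and the simple alpha*E*dT*A load formula are assumptions for illustration only, not the HiMAP interface.

/* Illustrative dependent module: converts temperatures from the fluids module
 * and element data from the structures module into thermal loads. The data
 * layout and load formula are assumptions, not the actual HiMAP interface. */
typedef struct {
    double area;     /* element cross-sectional area     */
    double alpha;    /* coefficient of thermal expansion */
    double modulus;  /* Young's modulus                  */
} StructElement;

void thermal_loads(int n_elem,
                   const double *temperature,    /* from the fluids module     */
                   const double *reference_temp, /* structural reference temps */
                   const StructElement *elem,    /* from the structures module */
                   double *load)                 /* output: thermal loads      */
{
    for (int i = 0; i < n_elem; ++i) {
        double dT = temperature[i] - reference_temp[i];
        /* Equivalent axial thermal force for a constrained element. */
        load[i] = elem[i].alpha * elem[i].modulus * dT * elem[i].area;
    }
}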

The three-level (intra-discipline, inter-discipline, and multiple-run) parallel capability is built into HiMAP using the MPI and MPIRUN protocols. On a typical massively parallel supercomputer, a set of processors is assigned to each discipline as needed. Communication within each discipline is accomplished using MPI; communication between disciplines and between cases is achieved using MPIRUN. Figure 2 shows the multilevel parallel capability of HiMAP.
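One minimal way to picture these three levels in plain MPI (again, not the MPIRUN protocol itself) is two successive communicator splits: one by case for the multiple-run level and one by discipline within each case. How ranks are mapped to cases and disciplines below is an assumption.

/* Sketch of the three parallelism levels using plain MPI communicator splits.
 * The case and discipline identifiers per rank are assumed to be known. */
#include <mpi.h>

void build_communicators(int case_id, int discipline_id,
                         MPI_Comm *case_comm, MPI_Comm *discipline_comm)
{
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Level 3 (multiple runs): all ranks working on the same case. */
    MPI_Comm_split(MPI_COMM_WORLD, case_id, world_rank, case_comm);

    /* Level 1 (intra-discipline): ranks of one discipline within that case;
     * level 2 (inter-discipline) exchanges would go between these groups. */
    MPI_Comm_split(*case_comm, discipline_id, world_rank, discipline_comm);
}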

The hybrid coarse/fine-grain parallelization achieves load-balanced execution provided that enough processors are available to handle the total number of blocks. On the other hand, load balancing alone does not guarantee efficient use of the computational nodes: nodes may be working with less than the optimal computational load while performing many expensive inter-processor communications, and hence become data-starved. Both problems are alleviated by introducing a domain-coalescing capability into the parallelization scheme. In domain coalescing, a number of blocks are assigned to a single processor, which economizes on computational resources and yields a more favorable communication-to-computation ratio during execution. This process, illustrated in Fig. 3, is described in Ref. 6, which was presented at SC97.
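The coalescing idea can be sketched as a greedy heuristic in which blocks are sorted by size and each block is placed on the currently least-loaded processor, so that several small blocks share one processor. This is an illustrative heuristic under those assumptions, not the exact algorithm of Ref. 6.

/* Illustrative greedy block coalescing: assign each block (largest first) to
 * the least-loaded processor. Block ids are assumed to lie in [0, n_blocks). */
#include <stdlib.h>

typedef struct { int id; long points; } Block;

static int by_size_desc(const void *a, const void *b)
{
    long pa = ((const Block *)a)->points, pb = ((const Block *)b)->points;
    return (pb > pa) - (pb < pa);
}

/* owner[i] receives the processor index assigned to block i. */
void coalesce(Block *blocks, int n_blocks, int n_procs, int *owner)
{
    long *load = calloc(n_procs, sizeof *load);
    qsort(blocks, n_blocks, sizeof *blocks, by_size_desc);

    for (int i = 0; i < n_blocks; ++i) {
        int best = 0;
        for (int p = 1; p < n_procs; ++p)
            if (load[p] < load[best]) best = p;
        owner[blocks[i].id] = best;     /* small blocks end up sharing a processor */
        load[best] += blocks[i].points;
    }
    free(load);
}

With fewer processors than blocks, the loop necessarily places several blocks on the same processor, which is the economy in resources and the improved communication-to-computation ratio noted above.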

HiMAP is suitable for large scale multidisciplinary analyses. It incorporates Euler/Navier-Stokes-based flow solvers such as ENSAERO [7] and USM3D [8], and finite-element-based structures solvers such as NASTRAN [9]. To date, HiMAP has been demonstrated for large scale aeroelastic applications that required 16 million fluid grid points and 20,000 structural finite elements. Cases have been demonstrated using up to 228 nodes on IBM SP2 and 256 nodes on SGI Origin2000 computers. Typical configurations analyzed are full subsonic and supersonic aircraft.

Figure 4 shows the 34-block grid for the sub-scale wind tunnel model of an L1011 transport aircraft [10,11]. The total grid size is 9 million points. Due to geometric complexity, grid sizes vary significantly from block to block; the ratio of the smallest to the largest grid size is 0.07. This distribution can lead to inefficient computations on MPPs. An algorithm based on node filling is currently incorporated in HiMAP. This algorithm assigns multiple grids to a single processor. Using the node-filling algorithm, which will be explained in detail in the full paper, the 34-block grid was mapped onto 24 Origin 2000 processors. Figure 5 shows the original grid distribution obtained by assigning one block per processor and the modified block distribution obtained using the node-filling algorithm. The computational efficiency is increased by 30% with the new distribution. Further research is ongoing on using a neural network approach for load balancing.

Figure 6 shows one of the 5 structural modes from the finite element computations. One O2000 node was assigned to the modal data. Computations were made using the GO3D (Goorjian-Obayashi streamwise upwind algorithm) fluids solver [12] and the ENSAERO modal solver [13], along with the MBMG (MultiBlock Moving Grid) [11,14] moving-grid module available in the HiMAP process. A typical aeroelastic solution is shown in Fig. 7. The stability and convergence of the GO3D algorithm were not affected by the redistribution of patched grids to different processors. Figure 8 shows a plot of current grid size versus grid blocks. This grid is for a complex state-of-the-art aircraft.
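The coupled solution procedure can be pictured as a time-marching loop in which the fluids, modal structures, and moving-grid modules exchange interface data every step. In the sketch below the module calls are placeholder stubs, not the actual GO3D, ENSAERO, or MBMG entry points.

/* Schematic tightly coupled aeroelastic loop. The four module calls are
 * placeholder stubs, not the actual GO3D / ENSAERO / MBMG interfaces. */
#include <stdlib.h>

typedef struct { double *q; }   AeroState;  /* flow solution on block grids  */
typedef struct { double *eta; } ModalState; /* generalized modal coordinates */
typedef struct { double *x; }   Grid;       /* multiblock aerodynamic grid   */

/* Stand-ins for the discipline modules. */
static void advance_fluids(AeroState *f, const Grid *g, double dt)   { (void)f; (void)g; (void)dt; }
static void surface_loads(const AeroState *f, double *Q, int n)      { (void)f; for (int i = 0; i < n; ++i) Q[i] = 0.0; }
static void advance_modes(ModalState *m, const double *Q, double dt) { (void)m; (void)Q; (void)dt; }
static void deform_grid(Grid *g, const ModalState *m)                { (void)g; (void)m; }

void aeroelastic_march(AeroState *flow, ModalState *modes, Grid *grid,
                       int n_steps, double dt, int n_modes)
{
    double *gen_forces = malloc(n_modes * sizeof *gen_forces);
    for (int step = 0; step < n_steps; ++step) {
        advance_fluids(flow, grid, dt);            /* fluids module             */
        surface_loads(flow, gen_forces, n_modes);  /* interface: loads to modes */
        advance_modes(modes, gen_forces, dt);      /* modal structures module   */
        deform_grid(grid, modes);                  /* moving-grid module        */
    }
    free(gen_forces);
}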

HiMAP is multi-platform middleware. It has been tested on IBM SP2, Sun HPC 6000, and SGI O2000 systems. Scalable performance and portability are shown in Fig. 9. Work is in progress to port HiMAP to other recent platforms such as the SGI O3000.

An effort is currently in progress to apply HiMAP to aeroelastic computations of configurations more complex than those presented in this paper. Future work involves efforts to map HiMAP onto PSE (Problem Solving Environment) and IPG (Information Power Grid) tools.

REFERENCES

1. Holst, T. L., Salas, M. D. and Claus, R. W.: The NASA Computational Aerosciences Program - Toward Teraflops Computing. AIAA Aerospace Meeting, Jan. 1992.

2. Guruswamy, G. P., Byun, C., Farhangnia, M. and Potsdam, M.: User's Manual for HiMAP - A Portable Super Modular 3-Level Parallel Multidisciplinary Analysis Process. NASA/TM-1999-209578.

3. Guruswamy, G. P.: HiMAP - A Portable Super Modular Multilevel Parallel Multidisciplinary Process for Large Scale Analysis. 5th NASA National Symposium on Intelligent Synthesis Environment, NASA Langley Research Center, Oct. 1999. (Also to appear in Advances in Engineering Software, Elsevier.)

4. Fineberg, S.: MPIRUN: A Loader for Multidisciplinary and Multizonal MPI Applications. NAS News, Vol. 2, No. 6, Nov.-Dec. 1994.

5. Message Passing Interface Forum: MPI: A Message Passing Interface Standard. University of Tennessee, May 1994.

6. Hatay, F., Jespersen, D. C., Guruswamy, G. P., Rizk, Y. and Byun, C.: A Multi-Level Parallelization Concept for High-Fidelity Multi-Block Solvers. Supercomputing 97, San Jose, Nov. 1997.

7. Guruswamy, G. P.: ENSAERO - A Multidisciplinary Program for Fluid/Structural Interaction Studies of Aerospace Vehicles. Computing Systems in Engineering, Vol. 1, Nos. 2-4, 1990, pp. 237-256.

8. Frink, N. T., Pirzadeh, S. and Parikh, P.: An Unstructured-Grid Software System for Solving Complex Aerodynamics Problems. NASA CP-3291, 1995.

9. Eldred, L. and Guruswamy, G. P.: User's Guide for ENSAERO_FE Parallel Finite Element Solver. NASA TM-1999-208781.

10. Goodwin, S., Byun, C. and Farhangnia, M.: Aeroelastic Computations Using Parallel Computing. AIAA Paper 99-0795, Jan. 1999.

11. Potsdam, M. and Guruswamy, G. P.: A Parallel Multiblock Mesh Movement Scheme for Complex Aeroelastic Applications. (To be published.)

12. Obayashi, S., Guruswamy, G. P. and Goorjian, P. M.: Streamwise Upwind Algorithm for Computing Unsteady Transonic Flows Past Oscillating Wings. AIAA Journal, Vol. 29, No. 10, Oct. 1991, pp. 1668-1677.

13. Guruswamy, G. P.: Unsteady Aerodynamic and Aeroelastic Calculations for Wings Using Euler Equations. AIAA Journal, Vol. 28, No. 3, March 1990, pp. 461-469.

14. Byun, C. and Guruswamy, G. P.: A Parallel Multi-Block Moving Grid Method for Aeroelastic Applications of Full Aircraft. AIAA-98-4782, AIAA MDO/MDA Conference, St. Louis, MO, Sept. 1998.

Fig. 1 Parallel process: typical analysis process in HiMAP (fluid, structure, control, and propulsion modules)

Fig. 2 Multilevel process (mpi_comm_world split into discipline communicators such as comm_C for the controls domain, with inter-discipline and inter-zone communication)

Fig. 3 Load balancing approach in HiMAP

Fig. 4 Grid blocks for the L1011 wind tunnel model

Fig. 5 Results of multi-block load balancing

Fig. 6 Modal data from finite element analysis (L1011 mode 2)

Fig. 7 Typical aeroelastic results for the L1011 sub-scale wind tunnel model

Fig. 8 Grid distribution for a complex aerospace vehicle (grid size versus block number)

Fig. 9 Demonstration of the portability and performance of HiMAP
