
Parallel Computing in Numerical Simulation of Laser Deposition

Objective

The objective of this proposed project is to research and develop an effective prediction tool for additive manufacturing processes for advanced materials, and to develop experimental methods that provide fundamental properties and establish validation data.

Introduction

Figure 1. Laser deposition

Results of simulation

Figure 2. Macroscopic simulation results

Figure 3. Evolution of solidification microstructure

Conclusions

• Using MPI in parallel computing appears to become more efficient as the number of processes increases.

• Initial partitioning can be crucial in parallel computing with MPI.

• With variable partitioning at each step based on the available processes, some processes may not be used efficiently. In the worst case, with a high number of processes and a small data set, some processes are left with no data to sort sequentially, which reflects inefficient use of processes; the sketch after this list illustrates the effect.
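To make that worst case concrete, here is a minimal sketch (not part of the original poster) of one common block-partitioning convention: n items are divided among p processes, with the first n mod p ranks taking one extra item. When p exceeds n, the trailing ranks receive empty ranges and sit idle.

#include <stdio.h>

/* Block-partition n items over p processes: rank r gets items [lo, hi). */
static void block_range(int n, int p, int r, int *lo, int *hi)
{
    int base = n / p, rem = n % p;   /* the first 'rem' ranks get one extra item */
    *lo = r * base + (r < rem ? r : rem);
    *hi = *lo + base + (r < rem ? 1 : 0);
}

int main(void)
{
    int n = 4, p = 8;   /* small data set, many processes: the worst case above */
    for (int r = 0; r < p; r++) {
        int lo, hi;
        block_range(n, p, r, &lo, &hi);
        if (lo == hi)
            printf("rank %d: no data (idle)\n", r);
        else
            printf("rank %d: items %d..%d\n", r, lo, hi - 1);
    }
    return 0;
}

With n = 4 and p = 8, ranks 4 through 7 receive no data, matching the inefficiency described in the conclusions.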

Approach

The first step is to establish macroscopic models, including the thermal/fluid dynamics models and the residual stress model. A purely illustrative sketch of such a thermal model follows below.

Then we establish microscopic models, such as models for the solidification microstructure and for solid-state phase transformations. This requires modeling the as-received grain structure, melting, and epitaxial growth.

The last step is data collection and model validation.
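The following is a purely illustrative sketch, not the poster's model: a 1-D transient heat-conduction solver using an explicit finite-difference (FTCS) scheme, with hypothetical material parameters and a crude stand-in for a laser heat spike. The project's actual thermal/fluid dynamics and residual stress models are far more elaborate.

#include <stdio.h>

/* Illustrative 1-D transient heat conduction, explicit FTCS scheme.
   All parameters are hypothetical, chosen only to show the structure
   of a macroscopic thermal calculation. */
int main(void)
{
    enum { N = 101 };                    /* grid points */
    double T[N], Tn[N];
    double alpha = 1e-5;                 /* thermal diffusivity, m^2/s (assumed) */
    double dx = 1e-3, dt = 0.02;         /* grid spacing (m), time step (s) */
    double r = alpha * dt / (dx * dx);   /* stability requires r <= 0.5; here r = 0.2 */

    for (int i = 0; i < N; i++) T[i] = 300.0;   /* ambient temperature, K */
    T[N / 2] = 2000.0;                   /* crude stand-in for a laser heat input */

    for (int step = 0; step < 1000; step++) {
        for (int i = 1; i < N - 1; i++)
            Tn[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1]);
        Tn[0] = T[0];                    /* fixed-temperature boundaries */
        Tn[N - 1] = T[N - 1];
        for (int i = 0; i < N; i++) T[i] = Tn[i];
    }
    printf("centerline temperature after 1000 steps: %.1f K\n", T[N / 2]);
    return 0;
}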

Parallel computing is accomplished by splitting up a large computational problem into smaller tasks that may be performed simultaneously by multiple processors.

Key Concepts of MPI

• MPI stands for "Message Passing Interface": a library standard defined by a committee of vendors, implementers, and parallel programmers.

• MPI is used to create parallel programs based on message passing.

• Normally the same program runs on several different processors, and the processors communicate using message passing.

• MPI is 100% portable: one standard, many implementations.

• It is available on almost all parallel machines, in C and Fortran.

• It offers over 100 advanced routines, but only 6 basic calls.

• The 6 basic calls in MPI are: MPI_INIT(…), MPI_COMM_SIZE(…), MPI_COMM_RANK(…), MPI_SEND(…), MPI_RECV(…), and MPI_FINALIZE(…).

Write a parallel program:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int myid, numprocs;

    MPI_Init(&argc, &argv);                     /* initialize the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);       /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);   /* total number of processes */
    printf("I am %d of %d\n", myid, numprocs);
    MPI_Finalize();                             /* shut down MPI */
    return 0;
}
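The hello-world program above exercises four of the six basic calls. The following is a minimal sketch (not from the poster) of the remaining two, MPI_Send and MPI_Recv, applied to the idea in the definition above: the problem (summing 1 through 100) is split into smaller tasks performed simultaneously, and the partial results are combined by message passing. The cyclic work split is an assumption made for illustration.

#include <stdio.h>
#include "mpi.h"

/* Sketch: split the sum 1..100 across processes, then gather the
   partial sums at rank 0 with the basic send/receive calls. */
int main(int argc, char *argv[])
{
    int myid, numprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    long partial = 0;
    for (int i = 1 + myid; i <= 100; i += numprocs)   /* cyclic split of the work */
        partial += i;

    if (myid == 0) {
        long total = partial, incoming;
        for (int src = 1; src < numprocs; src++) {    /* collect every worker's result */
            MPI_Recv(&incoming, 1, MPI_LONG, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            total += incoming;
        }
        printf("sum 1..100 = %ld\n", total);          /* expect 5050 */
    } else {
        MPI_Send(&partial, 1, MPI_LONG, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}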

Acknowledgments

This research was partially supported by the National Aeronautics and Space Administration Grant Number NNX11AI73A, the grant from the U.S. Air Force Research Laboratory, and Missouri S&T’s Intelligent Systems Center and Manufacturing Engineering program. Their support is greatly appreciated.

Students: Xueyang Chen, Zhiqiang Fan, Todd E. Sparks, Department of Manufacturing Engineering

Faculty Advisor: Dr. Frank Liou, Department of Manufacturing Engineering