University of California, Los Angeles
Transcript of a UCLA presentation (helper.ipam.ucla.edu/publications/dmc2017/dmc2017_14153.pdf)
UCLA
UCLA Communications & Public Outreach • 1147 Murphy Hall, Box 951436 • Los Angeles, CA 90095-1436

IPAM, February 3, 2017

Jaime Marian
Department of Materials Science and Engineering
Department of Mechanical and Aerospace Engineering
IDRE Executive Committee Member
University of California, Los Angeles
2
LLNL (500 teraFLOPs)
ANL (180 petaFLOPs)
ORNL (10 petaFLOPs)
LLNL (120 petaFLOPs)
3
The Predictive Science Paradigm

• Aim: Predict the behavior of complex physical/engineered systems with quantified uncertainties
• Paradigm shift in experimental science, modeling and simulation, and scientific computing (predictive science):
  – Deterministic → Non-deterministic systems
  – Mean performance → Mean performance + Uncertainty

Old (single-calculation) paradigm → New (ensemble-of-calculations) paradigm

PSAAP: Predictive Science Academic Alliance Program
M. Ortiz, Uni-Stuttgart 12/12
4
The Predictive Science Paradigm

(Diagram: Uncertainty Quantification, Experimental Science, and Modeling and Simulation as three interlinked components.)

PSAAP: Predictive Science Academic Alliance Program
M. Ortiz, Uni-Stuttgart 12/12
Materials science is a great example of massively-parallel computation thrusts:
• Siesta, VASP, Quantum Espresso, Qbox: first-principles atomistic simulations
• Moldy, MDCASK, LAMMPS: domain-decomposition molecular dynamics
• ParaDiS, Numodis, MODEL: continuum dislocation dynamics
• Diablo, Abaqus: finite elements, continuum mechanics
• ALE3D: hydrodynamics
• etc.

(Figure: methods arranged by increasing scale: first principles, classical MD, dislocation dynamics, finite elements, hydrodynamics.)
7
Table 2. Select materials science drivers for leadership computing at the petascale (1 to 3 years)

| Application area | Science driver | Science objective | Impact |
|---|---|---|---|
| Nanoscale science | Material-specific understanding of high-temperature superconductivity theory | Understand the quantitative differences in the transition temperatures of high-temperature superconductors | Macroscopic quantum effect at elevated temperatures (>150 K); new materials for power transmission and oxide electronics |
| | Thermodynamics of nanostructures | Understand and improve colossally magneto-resistive oxides and magnetic semiconductors; develop new switching mechanisms in magnetic nanoparticles for ultrahigh-density storage; simulate and design molecular-scale electronics devices | Magnetic data storage; economically viable ethanol production; energy storage via structural transitions in nanoparticles |
| | Evolution of an understanding of biological system behavior | Elucidate the physical-chemical factors and mechanisms that control damage to DNA | Medicine, biomimetics, sequence dependencies, and inhibiting agents of hazardous bioprocesses |
| Material response | Elucidation of the causes leading to eventual brittle or ductile fragmentation and failure of a solid | Understand macro-cracking due to coalescence of subscale cracks, local deformation due to void coalescence, and dynamic propagation of cracks or shear bands | Reduction of engineering margins to within the required safe operating envelope |
2.3 LONGER-TERM (SUSTAINED PETASCALE AND EXASCALE) SCIENCE DRIVERS
Materials science drivers, objectives, and impacts have been identified for leadership computing accomplishments considered possible on an exascale leadership computing platform deployed within the next decade (Table 3). These more speculative achievements are based on recent workshops [2] and personal communication with select members of the computational materials science community.
3. EARTH SCIENCE
3.1 RECENT ACCOMPLISHMENTS WITH LEADERSHIP COMPUTING
The LCF at ORNL provided more than a third of the U.S. contribution of computational resources to the February 2007 report of the Intergovernmental Panel on Climate Change (IPCC). High-performance computing guided the studies and conclusions that went into the report, leading to the 2007 Nobel Peace Prize for the IPCC in recognition of its work.
Earth science simulations at the LCF continue to play a key role in U.S. climate change research, bringing ORNL’s leadership computing resources to studies of weather, carbon management, climate change mitigation and adaptation, and the environment, to name a few. For example, LCF systems provide much of the computing power for the Community Climate System Model (CCSM), a fully coupled, global climate model that provides state-of-the-art computer simulations of the Earth’s past, present, and future climates (Fig. 3). In fact, the last few months of 2007 have seen many high-impact
11
Michael Ortiz (ROME 06/11)
Metal plasticity → Multiscale analysis

(Figure: length (nm, µm, mm) vs. time (ns, µs, ms) map spanning lattice defects and EoS, dislocation dynamics, subgrain structures, polycrystals, and engineering applications.)
Lecture #2: Dislocation energies, the line-tension approximation
Lecture #3: Dislocation kinetics, the forest-hardening model
Lecture #4: Subgrain dislocation structures

Objective: Derive ansatz-free, physics-based, predictive models of macroscopic behavior

Modeling paradigm: sequential connection (parameter passing) across scales:
e⁻ structure, molecular statics → kMC, MD → DD, phase field → crystal plasticity, level set → FE

Mesoscale computing
12
Laser ablation of Au column (Gilmer et al.)
Deformation of nano-twinned metals (Sansoz et al.)

We routinely simulate systems with several billion atoms using 10⁵–10⁶ processors.
13
Ian Robertson (UIUC)
14
15
Use appropriate filters that help with visualization while preserving key physical features
(DXA algorithm, Stukowski et al)
16
QC:
• Constrained energy minimization
• Adaptive mesh refinement
• Lattice summation rules
17
NSCL Talk
18
19
20
• Effect of cluster size:
21
Marian, Knap, Ortiz (2004)
22
Discrete dislocation dynamics in bulk systems

Dislocation network represented by interconnected line segments (discretization nodes and physical nodes), with Burgers vector conservation at each node:

$$\sum_i b_i = 0$$

Node force (isotropic elasticity): $f_i = -\partial E(r_i)/\partial r_i$
Node velocity (mobility obtained from MD): $v_i = M f_i$
Move nodes: $r_i = r_i + v_i \Delta t$
Topology changes

ParaDiS integrates the multiplication and interactions of dislocations for simulating the evolution of strength.

Bulatov et al., Nature (2006)
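The nodal update cycle above (force, velocity, move) can be sketched in a few lines. This is an illustrative toy, not ParaDiS: the true elastic energy E(r) involves long-range segment-segment interactions, which are replaced here by an assumed spring-like line-tension energy, and the stiffness, mobility, and time-step values are hypothetical.

```python
# Minimal sketch of the nodal DD update (toy model, not ParaDiS).
K_LT = 1.0   # toy line-tension stiffness (assumed)
M = 0.5      # mobility; in real DD this is fitted from MD
DT = 1e-2    # time step (assumed)

def spring_force(nodes, segments):
    """f_i = -dE/dr_i for E = sum over segments of 0.5*K_LT*|r_j - r_i|^2."""
    forces = [[0.0, 0.0, 0.0] for _ in nodes]
    for i, j in segments:
        for c in range(3):
            d = nodes[j][c] - nodes[i][c]
            forces[i][c] += K_LT * d     # node i pulled toward node j
            forces[j][c] -= K_LT * d
    return forces

def step(nodes, segments, dt=DT):
    """One explicit Euler step: v_i = M f_i, then r_i <- r_i + v_i dt."""
    f = spring_force(nodes, segments)
    for i, r in enumerate(nodes):
        for c in range(3):
            r[c] += M * f[i][c] * dt
    return nodes
```

A real code adds long-range stress fields, topology changes (junction formation/annihilation), and remeshing around this same loop.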
Large-scale simulations of irradiation hardening in bcc Fe
• Progressive hardening occurs as defect density (irradiation dose) increases.
• Strain-hardening behavior is eventually lost.
• Unstable (softening) behavior seen at high defect densities (doses).
Simulations of plastic localization in irradiated steels
• Channels form above a critical obstacle density.
• Channels are oriented along ⟨112⟩ directions.
• Channel width is around 180 nm.
• Localization starts with local removal of obstacles.
Jiao et al., 2009; Jiao and Was, 2010
26
Crystal plasticity model for irradiated Fe
(Barton, Marian, Arsenlis)

Multiplicative decomposition: $F = F^p F^e$

Flow rule: $\dot{F}^p = \Big(\sum_\alpha \dot{\gamma}^\alpha \, s^\alpha \otimes n^\alpha\Big) F^p$

Power-law kinetics ($\tau^\alpha$ is the resolved shear stress on slip system $\alpha$):
$$\dot{\gamma} = \sum_\alpha |\dot{\gamma}^\alpha|, \qquad \dot{\gamma}^\alpha = \dot{\gamma}_0 \left(\frac{\tau^\alpha}{g^\alpha}\right)^{1/m}$$

Slip-system strength (Peierls resistance + strain hardening + radiation hardening), with $N^\alpha \equiv n^\alpha \otimes n^\alpha$:
$$g^\alpha = g_0 + \mu b \left(\sqrt{h_n \rho_n} + \sqrt{h_d \, N^\alpha : H}\right)$$

Irradiation loop density as a tensorial quantity:
$$H = \frac{3}{2V} \sum_i d_i l_i \left(I - n_i^l \otimes n_i^l\right)$$

Loop density evolution:
$$\dot{H} = -\eta \sum_\alpha (N^\alpha : H)\, N^\alpha\, |\dot{\gamma}^\alpha|$$

Network density evolution (Kocks-Mecking hardening law), where $h \equiv \rho_n/\rho_0$:
$$\dot{h} = \dot{\gamma}\left(k_1 \sqrt{h} - k_2 h\right), \qquad k_2 = k_{20}\left(\frac{\dot{\gamma}_{k0}}{\dot{\gamma}}\right)$$
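The network-density evolution law above is a single ODE that can be integrated directly. A minimal explicit-Euler sketch; all parameter values here are hypothetical, chosen only to exhibit the saturation at h = (k₁/k₂)², which follows from setting the rate to zero.

```python
import math

def integrate_km(h0, gamma_dot, k1, k2, t_end, dt=1e-3):
    """Explicit-Euler integration of the Kocks-Mecking law
    dh/dt = gamma_dot * (k1*sqrt(h) - k2*h).
    The stable fixed point is h = (k1/k2)**2."""
    h, t = h0, 0.0
    while t < t_end:
        h += gamma_dot * (k1 * math.sqrt(h) - k2 * h) * dt
        t += dt
    return h
```

Running it from a small initial density shows the hardening term √h dominating early and the recovery term k₂h capping the density at saturation.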
Calibration and fitting of crystal plasticity model to dislocation dynamics simulations (lower scale)

Crystal plasticity calibration: crystal plasticity model against dislocation dynamics results
FEM polycrystal simulation of the single-crystal constitutive model

After initial localization, the regions of highest shearing rate (darker contrast) split and spread across the grains.
Stress-strain curve with spatial deformation observation (unirradiated)
Wu et al., JNM (2005)
32
33
• The stochasticity associated with UQ is not an intrinsic feature of most models.
• 'Naturally stochastic' methods such as kinetic Monte Carlo (kMC) are gaining traction, but serial versions are slow.
• Can kMC take advantage of tera/peta/exascale computing?
What is the place of kMC in materials science simulations?
(Chart: number of papers published per year, 1965–2013, in FE, kMC, DD, and MD; y-axis up to 30,000.)
If we look at the number of research papers published over the last half a
century in a number of sub-disciplines of materials science simulations, we
observe several features:
Of these, MD, FE, and DD are amenable to large-scale parallelization.
What about parallelization of kMC? The parallelization effort is just marginal relative to the volume of kMC work.

(Bar chart: number of papers, all kMC vs. parallel kMC; y-axis up to 300.)
36
• Discrete event kinetics are inherently difficult to parallelize.
• Traditional parallelization approaches are based on asynchronous kinetics (Lubachevsky 1988, Jefferson 1985, Korniss 2003).
• Causality errors arise with these approaches: mutually affecting events occurring in different domains.
• This requires 'roll-back' techniques to reconcile the time evolution of different processors.
• This leads to implementation complexity and regions of low efficiency.
• Conservative and optimistic (semi-rigorous) algorithms have been proposed (Amar and Shim 2003, Shim and Amar 2005, Fichthorn et al.).
37
(Figure: simulated time across processor domains, illustrating the Virtual Time Horizon.)
38
Asynchronous methods are highly complex
• The VTH problem is a bottleneck of parallel discrete event simulations.
• What about synchronous methods?
39
Assume a spatial domain containing N walkers, each defined by a rate $q_i$, with total rate:

$$R_{tot} = \sum_i^N q_i$$

Perform spatial domain decomposition.
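Given walkers with rates q_i, one serial kMC step selects event i with probability q_i/R_tot and advances the clock by an exponentially distributed increment. A minimal residence-time (BKL/Gillespie-style) sketch:

```python
import math, random

def kmc_step(rates, rng=random):
    """One serial residence-time kMC step: choose event i with
    probability q_i / R_tot, advance time by dt = -ln(xi) / R_tot."""
    r_tot = sum(rates)
    x = rng.random() * r_tot        # uniform point on the frequency line
    acc = 0.0
    for i, q in enumerate(rates):
        acc += q
        if x < acc:                 # event i owns this segment of the line
            break
    dt = -math.log(rng.random()) / r_tot
    return i, dt
```

Over many steps, events fire in proportion to their rates, which is the property the parallel algorithm below must preserve.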
40
Now, for parallel kMC, perform K (here, 4) domain partitions and construct frequency lines:

1: R₁ + r₀₁
2: R₂ (= R_max, no null rate needed)
3: R₃ + r₀₃
4: R₄ + r₀₄

The r₀ₖ are the 'dummy' rates (no event) that ensure synchronicity. For optimum scalability, perform the domain decomposition subject to the constraint that the partial rates Rₖ be balanced across domains.
We have developed a synchronous parallel kMC algorithm to study general systems
41
1. Perform spatial decomposition into K domains $\Omega_k$.
2. Define partial aggregate rates in each $\Omega_k$: $r_k = \sum_i^{n_k} q_{ik}$, with $R_{tot} = \sum_k^K r_k$.
3. Choose the maximum partial rate: $R_{max} = \max_k \{r_k\}$.
4. Assign 'null' rates to each $\Omega_k$ such that: $r_{k0} = R_{max} - r_k$.
5. Sample an event from each subdomain with probability $q_{ik}/R_{max}$.
6. Execute events and advance time by $\Delta t_p = -\ln \xi / R_{max}$.
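The six steps above can be emulated serially (one process plays all K domains; a real implementation would use MPI with one rank per domain). The null-event padding r_k0 = R_max − r_k is what keeps every domain on a common clock. A sketch under those assumptions:

```python
import math, random

def spkmc_step(domain_rates, rng=random):
    """One synchronous parallel kMC step, emulated serially.
    domain_rates[k] lists the walker rates q_ik in domain k.
    Returns (events, dt): events[k] is the chosen walker index in
    domain k, or None for the null (no-op) event."""
    r = [sum(qs) for qs in domain_rates]     # partial aggregate rates r_k
    r_max = max(r)                           # R_max = max_k r_k
    events = []
    for qs in domain_rates:
        # The null rate r_k0 = R_max - r_k pads every domain's frequency
        # line to the same length, so all domains share one time step.
        x = rng.random() * r_max
        acc, chosen = 0.0, None
        for i, q in enumerate(qs):
            acc += q
            if x < acc:
                chosen = i
                break
        events.append(chosen)
    dt = -math.log(rng.random()) / r_max     # same dt for every domain
    return events, dt
```

Domains whose aggregate rate equals R_max never draw the null event; lightly loaded domains mostly idle, which is why balanced decomposition matters for efficiency.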
42
• At the critical temperature T_c, domains of aligned spins are created.
• These domains are defined by a correlation length ξ: $\xi \sim |T - T_c|^{-\nu}$
• ν is the 'scale' critical exponent.
• Critical exponents not converged for 3D.

J. P. Sethna (2009)

T > T_c: paramagnetic; T = T_c: critical; T < T_c: ferromagnetic
43
(Plot: magnetization m(t) vs. time [λ⁻¹].)

The value of the critical exponents in 1D and 2D is analytically known and can be converged for 4 and higher dimensions. In 3D there is no converged numerical solution.

Serial kMC is not sufficient.
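The magnetization behavior above comes from spin-flip simulations of the Ising model. As a minimal stand-in (Metropolis single-spin flips rather than the rejection-free kMC discussed here), the following sketch reproduces the ordered and disordered regimes; the lattice size, temperatures, and sweep counts are arbitrary choices.

```python
import math, random

def metropolis_ising(L=16, T=5.0, sweeps=200, rng=random):
    """Tiny 2D Ising model with Metropolis single-spin flips (a stand-in
    for spin-flip kMC). J = 1, zero field, periodic boundaries.
    Returns the final magnetization per spin."""
    s = [[1] * L for _ in range(L)]          # start fully ordered
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nb                # energy cost of flipping
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]
    return sum(map(sum, s)) / (L * L)
```

Well below T_c the lattice stays magnetized; far above it the magnetization decays toward zero, but extracting 3D critical exponents near T_c requires the large lattices and long times that motivate parallel kMC.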
44
Boundaryconflictsappearwhenmutually-influencingeventsoccursimultaneouslyondifferentdomains
Asimplesolutionistouseasublatticedecomposition(chessmethodin2D)
Co-occurringeventsonlyonidentically-coloredsubcells
Amar etal.(2004,2005)
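The chess decomposition can be made concrete. A minimal 2-coloring sketch: with two colors, edge-sharing cells never match, so simultaneous events in same-colored cells cannot be boundary neighbors; interactions that reach diagonal neighbors would require more colors.

```python
def checkerboard(ncells):
    """Chess-method sublattice decomposition in 2D: cell (i, j) gets
    color (i + j) % 2, so edge-sharing cells always differ in color."""
    return {(i, j): (i + j) % 2
            for i in range(ncells) for j in range(ncells)}

def same_color_neighbors(colors, ncells):
    """Count edge-adjacent cell pairs sharing a color (should be 0)."""
    bad = 0
    for (i, j), c in colors.items():
        for di, dj in ((1, 0), (0, 1)):
            ni, nj = i + di, j + dj
            if ni < ncells and nj < ncells and colors[(ni, nj)] == c:
                bad += 1
    return bad
```

A synchronous sweep then iterates over colors, letting all cells of the active color execute events in parallel before switching to the next color.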
45
Lattice sizes: 1024×512×512, 1024×1024×512, 1024×1024×1024

Martinez, Monasterio, Marian, JCP (2010)
46
Parallel efficiency governed by local MPI calls among the K domains.
47
• Computer architectures are becoming increasingly heterogeneous and hierarchical, with greatly increased flop/byte ratios; architectural designs remain uncertain.
• The algorithms, programming models, and tools that will thrive in this environment must mirror these characteristics; codes will need to be rewritten.
• Standard bulk-synchronous parallelism (message passing, MPI) will no longer be viable.
• Power, energy, and heat dissipation are increasingly important, and presently an unsolved technological bottleneck.
• Traditional global checkpoint/restart is becoming impractical (fault tolerance and resilience!).
• Analysis and visualization.
48
Evolution of Predictive Science

A walk through CSE evolution:
• circa 1993: Compute fast, big
• circa 1998: Must have physics
• circa 2003: Must validate
• circa 2007: Odds?

PSAAP: Predictive Science Academic Alliance Program
M. Ortiz, Uni-Stuttgart 12/12