Supercomputing in Fluid Mechanics (Turbulence)


Supercomputing in Fluid Mechanics (Turbulence)

Javier Jiménez

ETSI Aeronáuticos Madrid

Supercomputing ETSIA 2008

1. Introduction to the problem

2. Present Status

3. Infrastructure

Turbulence

Laminar vs. Turbulent

The effects of Turbulence

Pressure loss, mixing, drag, etc.

x 100

Why Turbulence?

Energy (pressure loss) → CASCADE → Viscous dissipation

Energy (impact) → BREAKING → Surface tension

The Computation of Turbulence

Degrees of Freedom (grid points)

Physical MODELS of the CASCADE

Boundary Layers, Pipes, etc.
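How the number of grid points grows with Reynolds number sets the whole problem. As a rough guide, the classical Kolmogorov estimate for isotropic turbulence (a textbook figure, not one quoted on the slides) puts the required degrees of freedom at about Re^(9/4); a minimal Python sketch:

def dns_points_isotropic(Re):
    # Classical estimate: resolving the cascade from the integral scale
    # down to the dissipative scale needs ~ Re**(9/4) grid points.
    return Re ** 2.25

for Re in (1e3, 1e4, 1e5):
    print(f"Re = {Re:.0e}  ->  ~{dns_points_isotropic(Re):.1e} grid points")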

Computing (Wall) Turbulence

(Simens, 2008)

The Atmospheric Boundary Layer

• Outer scale ~ 200 m

• Inner scale ~ 1 mm

• Outer/Inner ~ 200,000
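Taking the two scales quoted above at face value gives the separation of 200,000, and cubing it shows why the full atmospheric boundary layer is far beyond brute-force resolution (the cube is a deliberately naive bound, not a number from the talk):

outer = 200.0      # outer scale of the atmospheric boundary layer, metres
inner = 1.0e-3     # inner (viscous) scale, metres

ratio = outer / inner
print(f"scale separation ~ {ratio:,.0f}")          # 200,000

# Naive upper bound: resolving that separation in all three
# directions would need on the order of ratio**3 grid points.
print(f"brute-force grid ~ {ratio**3:.0e} points")  # ~8e15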

Wall Turbulence can be Computed

• 400 GB
• 1.2 TB/step
• 7M CPUh
• 2100 procs
• 6 months
• 25 TB raw data

Hoyas, Flores (2005)

Cascade range ≈ 10 !!

Computers keep getting FASTER

Vector → Parallel → Cache → ?

(× 2)/year

What to do with Faster Computers

Do Bigger Things (higher Re)

Do the Same Things FASTER

The State of the Art 2007

Channel Reτ=2000

(Hoyas, Flores)

Boundary Layer Reθ=2100

(Hoyas, Mizuno)

Reθ=1900 APG Boundary Layer

(Simens)

cascade

The State of the Art 1987

• 240 MB
• 250 CPUh
• Cray X-MP
• 1 month
• 4 GB raw data

Kim, Moin, Moser (1987)
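Comparing the figures quoted on the 1987 and 2005/2007 "state of the art" slides gives a feel for twenty years of growth; a small Python sketch using only those numbers:

cpu_1987, cpu_2007 = 250.0, 7.0e6        # CPU hours (KMM 1987 vs Hoyas & Flores)
mem_1987, mem_2007 = 0.24, 400.0         # memory footprint, GB
data_1987, data_2007 = 4.0, 25.0e3       # raw data, GB

print(f"CPU hours : x {cpu_2007 / cpu_1987:,.0f}")    # ~28,000
print(f"memory    : x {mem_2007 / mem_1987:,.0f}")    # ~1,700
print(f"raw data  : x {data_2007 / data_1987:,.0f}")  # ~6,250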

Computing Turbulence is GOOD

Near-Wall Turbulence in 1987

Before KMM 1987: streaks, sweeps, ejections, ...

After KMM 1987: vortices, jets, layers, ...

Doing Same Things Faster

10 years

Heroic (Research) → Trivial (Industrial) (cost × 1/1000)
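The factor of 1000 is just the "(× 2)/year" rate from the earlier slide compounded over a decade; a one-line check in Python:

years = 10
speedup = 2 ** years      # doubling every year, as quoted above
print(speedup)            # 1024, i.e. roughly a factor of 1000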

Computing the Viscous Layer (2001)

1) Streamwise-velocity streaks + streamwise vortices

2) A regeneration cycle

3) A steady nonlinear wave

Postprocessing gets things UNDERSTOOD

“Postprocessing”

• Access!!! (sharing)
• Local or distributed?

“Less” respect:

• Postprocessing = 2 × simulations
• “Extra” simulations, statistics, ... (and also graphics ...)
• 5-10 years and “everywhere”
• Storage (1 KB/point): TBs → PBs
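Taking the slide's 1 KB/point figure at face value shows how the TB-to-PB range arises; the grid size below is a hypothetical round number chosen only for illustration:

points = 1.0e10          # hypothetical grid size, for illustration only
kb_per_point = 1.0       # the slide's "1 KB/point" figure
snapshot_tb = points * kb_per_point / 1.0e9        # 1 TB = 1e9 KB (decimal units)

print(f"one stored field : ~{snapshot_tb:.0f} TB")             # ~10 TB
print(f"100 stored fields: ~{100 * snapshot_tb / 1e3:.0f} PB")  # ~1 PB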

Numerics and Turbulence 2000

“Numerics”: eddies, buffer cycles, ...

“Experiments”: log layer, cascades, intermittency, LES

(diagram: data density vs. higher Reynolds number; Reτ=180 “SOLVED”, Reτ=590, Reτ>2000)

Numerics and Turbulence 2010s

Overlap!

“Numerics and Experiments”: log layer, cascades, intermittency, LES, ...

(diagram: data density vs. higher Reynolds number; Reτ=2000 “SOLVED”, Reτ=5000)
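Why Reτ=5000 is the next hard target can be seen from a common back-of-the-envelope scaling (not taken from the slides): channel-DNS cost grows roughly as Reτ^4, so going from 2000 to 5000 costs about 40 times more:

def relative_cost(re_from, re_to):
    # Rule-of-thumb channel-DNS scaling: grid points ~ Re_tau**3, time steps ~ Re_tau.
    return (re_to / re_from) ** 4

print(f"Re_tau 2000 -> 5000: roughly {relative_cost(2000, 5000):.0f}x the cost")  # ~39x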

Summary

• Things that have been computed tend to be understood within 10-15 years

• Computer centres → Data centres

• In the next 10 years, numerics and requirements will converge for turbulence science

• Many questions of turbulence science (cascades, LES, ...) WILL then get “solved”

• Turbulence engineering can then begin seriously

Computer Infrastructure

• Supercomputers: Marenostrum, Cesvima (the large simulations)

POSTPROCESSING

• Storage: 100 TB (easily accessible!!)

• Pre- and post-processing: 5-10% of the supercomputer (private!! 24/7)

Supercomputing ETSIA 2004-07

Marenostrum & Cesvima

256-2100 CPUs

2-4 MCPUh/year

Storage ETSIA

External (100 TB archive)
• PIC (Barcelona): 10 CPUs
• BSC (Barcelona): 256 CPUs

Internal (ETSIA): 40 TB (30 permanent + 10 scratch), 15 CPUs

Post-processing ETSIA

“Computing Clusters”