Alpine3D: an alpine surface processes model
Mathias Bavay, WSL Institute for Snow and Avalanche Research SLF, Davos, Switzerland
© Mathias Bavay
1. Goals
Alpine surface processes modeling over an area
• Inputs: DEM + weather station data
• Used for snow hydrology, snow cover studies, climate change studies
• Water availability? Flooding? Hydropower potential? Avalanche danger? Permafrost?

Possible tool for computing distributed physical parameters:
• High resolution surface temperature data
• High resolution radiation data
2.1 Snowpack
Base element:
• Lateral exchanges limited
• Soil/snow/canopy column
• Known forcing (radiation, precipitation, temperature, etc.)
• What is the snowpack like at this location (depth, layering)?

Distributed snow cover:
• Our domain is N*M individual 1D columns
2.1 SNOWPACK
1D soil/snow/canopy column:
• No lateral exchanges
• Arbitrary number of layers
• Heat diffusion
• Models for albedo, settling, canopy...
• Each cell of the grid is 1 SNOWPACK simulation

Parallelization by cell ranges; no exchanges between cells
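Since the columns never exchange data laterally, a time step can be split into contiguous cell ranges that are advanced independently. Below is a minimal sketch of this pattern; the Cell structure, stepCell() and the thread scheme are illustrative placeholders, not the actual Alpine3D/SNOWPACK classes.

    // Minimal sketch (assumed names, not the Alpine3D code): one independent
    // 1D column per cell, advanced in parallel over cell ranges.
    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Cell { double depth, swe, surfTemp; /* layers, soil, canopy ... */ };

    void stepCell(Cell& c) { /* advance one SNOWPACK column by one time step */ }

    void stepRange(std::vector<Cell>& cells, std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i) stepCell(cells[i]);
    }

    void stepDomain(std::vector<Cell>& cells, unsigned nThreads) {
        std::vector<std::thread> workers;
        const std::size_t chunk = (cells.size() + nThreads - 1) / nThreads;
        for (unsigned t = 0; t < nThreads; ++t) {
            const std::size_t begin = t * chunk;
            const std::size_t end = std::min(cells.size(), begin + chunk);
            if (begin < end)
                workers.emplace_back(stepRange, std::ref(cells), begin, end);
        }
        for (auto& w : workers) w.join();  // only sync point: end of the time step
    }

Because the ranges never communicate, the same stepCell() code can be run sequentially or over several threads without modification.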
2.2 Energy input
Good energy input absolutely necessary!
• Mostly from radiation
• Thermal radiation: long wave (sky + terrain)
• Direct & diffuse short wave radiation (atmosphere, sun/shadow + terrain reflections)
• How to deal with clouds?
2.2 Energy Balance
3D radiation balance:
• Radiosity approach
• Sun/atmosphere parameters
• Shading
• Arbitrary multiple terrain reflections
• Short and long wave treated separately
• Very CPU intensive

No parallelization yet; exchanges between neighboring cells
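To make the radiosity idea concrete, the sketch below iterates terrain reflections: each cell re-emits a fraction (its albedo) of the short wave it receives towards the cells that see it, weighted by precomputed view factors, until the added energy becomes negligible. All names, the view-factor layout and the convergence threshold are assumptions for illustration, not the actual Alpine3D solver; the O(n^2) coupling between cells is what makes this module CPU intensive and awkward to split into independent cell ranges.

    // Hedged sketch of iterative terrain reflections in a radiosity-like scheme
    #include <cstddef>
    #include <vector>

    void terrainReflections(const std::vector<double>& skyShortWave,           // W/m2 from sun + sky per cell
                            const std::vector<double>& albedo,                  // reflectivity of each cell
                            const std::vector<std::vector<double>>& viewFactor, // viewFactor[i][j]: fraction of j's flux seen by i
                            std::vector<double>& incoming)                      // total incoming short wave, W/m2
    {
        const std::size_t n = skyShortWave.size();
        incoming = skyShortWave;
        std::vector<double> emitted(n), received(n, 0.0);
        for (std::size_t i = 0; i < n; ++i) emitted[i] = albedo[i] * incoming[i];

        for (int bounce = 0; bounce < 10; ++bounce) {       // cap on the number of reflections
            double added = 0.0;
            for (std::size_t i = 0; i < n; ++i) {
                received[i] = 0.0;
                for (std::size_t j = 0; j < n; ++j)          // O(n^2) coupling between cells
                    received[i] += viewFactor[i][j] * emitted[j];
                incoming[i] += received[i];
                added += received[i];
            }
            if (added < 1e-3) break;                         // remaining reflections negligible
            for (std::size_t i = 0; i < n; ++i) emitted[i] = albedo[i] * received[i];
        }
    }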
2.3 Drifting snow
Snow transport mechanisms:
• Saltation
• Suspension
• Sublimation (removes mass)
• Preferential deposition
2.3 Snowdrift
Lateral snow exchange (by wind)
• 3 processes: saltation, suspension, sublimation
• Suspension & sublimation solved together
• Saltation as boundary condition
• Exchanges between cells
• Very CPU intensive

Suspension parallelized with standard numerical libraries (using MPI)
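The sketch below shows the generic MPI pattern such a parallelization relies on: the suspension field is split into slabs, one per rank, and ghost cells at the slab edges are exchanged before each solver step so that neighbouring cells can interact across rank boundaries. This is an illustration of the pattern only, not the Alpine3D drift solver.

    // Hedged sketch: 1D domain decomposition with a halo exchange between ranks
    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int nLocal = 100;                    // local slab of the concentration field
        std::vector<double> c(nLocal + 2, 0.0);    // +2 ghost cells for the halo

        const int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        const int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        // exchange boundary values so each rank can diffuse/advect across its slab edges
        MPI_Sendrecv(&c[1], 1, MPI_DOUBLE, left, 0,
                     &c[nLocal + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&c[nLocal], 1, MPI_DOUBLE, right, 1,
                     &c[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        // ... solve suspension + sublimation on the local slab,
        //     with saltation entering as a boundary condition ...

        MPI_Finalize();
        return 0;
    }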
2.4 Runoff
Hydrological contribution:
• Each cell maintains its runoff buckets
• Collect them all to get the outlet discharge
2.4 Runoff
Collecting liquid water:
• From the bottom of each column
• Bucket model
• But requires a global view of the data
• Inexpensive computation (so far)

No need to parallelize
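Collecting the runoff is then a simple reduction over all cells, which is why it has not needed parallelization so far. A minimal sketch, with names and units as assumptions:

    // Sum the water leaving the bottom of every column to get the outlet discharge
    #include <vector>

    double outletDischarge(const std::vector<double>& cellRunoff_m3s) {
        double q = 0.0;                          // total discharge at the outlet, m3/s
        for (double r : cellRunoff_m3s) q += r;  // needs a global view of all cells
        return q;
    }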
3. Data input
The models work by cells...
• Meteorological data comes from point measurements
• Need to have meteorological parameters for the cell!
• How to calculate the cell value in a physically sensible way?
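One simple, physically motivated answer is to spread the station measurements over the grid with a distance-based weighting, ideally after correcting for elevation (e.g. a temperature lapse rate). The sketch below shows plain inverse-distance weighting for one cell; it is an illustration only, not the interpolation scheme actually used by Alpine3D.

    // Hedged sketch: inverse-distance weighting of station values onto one cell
    #include <vector>

    struct Station { double x, y, value; };

    double cellValueIDW(double cx, double cy, const std::vector<Station>& stations) {
        double num = 0.0, den = 0.0;
        for (const auto& s : stations) {
            const double d2 = (s.x - cx) * (s.x - cx) + (s.y - cy) * (s.y - cy);
            if (d2 < 1e-6) return s.value;   // the cell sits on a station
            const double w = 1.0 / d2;       // weight ~ 1/distance^2
            num += w * s.value;
            den += w;
        }
        return num / den;
    }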
3. Data input
Getting data in and out:
• Raw data
• Filtering
• Spatial interpolations
• Reading grids and preparing them (DEM)
• Outputs

No need to parallelize yet; interpolations could become CPU intensive
4. Full overview
Design philosophy:
• 1 module per major process
• Each module can be made of an arbitrary hierarchy of sub-processes
• Follow the structure of the physics, not of the computer!
• Parallel and sequential versions must share the same code

Parallelization:
• Each module runs in parallel
• Synchronization points when order is important
• Blend of parallel and sequential code
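A hedged sketch of this structure: each major process is one module behind a common interface, and a driver advances them in the order imposed by the physics, with a synchronization point wherever one module's output feeds the next. Class and method names are illustrative, not the Alpine3D interfaces.

    // One class per major physical process, advanced by a simple driver
    #include <memory>
    #include <vector>

    class ProcessModule {                         // 1 module per major process
    public:
        virtual ~ProcessModule() = default;
        virtual void computeStep(double dt) = 0;  // may itself spawn parallel sub-processes
    };

    class EnergyBalance : public ProcessModule { public: void computeStep(double) override {} };
    class SnowDrift     : public ProcessModule { public: void computeStep(double) override {} };
    class SnowpackGrid  : public ProcessModule { public: void computeStep(double) override {} };
    class Runoff        : public ProcessModule { public: void computeStep(double) override {} };

    int main() {
        std::vector<std::unique_ptr<ProcessModule>> modules;
        modules.push_back(std::make_unique<EnergyBalance>());  // order follows the physics:
        modules.push_back(std::make_unique<SnowDrift>());      // radiation, then drift,
        modules.push_back(std::make_unique<SnowpackGrid>());   // then the snow columns,
        modules.push_back(std::make_unique<Runoff>());         // then runoff collection

        const double dt = 3600.0;                              // one hour time step
        for (auto& m : modules)
            m->computeStep(dt);  // each call returns when the module is done:
                                 // the synchronization point between processes
        return 0;
    }

In a parallel build, the independent modules can be launched concurrently and joined only where this ordering matters, while the sequential build runs the same calls in order.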
Conclusion
Complex code: multi-physics, multi-scale, so multi-models!
• 1 major physical process = 1 object

MPI-style approach:
• Would break the physical processes structure
• Or would force MPI into a structure that is not its own!

POP-C++:
• Keeps the physical processes structure
• Parallelize per object, i.e. per physical process
• Can contain MPI code as well as parallelization within a parallel object
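For illustration, a rough sketch of "one parallel object per physical process" in a POP-C++-like notation; the keywords (parclass, async/sync, conc/seq) are quoted from memory of the POP-C++ documentation and should be treated as an assumption rather than verified syntax.

    // Hypothetical POP-C++-style parallel object wrapping one physical process
    parclass EnergyBalance {
    public:
        EnergyBalance();
        async conc void computeStep(double dt);    // non-blocking call, runs concurrently
        sync  seq  double getRadiation(int cell);  // blocking call, serialized on the object
        // inside computeStep() the parallel object remains free to use MPI
        // or thread-level parallelism for its own sub-processes
    };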