Adaptive algorithms benchmarking - CMMSE2013
Introduction Motivation hp-adaptivity with Rothe’s method Benchmarking Conclusions
Computational Comparison of Various FEM Adaptivity Approaches
Lukas Korous, Pavel Solin, Pavel Karban, Frantisek Mach, Pavel Kus, Ivo Dolezel
Department of Theory of Electrical Engineering
Faculty of Electrical Engineering
University of West Bohemia
Czech Republic
May 22, 2013
1/55 L. Korous, P. Solin, P. Karban, F. Mach, P. Kus, I. Dolezel: Computational Comparison of Various FEM Adaptivity Approaches
Introduction
Context
Introduction
Context
General-purpose (hp-)FEM (and (hp-)DG) software
non-linear, transient, coupled, real/complex, ... problems
multitude of settings of automatic adaptivity
1 understanding effects of adaptivity settings on the whole computation
2 automatic self-tuning of adaptivity algorithm for arbitrary problem
Automatic solution of general engineering-level problems
1 accurately (error estimation - adaptivity)
2 fast (effective implementation, optimization, parallelization)
Introduction
Goals
very good error information (important on its own) ... OK
small requirements on the input mesh and on user experience ... OK
only do automatic (local) refinements when beneficial ... ~OK
use the "best" adaptivity type ("best" in terms of speed, problem size, ...), with the "best" settings ... ~OK
Mid-goal: Adaptivity benchmarking
measure all kinds of performance aspects (speed, memory, parallelization)
improve performance / tune implementation according to data
implement robust heuristics to automatically self-optimize the implemented adaptive algorithms
use the experience gained from implementing and optimizing various hp-adaptive aspects in 2D to evolve the 3D code
Motivation
Theory ⇐⇒ reality
Why do adaptivity in the first place?
While achieving resolution comparable to a non-adaptive computation, adaptivity (generally) leads to:
reduced algebraic problem sizes (DOFs)
reduced mesh size (elements × polynomial order)
The above is related to:
reduced memory demands of algebraic structures (nnz, ...) ?
reduced algebraic solver memory consumption ?
reduced mesh size (bytes) ?
reduced size of utility data structures ?
CPU time ?
Aim: inspect / test the relationship.
hp-adaptivity with Rothe’s method
A-posteriori mesh(space)-adaptivity with Rothe’s method
Solve Lu = f, starting on a mesh Th1.
Repeat the following until the error condition is satisfied:
1 solve Lu = f on the mesh Thn
2 inspect the error condition
if satisfied ⇒ end
3 determine the refinement of Thn
mapping e → True/False
(optional) determine how to refine the identified elements
4 refine the appropriate elements to obtain Thn+1
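The loop above can be sketched as follows. This is a minimal illustration only, not the actual implementation; solve, estimate_error and refine are hypothetical stand-ins for the solver, the error estimator, and the refinement routine.

```python
# Hypothetical sketch of the solve -> check -> refine loop (Rothe-style
# space adaptivity); the three callables are stand-ins, not a real FEM API.
def adaptive_solve(mesh, solve, estimate_error, refine, tol, max_steps=50):
    """Repeat: solve on T_hn, inspect error, refine marked elements -> T_h(n+1)."""
    for step in range(max_steps):
        solution = solve(mesh)                    # 1. solve Lu = f on T_hn
        error, marked = estimate_error(mesh, solution)
        if error < tol:                           # 2. inspect the error condition
            return solution, mesh, step           #    if satisfied -> end
        mesh = refine(mesh, marked)               # 3.-4. refine marked elements
    return solution, mesh, max_steps
```

The mapping e → True/False from step 3 corresponds to the `marked` list returned by the estimator.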
hp-adaptivity with Rothe’s method
Determination where to refine the mesh
Error calculation
mapping e → True/False in our implementation:
1 error estimation based on the difference between the coarse and reference solutions
2 an error quantity (L2 error norm, H1 error norm, L∞ error norm, ...) is calculated element-wise
3 elements of the mesh are sorted descending by the error quantity
4 a Stopping Criterion Threshold T% is prescribed:
refine ONLY elements with error larger than T% of the maximum element error.
In the calculated examples, 7 different levels (lowest: 5%, ..., highest: 95%) were tested.
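A minimal sketch of the threshold rule, assuming element errors are already computed (the function name and list representation are illustrative, not the actual code):

```python
# Steps 3-4 above: sort elements by error, refine only those whose error
# exceeds T% of the maximum element error.
def select_elements_to_refine(element_errors, threshold_pct):
    """Return element indices with error > threshold_pct % of the max error,
    sorted descending by error."""
    cutoff = (threshold_pct / 100.0) * max(element_errors)
    ranked = sorted(range(len(element_errors)),
                    key=lambda i: element_errors[i], reverse=True)
    return [i for i in ranked if element_errors[i] > cutoff]
```

With a high threshold (e.g. 95%) only the very worst elements are refined per step; with a low one (e.g. 5%) almost every element is.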
hp-adaptivity with Rothe’s method
Determination of refinement type - illustration
GAMM channel - use of directional polynomial orders
hp-adaptivity with Rothe’s method
Determination of refinement type
I. Basic refinement types.
1. noSelectionH
standard isotropic h-refinement
2. noSelectionHP
refine both in h and in p
II. Selection based on scoring refinement candidates:
- locally project the reference solution onto the space created by a particular candidate,
- measure the local error in the appropriate quantity.
II.i Use the error itself
3. hXORpError
refine either in h or in p, selecting the one with the smaller error.
hp-adaptivity with Rothe’s method
Determination of refinement type
II.ii Calculate a score for each refinement candidate:
candidate.score = log(original.error / candidate.error) / (candidate.dofs − original.dofs)
and select the one with the highest score.
4. hORpDOFs - 3 candidates:
isotropic h-refinement
isotropic p-refinement
both isotropic h- and isotropic p- refinement
5. isoHPDOFs - adds the following candidates:
isotropic h-refinement with order permutations ranging from (original p - 1) to (original p + 1)
6. anisoHPDOFs - adds the following candidates:
anisotropic h-refinement with order permutations ranging from (original p - 1) to (original p + 1)
anisotropic p-refinement
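The scoring rule from II.ii can be sketched as follows. The (error, dofs) pairs are hypothetical inputs; in the real algorithm each candidate's error comes from projecting the reference solution onto that candidate's space.

```python
import math

# score = log(original.error / candidate.error) / (candidate.dofs - original.dofs)
# Assumes every candidate adds DOFs (the denominator is positive and nonzero).
def score(orig_err, orig_dofs, cand_err, cand_dofs):
    return math.log(orig_err / cand_err) / (cand_dofs - orig_dofs)

def best_candidate(original, candidates):
    """Pick the (error, dofs) candidate with the highest score."""
    orig_err, orig_dofs = original
    return max(candidates, key=lambda c: score(orig_err, orig_dofs, c[0], c[1]))
```

The score rewards error reduction per added DOF, so a candidate that halves the error with 2 extra DOFs can beat one that reduces it tenfold at the cost of 10 extra DOFs.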
Benchmarking
Attributes inspected
Number of steps the adaptive algorithm must take to reach the error threshold.
Number of unknowns in the last adaptivity step (where the error threshold is met).
Cumulative number of unknowns of all systems solved in the process.
Ability of the algorithm to speed up via caching, i.e. how many local stiffness matrices and right-hand-side vectors can be reused.
Cumulative direct solver factorization size, memory used, and flops.
How well a particular adaptive strategy follows the prescribed error threshold (e.g. by not dropping unnecessarily far below it).
Error estimate and exact error.
CPU time (implementation-specific, shown for illustration).
Benchmarking
Illustration - 1
Benchmarking
Illustration - 2 - improvement
Benchmarking
Benchmark examples
Thanks to: W. Mitchell, A Collection of 2D Elliptic Problems for Testing Adaptive Algorithms
10 different "difficulty" settings
5 resolution (error) thresholds
Benchmarking
Benchmark 1 - NIST-04 - exponential peak
Solution of various difficulty settings:
Error thresholds: {0.1%, 0.025%, 0.00625%, 0.0015625%, 0.000390625%}
Benchmarking
Benchmark 1 - NIST-04 - exponential peak
Number of unknowns reached by the adaptive algorithm
total per all difficulties combined
total per all error thresholds combined
Benchmarking
Benchmark 1 - NIST-04 - exponential peak
Number of unknowns reached by the adaptive algorithm
total per all difficulties combined
error level: 0.1%
Benchmarking
Benchmark 1 - NIST-04 - exponential peak
Number of unknowns reached by the adaptive algorithm
total per all difficulties combined
error level: 0.000390625%
Benchmarking
Benchmark 1 - NIST-04 - exponential peak
Cumulative number of unknowns of all systems solved
total per all difficulties combined
total per all error thresholds combined
Benchmarking
Benchmark 1 - NIST-04 - exponential peak
Cumulative number of unknowns of all systems solved
total per all difficulties combined
error level: 0.1%
Benchmarking
Benchmark 1 - NIST-04 - exponential peak
Cumulative number of unknowns of all systems solved
total per all difficulties combined
error level: 0.000390625%
Benchmarking
Benchmark 1 - NIST-04 - exponential peak
Local stiffness matrices - new calculation, percentage of total cache searches
total per all difficulties combined
total per all error thresholds combined
Benchmarking
Benchmark 1 - NIST-04 - exponential peak
Local stiffness matrices - forced recalculation, percentage of total cache searches
total per all difficulties combined
total per all error thresholds combined
Benchmarking
Benchmark 1 - NIST-04 - exponential peak
total per all difficulties combined
total per all error thresholds combined
Benchmarking
Benchmark 1 - NIST-04 - exponential peak
total per all difficulties combined
total per all error thresholds combined
Benchmarking
Benchmark 1 - NIST-04 - summary
h-adaptivity is competitive down to 0.1% error threshold
winners: (3. simple h- XOR p-refinement), (2. simplest hp-refinement)
(3. simple h- XOR p-refinement) leads to a better matrix sparsity pattern
in both cases, best stopping criterion: 30-40%
losers: (4. h- OR p-refinement) - too few candidates (unnecessary DOFs)
Benchmarking
Benchmark 2 - NIST-06 - boundary layer
Solution of various difficulty settings:
Error thresholds: {0.1%, 0.025%, 0.00625%, 0.0015625%, 0.000390625%}
Benchmarking
Benchmark 2 - NIST-06 - boundary layer
Number of unknowns reached by the adaptive algorithm
total per all difficulties combined
3 error levels, the lowest one = 0.00625%
Benchmarking
Benchmark 2 - NIST-06 - boundary layer
Number of unknowns reached by the adaptive algorithm
total per all difficulties combined
4 error levels, the lowest one = 0.0015625%
Benchmarking
Benchmark 2 - NIST-06 - boundary layer
Number of unknowns reached by the adaptive algorithm
total per all difficulties combined
5 error levels, the lowest one = 0.000390625%
Benchmarking
Benchmark 2 - NIST-06 - boundary layer
Number of unknowns reached by the adaptive algorithm
highest difficulty
highest error level: 0.000390625%
Benchmarking
Benchmark 2 - NIST-06 - boundary layer
Cumulative number of unknowns of all systems solved
total per all difficulties combined
all error levels
Benchmarking
Benchmark 2 - NIST-06 - boundary layer
Local stiffness matrices - new calculation, percentage of total cache searches
total per all difficulties combined
3 error levels, the lowest one = 0.00625%
Benchmarking
Benchmark 2 - NIST-06 - boundary layer
Local stiffness matrices - new calculation, percentage of total cache searches
total per all difficulties combined
4 error levels, the lowest one = 0.0015625%
Benchmarking
Benchmark 2 - NIST-06 - boundary layer
Local stiffness matrices - new calculation, percentage of total cache searches
total per all difficulties combined
5 error levels, the lowest one = 0.000390625%
Benchmarking
Benchmark 2 - NIST-06 - boundary layer
Local stiffness matrices - forced recalculation, percentage of total cache searches
total per all difficulties combined
3 error levels, the lowest one = 0.00625%
Benchmarking
Benchmark 2 - NIST-06 - boundary layer
Local stiffness matrices - forced recalculation, percentage of total cache searches
total per all difficulties combined
4 error levels, the lowest one = 0.0015625%
Benchmarking
Benchmark 2 - NIST-06 - boundary layer
Local stiffness matrices - forced recalculation, percentage of total cache searches
total per all difficulties combined
5 error levels, the lowest one = 0.000390625%
Benchmarking
Benchmark 2 - NIST-06 - summary
Summary of benchmark data
even though boundary layers should favor (6. anisotropic refinements), they pay off only for very high accuracies
for higher error levels (0.0015625% and up), anisotropic refinement candidates bring only a small improvement
winners: (6. anisotropic refinements), (5. isotropic refinements), (2. simplest hp-refinement)
roughly the same numbers of adaptivity steps; (6. anisotropic refinements) always slightly fewer DOFs
in all cases, best stopping criterion: 5% (due to the large error difference between the boundary layer and the rest of the domain)
Benchmarking
Benchmark 3 - NIST-08 - oscillatory
Solution of various difficulty settings:
Error thresholds: {1.0%, 0.25%, 0.0625%, 0.015625%, 0.00390625%}
Benchmarking
Benchmark 3 - NIST-08 - oscillatory
Number of unknowns reached by the adaptive algorithm
total per all difficulties combined
all error levels
Benchmarking
Benchmark 3 - NIST-08 - oscillatory
High difficulty test example
Benchmarking
Benchmark 3 - NIST-08 - oscillatory
High difficulty test example
Number of unknowns reached by the adaptive algorithm
all error levels
Benchmarking
Benchmark 3 - NIST-08 - high difficulty
High difficulty test example
Number of unknowns reached by the adaptive algorithm
error level: 0.00390625%
Benchmarking
Benchmark 3 - NIST-08 - high difficulty
Cumulative number of unknowns of all systems solved
total per all difficulties combined
all error levels
Benchmarking
Benchmark 3 - NIST-08 - high difficulty
Average ratio of achieved error and prescribed error threshold
total per all difficulties combined
all error levels
Benchmarking
Benchmark 3 - NIST-08 - high difficulty
total per all difficulties combined
all error levels
Benchmarking
Benchmark 3 - NIST-08 - summary
high difficulty example: (4. h- OR p-refinement) achieved fewer DOFs than strategies with more candidates (why?)
winners: (6. anisotropic refinements), (5. isotropic refinements), (2. simplest hp-refinement)
(2. simplest hp-refinement): fewer adaptivity steps (comparable DOFs per step)
in all cases, best stopping criterion: 20%
losers: (4. h- OR p-refinement) for the less difficult versions - too few candidates (unnecessary DOFs) and too many steps
Benchmarking
Benchmark 4 - NIST-12 - multiple difficulties
Solution of various difficulty settings:
Error thresholds: {5.0%, 2.5%, 1.25%, 0.625%, 0.3125%, 0.05%}
Benchmarking
Benchmark 4 - NIST-12 - multiple difficulties
Cumulative number of unknowns of all systems solved
total per all difficulties combined
error level: 0.625%
Benchmarking
Benchmark 4 - NIST-12 - multiple difficulties
Cumulative number of unknowns of all systems solved
total per all difficulties combined
error level: 0.3125%
Benchmarking
Benchmark 4 - NIST-12 - multiple difficulties
Cumulative number of unknowns of all systems solved
total per all difficulties combined
error level: 0.05%
Benchmarking
Benchmark 4 - NIST-12 - summary
hp-adaptivity is comparably efficient only below an error level of 0.05%
above this threshold, h-adaptivity (reference solution refined only in h!) performs (much) better
numbers of adaptive steps are comparable
⇒ important to include pure h-adaptivity (non-p) in any general heuristics
Conclusions
More benchmarking needed (verification, phenomena variety)
Already observable
It pays off to spend time tuning the parameters
Selections based on scoring refinement candidates may struggle to deliver results that justify their implementation costs
Anisotropic refinements do not exhibit improvements in general ⇒ other adaptivity features will have higher implementation priority
Depending on the application, a fine-tuned h-adaptivity might be sufficient for engineering-level (low) accuracy demands
Heuristics that automatically tune adaptivity algorithms must take into account:
available resources (memory, number of cores, etc.)
"problem characteristics"
error threshold and other user-defined settings
previous (up-to-now) behavior of the adaptivity algorithm on the current problem
even the stopping criteria and refinement selections can (and, as the data make obvious, should) be chosen in an adaptive way
Conclusions
Thank you for your attention.