    SPE 116675

From Mega-Cell to Giga-Cell Reservoir Simulation

Ali H. Dogru, Larry S.K. Fung, Tareq M. Al-Shaalan, Usuf Middya, and Jorge A. Pita, Saudi Arabian Oil Company

    Copyright 2008, Society of Petroleum Engineers

This paper was prepared for presentation at the 2008 SPE Annual Technical Conference and Exhibition held in Denver, Colorado, USA, 21-24 September 2008.

This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents of the paper have not been reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect any position of the Society of Petroleum Engineers, its officers, or members. Electronic reproduction, distribution, or storage of any part of this paper without the written consent of the Society of Petroleum Engineers is prohibited. Permission to reproduce in print is restricted to an abstract of not more than 300 words; illustrations may not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.

    Abstract

For many oil and gas reservoirs, especially large reservoirs in the Middle East, the availability of vast amounts of seismic, geologic and dynamic reservoir data results in high resolution geological models. Because of the limitations of conventional reservoir simulator technologies, high resolution models are upscaled to flow simulation models by reducing the total number of cells from millions to a few hundred thousand. Flow simulators using upscaled reservoir properties produce average reservoir performance and often fall short of accurately predicting recovery.

Realizing the limitations of the conventional simulators for the giant oil and gas reservoirs, parallel reservoir simulators have been developed. The first generation of parallel simulators increased the simulator capabilities by an order of magnitude; as a result, mega (million) cell simulation became a reality. Parallel computers, including PC clusters, were successfully used to simulate large reservoirs with long production histories, using millions of cells. Mega-cell simulation helped recover additional oil and gas due to a better understanding of reservoir heterogeneity. The speed of parallel hardware also helped, making many runs to address uncertainty possible.

Despite the many benefits of parallel simulation technology for large reservoirs, the average cell size still remains in the order of hundreds of meters (m) for large reservoirs. To fully utilize the seismic data, smaller grid blocks of fifty m in length are required. This size of grid block results in billion (Giga) cell models for giant reservoirs. This is a two orders of magnitude increase from the mega-cell simulation. To simulate Giga-cell models in practical time, new innovations in the main components of the simulator, such as linear equation solvers, are essential. Also, the next generation pre- and post-processing tools are needed to build and analyze Giga-cell models in practical times.

This paper describes the evolution of reservoir simulator technology, from the Mega-Cell scale to the Giga-Cell scale, presenting current achievements, challenges and the road map for Giga-Cell Simulators.

    INTRODUCTION

To simulate the giant oil and gas reservoirs with sufficient resolution, parallel reservoir simulator technology is an absolute necessity, since the conventional simulators based on serial computations cannot handle these systems.

Interest in parallel reservoir simulation started over two decades ago in the oil industry. The earliest attempt was by John Wheeler [1]. This was followed by the work of John Killough [2], and of Shiralkar and Stevenson [3].

Realizing the importance of reservoir simulation technology, in 1992 Saudi Aramco initiated a program for developing the company's first in-house Parallel Oil Water and Gas Reservoir Simulator, POWERS [4]. The simulator was developed on a parallel computer (Connection Machine CM-5) in 1996. The first field simulation model, using 1.3 million cells, was successfully completed [5]. In 2000 the parallel code was ported to symmetric parallel computers using IBM Nighthawk shared memory systems, and later in 2002 to distributed memory PC clusters.

By 2002 the world's largest oil field was simulated with Ghawar's full field model, run using 10 million active cells with sixty years of production history and thousands of wells. Experience proved that parallel computing was successful for reservoir simulation. Therefore, more features such as compositional, dual porosity/dual permeability, and coupled surface facilities were added to the standard black oil options to cover more areas of application.

Figure 1 shows the past and future trend in simulating large reservoirs at Saudi Aramco. As seen, the impact of parallel simulation on model size is very significant. Model size in this figure follows the general trend of exponential growth in computer hardware technology.

    Figure 1: Growth of Model Size for field studies at Saudi Aramco

As parallel computers progressed from specialized designs such as Connection Machines to PC clusters, the programming language also progressed from High Performance Fortran to Shared Memory Parallel Fortran (OpenMP), F90, Message Passing Interface libraries and finally parallel C++.

Industry-wide, several efforts have been made to introduce new programming languages and algorithms for parallel simulation, starting in early 2001 with Beckner [6], and continuing with DeBaun et al. [7] and Shiralkar et al. [8] in 2005.

    BENEFITS OF MEGA-CELL PARALLEL RESERVOIR SIMULATION

Parallel reservoir simulation offers several advantages over the traditional, single CPU simulators. Among the several benefits of parallel simulation is the ability to handle very large simulation models with millions of cells. This feature provides reduced upscaling from the geological models to simulation models, resulting in better representation of the reservoir heterogeneity.

Figure 2 shows a typical industry simulation process. The geological models with millions of cells are upscaled to the multi-hundred thousand cell simulation model. This produces highly averaged (diluted) results. Specifically, sharp saturation fronts and trapped oil saturations will not be captured by this approach. Simulators in this category use single CPU computers. Therefore, simulator codes use non-parallel sequential programming languages. We will refer to them as Sequential Simulators in this article.

[Figure 1 chart: model size in million cells (0 to 40) versus year (1994 to 2008), comparing conventional simulators with the parallel simulator.]


    Figure 2: Industry practice for building simulator models from the geological models

    Figure 3: Parallel Simulation Process

As mentioned above, parallel simulators do not require strong upscaling. Typically, with mild upscaling, these models can produce highly realistic simulation results. Figure 3 displays the concept of parallel simulators.

As shown in Figure 3, mild upscaling is applied to the geological models. This helps to retain the original reservoir heterogeneity. Hence it is expected that this process will produce more accurate results for a good geological model. Another important side benefit is that it enables engineers to directly update the geological model during history matching. This feature is not available for the coarse grid models (Sequential Simulators).

To further demonstrate the benefits of parallel simulation, we will present an actual field case, shown in Figure 4. Two modeling approaches will be compared. The first model is a 14 layer, 53,000 active cell coarse grid (Dx = 500 m) on a conventional nonparallel industrial simulator, and the second model is a 128 layer, 1.4 million active cell, relatively fine grid (Dx = 250 m) parallel simulation model. This figure shows a vertical cross section between a down-flank water injector and a crestal producing well. The reservoir has been water-flooded over 30 years. Changes in the water saturation over 30 years are shown on this cross section. Red or brown color shows no change (unswept), whereas green indicates a change in water saturation (swept).


    Figure 4: Field Case

If one compares the computed performances of the crest well for water cut and pressure, one can see that both models match the observations very well. This indicates that the 14-layer model matches the field data as well as the 128-layer model. When the change in the water saturation (sweeping) was examined, however, significant differences between the two model results were observed: the 14-layer model shows good sweep between the injector and the crestal well, while the 128-layer model shows an unswept oil zone (brown color pocket). Based on these results, a new well location was chosen as shown in the figure. A horizontal well was drilled and oil was found as predicted by the simulator [5]. Following the drilling, the same well was logged. Water saturation obtained from the log was plotted against the simulator-calculated water saturations in Figure 4. As seen, the simulator was able to closely predict the vertical water saturations prior to drilling the well. Based on simulator results, four new wells were drilled in 2003 and 2004 in similar locations to produce trapped oil. All the wells drilled found oil and are still producing at a good rate.

    MOTIVATION FOR GIGA-CELL SIMULATION

Presently, multi-million cell simulation models are used for reservoir management, recovery estimates, placing complex wells, making future plans and developing new fields. For giant reservoirs, current grid sizes vary between 100 m and 250 m. The question is whether or not it would be beneficial to go to smaller grid sizes.

To answer this question, we will consider a giant reservoir with associated reservoirs imbedded in the same aquifer, as shown in Figure 5. For this reservoir system, we will fix the number of vertical layers at 51 (with an average thickness of 5 ft) and calculate the total number of grid blocks for various areal grid sizes. Using a 250 m areal grid size yields 29 million cells. A seismic scale grid, with a 25 m areal grid size, yields nearly 3 billion cells. A 12.5 m areal grid would yield about 12 billion cells. This is due to areal refinement alone. Similarly, more vertical layers will result in more cells. For example, a 25 m areal grid with 100 layers will result in 6 billion cells, and a 12.5 m areal grid would yield about 20 billion cells. This example was chosen for the largest reservoir, since even a very small percent recovery increase due to more accurate simulation would yield billions of barrels of additional oil. Therefore, for this reservoir, it would be of interest to test finer grid models and see if the vertical and areal refinement would increase the accuracy of the model. Accuracy of the model can be measured by comparing model results, water cuts, and pressures with the observed data.
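To make the arithmetic behind these cell counts concrete, the short Python sketch below scales them from the areal grid size and layer count. The reservoir footprint area is back-calculated from the 250 m / 29 million cell case and is therefore an assumption for illustration, not a figure from the paper; the printed values are approximate and rounded differently than in the text.

```python
# Sketch of the cell-count arithmetic (assumed footprint, back-calculated below).
layers_base = 51
cells_250m = 29e6
footprint_m2 = (cells_250m / layers_base) * 250**2     # implied areal extent of the system

def total_cells(areal_dx_m, n_layers):
    """Total grid blocks for a given areal spacing and layer count."""
    return footprint_m2 / areal_dx_m**2 * n_layers

for dx, nz in [(250, 51), (25, 51), (12.5, 51), (25, 100), (12.5, 100)]:
    print(f"dx = {dx:6.1f} m, {nz:3d} layers -> {total_cells(dx, nz)/1e9:6.2f} billion cells")
```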


    Figure 5: Areal View of the Reservoir Aquifer Systems

    ACCURACY INCREASES WITH MORE REFINEMENT

In this section we will consider the giant reservoir shown in Figure 5. As seen in this figure, there are several reservoirs in the same full field model. Reservoir 1 is the small oil reservoir located in the NE corner of Figure 5. All reservoirs are connected by a common aquifer. Production from the reservoirs started over 60 years ago.

To show the effect of finer grid modeling of the same reservoir system, we will first define a base model and make a history run. Next, we will refine the base model and repeat the same run. We will then compare the results relative to the field observations to make a judgment. It should be noted that the finer grid models will be constructed from the base model by reducing the grid block sizes; the refined cells will be assigned the properties of their mother (parent) cells. Hence this practice will demonstrate the effect of numerical errors. Building very high resolution models with actual data interpolated from seismic or geostatistics will be the subject of a new paper.

In this paper, we will present two sets of examples on the giant Ghawar model. The base model has 10 million cells and over 60 years of production/injection history with thousands of wells involved. From this base model, first a 48 million cell model will be constructed to study the effects of vertical refinement. Next, a 192 million cell model will be constructed from the 48 million cell model to study the effects of areal refinement.

Refinement in both directions would yield a billion-cell model. The work on this is in progress and will be reported in a future paper. In this paper a few results for a 700 million cell model will be presented to illustrate the simulator capabilities.

    Higher Vertical Resolution

Our base model is a ~10 million cell full field model covering the reservoirs shown in Figure 5. This model has 17 vertical layers and a 250 m (840 ft) areal grid size. Next, we will increase the number of layers to 85, resulting in a 48 million cell model. This is accomplished by dividing each vertical layer by five and assigning the property of each layer in the base model to the refined layers. The objective is to see if the numerical refinement would make any difference in the accuracy of the model. Figure 6 shows the layering and total number of cells for both models. The same relative permeabilities were used for both models.
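A minimal sketch of this vertical refinement step is given below, assuming the properties are held in simple (nx, ny, nz) arrays; the array names, shapes and random values are illustrative, not the simulator's actual data layout. Each base layer is split into five sub-layers that copy the parent value unchanged, while layer thicknesses would instead be divided by five.

```python
import numpy as np

def refine_vertically(prop_base: np.ndarray, nsplit: int = 5) -> np.ndarray:
    """Split each layer into 'nsplit' sub-layers that inherit the parent value."""
    # prop_base has shape (nx, ny, nz); result has shape (nx, ny, nz * nsplit).
    return np.repeat(prop_base, nsplit, axis=2)

perm_base = np.random.rand(40, 30, 17)       # stand-in for a 17-layer property field
perm_fine = refine_vertically(perm_base)     # 85 layers, parent values inherited
dz_fine = refine_vertically(np.full((40, 30, 17), 15.0)) / 5   # 15 ft layers become 3 ft
assert perm_fine.shape == (40, 30, 85)
```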

It is expected that vertical refinement of layers for a thick reservoir with good vertical permeability would better simulate the advancing water fronts. This is due to a better representation of the gravitational effect that influences the shape of the moving water fronts. As a result, water arrival times and water cuts at the producers are expected to be more realistic. Finally, this would also reduce or eliminate the need for pseudo functions and unrealistic multipliers in history matching. The reservoir in this example is a thick reservoir, and vertical fractures enhance the vertical permeability.

Figure 7 demonstrates the concept. The advancing water front, due to flank water injection, is better represented by the fine grid model than by the coarse grid (base) model. For example, water will move like a piston into a base model layer (15 ft), showing no gravity effects. In the fine grid model (a 15 ft layer divided into five layers of 3 ft each), it will show the gravity effects. It is expected that more water will be in the bottom layer due to gravity (water is heavier than oil). The field being studied has peripheral water injection; it is thick and has enough vertical permeability (in some areas, this is due to local fractures).

    Figure 6: Base model and Vertically Refined Full Field Model definition

[Figure 7 schematic (concept for thick reservoirs: higher vertical resolution, better accuracy), not to scale: with Dx = 840 ft and the same relative permeability curves, thick vertical layers (15 ft) predict late water breakthrough, whereas thin vertical layers (subdivision by 5 into 3 ft layers) predict earlier water breakthrough.]

    Figure 7: Effects of more vertical layering on injected water movement in a thick reservoir


[Figure 8 panels: simulator water saturation results for 1990 near Well A; left, 17-layer model; right, 85-layer model.]

    Figure 8: Vertical water profile at 1990

Figure 9: Well Performance Comparison of 17 and 85 layer models with field observed water production and well pressures

    Figure 10: Water Rate predictions from both models


Results of the simulation runs are shown in Figure 8. This figure illustrates the calculated vertical water fronts in 1990. The left image is from the base model; the image on the right is from the refined grid model. Figure 8 shows that the calculated vertical water fronts, due to west flank injection, confirm the concept postulated in Figure 7. The fine grid model with 85 layers shows that water has already broken through at the producer, but the coarse grid model with 17 layers shows no water breakthrough yet.

The next question is which of the results is correct. To answer this question we need to examine the observed water production and pressures at the producing well in this figure. When the simulator results are compared with the field observations (Figures 9 and 10), close agreement is observed for the fine grid model for water arrival, water production rate and well pressures. The coarse grid model (base model) miscalculates water arrival (by a few years), water production rate and pressures. In this case the coarse grid model requires further adjustments in parameters for history matching in this area of the reservoir.

    Higher Areal Resolution

Next, the areal grid resolution was increased by a factor of two in the 48 million cell model (85 layers). This was done by subdividing the areal grid size by a factor of 2 in X and Y, thereby quadrupling the total number of cells (48 million cells x 4 = 192 million cells). The new model has a 125 m (420 ft) areal grid block size and a total of 192 million active cells. It must be noted that the properties of the fine grid cells were inherited from the coarse cells (parent and child logic); hence no upscaling was implemented. The objective here was again to observe the effects of numerical discretization and hence numerical errors.
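As a sketch of this parent-and-child inheritance (with illustrative array names, not the simulator's actual data structures), each areal cell is split 2 x 2 and the four children simply copy the parent's properties, so the cell count quadruples without any upscaling:

```python
import numpy as np

def refine_areally(prop: np.ndarray, factor: int = 2) -> np.ndarray:
    """Split each areal cell into factor x factor children that copy the parent value."""
    # prop has shape (nx, ny, nz); result has shape (nx*factor, ny*factor, nz).
    return np.repeat(np.repeat(prop, factor, axis=0), factor, axis=1)

coarse = np.random.rand(60, 50, 85)   # stand-in for the 48 MM cell model's properties
fine = refine_areally(coarse)         # four times the cells, identical property values
assert fine.size == 4 * coarse.size
```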

In this section we will show the improvement in the solution not only in the main reservoir, as in the previous example, but also in the peripheral reservoirs. As seen in Figure 5, all reservoirs are imbedded in a common aquifer, and hence withdrawal from one reservoir affects the others. Therefore, simulating the whole system should be more accurate than simulating the individual reservoirs separately. This is particularly important for high permeability reservoirs such as those shown here. Accurate accounting of boundary fluxes will improve the accuracy of the solution. In this section we will show how much the accuracy has improved, even in nearby reservoirs, with fine grid resolution.

The direct effect of areal refinement in the simulator results for this system will show up as reduced numerical dispersion in the areal direction. This effect is demonstrated on a conceptual model shown in Figure 11, where the movement of injected water is displayed in a vertical cross section. There are two sub-figures in Figure 11. The figure on the left illustrates the water movement using an 840 ft (250 m) areal grid size with fine gridding in the vertical direction (Dz = 3 ft). As illustrated, and as discussed in the previous section, it is expected that water should move faster in the bottom layer and can break through at a hypothetical well located at the east boundary of this vertical cross section. On the other hand, the sub-figure on the right illustrates that when a smaller areal grid of 420 ft (125 m) is used, water movement is delayed in the x direction due to the reduction of numerical dispersion effects in the x-y (areal) directions. Therefore, water is not expected to break through at a hypothetical well located at the east boundary of this vertical cross section.

As shown in Figure 11 below, this concept is illustrated by building and running two full-field simulation models at two different resolutions. The two models represent the same reservoir over the sixty years of history, with thousands of wells producing and injecting. The coarse areal grid model, with an areal grid size of 840 ft (250 m) and 85 layers (Dz = 3 ft), results in 48 million cells. The fine areal grid model, with an areal grid size of 420 ft (125 m) and the same number of vertical layers, results in a 192 million cell model. These two models were run on a distributed memory cluster for the sixty years of history.

For illustration purposes, the effects of the areal grid size for a Well D are shown in Figure 12. Well D is located in the oil reservoir to the NE of the main giant reservoir, as shown in Figure 5. We note that both models (48 million cells and 192 million cells) cover the main reservoir and the peripheral reservoir located to the NE of the main reservoir, sharing the same aquifer.

Figure 12 shows the vertical water movement near Well D calculated by the coarse areal grid model (48 million cells, 840 ft areal grid) in January 1981. As seen, due to water injection at Well C, water has already broken through at most of the perforations of Well D as a result of areal numerical dispersion effects.


[Figure 11 schematic (areal refinement concept for thick reservoirs), not to scale: with Dx = 840 ft the model predicts earlier water breakthrough, while with Dx = 420 ft water breakthrough is delayed; 3 ft vertical layers and the same relative permeability curves in both cases.]

    Figure 11: Effect of areal refinement on the advancing vertical water front for a thick reservoir

    Figure 12: Water Saturations in 1981 for the 48 Million Cell Model

On the other hand, the effects of the areally refined grid model are displayed in Figure 13. This figure shows that numerical dispersion effects in the areal direction are reduced, and water has broken through only at the bottom sections of the well, not throughout the well perforations as in Figure 12.


    Figure 13: Water Saturations in 1981 for the 192 Million Cell Model

The observed phenomenon in Figures 12 and 13 is reflected in the calculated water production for the same well, Figure 14.

    Figure 14: Water Production at Well D: Comparison of Models with field data

As illustrated in Figure 14, the coarse grid model shows premature water breakthrough, while the fine grid model shows later water arrival and less water production, which is consistent with the observations (in green). Figures 12 to 14 illustrate the clear effect of numerical errors in an actual reservoir simulation using real data.

    Larger Size Models

In order to test the parallel performance of the simulator, a new model was generated from the original model (10 million cells, 17 layers, with an areal grid size of 840 ft / 250 m) by reducing the areal grid size to 42 m x 42 m (138 ft x 138 ft). This time the number of vertical layers was kept at 34 (twice the original) in order to fit into the available Linux cluster.

The model was run, and a snapshot of the fluid distribution in the peripheral field (which is also a giant carbonate reservoir) is illustrated in Figure 15. As shown, a secondary gas cap was formed due to production. This is consistent with the history of the reservoir and also demonstrates the simulator's capability of handling very large systems with three-phase flow at high resolution.

    Figure 15: Gas Cap formation in Reservoir 1 of 700 Million Cell Model

    Towards Giga Cell Models

The motivation for going to Giga-Cell reservoir simulation is not limited to the reasons explained above. As is well known, more and more new technologies are being deployed in oil and gas fields. These new technologies provide more detailed descriptions of the reservoirs than were previously available. Some of the new technologies include deep-well electromagnetic surveys, new borehole gravimetric surveys, new seismic methods, the use of geochemistry and the implementation of many new sensor technologies. With the addition of these new technologies, it will be possible to describe reservoir heterogeneity more accurately and in greater detail. This will require more grid cells to fully describe the fluid flow in the porous media. Even for regular-size reservoirs, it will be possible to use giga-cell simulation to understand the fluid flow better (pore scale physics) and develop better recovery methods.

    Critical Components of a Giga-Cell Simulator

There are several critical components of the simulator that enable us to make big model runs. Figure 16 is a pie chart of the computational work for the various parts of the simulator. For any of the runs described in the previous sections, more than one half of the computational time is spent in the linear solver (52 percent). For example, if a Giga-Cell run takes two days to complete for a gigantic reservoir with a long history, more than one day would be spent solving the linear system of equations. If the linear solver is not scalable or does not have good parallel efficiency, this number could be 80 to 90 percent. The fact that it is 52 percent here is a result of newly developed parallel algorithms. The following sections will briefly describe the solver and the scalability of the simulator on state-of-the-art parallel machines, a Linux PC cluster with Infiniband network interconnects or an IBM Blue Gene/P.
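The back-of-the-envelope sketch below (illustrative base core count and saturation point, not measurements from the paper) shows why a solver whose speed-up saturates comes to dominate the runtime, consistent with the 80 to 90 percent figure quoted above:

```python
def solver_fraction(cores, base_cores=128, solver_frac0=0.52, solver_max_speedup=4.0):
    """Fraction of total runtime spent in the solver at 'cores' processors,
    assuming the non-solver work scales ideally while the solver's speed-up
    saturates at 'solver_max_speedup' (both assumptions, for illustration only)."""
    scale = cores / base_cores
    other = (1.0 - solver_frac0) / scale                     # ideally scaling work
    solver = solver_frac0 / min(scale, solver_max_speedup)   # saturating solver
    return solver / (solver + other)

for c in (128, 512, 2048):
    print(f"{c:5d} cores -> solver share ~{solver_fraction(c):.0%}")
# A scalable solver keeps the share near 52%; a saturating one drifts toward 80-90%.
```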


Figure 16: Distribution of Computational Work in a Next Generation Parallel Simulator

    Parallel Linear Solver

The parallel linear solver used for the simulator described in this paper, GigaPOWERS, can handle linear systems of equations resulting from both structured and unstructured grids. The theory of the solver has recently been presented by Fung and Dogru [9].

Basically, there are two new parallel unstructured preconditioning methods used for the solver. The first method is based on matrix substructuring, which was called line-solve power series (LSPS). A maximum transmissibility ordering scheme and a line bundling strategy were also described to appropriately strengthen the methodology in the unstructured context. The second method is based on variable partitioning, known as constraint pressure residual (CPR).

The CPR method is a kind of divide-and-conquer strategy that specifically attacks the nature of multiphase, multicomponent transport problems in reservoir simulation. The basic premise of CPR is that the full system conservation equations are mixed parabolic-hyperbolic in nature, with the pressure part of the problem being mainly parabolic, and the concentration and saturation part of the problem being mainly hyperbolic.

Therefore, CPR aims to decompose the pressure part and approximately solve it. The pressure solution is used to constrain the residual on the full system, thus achieving a more robust and flexible overall solution strategy. These two methods can be combined to produce the two-stage CPR-LSPS method. We also discussed the two-stage CPR-PAMG method, where parallel algebraic multigrid is used as the pressure-solve preconditioner.
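A compact sketch of the two-stage structure follows, assuming only that pressure is the first unknown stored for each cell. The naive pressure-row extraction, the direct LU pressure solve and the incomplete-LU smoother below are stand-ins for the paper's IMPES-type decoupling, PAMG pressure solve and LSPS stage, so this illustrates the structure of a CPR-like preconditioner rather than the GigaPOWERS implementation:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def cpr_like_preconditioner(A: sp.csr_matrix, n_unknowns_per_cell: int):
    """Two-stage preconditioner: approximate pressure solve, then full-system smoothing."""
    n = A.shape[0]
    p_idx = np.arange(0, n, n_unknowns_per_cell)   # assumes pressure is unknown 0 of each cell

    App = A[p_idx][:, p_idx].tocsc()               # crude pressure sub-system extraction
    pressure_solve = spla.splu(App).solve          # stand-in for the PAMG pressure solve
    smoother = spla.spilu(A.tocsc())               # stand-in for the LSPS full-system stage

    def apply(r):
        # Stage 1: approximate pressure solve, expanded to the full-system unknown vector.
        x = np.zeros(n)
        x[p_idx] = pressure_solve(r[p_idx])
        # Constrain the full-system residual with the pressure correction.
        r2 = r - A @ x
        # Stage 2: cheap smoothing of the constrained residual.
        return x + smoother.solve(r2)

    return spla.LinearOperator((n, n), matvec=apply)

# Typical use inside a Krylov method:
#   M = cpr_like_preconditioner(A, n_unknowns_per_cell=3)
#   x, info = spla.gmres(A, b, M=M)
```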

    Distributed Unstructured Grid Infrastructure

A scalable, fully distributed unstructured grid framework has been developed [10], which was designed to accommodate both complex structured and unstructured models, from the mega-cell to the giga-cell range, in the distributed memory computing environment. This is an important consideration, as today's computer clusters can have multiple terabytes of memory, while the memory local to each processing node is typically only a few gigabytes shared by several processing cores. The distributed unstructured grid infrastructure (DUGI) provides data organization and algorithms for parallel computation in a fully local referencing system for peer-to-peer communication. The organization is achieved by using two levels of unstructured graph descriptions: one for the distributed computational domains and the other for the distributed unstructured grid. At each level, the connectivities of the elements are described as distributed 3D unstructured graphs.
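The following is a conceptual sketch, not the actual DUGI data structures, of what such a two-level description can look like: a top-level graph of subdomains for peer-to-peer halo exchange, and a per-rank local grid whose cells are addressed by local indices with a local-to-global map. All names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class LocalGrid:
    """Unstructured grid portion owned by one rank, referenced by local indices."""
    rank: int
    local_to_global: list    # global cell id for each locally stored (owned or ghost) cell
    adjacency: dict          # local cell id -> list of neighbouring local cell ids
    ghost_owner: dict = field(default_factory=dict)   # ghost local id -> owning rank

@dataclass
class DomainGraph:
    """Top-level graph: which subdomains exchange halo data with which."""
    neighbors: dict          # rank -> list of neighbouring ranks

def build_send_lists(grids):
    """For every owning rank, collect the global cell ids it must ship to each neighbour."""
    sends = {}
    for rank, grid in grids.items():
        for ghost, owner in grid.ghost_owner.items():
            sends.setdefault(owner, {}).setdefault(rank, []).append(grid.local_to_global[ghost])
    return sends
```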

[Figure 16 data, percent of computational work: Solver 52, Jacobian 17.64, Wells 5.15, Update 11.63, I/O 2, Others 8.35.]


    Numerical Example

Two full-field reservoir models, one at 172 million cells and the other at 382 million cells, are used to test the scalability of the new simulator. The 172 million cell model is a structured grid model with grid dimensions of 1350 x 3747 x 34. The areal cell dimensions are 83 m x 83 m and the average thickness is about 2.2 m. The horizontal permeabilities, Kh, range from 0.03 mD to 10 Darcies. Vertical permeabilities vary from 0.005 mD to 500 mD. There are about 3000 wells in the model. The 382 million cell model has grid dimensions of 2250 x 4996 x 34. It is a finer grid simulation model for the same reservoir.

A 512-node Linux PC cluster with Infiniband network interconnects was used to run the models. Each node has 2 Quad-Core Intel Xeon CPUs running at 3.0 GHz. This gives a total of 4096 processing cores. Our initial testing indicates that the optimal number of cores per node to use is 4. Using additional cores per node does not improve performance. Thus, our optimal usage for performance is to always use 4 cores per node, and to increase the number of nodes as model size increases.

The scalability of the simulator for the 172 million cell model is illustrated in Figure 17. The solver achieved a super-linear scale-up of 4.23 when increasing from 120 nodes (480 cores) to 480 nodes (1920 cores), and the overall Newtonian iteration achieved 3.6 out of an ideal 4. For 20 years of history, the total runtime was 8.86 hours using 120 nodes, which reduces to 2.76 hours when using 480 nodes. A detailed breakdown of the timing is illustrated in Figure 18.

Scalability results for the 382 million cell model are shown in Figure 19. In this case, the solver also achieved super-linear scale-up, and the overall Newtonian iteration achieved linear scale-up, when increasing from 240 nodes to 480 nodes. This indicates that the simulator can fully benefit from using all 512 nodes (the entire cluster) to solve a giga-cell simulation problem and achieve a good turnaround time. The total run time of the 20 year history simulation for the 382 million cell problem using 480 nodes (1920 cores) was 5.63 hours.
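For reference, the speed-up and parallel-efficiency arithmetic behind these statements is simply the runtime ratio measured against the ideal core-count ratio. The short sketch below applies it to the 172 MM cell total runtimes quoted above; since total runtime includes work outside the solver, the result naturally sits below the solver's super-linear 4.23 and the Newton loop's 3.6:

```python
def speedup_and_efficiency(t_base_hours, t_new_hours, cores_base, cores_new):
    """Observed speed-up and its ratio to the ideal (linear) speed-up."""
    speedup = t_base_hours / t_new_hours
    ideal = cores_new / cores_base
    return speedup, speedup / ideal

s, eff = speedup_and_efficiency(8.86, 2.76, 480, 1920)
print(f"172 MM cell model: total-runtime speed-up {s:.2f}x of an ideal 4x "
      f"({eff:.0%} parallel efficiency)")
```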

[Figure 17 chart: scalability on the Quad-Core Xeon Linux cluster for the 172 MM cell model; speed-up factor (1.00 to 4.00) versus number of processing cores (480 to 1920), with curves for solver, newton and ideal.]

    Figure 17: Scalability for the 172 MM Cell Model on the Infiniband Quad Core Xeon Linux cluster


    PROJECTION FOR THE FUTURE

The growth of model size and the improvements in computing power presented above can be summarized in a new plot, shown in Figure 20. The vertical axis is logarithmic and the horizontal axis is linear. This semi-log plot shows that model size is growing exponentially with time.

Extrapolation of the data beyond 2008 makes it evident that giga-cell simulation can be attained within the next few years. The giga-cell simulation process will bring new challenges with it. These challenges are discussed in the following sections.

It is interesting to note that the behavior seen in Figure 20 is similar to the computer power increase forecasted by Moore's Law. Although model size growth and computing power show similar trends, one should not think that a given code, for example one developed in 1985, would be able to run a giga-cell model simply by using more powerful computers. As indicated in Figure 20, growth in model size using more powerful computers is achieved by introducing new programming languages (rewriting the code) and using innovative numerical solutions that take advantage of the new computer architectures to improve simulator scalability. Otherwise, an old code will not scale properly with the increased number of CPUs and, after a certain number of CPUs, no further speed improvement will be possible. Ideally, scientists design the algorithms such that the scalability plot is a straight line or close to it. This can be accomplished by new methods of domain partitioning for load balancing, new programming techniques and new numerical algorithms compatible with the hardware architecture.

A comprehensive discussion of the formulation of a next generation parallel or giga-cell simulator will be presented in a future publication.

Figure 20: Growth of Model Size

[Semi-log chart: model size in million cells (0.1 to 1000) versus year (1988 to 2009), with regions for Conventional Simulators, First Generation Parallel Simulators (Mega-Cell) and Second Generation Parallel Simulators (Giga-Cell).]


    GIGA-CELL SIMULATION ENVIRONMENT

Building giga-cell reservoir models and analyzing the results generated by the simulator in practical times require totally new technologies in pre- and post-processing. Specifically, processing gigantic data sets as input and output in practical times is a major challenge for conventional data modeling and visualization technologies. Limitations in the pre- and post-processing would limit the use of giga-cell simulation.

We have used two different approaches to giga-cell visualization. The first and more conventional approach uses parallel computers to perform the visualization by subdividing the data and the rendering domain into multiple chunks, each of which is handled by an individual processor in a computer cluster. We chose VisIt, a visualization technology developed at the Lawrence Livermore National Laboratory [11]. VisIt uses the MPI parallel processing paradigm to subdivide work among processors. It ran on an SGI Altix 3400 shared-memory supercomputer with 64 processors and 512 gigabytes of memory. Figures 21 and 22 show snapshots of simulation results using VisIt.

    Figure 21: Billion Cell Visualization of Results and QC of Input Data

    Figure 22: Billion Cell Visualization of Reservoir Simulation Time Step

The second approach is newer and relies heavily on the capabilities of GPUs (Graphical Processing Units), which today support large memory (currently up to 2 GB). By using multi-resolution techniques (e.g., a levels-of-detail approach), relatively inexpensive workstations are able to render giga-cell size datasets. We chose the Octreemizer software from the Fraunhofer Institute [12] for this approach, using an Appro XtremeStation with dual GPUs (NVidia FX5600) and 128 gigabytes of memory. Figure 23 shows a four-display Octreemizer session using dual GPUs to render a 2.4-billion-cell model.


    Figure 23: Octreemizer Visualization of 2.4-Billion Cell Model

Octreemizer uses a hierarchical paging scheme to guarantee interactive frame rates for very large volumetric datasets by trading texture resolution for speed. Original volumes are divided into bricks in the range of 32x32x32 to 64x64x64 voxels each. These bricks create the finest level in the octree structure. Eight neighboring bricks are filtered into a single brick of the next coarser level until only a single brick remains at the top of the octree.
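A minimal sketch of that build process follows (illustrative code, not Octreemizer itself; 2x2x2 averaging stands in for whatever filter the software applies): the volume is cut into fixed-size bricks, and each coarser level is produced by downsampling so that eight neighbouring bricks collapse into one, until a single brick remains at the top.

```python
import numpy as np

def downsample(vol: np.ndarray) -> np.ndarray:
    """Average 2x2x2 blocks of voxels, halving each dimension."""
    nx, ny, nz = (d // 2 for d in vol.shape)
    return vol[:2*nx, :2*ny, :2*nz].reshape(nx, 2, ny, 2, nz, 2).mean(axis=(1, 3, 5))

def build_octree_levels(volume: np.ndarray, brick: int = 32):
    """Return a list of levels, finest first; each level is a dict of brick arrays."""
    levels, vol = [], volume
    while True:
        bricks = {(i, j, k): vol[i:i+brick, j:j+brick, k:k+brick]
                  for i in range(0, vol.shape[0], brick)
                  for j in range(0, vol.shape[1], brick)
                  for k in range(0, vol.shape[2], brick)}
        levels.append(bricks)
        if len(bricks) == 1:
            break
        vol = downsample(vol)   # eight fine bricks collapse into one coarser brick
    return levels

levels = build_octree_levels(np.random.rand(128, 128, 128), brick=32)
print([len(b) for b in levels])   # bricks per level, e.g. [64, 8, 1]
```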

Both of these methods have been able to visualize billion-cell models with interactive frame rates for rotation and zooming, while spending less than a minute loading each time-step. We are confident that the rapid progress of CPU, GPU and file I/O speeds will further accelerate our capabilities.

This interactivity opens the door to in-situ visualization of online simulations with real-time analysis of the results, bringing the insights of visualization closer to the actual model simulation than is currently afforded by just post-processing examination of the results. Such integration would also bypass file I/O bottlenecks. We envision that large-scale parallel supercomputers and computational GPUs on the workstation will partner effectively to achieve online simulation interactivity for giga-cell models in the near future.

Modern remote visualization techniques are also proving helpful for visualizing results from supercomputers located in remote data centers. Using the VNC (Virtual Network Computing) protocol and VirtualGL, all 3D manipulations occur on the local workstation hardware, providing a very interactive response, while the simulation and the visualization software run on the remote supercomputer. Figure 24 shows a snapshot of a workstation display located in Dhahran, Saudi Arabia, running a visualization session on the TACC Ranger supercomputer located in Austin, Texas. The model visualized consists of one billion cells.


    Figure 24: Remote Visualization of a One-Billion-Cell Model (TACC Ranger)

    SUMMARY AND CONCLUSIONS

Experience has shown that, with the aid of rapidly evolving parallel computer technology and innovative numerical solution techniques for parallel simulators, giga-cell reservoir simulation is within reach. Such a simulator will reveal great detail in giant reservoirs and will enable engineers and geoscientists to build, run and analyze highly detailed models of oil and gas reservoirs with great accuracy. This will help companies recover additional oil and gas. Further developments and innovative approaches are needed for the full utilization and manipulation of the giant data sets. The introduction of virtual reality and sound will enhance the understanding of the results, locating trapped oil zones, placing new wells and designing new field operations. Overall, giga-cell simulation is expected to be beneficial for mankind in its quest to produce more hydrocarbons to sustain the world's economic development.

    ACKNOWLEDGEMENTS

The authors would like to thank Saudi Aramco's management for permission to publish this paper. The authors would also like to thank the other Computational Modeling Technology team members: Henry Hoy, Mokhles Mezghani, Tom Dreiman, Hemanthkumar Kesavalu, Nabil Al-Zamel, Ho Jeen Su, James Tan, Werner Hahn, Omar Hinai, Razen Al-Harbi and Ahmad Youbi for their valuable contributions.

    REFERENCES

1. Wheeler, J.A. and Smith, R.A.: "Reservoir Simulation on a Hypercube," SPE paper 19804, presented at the 64th Annual SPE Conference & Exhibition, San Antonio, Texas, October 1989.

2. Killough, J.E. and Bhogeswara, R.: "Simulation of Compositional Reservoir Phenomena on a Distributed Memory Parallel Computer," J. Pet. Tech., November 1991.

3. Shiralkar, G.S., Stephenson, R.E., Joubert, W., Lubeck, O. and van Bloemen, B.: "A Production Quality Distributed Memory Reservoir Simulator," SPE paper 37975, presented at the SPE Symposium on Reservoir Simulation, Dallas, Texas, June 8-11, 1997.
