First Quarterly Report
March 7, 1996 to June 6, 1996

Fractured Reservoir Discrete Feature
Network Technologies

A Project of:
Fundamental Geoscience
Research and Development
U.S. Department of Energy
National Oil and Related Programs

Contract Number G4S51728

Prepared by:
William S. Dershowitz
Golder Associates Inc.
Redmond, Washington
June 26, 1996

Table of Contents

1.1 Overview of Progress
1.2 Project Deliverables
1.3 Issues and Resolutions
2.1 Active Tasks
2.2 Task Progress
2.2.1 Task 1.1.1 Initial Data Warehouse
2.2.2 Task 1.2.1: 3D Hierarchical Fracture Model
2.2.3 Task 1.3.2 Drainage Volume Analysis
2.2.4 Task 2.1.1 Fracture Sets Analysis
2.2.5 Task 4.1.1 Fracture Image Data Acquisition
2.2.6 Task 4.1.2 Well Testing Data Acquisition
2.2.7 Task 5.1.1 WWW Server Development
2.2.8 Task 5.2.1 Progress Reports
2.2.9 Task 6 Management
3.1 Schedule
3.2 Milestones and Deliverables

List of Tables

Table 2-1 Classification of Fractures Formed in Different Geologic Processes
Table 2-2 Input and Output Parameters for Fracture Conductivity Study
Table 2-3 Neural Network Codes
Table 3-1 Schedule and Deliverables

List of Figures

Figure 2-1 Project Study Site
Figure 2-2 Core Log Analysis BH-17, Well 11
Figure 2-3 Fence Diagram of Gamma Ray Logs for Reservoir Tract 49
Figure 2-4 Cross Section Through Yates Field Study Site
Figure 2-5 Fluid Contact Histories
Figure 2-6 Synthetic Lineament Map for Tract 17 Area
Figure 2-7 Yates Area 17 Production Data
Figure 2-8 Tract 17 Area Map
Figure 2-9 2D Hierarchical Model
Figure 2-10 3D Hierarchical Model
Figure 2-11 Tributary Drainage/Injection Volume
Figure 2-12 Structural Processes Producing Reservoir Compartmentalization
Figure 2-13 Convex Hull Algorithm
Figure 2-14 Examples of Convex and Non-convex Polygons
Figure 2-15 Neural Network Topology
Figure 2-16 Example Neural Network for Fracture Set Analysis
Figure 2-17 Hinton Diagram Using Continuous Variables


1.1 Overview of Progress

The research project "Fractured Reservoir Discrete Feature Network Technologies" develops and demonstrates technologies for improving the ultimate recovery from fractured oil reservoirs. The project focuses on developing a suite of tools to better understand and design secondary recovery processes specifically tailored to fractured oil reservoirs. The research is designed to use information gathered during a field trial of Thermally-Assisted Gravity Segregation (TAGS), but is also applicable to other types of enhanced oil recovery processes in fractured reservoirs.

This quarterly progress report describes activities during the period March 7, 1996 to June 6, 1996. The project was initiated during the quarter, and the project work scope and task structure were developed. Work was initiated on the following tasks:

The major efforts during the quarter were in preparation of the project work scope, development of the initial data warehouse, and implementation of the World Wide Web server to support the project.

1.2 Project Deliverables

No project deliverables were scheduled or submitted during this quarter.

1.3 Issues and Resolutions

The following issues were addressed during the quarter:

  1. The initial scopes for Tasks 1.3 and 2.1.4 were focused on analysis of stress effects in fractured reservoirs. Current data indicate that the Yates field, which is the project study site, is not stress controlled. However, the Yates field is significantly affected by reservoir partitioning related to heterogeneity of the fracture network. As a result, alternative scopes for Tasks 1.3 and 2.1.4 were developed and provided to BDM/NIPER for review and approval.
  2. The algorithm for the analysis of fracture sets was developed.


2.1 Active Tasks

The following tasks were active during the quarter:

2.2 Task Progress

This section describes progress during the quarter for each of the active tasks.

2.2.1 Task 1.1.1 Initial Data Warehouse

The locations of the study site Tracts 17 and 49 within the Yates fractured reservoir field are illustrated in Figure 2-1. During the quarter, Marathon Oil Company collected fracture and production data from the project study site and provided this data to Golder Associates to form the basis for the initial data warehouse. The following data was donated to the project:

This data was assembled and processed into an information warehouse within the World Wide Web (WWW) server application (Task 5.1.1). Figures 2-2 through 2-8 present examples of the data already available through the WWW. To access the server, you must have access to the Internet and a browser capable of interpreting HTML 2.0. The URL (Uniform Resource Locator) for the site is HTTP://WWW.GOLDER.COM/NIPER/NIPRHOME.

From this "Home Page" you may access the technology transfer database, or you may review the project overview, scope of work or information about the project team.

The WWW server organizes the Technology Transfer Database into nine categories:

2.2.2 Task 1.2.1: 3D Hierarchical Fracture Model

During the quarter, MIT initiated extension of the two-dimensional hierarchical fracture model to three dimensions, and began analysis of the Yates fracture data which will form the basis for future developments to the model. The two-dimensional hierarchical fracture model is illustrated in Figure 2-9.

Developments Toward a 3D Hierarchical Fracture Model

The primary focus of MIT efforts during the quarter was the collection of information on geologic fracture genesis processes. This has been incorporated in a written, illustrated review. The structure of the fracture classification system is illustrated in Table 2-1.

Table 2-1 Classification of Fractures Formed in Different Geologic Processes

1. Fractures associated with folding
1.1 Fractures due to flexure and concentric folding of competent beds
1.2 Fractures related to planar shear of competent beds
1.3 Fracturing of a folded sequence of competent and incompetent beds
2. Fractures related to shallow-depth crustal faults
2.1 Normal faults and associated fractures
2.2 Strike-slip faults and associated fractures
  • General fracture geometry
  • Strike-slip faulting in igneous rocks
  • Strike-slip faulting in sedimentary rocks
2.3 Thrust faults and associated fractures
3. Fractures formed in remote tension
3.1 Subparallel vertical joints in igneous rocks
3.2 Subparallel vertical joints in sedimentary and other layered rocks
3.3 En-echelon cracks originating from a parent joint
3.4 Sheet joints parallel to the topographic surface
4. Contraction joints
4.1 Thermal contraction cracks and ice-wedge polygons in permafrost
4.2 Columnar jointing in igneous rocks
5. Fractures around central intrusive and extrusive structures
5.1 Fractures associated with upward pressure
5.2 Fractures created by subsidence of rock masses
6. Fractures created by melting of glacial ice

The current status of the 3-D hierarchical structural model incorporating the fracture genesis hierarchy is illustrated in Figure 2-10. The phenomena and mechanics summarized in Table 2-1 will be used during the upcoming quarter to formulate a number of basic geometric processes which can represent the observed geometries. These basic processes will then be modelled with corresponding development of the 3-D hierarchical fracture system model.

During the quarter, MIT received and reviewed data from Marathon Oil regarding the Yates field. The information is limited to some very basic structural geologic processes. Violeta Ivanova has scheduled a site visit to the Yates field to collect additional data and geological information to incorporate into the model.

2.2.3 Task 1.3.2 Drainage Volume Analysis

During the quarter, Golder Associates initiated development of the theory of tributary drainage volume for fractured reservoirs such as the Yates field. This theory utilizes graph theory to analyze fracture network connections to assess reservoir connectivity and compartmentalization (or injection access volume).

The TAGS process requires efficient transfer of heat from injected steam into the rock. Efficient heating requires that the injected steam heats as large a rock volume as possible surrounding the injection wells. The steam reaches the rock by means of fractures connected to the wellbore. As the rock is heated, the viscosity of the oil is reduced and the oil relative permeability is improved as the oil interfacial tension shifts from water-oil to gas-oil dominance, leading to enhanced oil production. Thus, fracture network connectivity strongly affects TAGS efficiency. While the geometry and connectivity of the natural fracture network cannot be altered except by local hydrofracturing, it is important for designing the TAGS process to understand the connectivity of the network and the volume of matrix that can be heated by means of steam injection (Figure 2-11). This volume of matrix constitutes the portion of the reservoir which can be effectively produced via the TAGS process.
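As an illustration, the connectivity assessment described above can be sketched as a breadth-first search over a fracture intersection graph. The network, function name, and fracture numbering below are hypothetical, for illustration only:

```python
from collections import deque

def tributary_fractures(adjacency, well_fractures):
    """Return the set of fractures reachable from a wellbore.

    adjacency: dict mapping fracture id -> list of intersecting fracture ids
    well_fractures: iterable of fracture ids intersected by the wellbore
    """
    reached = set(well_fractures)
    queue = deque(reached)
    while queue:
        f = queue.popleft()
        for g in adjacency.get(f, []):
            if g not in reached:
                reached.add(g)
                queue.append(g)
    return reached

# Hypothetical network: fractures 0-1-2 form one cluster; 3-4 are isolated
net = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
print(sorted(tributary_fractures(net, [0])))   # fractures reachable from the well
```

Only the matrix adjacent to the reached fracture set would contribute to the tributary drainage volume; the isolated cluster cannot be heated from this well.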

A second related problem is the determination of block size. At the reservoir scale, compartmentalization is an important concern. At a much smaller scale, matrix block shape and size distribution is important for thermal modeling.

Reservoir-scale flow or pressure compartmentalization can arise from a variety of geological processes involving depositional and structural mechanisms (Ortoleva, 1994). Three of the most important structural processes are (Figure 2-12):

In the Yates Field, Marathon believes that permeable subvertical faults reduce lateral fluid migration during the double displacement process upon which TAGS relies. The various families of subvertical faults tend to form compartments. The volume of these compartments affects several important engineering parameters, including the appropriate spacing of production and injection wells, the volume of gas that will need to be injected, and the amount of oil that can be produced. These compartments will not all be of the same volume or shape, since the faults are neither periodic nor uniform in their orientations or locations. Thus it is important to calculate the range of compartment volumes, and to estimate the shape characteristics of these compartments.

THERM-DK will be used to simulate the TAGS process. This code requires certain geometric information concerning characteristic matrix block sizes and shapes for each grid cell in the model. These grid cells may be on the order of tens or even hundreds of meters, and thus will include many matrix blocks within each cell. An algorithm is required to compute the characteristic matrix block sizes from DFN models.

During the quarter, Golder Associates initiated development of methods to calculate the tributary drainage volume and compartmentalization/matrix block size for fractured reservoirs such as the Yates field. The methods utilize graph theory and computational geometry.

Methodology for Computing Tributary Reservoir Volume

We have developed two algorithms for computing tributary reservoir volume whose predictive power will be evaluated through future field experiments. The first algorithm computes the volume of the convex hull for a connected network of fractures (Figure 2-13). The second algorithm identifies all fractures connected to a segment of a wellbore, and then computes the net volume of matrix within a certain distance of the connected fractures, as in Figure 2-11. These algorithms are termed the convex hull and net volume algorithms, respectively.

Convex Hull Algorithm

Each fracture in a fracture network is represented by a polygon that is defined by vertices with three-dimensional spatial coordinates and vertex connectivity information. The collection of fractures that comprise a connected network can be viewed as a set of points in space in which each point is a vertex. A convex polyhedron that encloses this set of points is defined as a polyhedron in which a line connecting any two points inside the polyhedron itself lies entirely within the polyhedron. Analogous two-dimensional examples of convex and non-convex polygons are shown in Figure 2-14.

A convex hull is a particular type of convex polyhedron. The convex hull is defined as the smallest convex polyhedron containing all of the data points. This condition is equivalent to the definition of the hull as the shortest path traversing the data set. This latter definition is a description using terms from graph theory (for a lucid discussion of this problem, see Sedgewick, 1990).
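The hull concept is easiest to sketch in the two-dimensional analogue of Figure 2-14. The code below computes a 2D convex hull with Andrew's monotone-chain algorithm and its area by the shoelace formula; it is a didactic stand-in, not the three-dimensional code used in the project:

```python
def convex_hull_2d(points):
    """Andrew's monotone-chain convex hull of 2D points, counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints shared, so drop duplicates

def polygon_area(poly):
    """Shoelace formula for the area of a simple polygon."""
    return abs(sum(x0 * y1 - x1 * y0
                   for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]))) / 2.0

# Unit square plus an interior point: the hull drops the interior point
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
hull = convex_hull_2d(pts)
print(len(hull), polygon_area(hull))    # 4 vertices, area 1.0
```

In three dimensions the same idea applies to the pooled polygon vertices of a connected fracture network, with the hull volume replacing the area.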

A number of different algorithms have been proposed for computing the convex hull of a data set in three dimensions. After evaluation of several different methods, the Quickhull algorithm (Barber et al., 1995) was selected. The algorithm efficiently generates convex hulls in three dimensions, is well-documented, and has been implemented in the program Qhull which is available in source code from The Geometry Center of the University of Minnesota (Barber et al., 1993). Qhull was developed under National Science Foundation grants, and may be freely used for projects such as the current effort.

Net Volume Algorithm

In steady-state conductive heat flow where a constant temperature boundary condition is applied to the surface of a material, the temperature at a distance from the surface is only a function of the thermal properties of the material, the boundary temperature, and the distance from the boundary. While the heat transfer process due to steam injected into fractures is more complex, continuous injection of steam imposes a near-constant temperature boundary condition on connected fractures. The conductive heating of the matrix, the significant component of heat transfer in the TAGS process, thus has much in common with simple steady-state conductive heat flow. This analogy suggests another possible algorithm for estimating the amount of matrix heated by steam injected into fracture networks.
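A classical idealization of this boundary-condition argument is transient conduction into a semi-infinite medium held at a constant surface temperature. The error-function solution below is a textbook sketch, with assumed, purely illustrative property values; it is not the project's thermal model:

```python
from math import erf, sqrt

def matrix_temperature(x, t, alpha, T_surface, T_initial):
    """Temperature at distance x (m) from a constant-temperature boundary after
    time t (s), for a semi-infinite conductive medium with thermal diffusivity
    alpha (m^2/s): T = T_s + (T_i - T_s) * erf(x / (2 sqrt(alpha t))).
    """
    return T_surface + (T_initial - T_surface) * erf(x / (2.0 * sqrt(alpha * t)))

# Illustrative, assumed numbers: carbonate-like diffusivity, 5-year injection
alpha = 1.0e-6              # m^2/s (assumed)
t = 5 * 365.25 * 86400      # 5 years in seconds
print(matrix_temperature(0.0, t, alpha, 200.0, 30.0))   # at the fracture wall
print(matrix_temperature(10.0, t, alpha, 200.0, 30.0))  # 10 m into the matrix
```

Inverting such a relation for a target temperature yields the extrusion distance used by the net volume algorithm.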

If calculations were to estimate, for example, that steam injection should heat 10 meters of rock over a 5-year steam injection program on either side of a fracture, this would be like extruding the polygonal fracture to a prism with a thickness of 20 meters (Figure 2-11). Although adding up all of these volumes might provide a good estimate of the matrix volume effectively heated for sparse fracture networks, denser networks might cause a significant overlap problem. It is necessary to correct for this overlap.
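The overlap correction can be approximated by rasterizing the extruded volumes onto a voxel grid and counting each heated cell only once. The sketch below uses axis-aligned boxes as stand-ins for extruded fracture polygons, purely for illustration:

```python
def net_heated_volume(boxes, bounds, n=40):
    """Voxel estimate of the union volume of heated regions.

    boxes:  list of ((xmin, xmax), (ymin, ymax), (zmin, zmax)) axis-aligned
            boxes standing in for extruded fracture prisms (illustration only).
    bounds: ((x0, x1), (y0, y1), (z0, z1)) region of interest.
    n:      voxels per axis.
    """
    (x0, x1), (y0, y1), (z0, z1) = bounds
    dx, dy, dz = (x1 - x0) / n, (y1 - y0) / n, (z1 - z0) / n
    count = 0
    for i in range(n):
        x = x0 + (i + 0.5) * dx          # sample at voxel centers
        for j in range(n):
            y = y0 + (j + 0.5) * dy
            for k in range(n):
                z = z0 + (k + 0.5) * dz
                # a voxel counts once, no matter how many prisms cover it
                if any(xa <= x <= xb and ya <= y <= yb and za <= z <= zb
                       for (xa, xb), (ya, yb), (za, zb) in boxes):
                    count += 1
    return count * dx * dy * dz

# Two overlapping prisms: a naive sum double-counts the shared slab
a = ((0.0, 1.0), (0.0, 1.0), (0.0, 1.0))
b = ((0.5, 1.5), (0.0, 1.0), (0.0, 1.0))
print(net_heated_volume([a, b], ((0.0, 2.0), (0.0, 1.0), (0.0, 1.0))))
```

Here the union volume is 1.5, whereas simply summing the two prism volumes would give 2.0.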

The algorithm to be evaluated consists of defining a thickness or extrusion distance based upon the steam temperature, the matrix temperature, and the thermal properties of the matrix. The fractures making up networks connected to injection wells are extruded into prisms that have a thickness twice the extrusion distance. Finally, volume overlap is removed.

Methodology for Computing Compartmentalization/Matrix Block Size

Large, permeable, subvertical faults can create barriers to lateral flow. These faults cause fluids to preferentially move along the faults rather than across them. If the faults are large enough, it is possible that they will connect to form compartments. The scale of these compartments limits the efficiency of the TAGS process because of their influence on reservoir-scale flow. Thus compartment size, as defined by the faults and the fluid contacts, is an important parameter for maximizing recovery through TAGS. In a faulted reservoir, it is unlikely that faults will completely isolate large volumes of matrix. Rather, the compartments will be bounded by large faults, but not completely isolated by them to form discrete blocks.

Codes such as Eclipse or Therm-DK require block information as well, although the scale is different. These finite difference codes use parameters that are functions of the size and shape of matrix blocks expected to lie within each grid cell. In this situation, the matrix blocks are defined by joints and small faults. As in the case with the large faults, the blocks defined by joints will be partially bounded by the joints.

An algorithm to determine compartment volume must accommodate faults that partially bound a matrix volume, and then eliminate faults that do not form or bound compartments. This requirement adds complexity to the computation which, nonetheless, can be overcome.

We have developed an algorithm that accomplishes these tasks. The algorithm consists of two parts. The first part involves identifying faults that are within a user-specified distance of one another. If they are within this tolerance, then the two faults are treated mathematically as if they were connected. This does not involve "extending" the faults until they intersect, but rather altering the connectivity matrix that describes all of the faults in the model. The second part of the algorithm identifies polyhedrons or compartments and computes their volumes.
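The first part of the algorithm might be sketched as follows; the pairwise separation table and tolerance value are hypothetical:

```python
def connect_within_tolerance(fault_distances, tol):
    """Build a fault adjacency list, treating any pair of faults closer than
    `tol` as connected. The faults themselves are not extended; only the
    connectivity record is altered.

    fault_distances: dict mapping (i, j) pairs (i < j) to minimum separation.
    """
    adjacency = {}
    for (i, j), d in fault_distances.items():
        if d <= tol:
            adjacency.setdefault(i, []).append(j)
            adjacency.setdefault(j, []).append(i)
    return adjacency

# Hypothetical minimum separations between four faults (metres)
gaps = {(0, 1): 0.0, (1, 2): 0.4, (2, 3): 5.0}
print(connect_within_tolerance(gaps, 0.5))   # faults 0-1-2 linked; 3 left out
```

The second part of the algorithm then searches this connectivity record for closed compartments.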

The fundamental data structure for describing fault connectivity is an adjacency list (Sedgewick, 1990) modified to include some additional geometrical information, such as fault size and orientation. This type of list describes which faults can be reached from any selected fault and the order in which the faults forming the pathway are connected. This type of data structure can be searched to identify blocks.

Once generated, the list is first optimized to remove faults which are not completely connected geometrically. The reduced list is then traversed in much the same manner as a weighted breadth-first search, with the selection criterion being the angle of connection between faults.

The traversal begins by picking a particular face and selecting a normal direction, then traversing the connection list until all of the face's edges are matched with the appropriate connecting fault. The algorithm repeats by taking the next connecting fault that has an available edge, and continues recursively until we have a collection of faults in which every edge has a matching fault. This collection of faults is identified as a rock block after verifying that it is truly a closed polyhedron.
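The closure check at the end of the traversal amounts to verifying that every edge in the candidate block is shared by exactly two faces. A minimal sketch, with faces given as loops of vertex indices:

```python
from collections import Counter

def is_closed_polyhedron(faces):
    """Return True if the face collection closes a volume: every edge must be
    shared by exactly two faces. Faces are tuples of vertex indices in order.
    """
    edges = Counter()
    for face in faces:
        # walk each face boundary, counting undirected edges
        for a, b in zip(face, face[1:] + face[:1]):
            edges[frozenset((a, b))] += 1
    return all(n == 2 for n in edges.values())

# A tetrahedron closes; remove one face and it does not
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(is_closed_polyhedron(tetra), is_closed_polyhedron(tetra[:3]))  # True False
```

A candidate collection of faults that fails this test is rejected rather than reported as a rock block.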

To generate another rock block, you can either select the same face and the second normal direction (each face has two sides) or choose another face and repeat the process.

Note that the rock blocks generated are not necessarily convex. Also note that the algorithm is sensitive to very small blocks (in the relative sense).

2.2.4 Task 2.1.1 Fracture Sets Analysis

During the quarter, we initiated development of neural network technology for identifying conductive fracture sets.

Background

Neural networks are a sophisticated form of non-linear pattern recognition used in such diverse areas as stock market analysis, loan application screening, disease diagnosis, and medical expert systems (Eberhart & Dobbins, 1990). They are particularly well-suited for problems in which the input and output variables vary spatially and in scale of importance, are of different mathematical types (e.g. class, ordinal, and continuous variables), and are complexly interrelated. Neural networks have found geologic application in a variety of areas including slope stability analysis (Xu et al., 1994), rock and soil mechanics (Ellis et al., 1995; Feng, 1995; Lee et al., 1992), fracture network hydrology (La Pointe et al., 1995; Thomas & La Pointe, 1995), and prediction of earthquake intensity and liquefaction (Goh, 1994; Tung et al., 1994).

There are many types of neural networks, but all share a common architecture consisting of neurons and synapses (Figure 2-15). A neuron is simply a node in the network which uses a non-linear transfer function to convert an input signal (value) to an output signal. Neurons are connected by synapses. A synapse takes the output signal from one neuron, multiplies it by a synaptic weight, and passes the modified signal to an adjacent neuron as input. Depending on the number of incoming and outgoing synapses connected to it, a neuron can be classified into one of three categories:

  1. Input neurons have zero incoming synapses and one or more outgoing synapses. They are used to represent input variables, and take the variable value as their output.
  2. Output neurons have one or more incoming synapses and zero outgoing synapses. They are used to represent output variables, and produce an output signal which equals the predicted variable value.
  3. Hidden neurons have one or more incoming synapses and one or more outgoing synapses. They sit between the input and output neurons and pass signals through the network.

A distinct advantage of neural networks over other classification methods is their ability to learn the relative importance and complex interrelations among input and output variables. By changing the neuron transfer functions, the synaptic weights, or the network connectivity, a neural network can be conditioned to provide the expected response for a given input pattern. Once trained, a neural network can then be used to make predictions for input patterns whose correct classification is unknown.

Fracture conductivity studies may be considered an exercise in discriminant analysis: given a variety of geological and environmental parameters, is a particular fracture likely to be conductive or not? Backpropagation neural networks are well-suited for this purpose. In a backpropagation neural network, the input, hidden, and output nodes are arranged in layers. A single input layer, consisting only of input neurons, is connected to an output layer, consisting only of output neurons, through one or more hidden layers, consisting only of hidden neurons (Figure 2-15). Each neuron in a given layer is connected to all neurons in the preceding and following layers by synapses, which are characterized by their synaptic weight.

As an example, consider a fracture conductivity dataset consisting of the input and output parameters listed in Table 2-2. Of the five input variables, three are continuous, one is of class type, and the remaining one is boolean. The single output parameter is of class type, indicating the fracture set. A backpropagation neural network constructed for this problem would require at least five input nodes, one output node, and perhaps a single hidden layer containing three or so hidden nodes (Figure 2-16).

Table 2-2 Input and Output Parameters for Fracture Conductivity Study

Parameter          Type        Range                          Units
Input Parameters
Strike             Continuous  0 - 360                        degrees
Dip                Continuous  0 - 90                         degrees
Mineralization     Class       Calcite, quartz, epidote, ...  n/a
Aperture           Continuous  greater than or equal to 0     mm
Open (or closed)?  Boolean     true, false                    n/a
Output Parameters
Fracture Set       Class       Set Number                     n/a

In a backpropagation network, the network connectivity and the neuron transfer function are held constant, and network behavior is modified by adjusting synaptic weights. Initial synaptic weights are assigned from a random distribution. The neural network is then presented with a series of training patterns, and an error signal is computed from the difference between the network's output signal and the desired output signal. In an iterative procedure known as back propagation of errors, the synaptic weights connecting each layer are modified so as to reduce the output error. In this way, the network is trained to successfully classify the training data. Any backpropagation network with one or more hidden layers using a non-linear neuron transfer function is capable of learning complex non-linear mappings (Eberhart and Dobbins, 1990). Once trained, a neural network can be used to make predictions for data samples whose output parameters are unknown (e.g. assignment of additional fractures to sets).
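A minimal backpropagation network in the spirit described above can be written in a few dozen lines. The toy training rule below (a fracture belongs to the conductive set when it is open and steeply dipping) is entirely hypothetical, chosen only so the sketch has something to learn:

```python
import math, random

def train_backprop(samples, n_hidden=3, epochs=2000, lr=0.5, seed=1):
    """Minimal one-hidden-layer backpropagation network with sigmoid units.

    samples: list of (inputs, target) pairs with target in [0, 1].
    Returns (predict_function, final_mean_squared_error).
    """
    rng = random.Random(seed)
    n_in = len(samples[0][0])
    # synaptic weights, with a trailing bias weight per neuron
    w_hidden = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                for _ in range(n_hidden)]
    w_out = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
    sig = lambda s: 1.0 / (1.0 + math.exp(-s))

    def forward(x):
        h = [sig(w[-1] + sum(wi * xi for wi, xi in zip(w, x))) for w in w_hidden]
        y = sig(w_out[-1] + sum(wi * hi for wi, hi in zip(w_out, h)))
        return h, y

    for _ in range(epochs):
        for x, t in samples:
            h, y = forward(x)
            # backpropagate the output error through the synaptic weights
            d_out = (t - y) * y * (1.0 - y)
            for i, hi in enumerate(h):          # hidden layer (uses old w_out)
                d_h = d_out * w_out[i] * hi * (1.0 - hi)
                for j, xj in enumerate(x):
                    w_hidden[i][j] += lr * d_h * xj
                w_hidden[i][-1] += lr * d_h
            for i, hi in enumerate(h):          # output layer
                w_out[i] += lr * d_out * hi
            w_out[-1] += lr * d_out

    mse = sum((forward(x)[1] - t) ** 2 for x, t in samples) / len(samples)
    return (lambda x: forward(x)[1]), mse

# Hypothetical patterns: (aperture, open?) -> conductive set membership
data = [((0.1, 0), 0), ((0.9, 0), 0), ((0.1, 1), 0), ((0.9, 1), 1)]
predict, err = train_backprop(data)
print(err, predict((0.9, 1)), predict((0.1, 0)))
```

A production study would of course use one of the dedicated codes listed in Table 2-3 rather than a hand-rolled sketch.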

Additional information can be obtained by examining the synaptic weights of a trained neural network. These weights provide a record of the network's classification strategy and of the input parameters most important for classification. Synaptic weights can be viewed graphically using a Hinton diagram (Figure 2-17), or examined quantitatively by computing the neural network's relation factors. Of these, the simplest is relation factor one, which indicates the strength of the output signal produced by a single neuron.

Software

In addition to commercial neural network software, a variety of freeware packages are readily available for workstations and personal computers. Table 2-3 lists the neural network software which we are currently evaluating for use in this project.

Table 2-3 Neural Network Codes

Code                    Source
NevProp                 University of Nevada, Reno
SNNS                    University of Stuttgart, Germany
Pygmalion               ESPRIT project, Europe
Matrix Backpropagation  University of Genoa, Italy

Conclusions

Neural networks can provide a useful tool for studying the relationships between geologic parameters, particularly for large datasets containing variables of different types which would be difficult or cumbersome to analyze by conventional methods. An advantage of the neural network approach is that the user does not have to know a priori how to classify the data. When presented with a set of training data, the network will derive its own rules. Once trained, the neural network can be used as a predictive tool or examined to reveal the input parameters most important for classification.

2.2.5 Task 4.1.1 Fracture Image Data Acquisition

During the quarter, Marathon collected and processed fracture image data, and provided the data for posting on the WWW server.

2.2.6 Task 4.1.2 Well Testing Data Acquisition

During the quarter, Marathon collected and processed well test and hydraulic response data, and provided the data for posting on the WWW server.

2.2.7 Task 5.1.1 WWW Server Development

During the quarter, Golder Associates developed and implemented the World Wide Web (WWW) Server for the project. Implementation consisted of the following activities:

The WWW server was set up during the quarter, and during the next quarter it will be populated with the project data collected from Marathon and with technologies developed by Golder and MIT.

2.2.8 Task 5.2.1 Progress Reports

The quarterly progress report for the period March 7, 1996 to June 6, 1996 was prepared during the quarter.

2.2.9 Task 6 Management

The following significant management activities were carried out during the quarter:

No project deliverables were prepared or delivered during the quarter.


3.1 Schedule

This section presents revisions to the Schedule and Deliverable list provided with the project work plan. These revisions do not reflect changes to the work scope but are designed to clarify the relationship between tasks and deliverables. The project activities remain on schedule. The revised schedule of deliverables is provided in Table 3-1.

3.2 Milestones and Deliverables

The following project milestones occurred during the quarter:

The following deliverables occurred during the quarter: