Every year, oil and gas companies invest millions of dollars to better understand the future outlook of their assets, because traditional approaches to reservoir management are costly and time-consuming. Asset teams can need months, or in some cases years, to integrate new data from recent seismic surveys, newly drilled wells, or production operations.
Why Traditional Reservoir Modeling Is Costly
Traditional reservoir models often lack the work processes and software tools needed in the age of big data and analytics. Their typical shortcomings include:
1. Models That Are Often Tailor-Fitted
Such models are usually built on data that is already outdated by the time the model is finished, which limits their ability to accommodate later developments. These include the introduction of new wells or dynamic data conditioning (history matching) through petrophysical properties and box multipliers.
2. Dynamic and Static Data Conditioning
Static and dynamic data conditioning are often treated as disconnected processes. Communication between the parallel disciplines is therefore limited, resulting in inconsistencies in the reservoir models.
3. A Single Base-Case Model
A cornerstone of most reservoir modeling projects is the generation of a single ‘base-case’ model. Much time and effort are therefore lost debating modeling approaches and the respective interpretations of the data.
4. Uncertainties in Data Interpretation
Where interpretation is uncertain, uncertainty quantification is also required. In practice, the inherent uncertainties of a study are handled as an ad-hoc exercise, so the uncertainties in modeling and data interpretation aren’t effectively captured or propagated.
For all of these reasons, many reservoir modeling projects fail: they don’t deliver results or meet their initial deadlines. Traditional reservoir models also become extremely difficult to update when new oil and gas field data flows in.
The reliability of traditional reservoir models should therefore be questioned, especially when large discrepancies appear between reported reserves and actual production levels.
Enter Closed-Loop Frameworks for Reservoir Management
Closed-loop reservoir management combines data assimilation with model-based optimization, which is why it is also known as real-time or smart reservoir management. Its aim is to maximize reservoir performance in terms of both recovery and financial measures.
Oil and gas companies see these results over the complete lifecycle of the reservoir because its management becomes a near-continuous process. Closed-loop frameworks borrow measurement and data-assimilation techniques from fields such as oceanography and meteorology.
As new seismic or geological information and production measurements become available, oil and gas companies use data-assimilation algorithms to update their reservoir models, including the estimated pressures and saturations of the reservoir. With these estimates updated in near real time, water-injection rates and well trajectories can be adjusted to improve the reservoir's expected net present value.
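As a rough sketch of the assimilation step, the following is a minimal ensemble Kalman filter update in Python. The linear "simulator" H, the ensemble size, and the measurement noise level are illustrative assumptions, not details from the text; a real workflow would run a reservoir simulator in place of H.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, observe, d_obs, obs_std):
    """One ensemble Kalman filter analysis step.

    ensemble : (n_members, n_state) array of model states
               (e.g. gridblock pressures, saturations, permeabilities)
    observe  : maps a state vector to predicted measurements
    d_obs    : observed data (e.g. well rates or pressures)
    obs_std  : measurement noise standard deviation (assumed scalar)
    """
    n = ensemble.shape[0]
    # Predicted data for each ensemble member
    D = np.array([observe(x) for x in ensemble])          # (n, m)
    X = ensemble - ensemble.mean(axis=0)                  # state anomalies
    Y = D - D.mean(axis=0)                                # data anomalies
    # Kalman gain built from the ensemble cross-covariances
    C_xy = X.T @ Y / (n - 1)
    C_yy = Y.T @ Y / (n - 1) + obs_std**2 * np.eye(len(d_obs))
    K = C_xy @ np.linalg.inv(C_yy)
    # Perturbed observations keep the posterior ensemble spread consistent
    perturbed = d_obs + rng.normal(0.0, obs_std, size=(n, len(d_obs)))
    return ensemble + (perturbed - D) @ K.T

# Toy demo: a 50-member ensemble of a 2-parameter state, observed through
# a linear "simulator" H; the update pulls the ensemble toward the data.
H = np.array([[1.0, 0.5]])
truth = np.array([2.0, 1.0])
d = H @ truth
prior = rng.normal(0.0, 1.0, size=(50, 2))
posterior = enkf_update(prior, lambda x: H @ x, d, obs_std=0.1)
```

In a closed-loop framework this analysis step would alternate with a control-optimization step each time new measurements arrive.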
As a result, oil and gas companies can achieve more uniform oil production and better recovery of the available resources. Prototype simulations of closed-loop frameworks have shown recovery increases of around 10% over traditional operations.
The Scope for Closed-Loop Water Flooding
To illustrate the scope of this form of reservoir management, consider a numerical example of closed-loop water flooding driven by real-time production data.
In a 12-well waterflood, optimization can be performed with a reservoir simulator that supports adjoint-based optimization under rate and pressure constraints. Data assimilation can be performed with Kalman filtering, with the optimization frequency set to one cycle every four years. In this configuration, the operator achieved an improvement in NPV (net present value) of 6.68%.
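The NPV objective that such an adjoint-based optimizer maximizes can be sketched as a discounted cash-flow sum over the control steps. The prices, costs, discount rate, and rate schedules below are illustrative assumptions only and are unrelated to the 6.68% figure above.

```python
import numpy as np

def waterflood_npv(q_oil, q_water_prod, q_water_inj, dt_days,
                   oil_price=80.0, water_prod_cost=6.0,
                   water_inj_cost=2.0, discount_rate=0.10):
    """Discounted net present value of a waterflood.

    q_oil, q_water_prod, q_water_inj : per-step field rates (bbl/day)
    dt_days : length of each control step in days
    Prices and costs in $/bbl are illustrative assumptions.
    """
    q_oil = np.asarray(q_oil, float)
    q_wp = np.asarray(q_water_prod, float)
    q_wi = np.asarray(q_water_inj, float)
    dt = np.asarray(dt_days, float)
    # Cash flow per control step: oil revenue minus water handling costs
    cash = (oil_price * q_oil - water_prod_cost * q_wp
            - water_inj_cost * q_wi) * dt
    # Discount each step at its elapsed time in years
    t_years = np.cumsum(dt) / 365.0
    return float(np.sum(cash / (1.0 + discount_rate) ** t_years))

# Two candidate control strategies over four yearly steps: the
# "optimized" schedule keeps oil rates higher and water cut lower.
dt = [365.0] * 4
npv_base = waterflood_npv([1000, 800, 600, 400],
                          [200, 400, 600, 800],
                          [1200] * 4, dt)
npv_opt = waterflood_npv([1000, 900, 800, 700],
                         [200, 300, 400, 500],
                         [1200] * 4, dt)
```

An adjoint-based optimizer computes the gradient of this objective with respect to every well control at every step in a single backward simulation, which is what makes rate optimization over long horizons tractable.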
Closed-Loop Frameworks Improve Big Data and Analytics
This result was found to be 0.15% higher in NPV than that of a 30-day cycle with open-loop optimization. Closed-loop frameworks also ease the data-assimilation problem because they involve far fewer control variables. This allows the long-term performance of a reservoir to be optimized while operators retain the freedom to optimize short-term production.
Its Effect on the Oil and Gas Industry
Closed-loop frameworks for reservoir management aim to increase the overall performance of reservoirs, in terms of both financial and recovery measures. The idea behind this management technique has been around for quite some time and in many different forms, often surfacing as attempts to improve reservoir characterization from a geoscientific perspective.
Real-time or closed-loop approaches to the production of hydrocarbons have been receiving growing attention as part of many initiatives in the industry. These include initiatives that came to be known as integrated operations, self-learning reservoir management, e-fields, i-fields, or even smart fields.
The main focus of these initiatives was to optimize short-term production rather than the longer timescale of lifecycle optimization. There have also been reservoir optimization efforts based entirely on numerical simulation combined with frequent data-assimilation updates.
With the help of closed-loop frameworks, subsurface teams can meet their deadlines, and oil and gas companies can leverage the latest computing advances to deliver on their promises.
Oil and gas operations are commonly found in remote locations far from company headquarters. Now, it's possible to monitor pump operations, collate and analyze seismic data, and track employees around the world from almost anywhere. Whether employees are in the office or in the field, the internet and related applications enable a greater multidirectional flow of information – and control – than ever before.