Deep Water Deposition

SEPM Deepwater Research Group
Annual Meeting Report

“Advances in Data Analysis and Reservoir Modeling Approaches”

Organizers:

● Larissa Hansen (University of Leeds)
● Zane Jobe (Colorado School of Mines)

Panelists (alphabetical):
● Liz Hajek (Penn State)
● Michael Pyrcz (University of Texas at Austin)
● Katy Sementelli (Hess)
● Lisa Stright (Colorado State)

Topics for general discussion

1. Lisa Stright: How do we get analog data into reservoir models in a meaningful way? What do we need? What do we currently get? And how do we capture the significant remainder?
a. How do we decide what matters in models, and how do we deal with issues of scale? What is practical? What is predictive? Using models for hypothesis-based testing to learn more.
b. Outcrops, modern seafloor systems, experiments, numerical models
2. Katy Sementelli: There are many turbidite unconventional plays (e.g., Permian) – can we model these plays effectively? What is going wrong?
a. Perhaps also discuss the role of turbidites in ‘shallow marine’ mudrocks (e.g., Eagle Ford, Bakken, etc.).
b. What should we model in unconventionals? Is it just a statistical play? Is it just about production data-driven approaches? Is it just fracture networks?
3. Michael Pyrcz: We know we need to capture impactful heterogeneities and their geometries for flow forecasting - how do we best impose the critical heterogeneities in our reservoir models? How do we decide which heterogeneities (scales and types) we need, considering project and uncertainty modeling constraints?
4. Liz Hajek: Should we be focusing more on sand geometries/lengths, or on shale architecture/lengths?
a. Either way, a tighter link needs to be made between modeling and data: collect the data that can be fed into models and, vice versa, write algorithms that can honor the data collected.

SUMMARY OF THE CONVERSATION:
1. Detail in modeling – what is important?
a. Fit for Purpose: What is the subsurface question, answer, or decision we need to support with the model? The answer will be sensitive to the fluid type, the drive mechanism, and the time interval and volume of interest.
b. Level of Detail: It is very difficult to know a priori what level of detail is required. Two approaches:
i. It is generally better to err on the side of too much detail: detail can be removed from models, but not added. Look-backs and flow-relevance studies may help.
ii. Test detail in bed-scale models to understand what bed-scale detail matters and when it matters, then build larger-scale models with this foundational understanding at hand (a minimal flow-relevance sketch follows below).
c. Analogs are essential, but there is no perfect analog (outcrop, modern system, physical/numerical model, etc.). We need a robust way to compare analogs with one another to assess how representative each is of the ‘norm’, and to agree on and/or define a hierarchical schema and terminology for adequate comparison.
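As a minimal sketch of the flow-relevance testing in 1.b.ii, the Python snippet below shows why thin shale beds can matter enormously for flow across bedding while barely affecting flow along it. All bed counts, thicknesses, and permeabilities are assumed, illustrative values:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical bed-scale stack: ~10% thin shale beds in sand (assumed values)
n_beds = 100
is_shale = rng.random(n_beds) < 0.10
thickness = rng.uniform(0.2, 1.0, n_beds)   # bed thickness, m (assumed)
perm = np.where(is_shale, 0.001, 100.0)     # permeability, mD (assumed)

# Effective permeability of the layered stack:
#   flow parallel to bedding -> thickness-weighted arithmetic average
#   flow across bedding      -> thickness-weighted harmonic average
k_parallel = np.average(perm, weights=thickness)
k_across = thickness.sum() / (thickness / perm).sum()

print(f"k parallel to beds: {k_parallel:8.2f} mD")
print(f"k across beds:      {k_across:8.4f} mD")
print(f"kv/kh anisotropy:   {k_across / k_parallel:.1e}")
```

Even with only ~10% shale beds, the harmonic (across-bedding) effective permeability collapses by orders of magnitude relative to the arithmetic (along-bedding) average; quick tests like this help decide which bed-scale detail must survive into larger-scale models.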
2. Uncertainty
a. Geoscientists should seek out and quantify uncertainty in their information sources. When using outcrop analogs, the uncertainty of input parameters should be quantified and discussed while still in the field.
b. Uncertainty must be propagated through reservoir modeling workflows (see the sketch below)
c. Must always consider a suite of model scenarios
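As a minimal sketch of what points 2b and 2c can look like in practice, the Monte Carlo example below pushes assumed input distributions (all values hypothetical) through a simple volumetric equation; a real workflow would propagate them through the full modeling chain in the same spirit:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n = 100_000  # Monte Carlo realizations

# Assumed, illustrative input distributions (in practice these come from
# data, analogs, and the field discussions in 2a)
grv = rng.lognormal(mean=np.log(5e8), sigma=0.3, size=n)  # gross rock volume, m^3
ntg = rng.triangular(0.4, 0.6, 0.8, size=n)               # net-to-gross
phi = rng.normal(0.22, 0.03, size=n).clip(0.05, 0.35)     # porosity
sw  = rng.triangular(0.2, 0.3, 0.5, size=n)               # water saturation
bo  = 1.2                                                 # formation volume factor

# Propagate every input draw through the volumetric equation
stoiip = grv * ntg * phi * (1.0 - sw) / bo  # stock-tank oil in place, m^3

# Oilfield convention: P10 is the high case, P90 the low case
p90, p50, p10 = np.percentile(stoiip, [10, 50, 90])
print(f"STOIIP P90 / P50 / P10: {p90:.2e} / {p50:.2e} / {p10:.2e} m^3")
```

Reporting P90/P50/P10 ranges rather than a single number keeps the suite of scenarios visible to decision makers.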
3. Modeling algorithms and interfacing with engineers:
a. Geologists and engineers can speak the same language; it just takes a bit of effort to learn the lingo. Modeling should not be done in isolation and will be more effective when approached by an interdisciplinary team
b. Hierarchical modeling can be a very useful tool (see the sketch below)
c. Computers are extremely powerful now, so we should no longer be constrained by computing power. Can we build a better simulator? Can we move away from voxel models to unstructured-mesh and/or finite-element models? Is open source the way to go? And should we make these moves at all, i.e., what does this technology add to model predictivity?
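One narrow but concrete flavor of hierarchical modeling is nesting spatial structures at different scales. The 1-D geostatistical sketch below draws one unconditional Gaussian realization from a nested covariance model: a long-range ‘complex-scale’ structure plus a short-range ‘bed-scale’ structure. The grid spacing, sills, and correlation ranges are all assumed values:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# 1-D grid along a section (spacing and length are assumed)
x = np.arange(0.0, 500.0, 1.0)           # position, m
h = np.abs(x[:, None] - x[None, :])      # pairwise lag distances

def exp_cov(h, sill, corr_range):
    """Exponential covariance structure with a practical range."""
    return sill * np.exp(-3.0 * h / corr_range)

# Nested structures: a long-range 'complex-scale' component plus a
# short-range 'bed-scale' component (sills and ranges are assumed)
cov = exp_cov(h, sill=0.7, corr_range=200.0) + exp_cov(h, sill=0.3, corr_range=15.0)
cov += 1e-8 * np.eye(x.size)             # jitter for numerical stability

# One unconditional realization: if cov = L L^T, then L z is N(0, cov)
L = np.linalg.cholesky(cov)
realization = L @ rng.standard_normal(x.size)
print(realization[:5])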
4. Unconventionals:
a. How much rock are we actually accessing per well? In many cases, we don’t know…
b. Physical sedimentology is important, but how do we integrate with biogeochemical data, maturity, geomechanics, etc.?
c. As a community, how do we share data (i.e. open source) to allow faster/better understanding of processes/geometries/important heterogeneities?
d. Should we be approaching modeling differently? The ‘statistical play’ concept uses initial production (IP) and estimated ultimate recovery (EUR) volumes as model input distributions rather than traditional inputs like porosity/permeability, facies, etc. (sketched below)
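As a minimal sketch of the statistical-play framing, per-well EUR is treated directly as the input distribution and forecasts are built by sampling it; the lognormal parameters and well counts below are assumed, illustrative values:

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Assumed per-well EUR distribution for the play (parameters illustrative),
# inferred directly from offset-well production rather than poroperm/facies
log_mean, log_sigma = np.log(300.0), 0.6   # EUR in MBO per well
eur = rng.lognormal(log_mean, log_sigma, size=50_000)

p90, p50, p10 = np.percentile(eur, [10, 50, 90])  # P10 = high case
print(f"Per-well EUR P90/P50/P10: {p90:.0f} / {p50:.0f} / {p10:.0f} MBO")

# Program-level forecast: sum 100 wells per realization; aggregating many
# wells narrows the *relative* uncertainty (the portfolio effect)
program = rng.lognormal(log_mean, log_sigma, size=(5_000, 100)).sum(axis=1)
p90p, p50p, p10p = np.percentile(program, [10, 50, 90])
print(f"100-well program P90/P50/P10: {p90p:.0f} / {p50p:.0f} / {p10p:.0f} MBO")
```

Note how the 100-well aggregate shows much narrower relative uncertainty than a single well; this portfolio effect is part of what makes the statistical-play framing attractive.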
5. Process vs geometry
a. There should be a balance between going out and measuring things and using existing data to ask the ‘why’ questions (ideally, the two form an iterative loop)
b. Can we use boundary conditions to enable rapid, probabilistic/stochastic models, or can deterministic models give us a better answer? Full-physics models are great and important for understanding process, but they are only one of many possible solutions (and they take a long time to run). There needs to be a balance here, depending on the question being asked
c. As a community, we need to develop rigorous metrics to measure and characterize things, similar to the MTD (mass-transport deposit) community and the many lab-based communities that have standard measurement and reporting metrics (a small example follows below)
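As a small illustration of standardized reporting (all measurements below are hypothetical), the sketch computes the same summary statistics, count, median, and P10-P90 range, for channel-element width, thickness, and aspect ratio; agreeing on even a short template like this would let analog datasets be pooled and compared quantitatively:

```python
import numpy as np

# Hypothetical channel-element measurements from one outcrop study
# (widths and thicknesses in meters; all values illustrative)
width = np.array([220.0, 310.0, 180.0, 450.0, 260.0, 390.0])
thickness = np.array([6.5, 8.0, 5.2, 11.0, 7.1, 9.4])
aspect = width / thickness  # standard width:thickness aspect ratio

# Report the same summary statistics for every quantity so that
# different studies can be pooled and compared directly
for name, v in [("width (m)", width),
                ("thickness (m)", thickness),
                ("aspect ratio", aspect)]:
    print(f"{name:>13}: n={v.size}, median={np.median(v):.1f}, "
          f"P10-P90 = {np.percentile(v, 10):.1f}-{np.percentile(v, 90):.1f}")
```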
