Big Data and Artificial Intelligence in Oil and Gas

Do the math.

How many pieces of paper do you think would fit into ten terabytes?

In a typical oil and gas digital transformation project, ten terabytes holds the equivalent of 35,000,000 pieces of paper full of text and images. That’s roughly 16,000 boxes of paper. Add another terabyte of subsurface maps and seismic sections, and that’s an additional 15,000 images and 350 boxes of paper. Just the storage for all of those boxes could cost as much as $32,700 per month and nearly $400,000 annually.
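As a back-of-the-envelope check, those figures follow from a few simple rates. In the Python sketch below, the pages-per-box capacity and per-box storage rate are illustrative assumptions chosen to be consistent with the numbers above, not quoted prices.

```python
# Back-of-the-envelope check of the figures above. The pages-per-box
# capacity and per-box storage rate are illustrative assumptions, not
# quoted prices.
PAGES_PER_TB = 3_500_000       # 10 TB ~ 35,000,000 scanned pages
PAGES_PER_BOX = 2_187          # assumed capacity of one records box
COST_PER_BOX_MONTH = 2.00      # assumed offsite storage rate, USD per box per month

well_file_pages = 10 * PAGES_PER_TB                  # 35,000,000 pages
well_file_boxes = well_file_pages / PAGES_PER_BOX    # ~16,000 boxes
map_boxes = 350                                      # 1 TB of maps and seismic sections

total_boxes = well_file_boxes + map_boxes
monthly_cost = total_boxes * COST_PER_BOX_MONTH

print(f"{well_file_pages:,} pages in about {well_file_boxes:,.0f} boxes")
print(f"storage: ${monthly_cost:,.0f}/month, ${monthly_cost * 12:,.0f}/year")
# roughly 16,000 boxes, about $32,700 per month and nearly $400,000 per year
```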

Ten terabytes is also roughly what Katalyst Data Management processes every month as it converts paper to digital data for digital transformation projects, and that number is growing.

Katalyst stores 40 petabytes, or 40,000 terabytes, of data globally. As you can imagine, 40 petabytes is a lot of data. It is roughly equivalent to:

  • 140,000,000,000 pages
  • 60,000,000 TIF images
  • 65,400,000 boxes
  • 75,000 semi-trucks of paper data
  • 10,000 man-years of scan and name time (at 3 seconds/page)

Talk about big data! Consider the fuel cost and manpower involved in moving and storing all of that information. The business value of transforming it into a digital format is tremendous, and well worth the investment. Today, this is a manual effort: somebody actually has to look at the paper data to determine the best course of action, and then scan, sort and classify it.

The oil and gas industry has drilled millions of wells across the globe, and many of these wells have only paper records available. Well data has no retention period, and the paper files from historical wells can hold hidden gems of information. Moving that historical data from paper well files into an accessible, digital well file could save an asset team weeks of analysis time.

What oil and gas companies are facing today is the integration of paper records into digital systems. Data comes in many formats and can be public or proprietary, structured or unstructured. Artificial intelligence can initially help classify this unstructured data into an organized, manageable system. But that’s just the beginning.
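To make that concrete, here is a minimal sketch of how a machine learning classifier can route OCR’d page text into document categories. It uses scikit-learn; the categories and training snippets are hypothetical examples, not Katalyst’s production pipeline, which would also draw on layout and image features.

```python
# Simplified sketch: classify OCR'd document text into well-file categories.
# Categories and sample text are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: (OCR text, document class)
train_text = [
    "daily drilling report depth 8450 ft mud weight 9.2 ppg",
    "gamma ray resistivity log run 3 interval 5000-7500 ft",
    "oil and gas lease agreement between lessor and lessee",
    "directional survey inclination azimuth measured depth",
]
train_labels = ["drilling_report", "well_log", "land_lease", "survey"]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(train_text, train_labels)

# Classify a newly scanned page
new_page = "neutron porosity and density log, logged interval 6200-6900 ft"
print(classifier.predict([new_page])[0])   # likely "well_log" given the shared vocabulary
```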

What exactly is artificial intelligence? A brief history…

The story of artificial intelligence (AI) begins with the definition of Turing and von Neumann machines as universal computers, and with the question of whether such machines could actually become intelligent. AI was beyond the reach of the earliest computers, but around the same time the first autonomous robots were being built. These were analogue robots that used vacuum tubes, or valves, as amplifiers to turn signals from sensors into currents that could drive motors.

In 1951 Marvin Minsky built one of the first neural net machines. IBM ran a simple neural net simulation on an IBM 704 computer in 1956, just before the ‘birth of AI’ Dartmouth Conference. The 1960s saw the invention of semantic networks, a new way of storing data that more closely resembled the real world. Then in 1966 came ELIZA, one of the first chatterbots. The program posed as a ‘therapist’ and understood nothing of the conversation, but it cleverly manipulated the user’s input using early Natural Language Processing (NLP) techniques to come back with appropriate questions or comments.

In the 1980s, expert systems (hand-built rules in rules engines with basic NLP) solved a few real-world problems. In addition, the invention of back propagation allowed multi-layer neural nets to be trained, overcoming a key limitation of earlier networks.

Then in the early 1990s, growing compute power enabled evolutionary algorithms and neural nets to be built at a big enough scale to start making real progress in terms of accuracy. IBM released the first continuous speech recognition system for PCs in 1996. You still had to speak clearly and slowly, but compute power and probabilistic models were helping deal with the fuzziness of the real world.

Fast forward to today: we are in the era of big data and deep learning, and enormous strides have been made in the last 20 years.

Humans versus machines

The performance gap between humans and machines can be bridged by cognitive computing, which overcomes the limitations of both humans and current systems. The human brain can consume and process only a limited amount of information. Being human, we are subject to physical and mental fatigue, and thus are not scalable. In addition, even experienced workers and experts make mistakes and carry biases.

Machines have obvious limitations as well. Traditional technology cannot handle ambiguity, and the traditional paradigm of computing is pre-programmed and rigid: it cannot learn, reason or relate. In addition, traditional machines do not interact in natural language.

Cognitive systems are creating a new partnership between humans and technology, harnessing the strengths of both. Humans excel at common sense, dilemmas, morals, compassion, imagination, abstraction and generalization. Cognitive systems, on the other hand, excel at natural language, pattern identification, locating knowledge, machine learning, objectivity and endless capacity.

There are three capabilities that differentiate cognitive systems from traditionally programmed computing systems. Cognitive systems understand information the way humans do, along with the underlying ideas and concepts. They reason about that information. And finally, they never stop learning. As a technology, this means the system actually gets more valuable with time and develops an “expertise.”

How can oil & gas utilize machine learning and AI?

So why undertake the effort to adopt artificial intelligence? Considering the cost involved, “because everyone else is doing it” is hardly a good enough reason.

Oil and gas companies have a lot of data to work with and tremendous opportunity to realize the value of machine learning and AI in various areas of their operations. Katalyst is working with companies like IBM to bring AI tools to “dark data”, helping the industry harness this data to improve business practices. With quality data behind it, there is no limit to what artificial intelligence can accomplish for the oil and gas industry.

For example, AI tools can help reduce the time spent on manual inspection, extraction and documentation of objects on engineering drawings for bid analysis. Geoscientists can find the “sweet spot” more efficiently with an AI tool that takes existing information from nearby wells, applies machine learning and produces a 3D model with a production heat map.
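As a rough illustration of the sweet-spot idea (a sketch on assumed data, not a description of any vendor’s actual tool), the snippet below fits a gradient-boosted regression to hypothetical offset-well attributes and predicts production across a map grid, which could then be rendered as a heat map. A real workflow would work in 3D and with far richer inputs.

```python
# Illustrative only: predict a production "heat map" from nearby wells.
# Well attributes and values are made up for the example.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical offset wells: x, y (km), net pay (m), porosity (fraction)
X = np.array([
    [0.5, 1.0, 12.0, 0.14],
    [2.0, 0.8, 18.0, 0.17],
    [3.1, 2.5,  9.0, 0.11],
    [1.2, 3.0, 15.0, 0.16],
    [2.8, 1.9, 20.0, 0.18],
])
y = np.array([95.0, 160.0, 60.0, 130.0, 180.0])   # e.g. 90-day production, Mbbl

model = GradientBoostingRegressor(n_estimators=200, max_depth=2)
model.fit(X, y)

# Predict on a map grid, using simple stand-in trend surfaces for pay and porosity
xs, ys = np.meshgrid(np.linspace(0, 4, 40), np.linspace(0, 4, 40))
pay = 10 + 2.5 * xs
phi = 0.10 + 0.02 * ys
grid = np.column_stack([xs.ravel(), ys.ravel(), pay.ravel(), phi.ravel()])
heat = model.predict(grid).reshape(xs.shape)   # values to plot as a heat map
print(heat.shape)                              # (40, 40) grid of predicted production
```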

Workers in the oil and gas industry are routinely exposed to hazardous situations at work sites, and AI tools can provide a better understanding of how risk manifests at the field level, giving visibility into potential work-related risks.

Significant opportunities exist for cost reductions and revenue generation by prioritizing the reactivation of wells, but realizing those opportunities is limited by the human ability to analyze and act on large amounts of data. Artificial intelligence can increase shut-in well analysis throughput through automation and can also improve the success rate through machine learning.
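One simplified way to automate that triage, again a sketch on made-up data rather than any particular product, is to train a classifier on historical reactivations and rank shut-in candidates by their predicted probability of success.

```python
# Sketch: rank shut-in wells by predicted reactivation success.
# Features, outcomes and well names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Historical reactivations: [years shut in, last rate (bbl/d), water cut, workover cost ($k)]
X_hist = np.array([
    [2, 40, 0.60, 120],
    [8,  5, 0.95, 300],
    [1, 75, 0.40,  90],
    [5, 20, 0.80, 200],
    [3, 55, 0.55, 150],
    [10, 8, 0.90, 280],
])
y_hist = np.array([1, 0, 1, 0, 1, 0])   # 1 = reactivation paid out

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_hist, y_hist)

# Candidate shut-in wells to triage
candidates = {
    "WELL-A": [2, 60, 0.50, 110],
    "WELL-B": [7, 10, 0.92, 260],
    "WELL-C": [4, 35, 0.70, 180],
}
scores = model.predict_proba(np.array(list(candidates.values())))[:, 1]
ranked = sorted(zip(candidates, scores), key=lambda kv: kv[1], reverse=True)
for name, p in ranked:
    print(f"{name}: estimated success probability {p:.2f}")
```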

In addition, there is constant pressure to reduce operational costs, but organizations have limited insight into their power usage and emissions. Artificial intelligence helps companies better understand power consumption and the cost of operations, optimize asset operations, minimize consumption and reduce greenhouse gas emissions.

It’s all about the data.

The bottom line is that “AIs are only as good as the data we train them with.”

With artificial intelligence integration, analysis of a new area will take a fraction of the time it currently takes a human counterpart to complete. To realize AI’s potential for the E&P industry, seismic and well data domain expertise is crucial to the data management process. Katalyst has experience in the cleanup and movement of very large, unstructured datasets, with the goal of getting data into the iGlass data management solution, a cloud-based ESRI map platform for accessing data.

When reading seismic data from tape or scanning paper documents, the absolute focus is on maintaining the data integrity of the asset by capturing data from the original source. Using a set of smart data quality tools to capture existing catalogue and index information from various sources, Katalyst maps, cleans and matches metadata to unique IDs within the navigation database.
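As a simplified sketch of that kind of metadata mapping, catalogue entries can be normalized and joined to a master navigation table on a unique well identifier. The column names, UWI values and matching rule below are assumptions for the example, not the actual iGlass schema.

```python
# Sketch: map scanned-document catalogue entries to unique well IDs.
# Column names and UWI values are assumptions, not the iGlass schema.
import pandas as pd

# Catalogue captured from legacy indexes (names as they appear on boxes and tapes)
catalogue = pd.DataFrame({
    "doc_id": ["D001", "D002", "D003"],
    "well_name": [" smith #1 ", "JONES 2-A", "Smith No. 1"],
    "doc_type": ["well log", "seismic section", "drilling report"],
})

# Master navigation database keyed by a unique well identifier
navigation = pd.DataFrame({
    "uwi": ["42-123-00001", "42-123-00002"],
    "well_name": ["SMITH 1", "JONES 2A"],
})

def normalize(name: pd.Series) -> pd.Series:
    """Uppercase, strip punctuation and filler tokens so names can be matched."""
    return (name.str.upper()
                .str.replace(r"\bNO\.\s*|#", "", regex=True)
                .str.replace(r"[^A-Z0-9 ]", "", regex=True)
                .str.replace(r"\s+", " ", regex=True)
                .str.strip())

catalogue["match_key"] = normalize(catalogue["well_name"])
navigation["match_key"] = normalize(navigation["well_name"])

matched = catalogue.merge(navigation[["uwi", "match_key"]], on="match_key", how="left")
print(matched[["doc_id", "well_name", "uwi"]])
```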

Focusing on data integrity provides a solid foundation for big data and artificial intelligence in oil and gas. For more information on how you can get ready for AI with Katalyst’s data management services, please contact us.
