Part 2: Musings from a Data-Science Convention

This is part two in a three-part series by Matt Bell. To start at the beginning, click here.

Data as an Asset

Shell’s Hohl went a step further, explaining how his company has decided to make much of its work on artificial intelligence public to help accelerate the industry’s adoption rate. In his opinion, workflows and algorithms should no longer be treated as trade secrets – data itself is the primary asset. More on data sharing in just a moment.

This point was reinforced by Sunny Haroon, CEO of AlphaX, who argued that the most successful AI implementations are empowered by open-source components because “no single company can innovate fast enough.”

Uber – undoubtedly one of the most advanced AI practitioners in the B2C world – is taking the open proliferation of data science to another level. Franziska Bell, Head of Uber’s Data Science Platform, described her team’s efforts to “transform anyone at Uber into a data scientist at the touch of a button.” Her compelling presentation described how a team of over 1,000 data scientists is creating AI toolkits and reusable AI modules that can be widely redeployed by non-data-scientists across the organization.

Oil and gas companies have a long way to go before making such a commitment to embedding data science, but openly and widely sharing our data science successes – and, just as valuably, our failures – is one practice we would do well to emulate.

But what about sharing the data itself? If, as Shahri observed, 85% of the problem lies with the data and not the data science, and not everyone possesses an adequate collection of data assets, surely companies must work together to find solutions?

Preston Cody, Head of the Analytics Lab at Wood Mackenzie, described a litany of data-related issues – from insufficient quantity and access difficulties to poor data quality and selection biases. He proposed the formation of “data consortia,” which he was quick to stress is not a new concept but one that has “only met with limited success in oil and gas.” 

He explained that such consortia must have clear boundaries, an independent data manager, and employ a “give to get” business model if they are to succeed. He also stressed the importance of data security so that members “don’t have to worry about where it came from or where it will go.”

All of this resonated strongly with me, since Premier Oilfield Group manages several multi-company projects that utilize what we like to call “Shared Data Workspaces.” Our member companies – who form a steering committee to decide the technical scope of work undertaken by each group – contribute massive amounts of data of different types into a secure workspace. There, we perform the necessary cleaning, aggregation, and analysis to produce integrated, inherently more robust models that the members can take away and use to make more effective field development decisions.

We’re also constructing a unique database of rock properties by performing consistent analyses on our proprietary sample collection that represents over 200,000 wells. Anyone can view the sample collection and data availability at datastak.pofg.com, and we offer a variety of subscription models for those wanting to make use of the data or request new analyses. We’re adding new samples daily, thanks to donations from companies wanting to safeguard their rock samples for future analysis, and we will gladly help operators minimize cost by taking their cuttings samples into storage.

Want to keep reading? This is the second in a three-part series. To read part three, click here.
