
AI and visual decision-making

Commentary by Eirik Larsen, Chief Solutions Officer, Earth Science Analytics
17 January 2022

Eirik Larsen, co-founder and Chief Solutions Officer of Earth Science Analytics, comments on no-code, data-centric artificial intelligence (AI) and why data labelling remains a key piece of the puzzle.


At Earth Science Analytics, we provide AI solutions that aim to enhance the profitability of Exploration and Production (E&P) workflows and the quality of decision-making in the search for oil and gas. Our EarthNET platform focuses on sub-surface petroleum geoscience, a domain in which AI is proving to be the biggest catalyst for technological progress and change in the energy sector. Not only is its current usage directly altering E&P activities; its future adoption will bring numerous further benefits.


AI lends itself perfectly to visual decision-making, and I would go further and say that anything with a visual aspect is a prime candidate for AI. It changes what can be seen and done, as well as how individuals interact with the data. It also reduces the potential for human bias, in turn leading to better decision-making. But while all of this is true, AI still requires human input.


AI can be incredibly effective, but to achieve this success there is still a requirement for subject-matter experts to feed data into the process. AI can work as the legs, but it needs a brain in the form of high-order human decision-making. It is perhaps best viewed as a data engine, in which data is continually updated and models are re-trained to achieve better results. This in turn leads to better training data and new insights, allowing the process to be repeated to achieve even better decision support. And it is here that no-code AI comes into the picture.
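To make the data-engine idea a little more concrete, the sketch below shows a minimal, purely illustrative loop: a model is trained on a small seed of expert labels, the least confident predictions on the remaining pool are sent back to the expert, and the model is retrained on the enlarged label set. The synthetic data, the simulated expert and the choice of classifier are all assumptions made for illustration; this is not a description of how EarthNET implements the loop.

```python
# Illustrative "data engine" loop: retrain, find the weakest predictions,
# have the expert label them, and repeat. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 4))                            # unlabelled feature pool
true_labels = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # stand-in for expert answers

labelled = list(rng.choice(len(X_pool), 20, replace=False))   # small seed of expert labels
for round_no in range(5):
    model = LogisticRegression().fit(X_pool[labelled], true_labels[labelled])
    confidence = np.abs(model.predict_proba(X_pool)[:, 1] - 0.5)
    # Route the least confident samples to the "expert" for labelling
    candidates = [i for i in np.argsort(confidence) if i not in labelled]
    labelled += candidates[:20]
    print(f"round {round_no}: {len(labelled)} labels, "
          f"pool accuracy {model.score(X_pool, true_labels):.2f}")
```

Each pass adds the labels the model needs most, which is the sense in which better labels and better models feed each other.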


No-code AI aims to make AI technology more widely accessible to all users, regardless of their computer-science ability. Its adoption will deliver platforms that require no coding, using visual interfaces for AI model training and prediction instead. But even with this advancement, the injection of knowledge by subject-matter experts via labelling is still required in the quest to deliver accurate models. Labelling of data has always been a large part of the AI process, and it is pleasing to see changes in the sector, with companies working to leverage this requirement alongside existing AI models.


Data labelling is traditionally labour-intensive, but with EarthNET we offer a fully configurable platform that enables teams to interactively create, manage and improve machine learning (ML) training data as quickly as possible. Moreover, the improvement of the training data is assisted by ML models that in turn improve as the data improves.
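One common pattern behind this kind of model-assisted labelling, sketched here purely as an illustration rather than EarthNET code, is to let an existing model draft labels for newly loaded data and queue only the low-confidence samples for a human labeller. The confidence threshold, model and toy data below are assumptions chosen for brevity.

```python
# Hypothetical model-assisted labelling: confident predictions become draft
# labels, uncertain samples are routed to a human for review.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 3))                   # examples already labelled by experts
y_train = (X_train.sum(axis=1) > 0).astype(int)
X_new = rng.normal(size=(50, 3))                      # freshly loaded, unlabelled data

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
confidence = model.predict_proba(X_new).max(axis=1)   # confidence in the best class

draft_labels = model.predict(X_new[confidence >= 0.9])   # suggested by the model
needs_review = X_new[confidence < 0.9]                   # queued for an expert
print(f"{len(draft_labels)} samples pre-labelled, {len(needs_review)} sent for review")
```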


While this is important and a key driver for the successful use of industry technologies, no-code ML is arguably where the industry of the future is going, and where, to my mind, it needs to go. Indeed, the sector is now driving forward application development, along with no-code or low-code models to support it.


At Earth Science Analytics, we are well placed for this change and will continue to work with no-code platforms that scale with cloud technologies. However, at its most basic level, the biggest industry improvement comes from the increased volume of widely diverse, high-precision data being used in the current generation of models.


To get a good model out, good labels must be fed in at the back end. It is also possible to use these labels to provide quality metrics, in turn helping to validate and improve the performance of labellers. Studying the work of a good labeller will help to define what a good labelling platform should look like, create a workflow to 'hand-hold' the less experienced, and provide a useful training tool for those just beginning their career in the industry.
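As a simple illustration of how labels can double as quality metrics, the sketch below scores each labeller against a consensus (here a majority vote) on samples that several people have annotated. The annotator names, classes and data layout are hypothetical; the point is the idea, not EarthNET's actual metric.

```python
# Toy example: score labellers by agreement with the majority-vote consensus.
from collections import Counter

# labels[sample][labeller] = class chosen by that labeller (hypothetical data)
labels = {
    "sample_1": {"ann_a": "sand",  "ann_b": "sand",  "ann_c": "shale"},
    "sample_2": {"ann_a": "shale", "ann_b": "shale", "ann_c": "shale"},
    "sample_3": {"ann_a": "sand",  "ann_b": "shale", "ann_c": "shale"},
}

# Consensus label per sample = majority vote across labellers
consensus = {s: Counter(v.values()).most_common(1)[0][0] for s, v in labels.items()}

# Per-labeller agreement with the consensus as a simple quality metric
scores = {}
for sample, per_labeller in labels.items():
    for labeller, label in per_labeller.items():
        hits, total = scores.get(labeller, (0, 0))
        scores[labeller] = (hits + int(label == consensus[sample]), total + 1)

for labeller, (hits, total) in scores.items():
    print(f"{labeller}: agrees with consensus on {hits}/{total} samples")
```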


Ultimately, end-users don't want to have to label data; they want a model that gives them the answers with little or no human modification. In the future, only models that require a minimum of manual optimization will be acceptable. I don't believe we are there yet, but that's not to say we aren't close.