INNOVATION

Debunking five myths about AI in continuous-process operations

Much hype surrounds the question of how mining companies can best use artificial intelligence (AI) to improve continuous-process operations. The possibilities are decidedly alluring.

Joakim Kalvenes, JT Clark, Rohin Wood, Matías Raby*

Consider a copper mine that uses a machine learning model to optimise its flotation process, resulting in a 3% increase in its copper-recovery rate. This improvement - far greater than the 1% increase already considered impressive in the industry - could translate into tens of millions of pounds in added revenue per year.

What's more, the AI solution required no new capital expenditure. And even as it boosted recovery, it lowered costs by enabling the mine to use smaller volumes of materials such as reagents and lime. The payback period for the mine's investment in the model? Just two months after the model's completion.   

What explains this success? We think the deciding factor was that the mine's managers and operators avoided falling prey to common myths about how to use AI. Many mining companies interested in exploring AI are unsure of where to start. Often, executives assume that, to reap AI's benefits, they need only hire some data scientists to develop machine learning models that incorporate process-related data.

They're wrong. To capture AI's full benefits - while avoiding costly mistakes - miners must combine AI technology with knowledge of physical processes from on-site human operators and subject matter experts. Indeed, in our work with clients, we've identified five common myths that can cause unwary miners to develop AI models that backfire. By taking a hard look at these myths, mining companies can avoid falling victim to them, and thereby unlock new value from their operations.

MYTH #1: AI models don't need human insight to generate useful process-improvement recommendations.

REALITY: If models ignore insights on physical relationships from science, engineering and human operators, they learn the wrong thing--and deliver faulty recommendations.

Machine learning algorithms cannot diagnose cause and effect from raw data alone; only human experts can do so. To see the danger of falling victim to this myth, consider the copper flotation-recovery process: the more air is introduced through a frothing agent, the more foam is produced, and thus the more copper can be recovered. One mining company developed a model for this process. When analysts assessed the model's output, they saw that it correlated high froth velocity with poor copper recovery. This correlation made little sense, because it contradicts common knowledge of how the physical process works, and it left managers wondering what froth velocity should be to improve recovery.

When operators were asked about this counterintuitive reading, they explained that gold-hued froth in a flotation tank indicates a higher concentration of copper from higher-grade ore, while grey froth indicates low-grade ore. Had their knowledge of the meaning of grey versus golden froth been incorporated into the model, the correlation it delivered would have been accurate and thus useful. But because the operators' expertise was not built into the algorithm, the model made a faulty cause-and-effect connection: the poor recovery wasn't caused by an incorrect froth-velocity setting; it was caused by low-grade material travelling through the process in the first place.
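The froth-velocity pitfall can be reproduced on synthetic data. The sketch below is purely illustrative - every number and variable is invented - but it shows the mechanism: when ore grade (the confounder the operators pointed to) is left out of a regression, the apparent effect of froth velocity on recovery comes out with the wrong sign; including grade restores the true, positive effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Invented toy data: low-grade ore (the confounder) both lowers recovery
# and raises froth velocity, mimicking the situation the operators described.
grade = rng.uniform(0.0, 1.0, n)             # ore grade (what froth colour reveals)
velocity = -grade + rng.normal(0, 0.1, n)    # froth velocity, driven by grade
recovery = 2.0 * grade + 0.5 * velocity + rng.normal(0, 0.1, n)

def ols(columns, y):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Model 1: velocity only -> spurious negative coefficient (the faulty model).
coef_naive = ols([velocity], recovery)[1]

# Model 2: velocity plus grade -> recovers the true positive effect (~ +0.5).
coef_adjusted = ols([velocity, grade], recovery)[1]

print(f"velocity effect without grade: {coef_naive:+.2f}")   # negative (wrong)
print(f"velocity effect with grade:    {coef_adjusted:+.2f}")  # positive (right)
```

The same data, with one expert-supplied variable added, flips the model's conclusion - which is the practical cost of leaving operators out of the loop.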

MYTH #2: AI is a replacement for a company's existing, classical control systems.

REALITY: AI and control systems are generally complementary, working together (with the operator) to unlock value.

Many managers assume that once they activate an AI model, it will not only deliver recommendations for process-control settings; it will also achieve and maintain those settings. However, AI models and control systems are generally complementary--because they typically operate on different time scales. Models operate with a typical prediction horizon of 15 minutes to 1 hour, as the material entering the process takes this long to undergo significant change. Process controls operate on a second-by-second time scale. For example, a model will recommend a specific froth velocity to optimise recovery from the current material entering the plant. The control system will manage pump-flow rates to achieve this froth velocity.

While an AI model can recommend a set point, it's the operator who implements that set point using the control system at hand. And it's the control system that maintains it--whether or not the model has provided a recommendation. AI models and control systems are therefore both needed, and they must work together.
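The division of labour between the two layers can be sketched in a few lines. This is a toy simulation with invented numbers - the `ai_recommend_setpoint` stub and the proportional gain are assumptions, not a real plant model - but it shows the structure: a slow recommendation loop on a minutes-scale horizon, and a fast control loop that tracks whatever set point is current.

```python
# Toy two-timescale sketch (all values invented): an AI layer recommends a
# froth-velocity set point per feed interval; a simple proportional controller
# tracks that set point second by second. Neither layer replaces the other.

def ai_recommend_setpoint(ore_grade: float) -> float:
    """Stand-in for the ML model: choose a set point for the current feed."""
    return 1.2 if ore_grade > 0.5 else 0.8   # hypothetical mapping

def run_interval(setpoint: float, velocity: float, seconds: int = 900) -> float:
    """Fast control loop: nudge the process toward the set point each second."""
    gain = 0.05                               # proportional gain (assumed)
    for _ in range(seconds):
        velocity += gain * (setpoint - velocity)
    return velocity

velocity = 1.0
for ore_grade in (0.7, 0.3):                  # feed character changes ~every 15 min
    target = ai_recommend_setpoint(ore_grade)
    velocity = run_interval(target, velocity)
    print(f"grade {ore_grade:.1f}: set point {target:.1f}, achieved {velocity:.3f}")
```

In a real plant the inner loop would be the existing DCS/PID layer, not Python; the point of the sketch is only that the model chooses targets while the controller holds them.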

MYTH #3: AI is useful only if it can accurately predict the future.

REALITY: The ability to deliver high-quality recommendations based on understanding of physics is what makes AI most useful.

Many data scientists tout ‘great’ R-squared values and small mean absolute percentage error (MAPE) as measures of a model's prediction quality. But these measures don't point to what mining companies really need: recommendations that help them optimise factors they can control, such as level settings for a continuous process.

Going back to the flotation-recovery example, operators want to know what the froth velocity, reagent dosage, lime addition or air flow should be to maximise copper recovery. That question can't be answered through a prediction model--even one that's impressively accurate.

It's far more important that a model's recommendations accurately reflect the right physical relationships, even if the model exhibits less-than-perfect prediction accuracy. Only then can it deliver useful recommendations, such as what the impact on mineral recovery will be if operators change the settings on factors they can control.
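A small synthetic example makes the distinction concrete. Everything below is invented for illustration: ‘dose’ stands for a controllable reagent setting, and ‘colour’ for a downstream froth-colour reading that merely mirrors recovery. The colour model scores a near-perfect R-squared yet offers no lever to pull; the dose model scores far worse but yields the one number operators can act on, the effect of changing the setting.

```python
import random

random.seed(1)
n = 2000

# Invented data: 'dose' is a controllable setting that truly drives recovery
# (slope 2.0 plus noise); 'colour' is a downstream reading that echoes recovery.
dose = [random.random() for _ in range(n)]
recovery = [5.0 + 2.0 * d + random.gauss(0, 1.0) for d in dose]
colour = [r + random.gauss(0, 0.1) for r in recovery]

def ols_r2(x, y):
    """Slope and R-squared of a one-variable least-squares fit."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, 1.0 - ss_res / ss_tot

slope_dose, r2_dose = ols_r2(dose, recovery)   # modest R², actionable slope
slope_col, r2_col = ols_r2(colour, recovery)   # near-perfect R², nothing to act on
print(f"dose model:   R2={r2_dose:.2f}, d(recovery)/d(dose)={slope_dose:.2f}")
print(f"colour model: R2={r2_col:.2f} (accurate, but colour is not a setting)")
```

A model judged purely on prediction accuracy would prefer the colour model - and be useless for optimising the process.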

MYTH #4: As long as a model has enough data, it will work as required.

REALITY: In model-training data, variability matters far more than volume.

Another common belief is that AI models require huge volumes of data to ‘learn’ about the continuous process in question and thus provide useful recommendations. However, a model can learn only from the data it's given. Consider a model built using pH data from a process, with levels ranging from 10.1 to 10.5. Moreover, the mine in question has historically operated such that billions of recorded operating conditions fall within this narrow range.

What if operators want to know how pH levels outside that range will affect the process? The model can't help them. It would be far more useful if it were built on a broader range of pH data (say, 9.5 to 11.0), even from far fewer observations (perhaps just one or two thousand, instead of billions).

Of course, data is generated from the historical operation of a process. What if a process has been operated such that the data range is narrow? In this case, a mine can conduct controlled experiments to create a broader data set that lets managers consider a wider range of controls. Bayesian learning, as discussed in a recent BCG article, can help: developers can incorporate knowledge of how a physical process works into a model, even with little or no data from actual operation of the process. As they keep running the process and collecting more data, they can refine the model to reflect what they're seeing. Consequently, the model will provide more useful recommendations, because it helps operators consider a broader array of control-setting choices.
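The prior-plus-data idea can be shown with a minimal conjugate-Bayes sketch. All numbers here are invented: an engineering prior says recovery rises by roughly 1.0 point per unit of pH, and a handful of controlled-experiment points suggest a steeper slope. The posterior sits between the two and tightens as observations accumulate - which is the mechanism by which a model can start from physics and be refined by operating data.

```python
# Minimal Bayesian-updating sketch (invented numbers): combine a physics-based
# prior on the pH-recovery slope with a few controlled-experiment observations.

def update_slope(prior_mean, prior_var, xs, ys, noise_var=0.25):
    """Posterior over slope beta in y = beta * x + noise (noise variance known)."""
    precision = 1.0 / prior_var + sum(x * x for x in xs) / noise_var
    mean = (prior_mean / prior_var
            + sum(x * y for x, y in zip(xs, ys)) / noise_var) / precision
    return mean, 1.0 / precision

# Engineering prior (assumed): slope about 1.0, with wide uncertainty.
prior_mean, prior_var = 1.0, 1.0

# Five experimental points (centred pH offset, recovery lift) - invented,
# and consistent with a slope nearer 1.7 than 1.0.
xs = [-0.5, -0.25, 0.0, 0.25, 0.5]
ys = [-0.9, -0.4, 0.1, 0.5, 0.8]

post_mean, post_var = update_slope(prior_mean, prior_var, xs, ys)
print(f"prior:     slope {prior_mean:.2f} +/- {prior_var ** 0.5:.2f}")
print(f"posterior: slope {post_mean:.2f} +/- {post_var ** 0.5:.2f}")
```

With little data the posterior stays close to the physics prior; as experiments broaden the operating range, the data term dominates and the recommendations improve.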

MYTH #5: The hardest part of deploying AI is building a robust model.

REALITY: The hardest part is earning operators' trust in models.

An AI model can be perfectly robust, but if operators don't trust it, they won't use it. And if they won't use it, their company can't extract any value from it. Models can spawn mistrust if they deliver recommendations that create problems for operators.

Consider a recommendation that unwittingly causes flotation-tank overflows requiring hours of cleanup. To avoid this situation, model developers should collaborate with operators to identify conditions that lead to overflows. Developers should then make sure those conditions are incorporated into the model's algorithm. They should also share their ideas for the model and the anticipated results with operators, get operators' reactions and incorporate those insights into the model's design. By doing all this, developers gain a better understanding of the process that's being modelled. Even more importantly, they enhance operators' trust in the model. Increased trust leads to greater use of the model. And greater use sweetens the odds that the mine will gain value from the model.

Deploying AI models in a continuous-process facility can deliver impressive results, but only if mining companies understand the realities behind how this technology works and how it should be used. There's a huge difference between success, such as a 3% improvement in recovery, and failure, including underperforming models that provide only marginal productivity gains. By familiarising themselves with common myths, miners can tip the scales toward success and away from costly failure.

 

*Joakim Kalvenes (kalvenes.joakim@bcg.com) is a partner at Boston Consulting Group, and JT Clark (clark.jt@bcg.com), Rohin Wood (wood.rohin@bcg.com) and Matías Raby (raby.matias@bcg.com) are managing directors and partners at BCG.
