In times of raw-material shortages, rising transportation costs, and other potential hurdles such as punitive tariffs, companies are increasingly looking for ways to avoid business-critical disruptions like supply bottlenecks. However, mitigating such risks requires considerable manual effort and depends on the availability of expert knowledge and the right data. Successful supply chain risk mitigation can therefore be very time-consuming and still leaves a degree of uncertainty.

BUSINESS IMPACT OF SUPPLY INTERRUPTIONS

First, let’s take a look at the problem of supply bottlenecks and supply interruptions. To avoid potential loss of sales, manufacturing companies generally try to do everything they can to meet their projected sales volumes. However, as we can currently see in the automotive industry, supply shortages of certain components such as semiconductors or display panels cannot be mitigated easily and lead to significant production delays at most car manufacturers. End customers notice these disruptions in the form of prolonged delivery times, rising prices, or the unavailability of goods – this was particularly evident last year with PC components such as graphics processing units (GPUs).

Of course, a company cannot always address the causes of supply shortages directly. Nevertheless, there are ways to prevent supply bottlenecks or to recognize them early, so that the time available for intervention is maximized and one can react to the upcoming situation in the best possible way. Artificial intelligence (AI) provides various ways to analyze large data sets, derive possible warning signals, and automatically inform the responsible supply or inventory managers when a potential threat has been detected.

The field of artificial intelligence thus allows companies to analyze their data, use appropriate AI models to predict potential supply risks, and issue automated warnings. But how does this work in practice? How do you create such a solution, and what are the challenges in establishing it?

EVERY AI-DRIVEN SOLUTION IS APPLICATION-SPECIFIC AND HIGHLY INDIVIDUALIZED. NO TWO ARE ALIKE.

OUTLINE YOUR EXPECTATIONS – DEFINING THE FUTURE SOLUTION

Before starting to implement any kind of AI-driven solution, it is important to define and highlight expectations and goals: What shall be achieved? Who is the intended user group? Which materials, locations, or product groups are in the scope of the first version? It is very important to keep the scope small for the ‘proof of value’ – the first prototype of the anticipated solution. The exact outline of the prototype will differ from case to case: Is the activation of an alert sufficient when the supply is in danger? Can a tool be developed that proposes how to mitigate the identified risk? Shall the solution be integrated with the existing planning tools, or can it be a stand-alone tool? Once these questions are answered, it is time to think about the future design.

THE CONCEPT OF MACHINE LEARNING – TRANSFERRED TO OUR USE CASE

Machine learning, a subfield of artificial intelligence, consists of making meaningful deductions from given data and data structures. For this purpose, algorithms are trained to find and recognize patterns and correlations in large data sets. In our specific case, the aim is to identify those data points which, in combination with each other, indicate a possible interruption of the supply chain, so that our solution can predict such an interruption as accurately as possible. But which data is required to achieve the desired result? Answering this question often accounts for 80% of the effort in an AI project and can only be answered individually – company by company.

GOOD QUALITY & MEANINGFUL DATA – THE KEY TO SUCCESS

When developing AI-driven solutions, it is helpful if the required data is available in real time and, above all, in good quality. As long as you know where your data is kept and how to retrieve it quickly and securely, it does not really matter where it is stored. However, a data warehouse or data lake that is always up to date can be a great help in enabling fast access to the data. Especially in large companies, a central data repository can be a real advantage, because it allows the required data to be retrieved with ease.
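As a simple illustration, the sketch below shows how such a central repository could be queried with Python and pandas. The connection string, table, and column names are purely hypothetical placeholders, not an actual schema:

```python
# Minimal sketch of pulling supply data from a central data warehouse.
# Connection string and table/column names are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@dwh.example.com/supply_dwh")

# One consolidated, up-to-date view per material and location makes
# downstream feature engineering much easier.
query = """
    SELECT material_id, location_id, snapshot_date,
           stock_level, open_orders, supplier_otif
    FROM inventory_snapshots
    WHERE snapshot_date >= CURRENT_DATE - INTERVAL '365 days'
"""
df = pd.read_sql(query, engine)
print(df.head())
```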

THE ART OF AI – IDENTIFYING WHAT’S IMPACTING THE OUTCOME

Once you have successfully outlined your project, the difficult part begins. Data analysts and data scientists attempt to identify key figures, measures, numbers, or events that can have an impact on the observed outcome. For example, a reduction in supplier OTIF (on-time, in-full delivery performance) for a given material could indicate a certain risk that larger volumes will not be delivered in the future. If there is additional information available that the relevant material is in short supply (e.g. in newspaper or web articles) and that prices are rising, this could be another indicator that supply is at risk.
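One common (though by no means the only) way to rank such candidate indicators is to fit a tree-based model and inspect its feature importances. The following sketch uses random stand-in data; all column names are assumptions, not the project’s actual KPIs:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Illustrative stand-in for the real feature table (names assumed,
# values random)
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "supplier_otif": rng.uniform(0.6, 1.0, 500),
    "price_change_pct": rng.normal(0, 5, 500),
    "lead_time_deviation": rng.normal(0, 3, 500),
    "stock_out": rng.integers(0, 2, 500),  # binary target label
})

features = ["supplier_otif", "price_change_pct", "lead_time_deviation"]
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(df[features], df["stock_out"])

# Higher importance = larger influence on the predicted stock-out risk
print(pd.Series(model.feature_importances_, index=features)
        .sort_values(ascending=False))
```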

PREPROCESSING THE DATA AND FINDING OUT WHAT’S WORKING WELL

Experts now gather and collect all this data into one data model. This step is also known as preprocessing and takes a large portion of the time in any data science project. Once the different data has been collected and merged into a data model, data scientists train a variety of machine learning algorithms such as Support Vector Machines (SVMs), Decision Trees, Random Forests, or Gradient-Boosted Decision Trees (e.g. XGBoost). It is especially important to collect historical data with regard to the target variable. In our case, for instance, this could be ‘potential stock-out or not?’, encoded with 1 for ‘true’ and 0 for ‘false’. This historical data will be essential to train and validate the model later on.
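As a minimal illustration of this preprocessing step, the following Python sketch merges two hypothetical sources into one data model and encodes the binary target variable; all names and values are invented for the example:

```python
import pandas as pd

# Two illustrative sources (frame and column names are assumptions)
inventory = pd.DataFrame({
    "material_id": ["M1", "M1", "M2"],
    "snapshot_date": pd.to_datetime(["2021-01-01", "2021-02-01", "2021-01-01"]),
    "stock_level": [120, 40, 300],
    "days_until_stock_out": [90, 12, 200],
})
supplier_kpis = pd.DataFrame({
    "material_id": ["M1", "M2"],
    "supplier_otif": [0.82, 0.97],
})

# Merge the sources into one data model
data = inventory.merge(supplier_kpis, on="material_id", how="left")

# Encode the historical target: stock-out within the next 30 days?
# (1 for 'true', 0 for 'false', as described above)
data["stock_out"] = (data["days_until_stock_out"] <= 30).astype(int)
print(data)
```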

TRAIN-TEST-SPLIT & VALIDATING THE MODEL

To train the models, only 80% of the available reference data is used – the so-called ‘training set’. As soon as the models are trained, data scientists use the remaining 20% to validate them. This is what we call a ‘test’ or ‘validation’ set. A number of measures are monitored to evaluate the performance of each algorithm and to check whether it fits the training data too closely (‘overfitting’) or is too generic (‘underfitting’). Eventually, multiple trained models can be merged into an ensemble model, which often provides even better results than the stand-alone models.
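A minimal sketch of this 80/20 workflow with scikit-learn follows below; the data is synthetic, and the chosen models merely mirror the algorithm families mentioned above:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the preprocessed feature table
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# 80/20 split into training and validation data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

models = {
    "svm": SVC(probability=True, random_state=42),
    "tree": DecisionTreeClassifier(max_depth=5, random_state=42),
    "forest": RandomForestClassifier(n_estimators=200, random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))

# An ensemble of the individual models often beats each one alone
ensemble = VotingClassifier(estimators=list(models.items()), voting="soft")
ensemble.fit(X_train, y_train)
print("ensemble", accuracy_score(y_test, ensemble.predict(X_test)))
```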

THE SUCCESS OF YOUR AI SOLUTION MOSTLY DEPENDS ON WHAT YOUR BUSINESS MAKES OUT OF IT

EVALUATION METRICS FOR ML MODELS – MORE THAN JUST A NUMBER?

As soon as the prototype is ready, it is time to evaluate the performance of the machine learning model by loading the validation data into the model and comparing a variety of performance metrics to gain an understanding of how well the algorithm works. Depending on the type of variable that is predicted, multiple performance metrics can be calculated and compared with each other.

A: CLASSIFICATION ACCURACY

(Example: 995 correct predictions / 1,000 total predictions = 99.5%)

Classification accuracy is especially useful for classification tasks. The target variable should be of a discrete nature, i.e. observations can be distinguished into clearly separable categories or clusters.
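For illustration, accuracy can be computed directly from predicted and actual classes, for example with scikit-learn:

```python
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0]  # actual classes
y_pred = [1, 0, 1, 0, 0]  # predicted classes
print(accuracy_score(y_true, y_pred))  # 4 of 5 correct -> 0.8
```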

B: MEAN ABSOLUTE ERROR (MAE)

MAE represents the average absolute difference between the actual and predicted values across all predictions: MAE = (1/N) · Σ |y_j − ŷ_j|. For example, let’s assume we want to predict the ratings that users (who have not seen a movie yet) would likely give the movie, based on their historically observed preferences.

Example: Movie X | User X | N = 1:

  • y_j = actual value: 4
  • ŷ_j = predicted value: 6

Based on just this one observation, the MAE is 2, the absolute deviation between the actual and predicted value. Obviously, this performance metric makes more sense with a larger set of values (e.g. N > 30). Also, please note that this metric is scale-dependent: the analyzed quantities must have the same unit in order to be compared. Whether an MAE of 2 is considered good or bad really depends on the relevant unit and underlying scale.
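The same value can also be computed programmatically, e.g. with scikit-learn:

```python
from sklearn.metrics import mean_absolute_error

y_actual = [4]     # the rating the user actually gave
y_predicted = [6]  # the rating the model predicted
print(mean_absolute_error(y_actual, y_predicted))  # 2.0
```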

C: MANY MORE PERFORMANCE METRICS

Whenever building an AI prototype, data scientists must know exactly which performance metrics to pick and how to interpret them. There are many more than we have listed here. However, to keep this whitepaper focused, we will not go into more detail.

STOCK-OUT PREDICTIONS IN PRACTICE – OUR EXPERIENCE

Let’s get back to the initial focus of this paper – stock-out predictions using AI. We supported a larger project in the German pharmaceutical industry that focused on a very similar solution. Development went very well, and the data scientists involved in the project found suitable measures that could predict possible stock-outs. It was great to see such a large data science prototype come to life, especially considering the potential value behind it. Nevertheless, this showcase comes with some limitations:

LIMITATIONS OF THE SHOWCASE

One of the major downsides: the machine learning models had to be developed and trained on a location-by-location basis. It was not easily possible to scale one single model to a larger number of locations. As the client had well over 100 locations, this meant considerable effort to roll the solution out to a larger part of the business. Some of the underlying reasons were varying data quality, different business processes, and different sets of data available at each location.

MANY PITFALLS – BUT A STEADY LEARNING CURVE 

Even though the results looked very promising for the pilot locations and the prototype was used well by the pilot users, we observed that the utilized predictor variables were mostly dependent on each other. This is because the predictors consisted mostly of key performance indicators (KPIs) that themselves relied on similar – if not the same – data inputs. In several cases, we could show that multiple KPIs partially relied on the same data inputs, which led to this interdependency. It turned out that a few KPIs were very relevant and could explain most of the predicted results.

Why is this critical? Whenever a model relies on a few predictor variables with high weights, the overall prediction is highly susceptible to variations in data quality. Since the predictor variables are additionally dependent on each other, this effect is amplified even more.
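To illustrate how such interdependencies can be detected, the following sketch computes a correlation matrix over synthetic KPI columns, two of which are derived from the same underlying input; all names are assumptions:

```python
import numpy as np
import pandas as pd

# Synthetic KPI columns where two predictors share an underlying input,
# mimicking the interdependency described above
rng = np.random.default_rng(1)
base = rng.normal(size=500)
kpis = pd.DataFrame({
    "supplier_otif": 0.9 + 0.05 * base,
    "delivery_reliability": 0.85 + 0.06 * base,  # derived from same input
    "price_change_pct": rng.normal(size=500),
})

# Pairwise correlations close to +/-1 flag redundant, interdependent KPIs
print(kpis.corr().round(2))
```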

SUPPORT THE MIND CHANGE – EXPLAINABLE AI

Finally, we need to consider interpretation difficulties and the skepticism of end users towards AI tools. Any AI tool is only as useful as its productive adoption by the target audience. In our case, the focus was on creating a user-friendly frontend, guided by best practices in UI/UX design, that leaves as few open questions as possible.
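One widely used technique for this kind of explainability – not necessarily the one built in the project – is SHAP, which attributes each individual prediction to the input features. A minimal sketch on synthetic data:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Tiny synthetic stand-in for the real feature table (names invented)
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

# SHAP values explain *why* a stock-out risk was flagged for each row
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global summary: which features drive the model's output overall
# (exact return shapes can vary slightly between shap versions)
shap.summary_plot(shap_values, X, feature_names=["otif", "price", "lead_time"])
```

Explanations like these give end users a concrete reason to trust or challenge an individual prediction instead of facing a black box.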

RESPONSIBILITIES IN AN AI-DRIVEN BUSINESS

As soon as the solution went live, we observed something very interesting that no one had thought about before: Who takes responsibility for accepting or overruling a recommendation of the stock-out prediction tool? Let’s assume the tool tells you that no stock-out is in sight – but the next month it happens. Who takes responsibility for this? And what happens if a human overrules the prediction tool, but the decision turns out to be wrong and the company is now sitting on far too much inventory?

CLOSING REMARKS

As you can see, the potential of AI is enormous – but its challenges and pitfalls can be difficult to manage as well. Researchers and businesses alike are trying to find answers to these questions. Nevertheless, it is important that businesses start utilizing the power of AI as soon as they can, because they already have the data and the use cases for it.

DISCLAIMER

Everything stated in this whitepaper is based on ACOPA’s experience and observations in real industry projects. This whitepaper is not intended to be exhaustive and merely reflects a current view on the use of AI in industry.


Author: Fabian Ruehrnschopf
(info@acopa.de)