AI for Supply Chain – How Can It Benefit?

AI for supply chain has been discussed in the supply chain community for almost a decade. It has been promoted heavily as a key supply chain future trend and is also included in 24 Supply Chain Technologies which Are Shaping Present and Future of Supply Chain. But do we really know how it can benefit supply chain managers? In this article we briefly explore how AI for supply chain works.

 

From the first day of our lives, our minds are engaged in data collection and analysis. Our brains store the most significant decisions, situations, and emotions, and use them later for daily life, engineering decisions, organisation, and planning. Some of this stored data is retrieved later through image, voice, or face recognition.

 

Industry is increasingly integrating AI algorithms to build efficient systems and reduce production costs. Supply chain management contributes at least 14% of costs on the enterprise balance sheet in the automotive industry. On top of this, demand planners face problems related to production planning, quality analysis, and KPI measurement.

 

AI for Supply Chain – How Can It Benefit Supply Chain Managers?

 

Simply put, AI algorithms help supply chain professionals better manage cost, process quality, and delay:

(Image: benefits of artificial intelligence in supply chain management)

Cost – The advantage of AI programs is that they help reduce supply chain costs.

 

Delay – Data science can help operations stay as close as possible to ‘Just in Time’ concepts by:

 

  • Evaluating the time needed to receive required products from suppliers
  • Predicting the most reasonable time to put a product on the market (market benchmarking)
  • Identifying opportunities for artificial intelligence in procurement
  • Replying to customer queries more quickly using AI-powered chatbots

Supply Chain Process Quality – The quality index is a major KPI in the supply chain management process. AI can be integrated for:

  • Analysing defects in the production process and their sources
  • Analysing downstream and upstream supply chain process quality (delivery process quality)
  • Process mining, one of the most prominent applications of AI in supply chain

 

Relationship between Artificial Intelligence (AI) and Machine Learning (ML)

 

This Oracle blog does a good job of explaining the relationship between AI and ML.

AI means getting a computer to mimic human behavior in some way.

 

Machine learning is a subset of AI, and it consists of the techniques that enable computers to figure things out from the data and deliver AI applications.

(Image: difference between AI, machine learning, and deep learning)

 

As you can see from the image above, ML is one of the techniques that enables artificial intelligence in supply chain management and in other applications.

Programming Languages Used for Machine Learning Algorithms

 

Today, there are two major programming languages used for ML algorithms: Python and R.

What is the R Programming Language?

 

R is a programming language for numerical computing, data analysis, and statistics. It is mainly used with the RStudio IDE, together with packages that can be downloaded from the CRAN project website: https://cran.r-project.org. R combines mathematical functions and statistical methods to solve numerical problems. I recommend it for anyone wanting to go in-depth into data analysis, statistical modelling, and AI.

What is the Python Programming Language?

 

Python is a simple, general-purpose programming language widely used for applications and AI. Its package ecosystem includes good libraries for differential and probabilistic computation: www.python.org

 

Both languages can be used in SCM, especially for demand forecasting, to relieve planners of heavy workloads. R offers additional libraries and packages for data visualisation and analysis. In this article, we focus on explaining why R is a powerful tool for these applications.


 

Machine learning programs follow the steps below – here is an example:

 

(⚠: article gets technical from here, not for the faint-hearted!)

1)    Data Collection:

 

The most interesting part of machine learning is selecting the input data on the basis of certain criteria, such as the timing schedule. There are four types of data: trend, seasonal, cyclic, and random.

Trend data shows an increasing or decreasing pattern over time; seasonal data changes over a known, fixed period; cyclic data is a type of seasonal data with variations caused by unpredictable circumstances; and random data shows no clear variation, so it is difficult (sometimes impossible) to forecast.
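To make the four patterns concrete, here is a minimal sketch in base R that simulates each one (the series and parameter values are our own, purely illustrative):

```r
# Simulate the four time-series patterns (toy data, purely illustrative)
set.seed(42)
t <- 1:48                                                  # e.g. 48 monthly periods

trend    <- 0.5 * t + rnorm(48, sd = 1)                    # steadily increasing level
seasonal <- 10 * sin(2 * pi * t / 12)                      # repeats every 12 periods
cyclic   <- 10 * sin(2 * pi * t / 12) + rnorm(48, sd = 4)  # seasonal shape plus shocks
random   <- rnorm(48, sd = 5)                              # no structure to forecast

# Plot the four shapes side by side to compare them
par(mfrow = c(2, 2))
plot(t, trend,    type = "l", main = "Trend")
plot(t, seasonal, type = "l", main = "Seasonal")
plot(t, cyclic,   type = "l", main = "Cyclic")
plot(t, random,   type = "l", main = "Random")
```

Plotting simulated series like these is a quick way to train the eye before looking at real demand data.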

 

R has several functions to prepare and visualise the data.

 

  • caret package: provides helper functions for preparing your dataset and building models on it.
  • read.csv(): imports a CSV dataset, with arguments to specify how strings of data are parsed. Similarly, read.xls() (from the gdata package) can be used for Excel data.

Example:

read.csv("Covid19MarocHL.csv", header = TRUE, sep = ";")

 

To plot data, we can use either the base function plot(x, y) or the ggplot2 package, which includes several functions:

 

Data %>%                   # specify the dataset input
  ggplot(aes(x, y)) +      # use the ggplot2 package, mapping x and y
  geom_point() +           # plot a points (scatter) graph
  geom_line() +            # plot a line graph
  geom_smooth()            # add a trend or smoothing line

 

The code above can easily be used to understand the shape of the dataset and its trends.
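As a worked sketch using the built-in mtcars dataset (assuming the ggplot2 and dplyr packages are installed; the variable choices are ours, not from the article):

```r
library(ggplot2)   # plotting
library(dplyr)     # provides the %>% pipe

p <- mtcars %>%                              # the dataset input
  ggplot(aes(x = wt, y = mpg)) +             # map car weight to x, fuel economy to y
  geom_point() +                             # points graph of the raw observations
  geom_smooth(method = "lm", se = FALSE)     # add a linear trend line

print(p)                                     # render the plot
```

Running this displays a scatter plot with a downward-sloping trend line, showing at a glance that heavier cars consume more fuel.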

2)    Data Cleaning and Preparing:

 

Data cleaning structures the dataset to make it more suitable for training. This step focuses on:

  • Visualising the rows and columns of the data
  • Deleting rows or columns with unavailable data
  • Arranging the dataset for easier coding

In R, there is a wide range of functions that can be used for data preparation and structuring, including:

  • complete.cases(): identifies the rows with no missing values, so that incomplete rows can be deleted

R code: data[complete.cases(data), ]

  • nrow() and ncol() return the number of rows and columns of a matrix or data frame
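A minimal sketch tying these functions together (the toy data frame and its column names are invented for illustration):

```r
# Toy dataset with one missing value (illustrative)
orders <- data.frame(
  sku = c("A", "B", "C", "D"),
  qty = c(10, NA, 7, 3)
)

nrow(orders)   # 4 rows before cleaning
ncol(orders)   # 2 columns

# Keep only the rows with no missing values
clean <- orders[complete.cases(orders), ]
nrow(clean)    # 3 rows remain
```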

3)    Training the Model

 

One of the most important tasks in machine learning is to split the data into a training set and a test set. In R, we first let the machine select the index values used to split the dataset. The caret package provides the createDataPartition() function; running it lets the engine select random indices from the original dataset. For example, for a dataset with 50 observations and p = 0.5, the engine will randomly select 25 indices (pointers).

 

In R code: index <- createDataPartition(y, p = 0.5, times = 1, list = FALSE)

y is the predicted outcome

p is the proportion of the data sent to the partition; when it is 0.5, we split the data into two equal halves.

times=1, means that we want to split the data only one time

list = FALSE returns the indices as a matrix (a data structure) instead of a list.

To create a training set and a test set we can use the following lines:

Trainingset <- y[index]    # select the values that correspond to the chosen indices

Testset <- y[-index]       # the complementary data of the training set

Example (Classification problem):

In this example, we will use the R language to predict whether the sex is Male or Female based on the height feature. The data is available in the “dslabs” package.

1- Import R packages:

library("dslabs")
library("dplyr")
library("purrr")
library("caret")

2- Let's use the heights dataset available in the dslabs package:

data("heights")
dataExercice <- heights
View(dataExercice)

 

3- Identify the feature and the outcome

X <- dataExercice$height

y <- dataExercice$sex

 

4)    Create the Data Partition of the Dataset and Specify the Training and Test Set:

 

index <- createDataPartition(y, times=1,p=0.5, list=FALSE)

index2 <- createDataPartition(X, times=1,p=0.5, list=FALSE)

trainingset <- y[index]

testset <- y[-index]

trainingset2 <- X[index2]

5)    First Test is Simple Guessing

 

Y_hat <- sample(c("Male", "Female"), length(y), replace = TRUE)

mean(y == Y_hat)     # e.g. 0.4885714 – no better than a coin flip

dataExercice %>% group_by(sex) %>% summarise(mean(height), sd(height))

 

  sex    `mean(height)` `sd(height)`
  <fct>           <dbl>        <dbl>
1 Female           64.9         3.76
2 Male             69.3         3.61

 

Here we can see that if the height is greater than 65, the outcome is probably Male. Let's try it.

6)    Create a Data Set with a Condition:

 

If the height is 65 or higher, the output is Male; otherwise it is Female.

 

YY <- ifelse(X >= 65, "Male", "Female")

mean(YY == y)    # result is 0.8343

confusionMatrix(data = factor(YY), reference = factor(y))

 

   Reference

Prediction Female Male

    Female    119   55

    Male      119  757

                                         

               Accuracy : 0.8343

 

confusionMatrix() helps analyse the results by showing the accuracy together with the sensitivity and specificity (how many Females are predicted Male and how many Males are predicted Female). We can see here that our accuracy is better than simple guessing.
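To see where these numbers come from, the headline metrics can be recomputed by hand from the confusion matrix above (base R only; the matrix values are taken from the printed output):

```r
# Rebuild the confusion matrix printed by confusionMatrix()
cm <- matrix(c(119, 119,    # reference Female: predicted Female, predicted Male
               55, 757),    # reference Male:   predicted Female, predicted Male
             nrow = 2,
             dimnames = list(Prediction = c("Female", "Male"),
                             Reference  = c("Female", "Male")))

accuracy    <- sum(diag(cm)) / sum(cm)                       # (119 + 757) / 1050
sensitivity <- cm["Female", "Female"] / sum(cm[, "Female"])  # Females correctly identified
specificity <- cm["Male", "Male"] / sum(cm[, "Male"])        # Males correctly identified

round(c(accuracy, sensitivity, specificity), 4)
# 0.8343 0.5000 0.9323
```

Note that sensitivity here treats Female as the positive class, matching caret's default of taking the first factor level as positive.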

 

7)    Search for the Cutoff Value x that Maximises our Accuracy

 

seq <- seq(10, 90, 1)

Accuracy <- map_dbl(seq, function(x){
  YY2 <- ifelse(X >= x, "Male", "Female") %>%
    factor(levels = levels(testset))
  mean(YY2 == y)
})

plot(seq, Accuracy)

The seq value that maximises our accuracy is 65. You just have to use this R code:

seq[which.max(Accuracy)]
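The seq[which.max(Accuracy)] pattern in isolation, with toy numbers of our own choosing:

```r
cutoffs  <- c(60, 63, 65, 68)          # candidate height cutoffs (toy values)
accuracy <- c(0.52, 0.61, 0.83, 0.79)  # accuracy obtained at each cutoff (toy values)

# which.max() returns the position of the largest value,
# so indexing cutoffs by it picks the best-performing cutoff
best <- cutoffs[which.max(accuracy)]
best   # 65
```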

 

In this article, we have tried to give the basic reasons why R can be a good tool for your ML applications. In coming articles, we will go more in-depth into some practical problems in supply chain management and discover how AI in the supply chain can change the game!

About Co-Author Jamal Elmansour 

I am Jamal Elmansour, an engineer from the University of Brussels. I like challenging subjects and programming. I have worked in several sectors (energy, management, and quality inspection). Diversity is one of the main drivers of my life; I work on machine learning and engineering. My motto: "hard work works".

Recommended Reading:

Technology in Supply Chain Management and Logistics: Current Practice and Future Applications

 


Dr. Muddassir Ahmed