Using data science to accurately predict hospital demand

There are few places where demand forecasting matters more than in hospitals. Hospitals must keep an adequate supply of medical devices for those who need them while not overstocking inventory that quickly becomes obsolete. Maintaining this delicate balance was the focus of our recent MBA project. Through an elective in data science, we learned exploratory data analysis as well as descriptive, predictive, and prescriptive analytics using the R language.

On top of that, we exercised our MBA Smart skills in working, coordinating, and communicating in a diverse team across multiple time zones, each of us hailing from a different background:

    • Paul: Business Development Lead, MBA for Working Professionals 2021
    • Hisyam: Former Engineering Proposal Manager, MBA 2021
    • Chan: Former Senior Lawyer, MBA 2021
    • Joe: Airline Operations Manager, MBA for Working Professionals 2021

We chose to analyze the demand at Paul’s company, a distributor of medical consumables. Many medical machines are like printers: their consumables, much like ink cartridges, need to be replaced from time to time. The problem is that these consumables are perishable, so inventory must be managed well or wastage follows.

To do this, the company wants to forecast monthly demand. It frequently uses data from past invoices as a baseline for forecasting. While invoices show the quantities customers actually ordered, they do not reflect real-time demand. For example, a customer might order 100 boxes but use only 50 boxes in a month.

Choosing demand predictors

For this reason, we looked for other ways to forecast demand. First, we retrieved the customers’ own rough, “off the top of your head” forecasts from their datasets. We then mined data directly from machine records as a measure of “real demand.” Finally, we regressed both the invoice quantities and the customers’ estimates against this real demand.

Astonishingly, the customers’ predictions were more accurate than the invoices! Invoice quantities explained almost none of the demand (for every 1 piece of consumable, invoices predicted just 0.008 pieces). By contrast, the customers’ estimates were almost in lockstep, predicting 0.963 pieces of demand for every 1 piece of actual demand.
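To make this concrete, here is a minimal sketch in R of the kind of regression comparison we ran. All the numbers and the variable names (actual, invoiced, estimate) are invented for illustration; a slope close to 1 means the predictor moves in lockstep with actual demand.

    # Hypothetical monthly figures (pieces) for one consumable
    actual   <- c(100, 110, 95, 120, 130, 125, 118, 140, 135)
    invoiced <- c(200, 50, 300, 0, 150, 400, 100, 0, 250)
    estimate <- c(105, 105, 105, 115, 125, 120, 115, 135, 130)

    # Slope of each simple regression: pieces of demand
    # predicted per piece of the predictor
    coef(lm(actual ~ invoiced))["invoiced"]
    coef(lm(actual ~ estimate))["estimate"]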

However, the customers used only a single number, based on their experience, to predict demand. This means that any fluctuations in actual hospital demand are averaged out across the year. Hence, when compared month by month, there are discrepancies between the customers’ estimated demand and the actual demand, as illustrated below:

[Chart: customers’ estimated monthly demand vs. actual monthly demand]

Since we had the actual demand data, we turned to time-series modeling, a way of forecasting future demand from historical data. But which model should we choose, and how should we judge how good a model is? We chose MAPE (Mean Absolute Percent Error), a statistical measure of forecast accuracy: the average of the absolute percentage differences between the forecast values and the actual values.
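As a quick illustration, MAPE takes only a few lines of R; the sample numbers below are invented:

    # Mean Absolute Percent Error:
    # average of |actual - forecast| / |actual|, in percent
    mape <- function(actual, forecast) {
      mean(abs((actual - forecast) / actual)) * 100
    }

    mape(c(100, 120, 90), c(110, 115, 95))  # about 6.6%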

Determining the “next top model”

We picked 5 models from R’s forecast package:

  1. Naïve
  2. Simple Exponential Smoothing
  3. Holt’s Trend Method
  4. ARIMA
  5. TBATS – the only model on the list that can capture seasonality (or decide, on its own, not to)

TBATS is a model of models. While we chose among our five candidates using MAPE, TBATS also compares several models internally, selecting between them based on the Akaike Information Criterion (AIC) instead of MAPE.

Twelve months of demand data from 2019 were obtained and divided into training and testing sets: the first nine months for the former and the last quarter for the latter. We were aware of a potential weakness in allocating the data this way – the last three months of any year can be vastly different from the rest of it due to seasonality.
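The sketch below shows how such a comparison can be set up with the forecast package: split the 2019 series into the first nine months and the last quarter, fit the five models, and rank them by test-set MAPE. The monthly_demand values are placeholders, not the company’s actual data.

    library(forecast)

    # Hypothetical monthly demand for 2019 (pieces)
    monthly_demand <- c(120, 135, 128, 140, 150, 145,
                        138, 160, 155, 148, 170, 165)
    demand_ts <- ts(monthly_demand, start = c(2019, 1), frequency = 12)

    # First nine months for training, last quarter for testing
    train <- window(demand_ts, end = c(2019, 9))
    test  <- window(demand_ts, start = c(2019, 10))
    h <- length(test)

    models <- list(
      naive = naive(train, h = h),
      ses   = ses(train, h = h),
      holt  = holt(train, h = h),
      arima = forecast(auto.arima(train), h = h),
      tbats = forecast(tbats(train), h = h)
    )

    # Test-set MAPE for each model; lowest is best
    sapply(models, function(f) accuracy(f, test)["Test set", "MAPE"])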

Announcing the winning model

Among all the models, Holt’s Trend had the lowest MAPE, or “error.” Surprisingly, it is only slightly more complex than simple exponential smoothing, yet it outdid the sophisticated TBATS model. This brings us back to how the data was split – the last three months were held out as test data, and forecasts were compared against those months.

Seasonality (the main feature that differentiated TBATS from the other models) could have caused TBATS to fail: with only nine months of training data, the model never saw even one full yearly cycle. Perhaps, if we had a few years’ worth of data, TBATS would not have done so badly. Model selection is not a one-size-fits-all process. One way to improve accuracy further is to average the forecasts of all the models we built.
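Continuing the earlier sketch (with its hypothetical models list, test set, and mape helper), such a simple ensemble is a one-liner:

    # Average the five models' point forecasts month by month
    ensemble <- rowMeans(sapply(models, function(f) as.numeric(f$mean)))
    mape(as.numeric(test), ensemble)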

While we may not become professional data analysts, we can appreciate what it takes to slice and dice the data at hand and put the pieces together to see the big picture. With the new skillset provided by our data science course, we have developed data literacy. This is indeed a powerful tool to have in our MBA toolbox, one that will help us make better-informed decisions throughout our careers.