Forecastability - Foresight · 7/2/2012


Slide 1: Forecastability: New Methods for Benchmarking and Driving Improvement

By Sean Schubert

Valspar: Consumer Division

32nd Annual International Symposium on Forecasting June 24-27, 2012 Boston, MA

Slide 2: Valspar (VAL): Background

• 200+ years old

• $3.9 Billion Revenue

• 9,500+ employees

• Global scope: 25+ countries

• Products include:

• Industrial coatings for wood, metal and plastic for original equipment manufacturers

• Coatings and inks for rigid packaging, principally food and beverage cans, for global customers

• Paints, varnishes and stains, primarily for the Do-It-Yourself market

• Coatings for refinishing vehicles

• High performance floor coatings

• Resins and colorants for internal use and for other paint and coatings


Slide 3: Why Are We Here?

Key Question: How do we set appropriate forecasting goals for a SKU, a group of SKUs, and a Global Business Unit (GBU)?

Goals:

» Develop an objective approach for finding the greatest opportunities to improve forecasting

» Develop an objective approach for setting Forecast Accuracy targets specific to the idiosyncrasies of each business

Slide 4: Fundamentals of Forecasting

[Goldilocks cartoon with customer and finance: "too low," "too high," "just about right."]

Forecasting is a key part of "getting results" for any business. Forecasting is FUNdamental.


Slide 5: Key Outcomes vs. Supporting Metrics

Supporting Metrics:

1) Forecast Accuracy
2) Forecast Bias

Key Outcomes:

1) Reduced Inventory Levels
2) Improved Customer Service Level metrics
3) Reduced Excess & Obsolete Inventory
4) Improved Budgeting and Financial Reconciliation

Slide 6: Who's the Best Forecaster?

Curly: FA = 64%, Moe: FA = 72%, Larry: FA = 56%

» Anything else you want to know?


Slide 7: Forecasting Ability

» What two things drive our ability to forecast accurately?

» Our forecasting process

» Inherent forecastability

» Which do we have more control over?

Slide 8: Inherent Forecastability

» "50% is world-class forecast accuracy." True or False?

Nassim Taleb, author of The Black Swan: The Impact of the Highly Improbable (2007).

The Ludic Fallacy: the misuse of games (dice, etc.) to model real-life situations.

What do we do when we can't calculate "basic" probabilities?

Red vs. Black (roulette): 18/37 = 48.6%


Slide 9: Benchmark!!!

» Let's run a survey

» Strengths
• Allows performance comparisons and targets

» Forecast Errors: How Much Have We Improved? Journal of Business Forecasting (Summer 2011)

That's actual data.

Slide 10: Benchmarking: External

» Weaknesses: quite often, we don't know the following:

• What level in the hierarchy are they measuring FA% at?

• What lead time do they measure FA% at?

• How is the metric weighted?

• How are Make-to-Order SKUs included?

• Where do Net Requirements show up?

• What resources do they dedicate to Forecasting?

What's the "kitchen sink" number? What is scrubbed out?


Slide 11: Benchmarking: Internal

» Strengths

» We know more detail about how the metric is calculated

» We are more consistent in how the metric is calculated

» Weakness

» Typical responses when the results of two GBUs are compared:

» "It's harder to forecast our business because of x, y, and z."

» "That other business is easier to forecast because of a, b, and c."

Slide 12: Benchmarking

» Examples: FA = 64%, 72%, 56%, from the Three Stooges Forecasting Co. (Disclaimer: these are not real numbers.)

Lesson: not all businesses are created equal in forecastability.


Slide 13: Forecastability

» Operational Definition: What level of Forecast Accuracy is reasonably achievable for a SKU or group of SKUs in a business?

"That's a definition we can do business with." W. Edwards Deming, quality and statistics guru (1900-1993)

Slide 14: Forecastability

» What affects forecastability*?

» Consumer purchasing behavior

» Customer behavior

» Supply Chain behavior

» Demand Planning resources and ability

» Factors that may provide insight into forecastability

» Total SOH Volume (yearly)

» Length of Material History

» Number of Customers

» SOH Variability (Coefficient of Variation: COV)

» Naïve Forecast Error

» Number and Size of Promotions

» Intermittency and Sporadicity of Sales

» Concentration of Sales in Top Customers

» Etc, etc, etc…

“The DNA for forecasting a SKU”

* at a given Lead Time
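The factor list above can be made concrete with a small sketch. Assuming a SKU's monthly SOH (sales order history) series and per-customer sales shares are available, a few of the "DNA" genes might be computed as below; the function name and inputs are hypothetical, since the deck does not specify implementations.

```python
import numpy as np

def sku_dna(monthly_sales, customer_shares):
    """Compute a few illustrative forecastability 'DNA' factors for one SKU."""
    s = np.asarray(monthly_sales, dtype=float)
    mean = s.mean()
    return {
        "total_volume": s.sum(),
        "history_length": len(s),
        "cov": s.std(ddof=1) / mean if mean else np.nan,           # coefficient of variation
        "naive_abs_err": np.abs(np.diff(s)).sum(),                 # error of a "repeat last month" forecast
        "intermittency": (s == 0).mean(),                          # share of zero-demand months
        "top2_concentration": np.sort(customer_shares)[-2:].sum(), # sales share of top 2 customers
    }

# Invented 12-month history with three zero-demand months, six customers
dna = sku_dna(
    [120, 0, 95, 140, 0, 110, 130, 90, 0, 105, 125, 115],
    [0.40, 0.18, 0.15, 0.12, 0.10, 0.05],
)
```

Each SKU's dictionary of factors would then become one row of predictors in the benchmarking model discussed later in the deck.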


Slide 15: Forecastability Gene: COV

[Scatter plot: Forecast Accuracy% (1 - MAPE) @ 60-day lead time vs. COV; FA% and COV measured over a recent 12-month period.]

The general relationship holds. Is there anywhere we might be able to improve?
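As a sketch of the two per-SKU quantities paired on this slide, FA% (as 1 - MAPE) and COV can each be computed from a demand history and its forecast. The toy series below are invented purely to show the direction of the relationship: the volatile SKU has a higher COV and a lower accuracy.

```python
import numpy as np

def fa_and_cov(actuals, forecasts):
    """FA% = 1 - MAPE, and COV = sample std / mean, for one SKU's history."""
    a = np.asarray(actuals, dtype=float)
    f = np.asarray(forecasts, dtype=float)
    mape = np.mean(np.abs(a - f) / a)   # assumes no zero-demand periods
    cov = a.std(ddof=1) / a.mean()
    return 1 - mape, cov

# Toy data: a stable SKU and a volatile one, each forecast with a flat value
fa_stable, cov_stable = fa_and_cov([100, 102, 98, 101, 99, 100], [100] * 6)
fa_volatile, cov_volatile = fa_and_cov([10, 200, 5, 150, 20, 180], [94] * 6)
```

Plotting (fa, cov) pairs for every SKU reproduces the kind of scatter shown on the slide.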

Slide 16: Forecastability Gene: SOH Volume

[Scatter plot: Forecast Accuracy% (1 - MAPE) @ 60-day lead time vs. SOH volume; FA% and SOH measured over a recent 12-month period.]

That relationship looks elementary.


Slide 17: DNA

» The power of DNA is that it jointly uses all the information

» Let's combine the genes…

Malachy the Pekingese: 2012 Westminster Best in Show. Frankenweenie: star of the Tim Burton movie, brought back from the dead.

Slide 18: Forecastability DNA

Forecastability Factors:

• Total SOH Volume
• SOH Variability (COV)
• Number of Customers
• Length of Material History
• Number and Size of Promotions
• Naïve Forecast Error
• Intermittency and Sporadicity of SOH
• Concentration of Sales in Top Customers
• And others…

Bottoms-Up Internal Benchmarking: Forecast Accuracy (Baseline) by:

• Region
• Business Unit
• Product Family
• SKU
• Customer
• Etc, etc, etc…

Conceptually: "Bring the big data"


Slide 19: Multivariate Model

Forecast Accuracy% = f( SOH, COV, Naïve Forecast Error, # Customers, … )

Technical reasons this form doesn't work:

• Violates some statistical assumptions
• Doesn't work very well

Log(AbsErr) = f( log(SOH), COV, log(NaiveAbsErr), # Customers, … )

More complicated models can also work. (Rube Goldberg, 1883-1970)
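A minimal sketch of fitting the log-error form by ordinary least squares. The data and coefficients here are entirely synthetic, since the deck discloses neither; the point is only the shape of the fit: log absolute error regressed on log-transformed DNA factors.

```python
import numpy as np

# Synthetic SKU-level data standing in for the undisclosed real factors
rng = np.random.default_rng(1)
n = 500
log_soh = rng.normal(8, 1.5, n)                          # log of yearly volume
cov = rng.uniform(0.1, 2.0, n)                           # coefficient of variation
log_naive = log_soh + 0.5 * cov + rng.normal(0, 0.3, n)  # log naive abs error

# Invented "true" relationship, used only to generate the target
log_abs_err = 0.6 * log_soh + 0.8 * cov + 0.3 * log_naive + rng.normal(0, 0.2, n)

# OLS fit of Log(AbsErr) = f(log(SOH), COV, log(NaiveAbsErr))
X = np.column_stack([np.ones(n), log_soh, cov, log_naive])
beta, *_ = np.linalg.lstsq(X, log_abs_err, rcond=None)
pred = X @ beta
r2 = 1 - ((log_abs_err - pred) ** 2).sum() / ((log_abs_err - log_abs_err.mean()) ** 2).sum()
```

With real data the predictors would come from each SKU's DNA factors, and the fitted prediction becomes the customized benchmark used on the later slides.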

Slide 20: Interlude on Metrics and Models

Lionel Hutz, Esq., "I Can't Believe It's a Law Firm," Springfield.

"What's the problem, you don't believe the math?"

"I reject your reality and substitute my own."

"I object to your model. I object to your metric."


Slide 21: Interlude on Metrics and Models

The real question is, "Is the model and metric better than using our gut to make decisions?"

Lionel Hutz, Esq.: "I'm a lawyer. I don't do statistics."

George Box, world-famous statistician and forecasting guru: "All models are wrong, some are useful."

Slide 22: The Model

Notes:

• Each point represents a SKU in one Region.
• All predictors have been centered and standardized (subtract the overall mean and divide by the standard deviation) to simplify comparison of model coefficients.
• Only selected factors from the full model have been disclosed, since the detailed forecastability model is not transportable from one business to the next.

Prob > |t| < 0.0001 for all factors shown.

This model can be improved.
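The centering-and-standardizing step described in the notes is just a z-score applied per predictor column; a minimal sketch with invented values:

```python
import numpy as np

def standardize(X):
    """Center each column on its mean and scale by its sample standard
    deviation, so coefficient magnitudes become directly comparable."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Columns: total SOH and COV for three hypothetical SKUs
Z = standardize([[20224, 0.58], [5120, 1.20], [61000, 0.35]])
```

After this transform, every column has mean 0 and unit standard deviation, so a larger standardized coefficient means a stronger factor regardless of the factor's original units.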


Slide 23: Interpreting the Model

Log coefficients are not intuitive. (It still makes more sense than the tax code.)

Slide 24: Graphical Interpretation

Notes:

1) Assumptions have been made that the model has been constructed according to good regression practice.
2) The actual and predicted Absolute Forecast Error (log units) are both measured over a 12-month period and reflect in-sample regression results.

R² = 0.9153. The model's prediction serves as the 50th-percentile benchmark: Actual < Predicted means better than benchmark; Actual > Predicted means worse than benchmark.
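The better-/worse-than-benchmark reading above amounts to comparing each SKU's actual log error with the model's prediction; a sketch with invented numbers:

```python
import numpy as np

# Actual and model-predicted absolute forecast error, in log units.
# All values hypothetical; the prediction is each SKU's customized,
# 50th-percentile benchmark.
actual_log_err = np.array([4.1, 5.6, 3.2, 6.0])
predicted_log_err = np.array([4.5, 5.0, 3.5, 6.4])

beats_benchmark = actual_log_err < predicted_log_err  # True = better than benchmark
# Back in original units: how many times larger/smaller the error is than predicted
error_ratio = np.exp(actual_log_err - predicted_log_err)
```

SKUs where the ratio is well above 1 are the "greatest opportunities to improve" that the deck's key question asks for.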


Slide 25: Using the Model (I)

» One SKU

Forecastability Factors:

• Total SOH (12 mo) = 20,224
• COV (12 mo) = 0.5765
• Total Naïve Error = 11,384
• Length of History = 18
• Customers = 8
• Top 2 Customers = 58.4% of Sales

Forecast Actuals: Total AbsError = 18,554 units; Forecast Accuracy% = 8.3%
Forecast Gaps: Total AbsError = 8,639 units; Forecast Accuracy% = 42.7%
Forecast Baseline: Total AbsError = 9,915 units; Forecast Accuracy% = 51.0%

Naïve Forecast Accuracy = 43.7%. Forrest Gump versus Forecaster?
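The actuals, baseline, and naïve figures on this slide are consistent with a volume-weighted accuracy definition, FA% = 1 - (total absolute error / total actual volume); a quick check reproduces all three. (The Forecast Gaps line is not reproduced by this formula, so it is left out of the sketch.)

```python
def weighted_fa(total_abs_err, total_actual):
    """Volume-weighted Forecast Accuracy%: 1 - sum(|error|) / sum(actual)."""
    return 1 - total_abs_err / total_actual

total_soh = 20_224                           # 12-month SOH for the example SKU
fa_actuals = weighted_fa(18_554, total_soh)  # actual forecast
fa_baseline = weighted_fa(9_915, total_soh)  # model's baseline benchmark
fa_naive = weighted_fa(11_384, total_soh)    # naive forecast
```

Here the naïve forecast (43.7%) beats the actual forecast (8.3%), which is exactly the "Forrest Gump versus Forecaster" point the slide is making.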

Slide 26: Using the Model (II)

» "My business is harder to forecast"

Average levels of key DNA Factors at the Business, Region level:

Forecast Accuracy Actuals:
• Business 4, Region 3: 34.5%
• Business 10, Region 2: 65.3%

Forecast Baseline:
• Business 4, Region 3: 65.0%
• Business 10, Region 2: 60.5%

Naïve FA% for the "average" SKU: Business 4, Region 3 = 55.7% vs. Business 10, Region 2 = 42.1%

We have created a customized benchmark. (Young Frankenstein, 1974)


Slide 27: Business Benchmarking

What if my business is better than the benchmark? Are you as good as the "Best in Class"?

Usain Bolt: 100m in 9.58s, 200m in 19.19s

Slide 28: Forecasting Questions: #1

» Is it harder to forecast a business when it has more SKUs? (That's so obvious.)

» How does the number of SKUs forecasted affect the forecast accuracy when the Forecastability DNA is also considered? (Okay. Maybe that's not so obvious.)


Slide 29: Forecasting Questions: #1

» At the Business, Region level:

Log(AbsErr) = f( log(PredAbsErr), log(SKU Count) )

More SKUs makes forecasting slightly easier, but the effect is not strong. A multi-level model could also apply here.

Slide 30: Forecasting Questions: #2

» Comparing like-to-like (using Clustering)

Note: Clusters = 100


Slide 31: Forecasting Questions: #2

» Comparing like-to-like (using Clustering)

Forecast Accuracy by cluster:

• Mean: 65.4%, Std Dev: 30.0%, n = 1,249
• Mean: 36.3%, Std Dev: 133.3%, n = 1,047
• Mean: -2,835.0%, Std Dev: 41,953%, n = 555

FA%: 8.3% vs. 51.0%
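A sketch of the like-to-like idea: cluster SKUs on their DNA factors, then benchmark each SKU only against its own cluster. The deck used 100 clusters on real data; this toy version uses 3 well-separated synthetic groups and a minimal Lloyd's-algorithm k-means (all names and numbers invented).

```python
import numpy as np

def kmeans(X, centers, iters=20):
    """Minimal Lloyd's algorithm: assign rows of X to the nearest center,
    then move each center to the mean of its members."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(2)
# Three synthetic DNA profiles (e.g. low/medium/high volatility), 50 SKUs each
dna = np.vstack([rng.normal(m, 0.2, size=(50, 4)) for m in (0.0, 3.0, 6.0)])
fa = np.concatenate([rng.normal(m, 0.05, 50) for m in (0.75, 0.45, 0.15)])

labels = kmeans(dna, centers=dna[[0, 50, 100]])
# Like-to-like benchmark: mean forecast accuracy within each cluster
cluster_fa = {int(j): float(fa[labels == j].mean()) for j in np.unique(labels)}
```

A SKU's FA% is then judged against its own cluster's mean rather than a single company-wide number, which is what makes the 8.3%-vs-51.0% comparison on this slide meaningful.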

Slide 32: Other Potential Uses

» Forecast accuracy benchmarks by:

» Customer?

» Sales Channel?

» Salesperson?

» Product Brand?

» Forecasting Process

» Software?

» Forecast Model?

» Demand Planner?

Hello, hello, hello… Hello, objective customized forecast accuracy targets.


Slide 33: Summary of Key Points

• Forecasting is fundamental.

• Benchmarking sounds great, but it may be a black box filled with unknowns.

• Forecastability is helpful to the extent it helps us improve.

• A forecastability model built on each SKU's unique DNA can help us understand and compare our businesses more objectively.

Slide 34: Questions? New Ideas?

For more discussion contact:

Sean Schubert

[email protected]

[email protected]

REFERENCES

Boylan, J. (2009), Toward a More Precise Definition of Forecastability, Foresight, Issue 13 (Spring 2009), pp. 34-40.

Catt, P. (2009), Forecastability: Insights from Physics, Graphical Decomposition, and Information Theory, Foresight, Issue 13 (Spring 2009), pp. 24-33.

Gelman, A. and Hill, J. (2007), Data Analysis Using Regression and Multilevel/Hierarchical Models, New York: Cambridge University Press.

Gilliland, M. (2010), The Business Forecasting Deal: Exposing Myths, Eliminating Bad Practices, Providing Practical Solutions, Wiley and SAS Business Series.

Hawitt, D. (2010), Should You Report Forecast Error or Forecast Accuracy?, Foresight, Issue 18 (Summer 2010), p. 46.

Jain, C. (2011), Forecast Errors: How Much Have We Improved?, Journal of Business Forecasting, Summer 2011, p. 27.

Kahn, K. (2006), In Search of Forecastability, presentation at the Forecasting Summit, Orlando, FL, February 2006.

Kolassa, S. (2008), Can We Obtain Valid Benchmarks from Published Surveys of Forecast Accuracy?, Foresight, Issue 11 (Fall 2008), pp. 6-14.
