
VALUATION OF FLEETING OPPORTUNITIES

A DISSERTATION

SUBMITTED TO THE DEPARTMENT OF

MANAGEMENT SCIENCE AND ENGINEERING

AND THE COMMITTEE ON GRADUATE STUDIES

OF STANFORD UNIVERSITY

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS

FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

Ibrahim S. Almojel August 2010


http://creativecommons.org/licenses/by-nc/3.0/us/

This dissertation is online at: http://purl.stanford.edu/qz978wj4057

© 2010 by Ibrahim Saad Almojel. All Rights Reserved.

Re-distributed by Stanford University under license with the author.

This work is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License.


I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Ronald Howard, Primary Adviser

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Ali Abbas

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Samuel Chiu

Approved for the Stanford University Committee on Graduate Studies.

Patricia J. Gumport, Vice Provost Graduate Education

This signature page was generated electronically upon submission of this dissertation in electronic format. An original signed hard copy of the signature page is on file in University Archives.


Abstract

This dissertation is concerned with the study of problems that involve fleeting opportunities.

These are situations with limited capacity and a stream of opportunities on which the decision

maker must decide whether to capitalize as they occur. One example is the evaluation of

business proposals by Venture Capitalists (VCs). I have developed a process to help decision

makers in such situations set their policies and answer valuation questions. In the VC context,

this process helps evaluate the firm itself or the value of a partner in hiring/firing situations. It

can also help the organization’s decision makers determine the value of information-gathering

activities for different deals in different situations.

Our solution is based on three steps. In the first, we assess the frame and decision basis. In the

second, we build the valuation funnel. In the third, we apply the results of the valuation funnel

to decisions regarding the deal flow and specifically to deals as they arrive. We assume that the

risk preference of the decision maker follows a linear or an exponential utility function. We

formulate the generic problem in the valuation funnel as a dynamic program wherein the

decision maker can either accept a given deal directly, reject it directly, or seek further

information on its potential and then decide whether to accept it or not. This approach is

illustrated in this dissertation through several examples.

Our results show well-behaved characteristics of the optimal policy, deal flow value, and the

value of information and other alternatives over time and capacity. To enable these results, we

developed the power u-function further and defined a u-value multiplier and certain equivalent

multipliers. We also studied different approximations for the power u-curve based on a few

moments of the underlying distribution. The system we created gives a valuation template for

the fleeting opportunities problem. This template allows decision makers to address single

opportunities as they occur and guides their approach to the deal flow as a whole. We believe

this template will help to extend the application of Decision Analysis and spur more research

within the fleeting opportunities problem and, more generally, valuation problems.

Our process provides a valuation template for problems fitting the fleeting opportunities

description. This, we hope, will extend the use of Decision Analysis into new fields.


To my inspiration – my parents Saad and Halah

And to my life – my beloved Sarah and Saad


Acknowledgments

“Gratitude is not only the greatest of virtues, but the parent of all the others.”

Marcus Cicero

Obtaining a degree at Stanford is a life-changing experience. Working in such a dynamic, active,

and intellectual environment changes one’s perspective. I was lucky enough to experience a

wide spectrum of classes and appreciate the insights of the full range of professors.

I owe much gratitude to all the people who made this dissertation possible. My parents’

inspiration, my wife’s moral support, my siblings’ encouragement, and my son’s innocent

enthusiasm enriched my experience. My professors’ guidance changed my perspective and my

friends’ support and kindness created a home for me away from home.

My position with Professor Ron Howard was a learning experience in itself. Learning went

beyond Decision Analysis to ethics and legal systems. His logical and consistent view, relevant

across all aspects of life, was what enlightened me most. I will forever be indebted to him for

my learning. I am also indebted to my advisors Jim Matheson, who worked closely with me

through the defense, and Ali Abbas, who introduced me to Decision Analysis and the path to

changing my life.

My friends provided me with a rich intellectual environment at Stanford. Our continuous

discussions at the Decision Analysis Group broadened my horizons in a fun-loving atmosphere.

The Muslim and Arab communities at Stanford were a home for me away from home. The list is

long and as diverse as the World itself. I will forever cherish our memories together.

I am most appreciative of my family. My parents, Saad and Halah, are my inspiration and

motivation throughout my life. I gained a new sense of appreciation for them after my son was

born. My wife Sara is the warmth of my life, and her support and care keep me going every day.

My son’s innocent enthusiasm and excitement is my delight. I owe my siblings Muhammad,

Maha, and Rana for their endless encouragement.


Table of Contents

ABSTRACT .................................................................................................................................. IV

ACKNOWLEDGMENTS ............................................................................................................ VI

TABLE OF CONTENTS ........................................................................................................... VII

LIST OF FIGURES .................................................................................................................... XII

CHAPTER 1 – INTRODUCTION .............................................................................................. 1

1.1 A New Research Direction .......... 1
    1.1.1 The Need to Formalize DA .......... 1
    1.1.2 Introducing DA Methodologies .......... 1

1.2 The Research Process .......... 2
    1.2.1 The Motivating Problem .......... 2
    1.2.2 Intuitive Understanding of the Motivating Problem .......... 3
    1.2.3 The Research Problem .......... 3
    1.2.4 Solution to the Research Problem .......... 4
    1.2.5 Intuitive Understanding of the Research Problem .......... 6
    1.2.6 Solution to Motivating Problem .......... 6

1.3 Dissertation Structure ........................................................................................................................ 6

1.4 Using this Dissertation ........................................................................................................................ 6

CHAPTER 2 – REVIEW & SUMMARY OF CONTRIBUTIONS .......................................... 8

2.1 Literature Review .......... 8
    2.1.1 Risk Aversion .......... 9
    2.1.2 The Value of Information .......... 9
    2.1.3 Stand-alone Opportunities .......... 10
    2.1.4 Repeated Opportunities .......... 12

2.2 Contributions .......... 14
    2.2.1 Risk Aversion .......... 14
    2.2.2 The Value of Information .......... 14
    2.2.3 Stand-alone Opportunities .......... 15
    2.2.4 Repeated Opportunities .......... 15

2.3 Impact of These Contributions ......................................................................................................... 15


CHAPTER 3 – RISK ATTITUDE ............................................................................................ 16

3.1 The Exponential U-Curve .......... 16
    3.1.1 Introduction .......... 16
    3.1.2 Definitions .......... 18
    3.1.3 Applying the Exponential U-Curve .......... 18
    3.1.4 The Value of Information and Control .......... 18
    3.1.5 Proximal Analysis .......... 18

3.2 The Power U-Curve .......... 19
    3.2.1 Introduction .......... 19
    3.2.2 Definitions .......... 23
    3.2.3 Applying the Power U-Curve .......... 25
    3.2.4 The Value of Information and Control .......... 27
    3.2.5 Proximal Analysis of the Power U-Function .......... 29

3.3 Comparison between Exponential and Power U-Functions .......... 32
    3.3.1 Introduction .......... 32
    3.3.2 Comparison when Considering a Single Deal .......... 32
    3.3.3 Comparison when Considering Fleeting Opportunities .......... 33

CHAPTER 4 – MODEL AND NOTATION ............................................................................ 35

4.1 General Description ......................................................................................................................... 35

4.2 Components of the Model: Deals, Time Horizon, and Capacity ........................................................ 36

4.3 The Additive Model .......... 37
    4.3.1 Representation of Deals .......... 37
    4.3.2 Model Layout .......... 38

4.4 The Multiplicative Model .......... 39
    4.4.1 Representation of Deals .......... 39
    4.4.2 Model Layout .......... 40

4.5 Terms and Notation ......................................................................................................................... 40

CHAPTER 5 – STEP 1: FRAME AND DECISION BASIS ................................................... 43

5.1 Overview ......................................................................................................................................... 43

5.2 Framing .......... 44
    5.2.1 Framing the Deal Flow .......... 45
    5.2.2 Framing the Deals .......... 45

5.3 Basis for the Decisions .......... 46
    5.3.1 Preferences .......... 46
    5.3.2 Alternatives .......... 47
    5.3.3 Information .......... 47


CHAPTER 6 – STEP 2.1: ADDITIVE VALUATION FUNNEL .......................................... 49

6.1 Overview .......................................................................................................................................... 49

6.2 Basic Problem Structure .......... 50
    6.2.1 Definitions .......... 51
    6.2.2 The Value of Control .......... 53

6.3 Main Results .......... 53
    6.3.1 Optimal Policy .......... 53
    6.3.2 Characterizing the Certain Equivalent and Threshold .......... 53
    6.3.3 Characterizing Indifference Buying Prices .......... 54
    6.3.4 Characterizing the Optimal Policy .......... 56
    6.3.5 Multiple Detectors .......... 58

6.4 The Long-Run Problem .......... 59
    6.4.1 Problem Structure and Definitions .......... 60
    6.4.2 Extension of the Results to the Long-Run Problem .......... 61
    6.4.3 Policy Improvement Algorithm .......... 62

6.5 Extensions .......... 64
    6.5.1 Multiple Cost Structures .......... 64
    6.5.2 Decision Reversibility .......... 67
    6.5.3 The Probability of Knowing Detectors .......... 69

CHAPTER 7 – STEP 2.2: THE MULTIPLICATIVE VALUATION FUNNEL .................. 72

7.1 Overview .......................................................................................................................................... 72

7.2 Basic Problem Structure .......... 73
    7.2.1 Definitions .......... 74
    7.2.2 The Multiple of Control .......... 76

7.3 Main Results .......... 76
    7.3.1 Optimal Policy .......... 76
    7.3.2 Characterizing the Certain Multiplier and Threshold .......... 76
    7.3.3 Characterizing Indifference Buying Fractions .......... 77
    7.3.4 Characterizing the Optimal Policy .......... 80
    7.3.5 Multiple Detectors .......... 81

7.4 The Long-Run Problem .......... 83
    7.4.1 Problem Structure and Definitions .......... 83
    7.4.2 Extension of the Results to the Long-Run Problem .......... 84

7.5 Extensions .......... 85
    7.5.1 Multiple Cost Structures .......... 85
    7.5.2 Decision Reversibility .......... 88
    7.5.3 The Probability of Knowing Detectors .......... 90


CHAPTER 8 – STEP 3: FUNNEL APPLICATION AND EXAMPLES............................... 93

8.1 Overview ......................................................................................................................................... 93

8.2 Application of the Funnel ................................................................................................................. 94

8.3 Two Types of Decisions: Meta and Real-Time .................................................................................. 94

8.4 Setup Examples ................................................................................................................................ 95

8.5 Real-Time Application Examples .......... 97
    8.5.1 Real-Time Setup Example .......... 97
    8.5.2 Should Saad Buy Information? .......... 98
    8.5.3 Should Saad Invest in the Deals? .......... 98

8.6 Examples of Meta-Decision Applications .......... 99
    8.6.1 Situation 1: Choosing the Focus of the Firm .......... 99
    8.6.2 Situation 2: Hiring Partners: Evaluating Synergies in Their Skills .......... 102
    8.6.3 Situation 3: Hiring Partners: Evaluating Their Skills .......... 103

CHAPTER 9 – CONCLUSIONS AND FUTURE RESEARCH ........................................... 104

9.1 Conclusions .................................................................................................................................... 104

9.2 Future Research ............................................................................................................................. 105

BIBLIOGRAPHY .................................................................................................................... 107

APPENDIX A1 – PROOFS .................................................................................................... 113

A1.1 Chapter 3 Proofs .......................................................................................................................... 113

A1.2 Chapter 6 Proofs .......... 118
    A1.2.1 Section 6.3 Main Results .......... 118
    A1.2.2 Section 6.4 The Long-Run Problem .......... 134
    A1.2.3 Section 6.5.1 Extensions – Multiple Cost Structures .......... 140
    A1.2.4 Section 6.5.2 Extensions – Decision Reversibility .......... 142
    A1.2.5 Section 6.5.3 Extensions – Probability of Knowing Detectors .......... 143

A1.3 Chapter 7 Proofs .......... 151
    A1.3.1 Section 7.3 Main Results .......... 151
    A1.3.2 Section 7.4 The Long-Run Problem .......... 167
    A1.3.3 Section 7.5.1 Extensions – Multiple Cost Structures .......... 174
    A1.3.4 Section 7.5.2 Extensions – Decision Reversibility .......... 176
    A1.3.5 Section 7.5.3 Extensions – Probability of Knowing Detectors .......... 177


APPENDIX A2 – A GENERIC DECISION DIAGRAM EXAMPLE ................................. 185

A2.1 Introduction ................................................................................................................................. 185

A2.2 Generic Diagram Setup and Frame ............................................................................................... 185

A2.3 Generic Diagram Node Definitions ............................................................................................... 187

A2.4 Deeper Layers .............................................................................................................................. 191

A2.5 Extensions .................................................................................................................................... 194

APPENDIX A3 – VENTURE CAPITAL VALUATION ..................................................... 196

A5.1 Literature Review .......... 196
    A5.1.1 Valuation Overview .......... 196
    A5.1.2 The Venture Capital Method .......... 197
    A5.1.3 The First Chicago Method (FCM) .......... 199

A5.2 The ‘Real VC’ View ....................................................................................................................... 199

A5.3 Summary ...................................................................................................................................... 201


List of Figures

Figure 1 - Research Process .......... 2
Figure 2 - Solution to the Research Problem .......... 5
Figure 3 - Valuation Funnel .......... 5
Figure 4 - Basic Dynamic Program Structure .......... 5
Figure 5 - Power u-function for λ ≤ 0 .......... 21
Figure 6 - Power u-function for 0 < λ < 1 .......... 21
Figure 7 - Example 3.2.1 Original Deal Structure .......... 24
Figure 8 - Example 3.2.1 Simplified Deal .......... 24
Figure 9 - Example 3.2.2 Deal Structure with a Combined Deal .......... 26
Figure 10 - Example 3.2.3 Deal Structure with Perfect Information .......... 27
Figure 11 - Example 3.2.4 Improved Deal Structure .......... 28
Figure 12 - Power Function approximation error converges to zero .......... 32
Figure 13 - Deal Flow Structure .......... 36
Figure 14 - Deal Setup Structure .......... 36
Figure 15 - Representation of Deals in the Additive Model .......... 38
Figure 16 - Additive Model Layout .......... 38
Figure 17 - Representation of Deals in the Multiplicative Model .......... 39
Figure 18 - Multiplicative Model Layout .......... 40
Figure 19 - Step 1 of the Solution Process .......... 43
Figure 20 - Two Levels of Framing .......... 44
Figure 21 - Generic Diagram for Internet Consumer Startups .......... 46
Figure 22 - Decision Basis .......... 46
Figure 23 - Modeling the Prior Deals .......... 48
Figure 24 - Step 2 of the Solution Process .......... 49
Figure 25 - Basic Problem Structure .......... 51
Figure 26 - Example Deal Structure .......... 52
Figure 27 - Example Deal Flow .......... 52
Figure 28 - Examples of Deal Flow Values .......... 54
Figure 29 - Characterizing the IBP of Information .......... 55
Figure 30 - Example IBP Value .......... 56
Figure 31 - Characterizing Optimal Policy .......... 57
Figure 32 - Example of Optimal Policy Over Time .......... 58
Figure 33 - Example 6.3.4: The ordering of detectors is not myopic. .......... 59
Figure 34 - Example 6.3.4: The ordering of detectors is not myopic. .......... 59
Figure 35 - Problem Structure with Infinite Horizon .......... 61
Figure 36 - Problem Structure with Information at Multiple Cost Types .......... 65
Figure 37 - Example 6.5.1: Value of the deal flow with clairvoyance at different cost structures .......... 66
Figure 38 - Example 6.5.1: Incremental multiple of the deal with clairvoyance at different cost structures .......... 67
Figure 39 - Problem structure with an option to reverse an allocation decision .......... 69
Figure 40 - Problem Structure with the Probability of Knowing Detectors .......... 70
Figure 41 - Step 2 of the Solution Process .......... 72
Figure 42 - Basic Problem Structure .......... 74
Figure 43 - Example Deal Structure .......... 75


Figure 44 - Example Deal Flow .......... 75
Figure 45 - Example Deal Flow Value .......... 77
Figure 46 - Characterizing the IBF of Information .......... 78
Figure 47 - Example IBF Value .......... 79
Figure 48 - Characterization of Optimal Policy .......... 80
Figure 49 - Example Optimal Policy Over Time .......... 81
Figure 50 - Example 7.3.4: The ordering of detectors is not myopic. .......... 82
Figure 51 - Example 7.3.4: The ordering of detectors is not myopic. .......... 82
Figure 52 - Problem Structure with Infinite Horizon .......... 84
Figure 53 - Problem Structure with Information at Multiple Cost Types .......... 86
Figure 54 - Example 7.5.1: Multiples of the Deal Flow with Clairvoyance at Different Cost Structures .......... 87
Figure 55 - Example 7.5.1: Incremental multiples of the deal with clairvoyance at different cost structures .......... 87
Figure 56 - Problem Structure with an Option to Reverse an Allocation Decision .......... 90
Figure 57 - Problem Structure with Probability of Knowing Detectors .......... 91
Figure 58 - Step 3 of the Solution Process .......... 93
Figure 59 - Two Levels of Funnel Application .......... 94
Figure 60 - Application Example: Hardware Deals .......... 96
Figure 61 - Application Example: Software as Service Deals .......... 96
Figure 62 - Example 8.4: Real-Time Decisions Structure .......... 97
Figure 63 - Example 8.4: Should Saad Buy Information? .......... 98
Figure 64 - Example 8.4: Should Saad Invest in the Deals? .......... 99
Figure 65 - Example 8.5: Should Saad Focus on Hardware or Software? .......... 100
Figure 66 - Example 8.5: What Combination of HW and SW Should Saad Focus on? .......... 100
Figure 67 - Example 8.5: With Information, Should Saad Focus on Hardware or Software? .......... 101
Figure 68 - Example 8.5: With Information, What Combination of HW and SW Should Saad Focus on? .......... 101
Figure 69 - Example 8.5: Are the Partner’s Skills Complementary or Synergetic? .......... 102
Figure 70 - Example 8.5: How Good Must the Partner’s Information be to Justify Hiring her? .......... 103
Figure 71 - Example of a Generic Decision Diagram .......... 186
Figure 72 - Generic Diagram Node Key .......... 187
Figure 73 - Venn Diagram of the Competitive Landscape .......... 192
Figure 74 - Cluster Diagram of the Competitors’ Node .......... 193
Figure 75 - Types of Market Share .......... 193
Figure 76 - Example of an internal diagram for Market Share .......... 194


Chapter 1 – Introduction

"Life is but Fleeting Opportunities…"

1.1 A New Research Direction

1.1.1 The Need to Formalize DA

Four decades after the introduction of Decision Analysis (DA), applications for this process

are still lacking. In order to disseminate DA we need to formalize it and the way we

communicate it. Howard and Matheson identified a path for the future of DA in their (1968)

paper. The following is a summary of their recommendation:

“Decision analysis procedures will become standardized so as to yield special forms

of analyses for the various types of decisions, such as marketing strategy, new

product introduction, and research expenditures. This standardization will require

special computer programs, terminology, and specialization of concepts for each

kind of application.”

We are pleased and eager to call for this specialization and standardization of DA. We

present a method for doing so in Section 1.1.2, including a suggested standardization method that we follow in this dissertation.

1.1.2 Introducing DA Methodologies

We suggest standardizing the decision analysis cycle and the principles of DA for specific areas of application. The researcher should tackle a specific area of application, study the challenges of applying DA in that particular area, and then customize the DA cycle by developing new tools, definitions, and concepts that address those challenges. We describe the process in the following section.

Definition 1.1.1: DA Methodologies

We define DA Methodologies as a customization of the DA cycle to address challenges within

an area of application. This customization is accomplished by introducing tools, definitions,

and new concepts to the DA cycle.


1.2 The Research Process

We suggest the following research process to serve as a DA methodology (see Figure 1):

• Begin with an observed motivating problem

• Develop an intuitive understanding of the motivating problem (what makes it challenging)

• Abstract the motivating problem to a research problem

• Develop a solution to the research problem

• Develop an intuition about the research problem (what impact this problem has)

• Customize our solution to the specific motivating problem

Figure 1 - Research Process

In the following sections, we describe each step of the research process and give a summary

of our work within that step.

1.2.1 The Motivating Problem

This work was motivated by a personal experience. My friends started a company for which I

helped to raise the capital. I was struck by the challenge of how to value the company. After going through the venture capital (VC) literature and talking to Venture Capitalists, it was clear that the methods currently in use were insufficient. This got me interested in the problem of evaluating startups. Being a student of Decision Analysis, I naturally turned to valuing the startup using the DA process. I give a literature review of the

problem of valuing stand-alone opportunities in Chapter 2, and present a more detailed view

of venture capital valuation in Appendix A3.


The VC example holds many interesting questions that we can approach through this

process. For example, should the VC invest in a specific deal at hand? Should he buy

information relevant to the deal? What category of deals should the firm focus on? How

good should a potential partner’s expertise be to justify hiring him/her? We will answer

these motivating questions in Chapter 8.

1.2.2 Intuitive Understanding of the Motivating Problem

Consideration of the VC problem reveals that the real challenge facing a Venture Capitalist lies in analyzing the startup within the context of a multitude of deals requiring immediate action.

There are two dimensions to this challenge. The first relates to determining the optimal

response to each opportunity, given what lies in the future. The second relates to evaluating

alternatives with regard to the deal flow in general. The first dimension includes questions

like, “How much analysis is too much?” or, “Should we spend time on seeking information

about a specific startup or go on to analyzing a different one?”

The second dimension encompasses interesting questions that are difficult to resolve. Some

of the questions that we will discuss in this dissertation are, “What category of deals should

the firm focus on?” “Are a potential partner’s abilities and skills complementary or

substitutes?” and, “How good should a potential domain expert’s information-gathering

ability be to justify hiring him/her?”

1.2.3 The Research Problem

In this section we define the class of problems that are suitable for our process. This is an abstraction of the challenges we faced when attempting to answer the motivating questions

above. First we describe the fleeting opportunities problem class, then we give its

constraints, and finally we list some other problems that can be answered within our

abstraction.

Problem Description

We consider situations in which decision makers have flows of opportunities becoming

available over time. They can only accept a limited number of deals and have to immediately

decide how to react to each deal as it arrives.


Problem Constraints

We limit the class of problems along two dimensions, namely, constraints on the deals and constraints on the decision makers.

1. Constraints on the deals
   1.1. The distribution of deals is irrelevant to the deals that have arrived in the past. This means that the decision makers’ belief about the deals does not change as they evaluate more deals.
   1.2. The effects of the deals have to be separable over time. Thus, the impact of a deal arriving in a certain period can be accounted for independently of deals already at hand and of deals arriving in the future.
   1.3. The deals are irrelevant to each other. Thus, there is no value to diversification or “hedging” across deals.
   1.4. The deals require immediate decisions. This constraint has two implications. First, the deals available do not change during the decision makers’ consideration. Second, decision makers cannot change their decisions later. We relax the second implication later in the dissertation by allowing decision makers to reverse a decision to accept a deal.

2. Constraints on the decision makers
   2.1. Decision makers are limited to the exponential and power u-curves.
   2.2. Decision makers have limited resources.
   2.3. Decision makers face a deadline after which they cannot accept more deals. We relax this constraint later in the dissertation but require discounting of future prospects.

Example Applications

This abstraction goes beyond the Venture Capital problem. For example, consider a movie

production company. The producers are continuously approached with scripts, which they

consider for production. They cannot accept all the scripts and have to decide on the scripts

as they arrive.

In Chapter 8, we give a detailed example using a Venture Capital context. We consider a VC firm contemplating where to focus its efforts. We also study the process of the firm hiring a new

partner.

1.2.4 Solution to the Research Problem

We developed a solution that consists of the following three steps:

Page 19: VALUATION OF FLEETING OPPORTUNITIESqz978wj4057...wide spectrum of classes and appreciate the insights of the full range of professors. I owe much gratitude to all the people who made

5

Figure 2 - Solution to the Research Problem

Setup

In the setup, we determine the frame and the decision basis (assessing beliefs, risk attitude,

etc.). We discuss setup in further detail in Chapter 5.

Valuation Funnel

Our solution to the challenge of fleeting opportunities lies in developing a valuation funnel.

The funnel is structured as follows:

Figure 3 - Valuation Funnel

The basic element of the valuation funnel is the following dynamic programming structure:

Figure 4 - Basic Dynamic Program Structure

At each chronological step, the decision maker decides whether to accept the deal at hand,

reject it, seek more information, or apply another alternative (e.g., control, options, hedging,

etc.).
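To make this recursion concrete, the following is a minimal sketch in Python of a risk-neutral backward induction over time and capacity, with a single imperfect detector as the only information alternative. The deal distribution, detector accuracy, and information cost below are illustrative assumptions, not values from the dissertation, and the risk-neutral objective is a simplification of the exponential and power u-curve treatment developed in Chapters 6 and 7.

    from functools import lru_cache

    # Illustrative inputs for this sketch (assumed, not taken from the text).
    DEAL_VALUES = [-50.0, 0.0, 200.0]   # possible net payoffs of an arriving deal
    DEAL_PROBS  = [0.5, 0.3, 0.2]       # prior probabilities of those payoffs
    INFO_COST   = 5.0                   # price of consulting the detector
    ACCURACY    = 0.9                   # P(detector report matches deal quality)

    def posterior(report):
        """Bayes update of the deal prior given a 'good'/'bad' detector report."""
        likelihood = [ACCURACY if ((v > 0) == (report == "good")) else 1.0 - ACCURACY
                      for v in DEAL_VALUES]
        joint = [lk * p for lk, p in zip(likelihood, DEAL_PROBS)]
        p_report = sum(joint)
        return [j / p_report for j in joint], p_report

    @lru_cache(maxsize=None)
    def flow_value(t, n):
        """Value of the remaining deal flow with t periods and n capacity slots left."""
        if t == 0 or n == 0:
            return 0.0
        reject = flow_value(t - 1, n)
        # Alternative 1: decide on the deal using the prior alone.
        accept = sum(p * v for p, v in zip(DEAL_PROBS, DEAL_VALUES)) + flow_value(t - 1, n - 1)
        no_info = max(accept, reject)
        # Alternative 2: buy the detector report, then accept or reject given the posterior.
        with_info = -INFO_COST
        for report in ("good", "bad"):
            post, p_report = posterior(report)
            accept_post = sum(p * v for p, v in zip(post, DEAL_VALUES)) + flow_value(t - 1, n - 1)
            with_info += p_report * max(accept_post, reject)
        return max(no_info, with_info)

    print(flow_value(10, 3))   # e.g., a 10-period deal flow with capacity for three deals

The same backward induction is what yields, at each (time, capacity) state, the acceptance thresholds and the values of information characterized later in the dissertation.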



1.2.5 Intuitive Understanding of the Research Problem

Our solution provides a DA Methodology for a large class of problems that include, among

others, Venture Capitalists, book publishers, and academic search committees.

Characterizing the optimal policy and the values within the deal structure can provide a

deeper understanding of the underlying problems. The monotonicity of the optimal policy

and the predictable structure of the values of information and control are also worth noting.

1.2.6 Solution to Motivating Problem

We apply the solution process we developed in this dissertation to answer the motivating

questions we listed in Section 1.2.1.

1.3 Dissertation Structure

The dissertation is organized as follows. We discuss relevant research and background in

Chapter 2. In Chapter 3 we discuss modeling the risk attitude of the decision maker. We

focus on two u-functions, namely, the exponential and the power u-functions. In Chapter 4

we define the model and notation used to represent the problem of fleeting opportunities. In

chapters 5, 6, 7, and 8 we discuss our valuation template. In Chapter 5, we describe Step 1:

modeling the frame and the decision basis. Step 2, building the valuation funnel, is discussed

in chapters 6 and 7. In Chapter 6 we discuss the additive setup and in Chapter 7 we discuss

the multiplicative setup. Step 3, applying the funnel, is explained in Chapter 8, along with

detailed examples of applying the process. In chapter 9 we highlight directions for future

research and conclude.

Our proofs are included in Appendix A1. In Appendix A2 we give an example of a generic

decision diagram. We give an overview of the venture capital method in Appendix A3.

1.4 Using this Dissertation

This dissertation is developed as a DA methodology for problems fitting the fleeting

opportunities structure. It is a valuation template that practitioners may follow to help

decision makers with fleeting opportunities. Our dissertation structure is intended to help

practitioners apply the results of the dissertation. Practitioners may use Chapter 3 to assess

and approximate the risk attitude of the decision maker. Chapter 5 outlines the first step of

the solution process and structures the framing and decision basis process. The goal of this


step is to formulate the deal flow. After this step, decision makers should have clearly framed the problem boundaries and assessed the alternatives, uncertainties, and preferences relevant to the

problem.

Chapters 6 and 7 address the second step for the additive and multiplicative settings,

respectively. The goal of this step is to build the valuation funnel. After carrying out this step,

decision makers will have determined the factors involved in the decision about each deal,

including the different buying prices, information, control, etc. These incremental values, or

multipliers, help the decision maker make real-time choices when alternatives are available.

In addition, the resulting valuation funnel can be easily used to evaluate alternatives

concerning the deal flow as a whole. The two chapters are intended to be independently self-

sufficient, so that the practitioner may use the one relevant to the problem at hand. For this

reason, the two chapters follow the same structure and discuss the same situations.

Chapter 8 discusses the third step of the process. The goal of this step is to apply the valuation funnel. By the end of this step, the decision maker may use the

results of the valuation funnel to evaluate meta-decisions that relate to the process itself.

Additionally, the results will provide the optimal policy regarding real-time decisions.


Chapter 2 – Review & Summary of Contributions

“Observe the wonders as they occur around

you. Don't claim them. Feel the artistry moving

through and be silent.”

Jalal ad-Din Rumi

In this chapter we give an overview of the literature and a summary of our contributions. In

Section 2.1 we review the relevant literature and then follow with a summary of our contributions in Section 2.2. In Section 2.3 we highlight the impact of our contributions.

2.1 Literature Review

Our motivating problem is studied in the literature in two main strands. The first deals with

evaluating decisions focused on an opportunity as it stands alone. The second evaluates

decisions about opportunities in the context of repeated offerings. We review both areas of

literature in the following sections, first giving a brief review of the relevant literature in risk

attitude and value of information as our proposed solution contributes to these areas. We

refer the reader to papers by Howard and recommend the upcoming book by Howard &

Abbas (2011E).

The Decision Analysis view of probability is based on the concept of probability as a measure

of belief. The underlying premise is the Bayesian updating of information originally discussed

in Bayes (1763). Laplace (1902) gives an early discussion of this view of probability. Howard

(1965) discusses the application of Bayesian methods in systems engineering. Howard (1992)

revisits this view of probability and compares it to the prevalent statistics view of probability.

Jaynes (2003) gives a detailed study of probability as a measure of belief.

In this dissertation we follow the nomenclature suggested by Howard (2004). Howard

emphasizes using precise language for decision distinctions. While these terms might differ from

the accepted norms in the literature, we adopt them for the sake of clarity.


2.1.1 Risk Aversion

For a general overview of the foundations of the theory of risk aversion, we refer the reader

to the seminal papers of Arrow (1971) and Pratt (1964). We are specifically interested in two

forms of u-curves, namely, the exponential u-curve and the power u-curve. For an overview

of the exponential u-curve we refer the reader to Howard (1998). One of the earlier

applications of the power u-curve is that of Kelly (1956), who recommends setting the

objective function to the logarithmic u-curve (a special case of the power u-curve). Thorp

(1997) studies the logarithmic u-curve in the context of practical applications. Cover (1991)

discusses the power function in detail, albeit within the context of information theory. We

refer the reader to Abbas (2003) for advanced concepts on utility functions.

We discuss the exponential and power u-curves in more detail in Chapter 3, where we also

present our extensions to the power u-curve.
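For reference, one common parameterization of these two families is given below; the normalization, and in particular the exact form of the power u-curve, may differ from the one developed in Chapter 3:

    u_{\mathrm{exp}}(x) = -e^{-\gamma x}, \qquad \gamma > 0,

    u_{\mathrm{pow}}(x) =
    \begin{cases}
      x^{\lambda}/\lambda, & \lambda \neq 0,\\
      \ln x,               & \lambda = 0,
    \end{cases}
    \qquad x > 0.

The λ = 0 member of the power family is the logarithmic u-curve that Kelly (1956) recommends, and for the exponential u-curve the certain equivalent of an uncertain prospect X follows directly as \mathrm{CE}(X) = -\tfrac{1}{\gamma}\ln \mathrm{E}\!\left[e^{-\gamma X}\right].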

2.1.2 The Value of Information

The value of information has been studied in a variety of forms in the literature. We are

concerned with buying information to improve deal selection; thus, we are concerned with

the economic value of buying information and the personal indifference buying price of

information (PIBP). For a review of this concept, please refer to Howard (1967). The results in

this paper can be directly extended to the value of control. We refer the reader to Matheson

and Matheson (2005) for more details on this concept.
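Stated symbolically, and in generic notation rather than the dissertation's, the PIBP b of an information source I is the price at which the decision maker is indifferent between buying the report and acting without it:

    \max_{\delta(\cdot)} \; \mathrm{E}\big[\, u\big(x_{\delta(I)} - b \big) \big]
    \;=\;
    \max_{a} \; \mathrm{E}\big[\, u(x_a) \big],

where a ranges over the alternatives, x_a is the uncertain payoff of alternative a, and δ(·) is a decision rule that may depend on the observed report I. For the exponential u-curve, the delta property makes this buying price coincide with the increase in certain equivalent produced by free information.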

Results of research on the value of information in specific decision situations have been

characterized in the literature. One such result is that the value of free, but possibly

imperfect, information is always nonnegative and is bounded by the value of perfect

information. Another result is that the value of information is positive if and only if it changes

the optimal decision; if the information does not compel a change in the optimal decision, its

value is zero.

Gould (1974) showed the lack of a monotonic relationship between the value of information

and the risk aversion coefficient. Hilton (1981) surveyed the properties of the value of

information. Hilton further demonstrated the lack of a monotonic relationship between the

value of information and decision flexibility. In a different domain, Barron and Cover (1988)

studied the value of information in repeated gambles with logarithmic utility. They defined

the value of information in growth ratios and proposed bounds on the value.


More recently, Delquié (2008) gave a brief overview of the research on the value of

information and attempted to characterize the value of information through the intensity of

preference, i.e., the difference in utility across alternatives. Additionally, Bickel (2008)

defined and characterized the relative value of information (RVOI) as the value of imperfect

information relative to that of perfect information. Using normal priors, exponential utility,

and two alternatives, Bickel showed that RVOI is maximal when the decision maker is

indifferent between the two alternatives.

2.1.3 Stand-alone Opportunities

We classify the methods of evaluating stand-alone opportunities into three categories,

namely, discounted cash flow methods, decision analytic methods, and application-specific

methods.

Discounted Cash Flow

Most valuation systems in the business world are based on discounted cash flow (DCF)

analysis. Here, decision makers are asked to estimate the future cash flows of the different

alternatives, discount them for risk, and finally choose the alternative with the highest net

present value. The main issue with such a system is that it mixes time and risk preference in a

single parameter, that is, the discount factor. Still, the DCF literature is useful for our

purposes, as it gives a detailed view of how to build the economic model of a decision before

we incorporate uncertainties. For the practitioner, we suggest Damodaran (2002) and

Copeland et al. (2005). A brief tutorial of DCF is given in Jennergen (2002).

Decision Analysis

This dissertation is based on the Stanford system of Decision Analysis (DA). We assume that

the reader is well versed in DA. The use of DA allows the decision maker to assess risk and

time preference separately. The principles of DA allow the decision maker to calculate the

value of information and control, among other quantities. The term “Decision Analysis” was

coined by Howard in his seminal paper Howard (1966a). Other key papers by Howard include

Howard (1966b), where he presented an early treatment of the value of information.

Howard (1968) gives his early detailed study of decision analysis. He revisited his analysis in

several later papers, most notably Howard (1988) and Howard (2007).


Another set of references for DA is based on the view of multiple objectives. Notable papers

in this strand include Pratt et al. (1964), Raiffa (1968), Keeney & Raiffa (1976), and Keeney

(1982).

For applications of DA, we refer the reader to McNamee & Celona (1990), Matheson &

Matheson (1998), and Matheson (1983). On assessments, we refer the reader to Tversky &

Kahneman (1974) for their seminal paper on biases and to Spetzler & von Holstein (1975) for

a discussion of assessments in the light of biases.

Decision diagrams are an essential tool for framing and structuring the decision problem.

They were first introduced by Howard & Matheson (1981). We refer the reader to Shachter

(1986) and Shachter (1988) for a detailed study of decision diagrams and influence diagrams.

Specific Applications

In addition to the above two methods, there is a variety of research on valuation and

decision-making in specific areas of application. These studies span both the descriptive and

prescriptive methods. For example, we give a brief review of the literature of venture capital

decision-making. The prescriptive literature listed below conflicts with the principles of

Decision Analysis largely in its attitude towards uncertainty.

Descriptive

The descriptive research of VC decision-making began in the 1970s. Wells (1973) was among the earliest to examine the criteria used by VC to evaluate ventures. The

earlier papers suggested that VC focus more on the characteristics of the team, while the

later papers suggest that the industry and market characteristics are more important. Most

of this work, however, was done through interviews or questionnaires and had small sample

sizes. Fried & Hisrich (1988) gave a more extensive study and suggested a unifying model for

describing the VC decision-making process. They conclude that VC apply three principles

when evaluating investments, namely, the viability of the project, the capability of the

management, and the size of the prospects. Multiple authors extended this work, most

notably Zacharakis & Shepherd (2001), who incorporated specific cognitive biases that affect

VC. Others include Zacharakis & Meyer (1998) and Shepherd & Zacharakis (1999). In

Appendix A3 we give an overview of VC valuation in the literature and from VC surveys.


Prescriptive

The interest in building prescriptive models for VC decision-making grew in the 1980s. One of

the earlier models was developed by Tyebjee & Bruno (1984). This work was further

extended by others, notably MacMillan et al. (1985) and MacMillan et al. (1987). These

models were based on correlation relations. Zacharakis & Meyer (2000) and Shepherd &

Zacharakis (2002) argue that decision aids should be the focus of future research on VC

decision-making. They also tested some of the previous models and found that they

outperformed VC in their sample. Kemmerer et al. (Unpublished) developed a causal

Bayesian network as a decision aid for Venture Capitalists.

2.1.4 Repeated Opportunities

After our overview of the literature on stand-alone opportunities, we turn to evaluating

opportunities within a deal flow. The deterministic problem that relates to our work is the

Knapsack Problem (KP) and the probabilistic problem related to our work is the Secretary

Problem (SP). Of the later extensions to the secretary problem, the Dynamic Stochastic Knapsack Problem (DSKP) is particularly worth noting. In the following we give a brief review of the KP, SP,

and the DSKP.

Knapsack Problem

In this static resource allocation problem, the resource capacity is known and the requests

have known requirements and rewards. The goal is to maximize the rewards obtained by

distributing the resources most effectively over the requests. The stochastic version of the KP

differs in that the rewards and/or requirements are random, while the set of items is still

known. The stochastic KP was first introduced by Ross & Tsang (1989). Multiple objective

functions were studied, including ones maximizing the mean value, the percentile of total

value, and a linear combination of mean and variance. Refer to Kellerer et al. (2004) for a

detailed overview of the different variations.

Secretary Problem

In the original secretary problem, a series of secretaries is interviewed until one offer is

made. The objective is to allocate a single resource (a job) to a single request (a secretary

application) with the aim of obtaining one of the best applicants; that is, to minimize the

ranking of the applicant selected relative to the others. Gilbert & Mosteller (1966) first

extended the secretary problem to allow for multiple choices. Two types of objectives were


studied; the first endeavors to minimize the ranks of the requests granted and the second to

maximize the rewards. Some of the main problems considered in reward maximization

include the asset selling problems and the sequential stochastic assignment problems (SSAP).

Kleinberg (2005) considered the DSKP as a generalization of this strand.

In the asset selling problem, also called the ‘full information’ secretary problem, the decision

maker holds assets for which offers (deals) come in over time. In MacQueen & Miller (1960),

the decision maker has a holding cost and the offers are random. In the SSAP, the problem is

concerned with assigning people to requests arriving over time. Each person has a known

value and each job has a random value, which becomes known upon arrival. When a job is assigned to a person, the decision maker receives the product of their two values. Refer to Derman et al. (1972) for a discussion of the SSAP.

Dynamic Stochastic Knapsack Problem

The Dynamic Stochastic Knapsack Problem was studied by Papastavron et al. (1996) and

Kleywegt & Papastavron (1998). Their original setup considered a limited capacity resource

with requests arriving randomly over time. The requests have random rewards and/or

random capacity requirements that only become known upon arrival. The decision maker

must decide in real time whether to accept or reject the request, where recall is not allowed.

The study showed that the optimal policy takes the form of a threshold. They also studied how the optimal policy changes as the deal flow progresses over time. They excluded

the possibility of seeking information and hence required that the decision maker either

accept or reject an incoming deal. Furthermore, they did not allow the decision maker to be

risk-averse.

Kleywegt (1996) studied different variations of the DSKP. He worked on discrete and

continuous time horizons and random requirements and/or rewards. Along similar lines, Van

Slyke & Young (2000) studied yield management problems. Boulis & Srivastava (2004) used

the DSKP to design power-saving mechanisms in wireless networks. Feller (2002) extended

the DSKP to a multidimensional setting in which the resource has multiple dimensions across

which the requests have their requirements. While extensions to the DSKP were varied, they

usually focused on applying the DSKP to different problems without much extension to the

underlying problem structure.


2.2 Contributions

Now that we have reviewed the extant research, we highlight our contributions, which

address some of the limitations in the research just described. While our focus is in the

context of repeated opportunities, we offer some contributions in the context of stand-alone

opportunities. A summary of our contributions is given below.

2.2.1 Risk Aversion

We extend the application of the power u-curve by defining the certain equivalent multiplier

and the u-value multiplier. The use of these definitions simplifies the calculations in the

multiplicative context. This is of special interest to us, as many of the motivating questions

we consider deal with situations in which equity is considered the medium of payment, and

hence the prospects are multiplicative.

We additionally studied different approximations to the power u-curve. Here we

approximate the results using several moments of the underlying distribution. It is

worthwhile to note the approximation of the u-multiplier as a function of the moments of

the log of the prospect’s probability distribution. For a large range of risk aversion, this

approximation converges after only a few moments.

2.2.2 The Value of Information

The value of information has proven difficult to characterize in general in the literature.

In the structure of fleeting opportunities we characterized the value of the deals with

information in relation to the parameters of the model (i.e., capacity and time). More

interestingly, we characterized the value (IBP) of information and found that it exhibits a

complex relationship with both time and capacity. In relation to both parameters, the IBP

first increases until it reaches a maximum at a well-defined quantity before decreasing again

until it converges at the value of information on the specific deal outside the deal flow. In

other words, we found that the IBP for information about a given deal as part of the deal

flow begins with a smaller value than that for the deal as it stands alone and then reaches a

maximum that is often higher than the IBP of the information outside the deal flow before it

converges to that value. The IBP within the deal flow begins lower because the decision maker retains the opportunity to wait for a better deal, which acts, in effect, much like information, as expected. However, the


observation that the value of information would actually exceed that of the IBP of

information outside the deal flow was an unexpected result.

2.2.3 Stand-alone Opportunities

We further developed the understanding of the power u-curve within a multiplicative setting

so that it can be incorporated into the DA process. We defined the certain equivalent

multiplier (CM) and defined the value of information and control in terms of the CM. We also

studied how to approximate the CM based on the moments of the underlying distribution.

2.2.4 Repeated Opportunities

We extended the research by allowing a multiplicative setting and integrated more

alternatives. Specifically, we incorporated risk aversion (exponential and power u-curves),

value of information, value of control, and value of options. In the case of information, we

studied how the value of information changes with time and capacity. We also characterized

the optimal policy against time and capacity. Further, we discussed extending the cost

structure to include capacity and time requirements. We also studied relaxing the deadline

requirement and allowing an infinite horizon.

In our extensions, we were able to maintain the intuitive threshold optimal policies. We also

studied how the value of the deal flow changes with capacity and time. The results were

similar in both the additive and multiplicative settings.

2.3 Impact of These Contributions

The system we created gives a valuation template for the fleeting opportunities problem.

This template allows decision makers to answer questions about single opportunities as they

arise and about the deal flow as a whole. We believe this template will help extend the

application of Decision Analysis and spur more research within the fleeting opportunities

problem and, more generally, valuation problems.


Chapter 3 – Risk Attitude

“The actual science of logic is conversant at

present only with things either certain,

impossible, or entirely doubtful, none of which

(fortunately) we have to reason on. Therefore

the true logic for this world is the calculus of

Probabilities, which takes account of the

magnitude of the probability which is, or ought

to be, in a reasonable man's mind.”

James Maxwell

In this chapter we discuss the decision makers’ attitudes towards risk. We carry out our process using two u-curves, namely, the exponential and the power u-curves. We use them,

respectively, in the additive and multiplicative settings described above.

In section 1, we give a summary of the exponential u-curve. In Section 2, we discuss our work

on the power u-curve in detail. The proofs for our results in Section 2 are in the appendix. In

Section 3 we compare both u-curves and elaborate on the conditions leading to the choice of

one over the other.

3.1 The Exponential U-Curve

3.1.1 Introduction

In this section we give an overview of the exponential u-curve, its risk aversion coefficients,

and an assessment method. In Section 2 we give some definitions related to the exponential

u-curve. In Section 3 we discuss the u-curve’s application and then we specifically study the

value of information and control in Section 4. Finally, we give a proximal analysis in section 5.


Definition 3.1.1: Exponential U-Function

The exponential u-function is defined as:

u(x) = 1 − e^{−γx}

This function has one main parameter, γ, which is used to model the attitude towards risk.

Risk Attitudes

Absolute Risk Aversion

We study the risk attitude modeled by the exponential u-function using the Arrow-Pratt

coefficient of absolute and relative risk aversion. Refer to Arrow (1971) and Pratt (1964).

For the exponential u-function we find the absolute risk aversion coefficient as follows:

u''/u' = −γ² e^{−γx} / (γ e^{−γx}) = −γ

The coefficient is specified solely by γ and it determines the risk attitude of the decision

maker as follows:

• γ > 0: u''/u' < 0, risk-averse
• γ = 0: u''/u' = 0, risk-neutral
• γ < 0: u''/u' > 0, risk-seeking

Relative Risk Aversion

The Arrow-Pratt coefficient of relative risk aversion for the exponential u-function is given by

(u''/u') · x = −γ · x

Hence, the exponential u-function does not have a constant relative risk aversion coefficient; rather, it is its absolute risk aversion coefficient that is constant.

Assessment Method

Here we suggest a method, devised by Howard (1998), to assess γ. Decision makers are asked to answer the following question: given a deal in which they can gain x with probability p or lose x with probability (1 − p), for what value of p is the deal worth nothing to them? In other words, what is the value of p for which the following diagram is true?


After we assess p we can obtain γ as given by:

γ = ln(p / (1 − p))

For example, when p = 0.5, γ = 0 and the decision maker is risk-neutral.
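As a minimal illustration of this assessment, the following Python sketch recovers γ from the indifference probability p and then computes the certain equivalent of a discrete deal under the exponential u-curve. The stake size x, the deal, and the numbers are our own illustrative assumptions (for a gain/loss of x units the same indifference condition gives γ = ln(p/(1 − p))/x, which reduces to the expression above when x is one unit).

import math

def gamma_from_assessment(p, x=1.0):
    # Indifference between 0 and a deal paying +x with probability p and -x with
    # probability 1 - p implies gamma = ln(p / (1 - p)) / x.
    return math.log(p / (1.0 - p)) / x

def exponential_ce(prospects, gamma):
    # Certain equivalent of (probability, payoff) pairs under u(x) = 1 - exp(-gamma x).
    if abs(gamma) < 1e-12:
        return sum(p * x for p, x in prospects)          # risk-neutral limit
    eu = sum(p * (1.0 - math.exp(-gamma * x)) for p, x in prospects)
    return -math.log(1.0 - eu) / gamma                   # inverse u-value of the expected u-value

gamma = gamma_from_assessment(0.6)                       # hypothetical assessment, p = 0.6
print(round(gamma, 3))                                   # ~0.405
print(round(exponential_ce([(0.5, 2.0), (0.5, -1.0)], gamma), 3))   # CE well below the mean of 0.5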

3.1.2 Definitions

Decision makers with an exponential u-curve are said to follow the delta property. The delta

property, defined in Howard (1998), means that decision makers’ certain equivalent of a

specific deal is invariant to shifting in the prospects of the deal. That is, if the prospects are

shifted by δ then the certain equivalent is also merely shifted by δ. This is true because the

absolute risk-aversion coefficient is constant in the prospects. For more on the delta

property please refer to Howard (1998).

3.1.3 Applying the Exponential U-Curve

The exponential u-curve simplifies the calculation of the certain equivalent of the deal.

Decision makers do not need to include their wealth when calculating the certain equivalent

of a given deal.

3.1.4 The Value of Information and Control

Along the same lines as in Section 3.1.3, the value of information and control can be easily

calculated when decision makers follow the delta property. The value of information and

control can be calculated by finding the value of the deal with free information and control

and then subtracting the value of the deal without information and control from it. More on

this can be found in Howard (1998).
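As a minimal numerical sketch of this calculation (with a hypothetical two-outcome deal, clairvoyance on its outcome, and an illustrative risk-aversion coefficient), the value of free perfect information is simply the difference between the two certain equivalents:

import math

def exponential_ce(prospects, gamma):
    # Certain equivalent of (probability, payoff) pairs under u(x) = 1 - exp(-gamma x).
    eu = sum(p * (1.0 - math.exp(-gamma * x)) for p, x in prospects)
    return -math.log(1.0 - eu) / gamma

gamma = 0.01
deal = [(0.3, 200.0), (0.7, -50.0)]                      # hypothetical deal

ce_without = max(exponential_ce(deal, gamma), 0.0)       # accept only if better than doing nothing
ce_with_free_info = exponential_ce([(p, max(x, 0.0)) for p, x in deal], gamma)

# Under the delta property, the value of free perfect information is the difference of the CEs.
value_of_information = ce_with_free_info - ce_without
print(round(ce_without, 2), round(ce_with_free_info, 2), round(value_of_information, 2))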

3.1.5 Proximal Analysis

When only a few statistics are available to describe the uncertainties at hand, we can approximate the certain equivalent using proximal decision analysis, as shown in Howard (1970). Howard

gave the following approximation to the certain equivalent of a deal X when the decision

maker follows the delta property.

CE(X) = Σ_{n≥1} (−1)^{n−1} γ^{n−1} κ_n / n!



where κ_n is the nth cumulant of the deal X and is defined as follows:

κ_n = μ_n − Σ_{k=1}^{n−1} C(n−1, k−1) κ_k μ_{n−k}
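A minimal sketch of this expansion follows, truncated after four cumulants; the deal and the risk-aversion coefficient are illustrative assumptions, not an example from the text.

import math

def cumulants_from_prospects(prospects, n_max):
    # Raw moments mu_1..mu_n, then kappa_n = mu_n - sum_k C(n-1, k-1) kappa_k mu_{n-k}.
    mu = [sum(p * x ** n for p, x in prospects) for n in range(1, n_max + 1)]
    kappa = []
    for n in range(1, n_max + 1):
        kappa.append(mu[n - 1] - sum(math.comb(n - 1, k - 1) * kappa[k - 1] * mu[n - k - 1]
                                     for k in range(1, n)))
    return kappa

def proximal_ce(prospects, gamma, terms=4):
    # CE(X) ~= sum_{n>=1} (-1)^(n-1) gamma^(n-1) kappa_n / n!, truncated after `terms` terms.
    kappa = cumulants_from_prospects(prospects, terms)
    return sum((-1) ** (n - 1) * gamma ** (n - 1) * kappa[n - 1] / math.factorial(n)
               for n in range(1, terms + 1))

deal = [(0.5, 100.0), (0.5, -60.0)]                      # hypothetical deal
print(round(proximal_ce(deal, gamma=0.005), 2))          # ~4.43, close to the exact CE of ~4.41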

3.2 The Power U-Curve

Now that we have given an overview of the exponential u-curve, we turn to the power u-curve.

3.2.1 Introduction

In this section we give the equation of the power u-curve, its risk aversion coefficients, and a

suggestion for an assessment method. In Section 2 we give some definitions related to the

power u-curve. In Section 3 we discuss the u-curve’s application then we specifically study

the value of information and control in Section 4. Finally, we give a proximal analysis in

Section 5.

Definition 3.2.1: The Power U-Function

The power u-function is defined as:

u_p(x · w) = ((x · w)^λ − 1) / λ

This function has two parameters, namely, w and λ. The term (x∙w) represents the prospects

of the deal where x is the distribution of the deal as a multiple of w. We will discuss the

interpretation of the parameters next. In general, however, the parameters are chosen in

order to best fit the u- function to the assessed risk attitude of the decision maker.

The following is the definition of wealth by Bernoulli (1954):

“By this expression [wealth] I mean to connote food, clothing, all things which add to

the conveniences of life, and even to luxury – anything that can contribute to the

adequate satisfaction of any sort of want. There is then nobody who can be said to

possess nothing at all in this sense unless he starves to death. For the great majority

the most valuable portion of their possessions so defined will consist in their

productive capacity, this term being taken to include even the beggar’s talent: a man

who is able to acquire ten ducats yearly by begging will scarcely be willing to accept a


sum of fifty ducats on condition that he henceforth refrain from begging or otherwise

trying to earn money.”

Following this definition, we can in general interpret w as the wealth of the decision maker,

while λ can be understood as a measure of his risk aversion. More detailed analysis of the

parameters follows within our discussion of the risk attitude represented by the power u-curve.

Risk Attitude

Absolute Risk Aversion

As in Section 3.1.1, we study the risk attitude modeled by the power u-function using the Arrow-Pratt coefficient of absolute and relative risk aversion.

For the power u-function we find the absolute risk aversion coefficient as follows:

u''_p / u'_p = λ(λ − 1)(x · w)^{λ−2} / (λ(x · w)^{λ−1}) = (λ − 1) / (x · w)

The prospects of the deal are non-negative; hence λ determines the risk attitude of the

decision maker as follows:

• λ > 1: u''_p/u'_p > 0, risk-seeking
• λ = 1: u''_p/u'_p = 0, risk-neutral
• λ < 1: u''_p/u'_p < 0, risk-averse

Note that λ = 0 reduces the absolute risk aversion coefficient to −1/(x · w), which is equivalent to that of the log u-function.

For λ < 1, the risk aversion of the decision maker decreases as the prospects grow. It is worth noting that for risk-averse decision makers, the interpretation of w differs depending on λ. Figure 5 shows the power u-function for λ ≤ 0 and x > 0.


Figure 5 – Power u-function for λ ≤ 0 (u-values plotted against the prospect for λ = 0, −0.5, −0.75, −1)

Note that, as x → 0, the u-curve for λ ≤ 0 goes to negative infinity. This is consistent with Bernoulli's definition of wealth, which stipulates that one will never take a deal that carries the possibility of losing all of one's wealth.

Figure 6 shows the power u-function for 0 < λ < 1 as x → 0:

Figure 6 – Power u-function for 0 < λ < 1 (u-values plotted against the prospect for λ = 0.75, 0.5, 0.25, 0.05)

Here we note that the u-curves for 0 < 𝜆 < 1 have finite negative values when the prospect

is 0. This result indicates that the decision makers are willing to take on deals that have

nonzero probability of losing all of their wealth. If we use Bernoulli’s definition of wealth, we


will be hard-pressed to find a decision maker who is willing to take such a chance. However,

we note that the u-curve can be scaled by w so that we can still call it wealth.

Relative Risk Aversion

The Arrow-Pratt coefficient of relative risk aversion for the power u-function is given by

(u''_p / u'_p) · (x · w) = λ − 1

Hence, the power u-function has a constant relative risk aversion coefficient.

Assessment Method

In the remainder of this dissertation we will use the following u-function:

u(x · w) = (x · w)^λ / λ

This function is easier to deal with mathematically and it leads to the same conclusions as the general power u-function discussed above, because it only differs from u_p by a constant shift.

In the following analysis we assess λ where we take w to be the wealth of the decision

maker, as defined by Bernoulli. The decision maker has to answer the following question:

given a deal that doubles his wealth with probability p or halves it with probability 1 – p, for

what value of p is he indifferent between his current wealth and the deal?

From this we can easily infer λ to be:

λ = log_2((1 − p) / p)

[Assessment diagram: current wealth w_0 ~ a deal paying 2w_0 with probability p and 0.5w_0 with probability 1 − p.]

When p = 0.5, λ = 0, and hence we have the log u-function. Note that Abbas (2007) shows that it is not sufficient to assert that the decision maker is invariant to scaling in the prospects using the results of specific assessments. To clarify his point, Abbas gives an example in which the decision maker can assert invariance to scaling in the prospects with regards to an infinite series of assessments and still not follow the power u-function. This


issue can be overcome by directly asking decision makers if they are willing to follow this

property across the prospective range of concern.
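A minimal Python sketch of this assessment follows; the indifference probabilities are hypothetical. The assertion checks the indifference condition, namely that the expected u-multiplier of the doubling/halving deal equals one.

import math

def lam_from_assessment(p):
    # Indifference between current wealth and a deal that doubles it with probability p
    # or halves it with probability 1 - p implies lambda = log2((1 - p) / p).
    return math.log2((1.0 - p) / p)

for p in (0.5, 0.6, 0.75):                               # hypothetical assessments
    lam = lam_from_assessment(p)
    # Consistency check: expected u-multiplier of the doubling/halving deal equals 1.
    assert abs(p * 2.0 ** lam + (1.0 - p) * 0.5 ** lam - 1.0) < 1e-9
    print(p, round(lam, 3))                              # 0.0, ~-0.585, ~-1.585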

3.2.2 Definitions

Multipliers

Definition 3.2.2: U-Multiplier (UM)

We denote by UM the u-multiplier of an alternative, defined over the probability distribution of the alternative as:

UM = u(X) / u(W_0) = Σ_{i=1}^{n} p_i (1 + r_i f)^λ

The UM of a deal is the maximum of the u-multipliers of its alternatives.

It is also possible to define the certain equivalent multiplier of the deal, which relates directly

to the u-multiplier.

Definition 3.2.3: Certain Equivalent Multiplier (CM)

We denote by CM the certain equivalent multiplier. Consider a deal with a u-multiplier of UM. The CM of the deal is then defined as follows:

CM = CE(X) / W_0 = (Σ_{i=1}^{n} p_i (1 + r_i f)^λ)^{1/λ} = UM^{1/λ}

Interpreting the CM

Intuitively, we can think of the CM as the risk-adjusted return of an investment (as it relates to one's initial wealth). For risk-neutral people, the CM will equal the average return on the initial wealth. For risk-averse people, the CM equals the rate of return for which the decision maker is indifferent between having CM · W_0 for certain and taking the uncertain deal.

When faced with such investment opportunities, the decision maker has at least one other alternative, which is usually to invest his assets in a bond or savings account with a guaranteed return of r_0. This allows us to split the u-multiplier into two parts; the first includes outcomes that generate a return larger than r_0 and the second includes outcomes that generate a return less than r_0.


Example 3.2.1

To demonstrate the use of the above definitions, we introduce the following example. Here

we consider the case of Sara, an entrepreneur faced with the following deal.

Figure 7 – Example 3.2.1 Original Deal Structure

Assume that the savings rate is 5%, which is currently the only other alternative for the

investment available to Sara. Sara follows the power u-curve with λ = 0.5. This allows Sara

to calculate her UM and CM as follows:

UM = 0.05(1 + 10)^{0.5} + 0.4(1 + 0.3)^{0.5} + 0.45(1 − 0.1)^{0.5} + 0.1(1 − 0.6)^{0.5} = 1.11205

Thus, the CM is

CM = UM^{1/λ} = 1.23667

The tree above, then, reduces to the following:

Figure 8 – Example 3.2.1 Simplified Deal

As shown by the figure above, this startup is equivalent to a 23.7% certain return for Sara,

which is higher than the 5% savings rate. Therefore, Sara should start the company because

it constitutes the best decision.

[Figure 7 – Original deal: Start leads to IPO (1 + 1000%)W_0 with probability 0.05, Do Well (1 + 30%)W_0 with probability 0.4, Escape (1 − 10%)W_0 with probability 0.45, or Go Under (1 − 60%)W_0 with probability 0.1; Don't yields (1 + 5%)W_0.]

[Figure 8 – Simplified deal: Start ≡ (1 + 23.7%)W_0 versus Don't = (1 + 5%)W_0.]
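A short Python sketch of Definitions 3.2.2 and 3.2.3 follows, reproducing the numbers in Example 3.2.1; it assumes, as in the example, that the entire wealth is invested (f = 1).

def u_multiplier(prospects, lam):
    # UM = sum_i p_i (1 + r_i)^lambda for a deal given as (probability, return) pairs.
    return sum(p * (1.0 + r) ** lam for p, r in prospects)

def ce_multiplier(prospects, lam):
    # CM = UM^(1/lambda), the certain risk-adjusted return multiple on wealth.
    return u_multiplier(prospects, lam) ** (1.0 / lam)

sara = [(0.05, 10.0), (0.40, 0.30), (0.45, -0.10), (0.10, -0.60)]   # deal from Figure 7
print(round(u_multiplier(sara, 0.5), 5), round(ce_multiplier(sara, 0.5), 5))   # ~1.11205, ~1.23667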


3.2.3 Applying the Power U-Curve

In this section we give two definitions describing the indifference buying and selling

fractions. These definitions follow closely those of the Personal Indifference Buying and

Selling Prices as defined by Howard (1998).

Indifference Buying Fractions

Definition 3.2.4: The Personal Indifference Buying Fraction

The personal indifference buying fraction, F_b, is the fraction such that, when all the prospects of the deal are reduced by that fraction, decision makers are indifferent between having the deal and keeping their current wealth. In other words, decision makers are indifferent between their current wealth and having the deal multiplied by (1 − F_b).

Proposition 3.2.1

The fraction F_b for which decision makers are indifferent between keeping their current wealth and having the uncertain deal (p_i, r_i), with certain equivalent multiplier CM, after all its prospects are reduced by the fraction F_b satisfies:

F_b = (CM − 1) / CM

Definition 3.2.5: Personal Indifference Selling Fraction

The personal indifference selling fraction, F_s, is the fraction that, when offered as a certain return on the decision makers' initial wealth, leaves them indifferent between having the deal and having the certain return. In other words, decision makers are indifferent between their current wealth multiplied by (1 + F_s) and having the deal.

Proposition 3.2.2

The fraction F_s for which the decision maker is indifferent between keeping a deal (p_i, r_i) with certain equivalent multiplier CM that he currently owns and having F_s as a certain return on his initial wealth satisfies:

F_s = CM − 1

Combining Deals

Here we consider the effect of combining irrelevant deals.


Proposition 3.2.3

Consider a decision maker holding a deal with certain equivalent multiplier CM_1 who is offered, in addition, a deal with certain equivalent multiplier CM_2. If the two deals are irrelevant, then the decision maker's CM of the combined deal is:

CM(combined deal) = CM_1 · CM_2

The maximum fraction decision makers should be willing to offer in return for the second

deal is their indifference buying fraction for the second deal, namely:

F_b = (CM_2 − 1) / CM_2

Example 3.2.2

At a dinner party, Sara is introduced to a marketing expert who believes she can help Sara

reach her target market more efficiently. This marketing strategy can improve the value of

the startup as follows:

Figure 9 – Example 3.2.2 Deal Structure with a Combined Deal

The marketing expert offers to help Sara in exchange for 10% of the company. Sara is able to

calculate the maximum she should offer as follows.

First, Sara finds the CM of the new deal as

CM = 1.178571

Now, the indifference buying fraction for the expert's services equals:

F_b = (CM − 1) / CM = (1.178571 − 1) / 1.178571 = 0.151515

Since this fraction (~15%) is higher than the 10% cost of the service, Sara immediately

accepts the expert’s offer.

[Figure 9 – The expert's improvement to the startup value S: gain 30% with probability 0.75 or lose 15% with probability 0.25.]
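The numbers in Example 3.2.2 can be checked with a few lines of Python; the deal is the one shown in Figure 9 and λ = 0.5 as before.

def ce_multiplier(prospects, lam):
    # CM = (sum_i p_i (1 + r_i)^lambda)^(1/lambda) for (probability, return) pairs.
    return sum(p * (1.0 + r) ** lam for p, r in prospects) ** (1.0 / lam)

expert = [(0.75, 0.30), (0.25, -0.15)]                   # the expert's improvement (Figure 9)
cm2 = ce_multiplier(expert, 0.5)                         # ~1.1786
fb = (cm2 - 1.0) / cm2                                   # indifference buying fraction, ~0.152
print(round(cm2, 4), round(fb, 4))                       # asking price of 10% < ~15.2%, so Sara accepts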


3.2.4 The Value of Information and Control

Along similar lines to Section 3.1.4, using the power u-curve allows us to derive closed-form

expressions for the value of information and control in terms of equity payments. These are

presented in the following propositions.

Value of Information

The value of information can be calculated by comparing the CM of the deal with free

information to that with no information.

Proposition 3.2.4

Let CM_1 and CM_2 be the CM of the deal without information and with free information, respectively. The maximum percentage the decision maker should be willing to pay can be calculated as:

F_b = 1 − CM_1 / CM_2

Or, in terms of the UM:

F_b = 1 − (UM_1 / UM_2)^{1/λ}

Example 3.2.3

Now, we go back to the original scenario in which Sara was considering whether or not to start the venture. A market researcher offers Sara the following deal: in return for 10% of Sara's equity, he will tell her whether or not the startup will "go under." Should Sara accept?

The deal with free help from the market researcher is represented as follows:

Figure 10 – Example 3.2.3 Deal Structure with Perfect Information

[Figure 10 – The deal with free perfect information: with probability 0.1 the report is "Goes Under" and Sara doesn't start, earning (1 + 5%)W_0; with probability 0.9 the report is "Doesn't," and Sara starts, facing IPO (1 + 1000%)W_0 with probability 0.06, Do Well (1 + 30%)W_0 with probability 0.44, and Escape (1 − 10%)W_0 with probability 0.5.]


We calculate the CM of the deal with free information to be 1.32. By Proposition 3.2.4, the most Sara should pay for this information is F_b = 1 − 1.237/1.32 ≈ 6% of her equity, which is below the 10% the researcher asks for; hence, Sara should decline this offer.

The Value of Control

Here we consider the case in which the decision maker is offered the chance to change the distribution over a set of prospects in return for a fraction of the deal. Consider the following proposition.

Proposition 3.2.5

Consider a deal with n possible outcomes (p_i, r_i) and CM_1. The decision maker follows the power u-function with a certain value of λ. The value of control, or the maximum fraction F_b the decision maker should be willing to pay to change the deal from (p_i, r_i) to (q_i, r_i) with CM_2, can be calculated as:

F_b = 1 − CM_1 / CM_2

Or, in terms of the UM:

F_b = 1 − (UM_1 / UM_2)^{1/λ}

Example 3.2.4

Sara is still interested in improving her odds of success. After some study, she believes that,

with extra funding, she can improve the chances of an IPO and ensure that she will never go

under. Effectively, this funding will ensure that she can get out of the venture after losing at

most 10% of her equity.

Figure 11 – Example 3.2.4 Improved Deal Structure

[Figure 11 – Improved deal: IPO (1 + 1000%)W_0 with probability 0.1, Do Well (1 + 30%)W_0 with probability 0.4, Escape (1 − 10%)W_0 with probability 0.5.]


A Venture Capitalist offers Sara the required funds in return for a 15% stake in the company.

Since the CM of the updated deal is 1.59 compared to a CM of 1.237 for the original deal,

Sara immediately calculates her buying fraction as:

F_b = 1 − 1.237/1.59 = 0.223

and thus accepts these funds. Note that her new CM becomes:

CM(with funding) = (1 − cost) · CM = (1 − 0.15)(1.59) ≈ 1.35

This is compared to a CM of 1.237 for the original deal.
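Example 3.2.4 can be verified with a short Python sketch; the two deals are those of Figures 7 and 11, and λ = 0.5 as before.

def ce_multiplier(prospects, lam):
    # CM = (sum_i p_i (1 + r_i)^lambda)^(1/lambda) for (probability, return) pairs.
    return sum(p * (1.0 + r) ** lam for p, r in prospects) ** (1.0 / lam)

lam = 0.5
original = [(0.05, 10.0), (0.40, 0.30), (0.45, -0.10), (0.10, -0.60)]   # Figure 7
improved = [(0.10, 10.0), (0.40, 0.30), (0.50, -0.10)]                  # Figure 11

cm1, cm2 = ce_multiplier(original, lam), ce_multiplier(improved, lam)
fb = 1.0 - cm1 / cm2                                     # max fraction to pay for control, ~0.22
cm_after = (1.0 - 0.15) * cm2                            # CM after giving the VC 15%, ~1.35
print(round(cm1, 3), round(cm2, 3), round(fb, 3), round(cm_after, 3))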

3.2.5 Proximal Analysis of the Power U-Function

When few statistics are available to describe the uncertainties at hand, we can approximate

the CM using proximal decision analysis. This section deals with several methods of proximal

analysis as applied to the power u-curve.

“Slightly” Risk-Averse Decision Makers

The Pratt approximation shown in Pratt (1964), taken around the mean of the deal, allows us to obtain an approximation of the CE as follows:

CE(X) ≈ E(X) − (1/2) · r(E(X)) · VAR(X)

where X represents the prospects and r(E(X)) is the risk aversion coefficient of the decision maker's u-curve evaluated at the mean.

Proposition 3.2.6

The second-order approximation for the CM of a deal (p_i, r_i) around the mean is the following:

ĈM ≈ E(R_i) − (1/2)(1 − λ) · VAR(R_i) / E(R_i)

where R_i = 1 + r_i.

Example 3.2.5

Use of the above approximation allows Sara to quickly calculate the CM of her initial deal (assume at this point that λ = 0.9). Since

E(R_i) = 2.045 and

VAR(R_i) = 8.97,

it is straightforward to calculate that ĈM = 1.83. The true CM is 1.93, which is an error of

5.2%. Note that as λ decreases, this error grows. This approximation is best for individuals

who are almost risk-neutral, or when λ is close to 1.
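The approximation of Proposition 3.2.6 is a one-line computation; the sketch below uses the moments quoted in Example 3.2.5.

def pratt_cm(mean_R, var_R, lam):
    # Proposition 3.2.6: CM ~= E(R) - 0.5 (1 - lambda) VAR(R) / E(R), with R = 1 + r.
    return mean_R - 0.5 * (1.0 - lam) * var_R / mean_R

print(round(pratt_cm(2.045, 8.97, 0.9), 2))              # ~1.83, versus an exact CM of 1.93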

Small Fraction Investments

It is also possible to create a Taylor series expansion as f → 0. This is especially important for investors

such as Venture Capitalists, who in general invest a small fraction of their wealth in each

venture.

Proposition 3.2.7

The second-order approximation for the UM of a deal around f = 0 is the following:

ÛM ≈ 1 + λ · E(r_i) · f + λ(λ − 1) · E(r_i²) · f²/2

It is also possible to approximate the optimal fraction the decision maker should invest in a

deal (𝑝𝑖 , 𝑟𝑖), as well as to calculate the maximum fraction at which the decision maker is

indifferent to investing in the deal.

Corollary 3.2.1

The optimal fraction, f*, the decision maker will invest in a deal, using the second-degree Taylor series approximation, is the following:

f* = E(r_i) / ((1 − λ) E(r_i²))

Corollary 3.2.2

The maximum fraction, f_m, at which the decision maker is indifferent to investing in a deal (p_i, r_i), using the second-degree Taylor series approximation (that is, setting ÛM = 1 in Proposition 3.2.7), is the following:

f_m = 2 E(r_i) / ((1 − λ) E(r_i²))
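A minimal sketch of Proposition 3.2.7 and the two corollaries follows; it uses Sara's deal purely as an illustration, and the relation f_m = 2f* holds only under the stated second-order approximation.

def small_fraction_um(prospects, lam, f):
    # UM ~= 1 + lam*E(r)*f + lam*(lam-1)*E(r^2)*f^2/2 for a small invested fraction f.
    e_r = sum(p * r for p, r in prospects)
    e_r2 = sum(p * r * r for p, r in prospects)
    return 1.0 + lam * e_r * f + lam * (lam - 1.0) * e_r2 * f * f / 2.0

def optimal_and_max_fraction(prospects, lam):
    # f* maximizes the approximate UM; f_m = 2 f* is where the approximate UM returns to 1.
    e_r = sum(p * r for p, r in prospects)
    e_r2 = sum(p * r * r for p, r in prospects)
    f_star = e_r / ((1.0 - lam) * e_r2)
    return f_star, 2.0 * f_star

deal = [(0.05, 10.0), (0.40, 0.30), (0.45, -0.10), (0.10, -0.60)]  # Sara's deal (illustration)
f_star, f_max = optimal_and_max_fraction(deal, lam=0.5)
print(round(f_star, 3), round(f_max, 3), round(small_fraction_um(deal, 0.5, f_star), 4))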

When λ Disappears

Another interesting case occurs when λ is approximately 0, that is, when the individual u-curve is almost a log function (recall that the power u-curve converges to a log function as λ → 0).

Proposition 3.2.8

The Taylor series expansion for the UM of a deal around λ = 0 is the following:

ÛM ≈ Σ_{j≥0} (λ^j / j!) · E[(ln R_i)^j]

Example 3.2.6

We can approximate Sara's UM with two or three moments, depending on the required accuracy. If Sara's λ is now 0.2 and the moments are

E(ln R_i) = 0.25 and E((ln R_i)²) = 0.85,

the UM can be approximated as ÛM = 1.063, giving ĈM = 1.36, where the exact UM is 1.065 (error 0.2%) and the exact CM is 1.37 (error 1%).

Discrete Approximation

Proposition 3.2.8 relates the UM approximation to the moments of the logarithm of the distribution. For people with −1 < λ < 1, the UM approximation converges very quickly for most well-behaved distributions, since λ^j / j! → 0 quickly in j.

For “well-behaved” distributions, the moments of the log distribution will change more

slowly than the reduction of the λ^j/j! term. Therefore, the approximation will converge quickly

depending on λ. The number of terms needed for a good approximation determines the

number of moments needed for the approximation. This, in turn, determines the number of

degrees needed to assess the discrete approximation of the distribution. Recall that a

discrete distribution with n degrees can be used to equate (2n – 1) moments of a continuous

distribution. Thus, if the distribution converges after m terms, then it is more than enough to

assess m degrees of the discrete distribution.


From Figure 12, we can see that four terms are sufficient unless higher moments (5 and

above) grow exponentially.
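The convergence of the log-moment series can be seen directly in a short sketch; the deal below is Sara's deal expressed in gross returns R = 1 + r, used purely as an illustration.

import math

def um_exact(prospects, lam):
    # Exact u-multiplier E[R^lambda] for a discrete deal of (probability, R) pairs.
    return sum(p * R ** lam for p, R in prospects)

def um_log_moment_series(prospects, lam, terms):
    # Truncated series UM ~= sum_{j < terms} lam^j / j! * E[(ln R)^j] (Proposition 3.2.8).
    return sum(lam ** j / math.factorial(j) *
               sum(p * math.log(R) ** j for p, R in prospects)
               for j in range(terms))

deal = [(0.05, 11.0), (0.40, 1.30), (0.45, 0.90), (0.10, 0.40)]   # R = 1 + r
lam = 0.2
exact = um_exact(deal, lam)
for terms in (2, 3, 4, 5):
    approx = um_log_moment_series(deal, lam, terms)
    print(terms, round(approx, 5), round(abs(approx - exact) / exact, 5))   # error shrinks like lam^j/j!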

3.3 Comparison between Exponential and Power U-Functions

3.3.1 Introduction

In this section we discuss the use of the power and exponential u-functions. First, we discuss

the applicability of the u-functions to the decision involved in a single deal. Then we consider

the use of the u-function within our process.

3.3.2 Comparison when Considering a Single Deal

Here, the decision maker is only concerned with the deal at hand. In this situation, the only

considerations that matter are the range of prospects and the medium of payment.

We consider the range of the prospects as they compare to the decision makers’ wealth

levels. If it is small, risk attitude does not matter, as the decision makers’ attitude is almost

risk-neutral. If the range is of medium size then the risk attitude becomes important; we call

this situation “medium range” in what follows. If the uncertainty is of the same order of

magnitude as the decision makers’ wealth, then we call this situation “large range.” The

medium of payment, or currency, can be in absolute terms as cash or proportional to the

decision makers’ assets as equity payments. The following table discusses the applicability of

the u-functions in these situations.

Figure 12 – Power-function approximation error: the factor λ^j/j! converges to zero quickly


Range of prospects | Currency: Cash | Currency: Equity
Medium Range | (1) Exponential | (2) Both are acceptable
Large Range | (3) Both have limitations; the power function is preferred | (4) Power

Table 1 - Applicability of the Power and Exponential U-Functions

Area (1)

When the prospects have medium range and the currency is cash, we prefer using the

exponential u-function as it allows us the mathematical convenience of calculating the

indifference prices of information and control in closed form. This is due to the constant

absolute risk aversion coefficient of the exponential u-function.

Area (2)

When the currency is equity, the power u-function allows us the mathematical convenience

of calculating the indifference prices as fractions of the decision maker’s equity in closed

form. This is due to the constant relative risk aversion coefficient of the power u-function.

That being said, this consideration is merely mathematical convenience and both u-functions

are suitable for this setting.

Area (3)

In this situation the prospects are in the large range and the currency is cash. This situation is

not ideal for either u-function. The exponential u-function is not suited for prospects with

large ranges, as it is not sensitive to large prospects with small probabilities. The power u-

function, for its part, poses some mathematical inconvenience when prospects are

represented in cash, since it requires that the prospects be translated from absolute to

proportional terms.

Area (4)

This is the situation in most entrepreneurial settings, and it is best suited to the use of the

power function. Here the prospects have a large range and the currency is equity. The power

u-function has a decreasing absolute risk aversion coefficient, which allows it to be sensitive

to large prospects with small probabilities. Its constant relative risk aversion coefficient

allows for multiplicative separation.

3.3.3 Comparison when Considering Fleeting Opportunities

Here we consider the situation in which decision makers are considering fleeting

opportunities as defined above. Recall that within the problem constraint 1.2, we require


that the effects of the opportunities must be separable and accounted for independently of

deals at hand and arriving in the future. We discuss this constraint within the additive and

multiplicative settings in the following section.

The Additive Setting

In the additive setting, the value to the decision maker is an additive function of the various

deals, as follows:

𝐶𝐸(𝑉𝑎𝑙𝑢𝑒) = 𝐶𝐸(𝐴 + 𝐵 + 𝐶 + 𝐷)

To satisfy constraint 1.2, the u-function must allow the certain equivalent to be additively

separable. That is:

𝐶𝐸(𝑉𝑎𝑙𝑢𝑒) = 𝐶𝐸(𝐴 + 𝐵 + 𝐶 + 𝐷) = 𝐶𝐸(𝐴) + 𝐶𝐸(𝐵) + 𝐶𝐸(𝐶) + 𝐶𝐸(𝐷)

This allows the decision maker to value the deal at hand alone. This is only true with the

exponential u-function.

The Multiplicative Setting

In the multiplicative setting, the value to the decision maker is a multiplicative function of

the various deals as follows:

𝐶𝐸(𝑉𝑎𝑙𝑢𝑒) = 𝐶𝐸(𝐴 ∙ 𝐵 ∙ 𝐶 ∙ 𝐷)

To satisfy constraint 1.2, the u-function must allow the certain equivalent to be multiplicatively separable. That is:

CE(Value) = W · CM(A · B · C · D) = W · CM(A) · CM(B) · CM(C) · CM(D)

where, as above, W is the decision maker's wealth and CM is the certain equivalent multiplier. This allows the decision maker to value the deal at hand alone. This is only true

with the power u-function.
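A numerical check of these two separability claims can be run with the following sketch; the deals A and B (and their multiplicative counterparts) are hypothetical independent ("irrelevant") deals.

import math
from itertools import product

def ce_exponential(prospects, gamma):
    # CE under u(x) = 1 - exp(-gamma x) for (probability, payoff) pairs.
    eu = sum(p * (1.0 - math.exp(-gamma * x)) for p, x in prospects)
    return -math.log(1.0 - eu) / gamma

def cm_power(prospects, lam):
    # CM under u(x*w) = (x*w)^lambda / lambda for (probability, multiplier) pairs.
    return sum(p * m ** lam for p, m in prospects) ** (1.0 / lam)

A = [(0.5, 120.0), (0.5, -40.0)]
B = [(0.3, 300.0), (0.7, -10.0)]

# Additive setting: CE(A + B) = CE(A) + CE(B) for irrelevant deals (exponential u-curve).
joint_add = [(pa * pb, xa + xb) for (pa, xa), (pb, xb) in product(A, B)]
print(ce_exponential(joint_add, 0.002),
      ce_exponential(A, 0.002) + ce_exponential(B, 0.002))

# Multiplicative setting: CM(A * B) = CM(A) * CM(B) for irrelevant deals (power u-curve).
A_m = [(0.5, 1.8), (0.5, 0.7)]
B_m = [(0.3, 2.5), (0.7, 0.95)]
joint_mul = [(pa * pb, ma * mb) for (pa, ma), (pb, mb) in product(A_m, B_m)]
print(cm_power(joint_mul, 0.5), cm_power(A_m, 0.5) * cm_power(B_m, 0.5))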


Chapter 4 – Model and Notation

“More important than the quest for certainty is

the quest for clarity”

Francois Gautier

In this chapter we discuss the basic model that we apply in this dissertation. We extend this

model as we explain our process more completely in the later chapters. In Section 1 we

illustrate the general model. We apply this model in both additive and multiplicative settings.

We discuss these settings in sections 2 and 3, respectively. In Section 4 we compile a list of

the notation used in this dissertation.

4.1 General Description

We consider situations in which decision makers have flows of fleeting opportunities that

arise over time. They can only accept a limited number of deals and have to immediately

decide how to react to each deal as it arrives.

The decision maker is offered a deal at the start of each time period over a horizon of T

periods and has a limited capacity to accept C deals. A deal is represented as an uncertain

distribution over prospects. When offered a deal, she must decide whether to accept or

reject the deal or whether to gather more information before reaching that decision. We

represent the state of the decision maker by the current time period t and capacity level c

and denote it by (t,c).

As discussed in Chapter 1, the distribution over deals that might be offered is assumed to be the

same during each time period. In addition, the deals offered at any time are irrelevant, as are

the outcomes of the deals themselves. At the start of each time period, the decision maker

receives a deal and has to decide what to do about it. This is presented in the following

figure.


Figure 13 - Deal Flow Structure

Once a deal arrives at time t, the decision maker can accept it, reject it, or apply further

alternatives to the situation (e.g., acquire relevant information). Once she accepts it, she

assigns one resource unit to it and time advances by one period. If she rejects it, she maintains her

resources but time still goes forward. If she takes the third alternative, she continues going

through alternatives until she either accepts or rejects the deal. The following figure

illustrates the alternatives facing the decision maker.

4.2 Components of the Model: Deals, Time Horizon, and Capacity

As stated above, our model assumes that the distribution of deals available in each time

period stays the same, irrespective of the deals that were available in past periods. At each

period, decision makers are offered a deal from a distribution of possible opportunities that

includes a “zero” representing the possibility of not getting a deal.

Figure 14 - Deal Setup Structure

[Diagram: in state (capacity C = c, time T = t) the decision maker receives a random deal and may accept, reject, or evaluate further alternatives before accepting or rejecting; accepting leads to state (C = c − 1, T = t + 1), while rejecting leads to (C = c, T = t + 1).]


We define the length of the time horizon, T, as the number of periods available to the

decision maker. At time T+1 she can no longer accept deals. The length of the time period is

set in such a way that there is a single deal under consideration during any time period,

which is no longer available after the time period is over.

In the Venture Capital example, VC firms usually have a commitment to their investors to

provide them with returns in a specified period (e.g., 7 years). To be able to do so, they

impose a deadline on investing that will allow the firm to sell the startups before the

specified period passes. Private equity firms have a similar condition. They usually operate

through rotational funds that are required to close in a certain length of time.

While we do impose a deadline in the model above, we relax this condition when we study

infinite horizons in Chapters 6 and 7. In most practical situations, however, there is an

effective or implicit deadline that has a part in determining the policies.

We define the initial capacity, C, as the maximum number of deals that decision makers can

accept through the deal flow. Our model assumes that deals have the same capacity

requirement. So when decision makers accept a deal, they irrevocably allocate to it a

resource unit from their capacity. We relax the condition of irrevocable allocation when we

consider the options presented in Chapters 6 and 7.

In the Venture Capital example, the number of deals in which VC firms can invest is limited

by the time of their partners. Each partner can only sit on the board of directors of a limited

number of startups.

4.3 The Additive Model

Here we consider situations in which the value of the investment to decision makers is

modeled as an additive function of the deals accepted by them.

4.3.1 Representation of Deals

For the sake of clarity and without loss of generality, we limit our discussion to a discrete

representation of deals. A deal has n potential outcomes, each with a return 𝓍𝑖 in addition to

the decision makers’ initial wealth W0. The ith outcome can be written as:

𝑊0 + 𝓍𝑖

The outcome occurs with a probability p_i. The deal may be represented by the following tree:


The mean of the deal is

E(X) = W_0 + Σ_{i=1}^{n} p_i x_i

and the u-value is

u(X) = Σ_{i=1}^{n} p_i u(W_0 + x_i)

It is possible to calculate the certain equivalent as the inverse of the u-value.

4.3.2 Model Layout

Where

• X_n: a specific deal
• W_0: the decision makers' initial wealth
• V_c^t: the certain equivalent of the future value of the deal flow at time t with capacity c, before observing a specific deal
• V_c^t|X_n: V_c^t given that the deal currently available is X_n
• CE(X_n): the certain equivalent of deal X_n
• F(V_c^{t+1}, CE(X_n)): a function of the future value and the certain equivalent of X_n

[Figure 15 – Representation of deals in the additive model: deal X yields outcome i, W_0 + x_i, with probability p_i, for i = 1, …, n.]

[Figure 16 – Additive model layout: in state (t, c) with deal X_n on offer, the value is V_c^t|X_n + W_0; accepting yields V_{c−1}^{t+1} + CE(X_n) + W_0, rejecting yields V_c^{t+1} + W_0, and pursuing another alternative yields F(V, c, t, CE(X_n)) + W_0.]
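As a minimal sketch of the additive recursion behind this layout, restricted to the accept/reject alternatives and a risk-neutral certain equivalent, the deal-flow value can be computed by backward induction. The deal distribution, horizon, and capacity below are hypothetical.

from functools import lru_cache

# Hypothetical discrete deal distribution: each period one deal arrives whose certain
# equivalent takes one of these values with the given probability (0.0 means "no deal").
DEAL_CES = [(0.3, 0.0), (0.4, 5.0), (0.2, 12.0), (0.1, 30.0)]
T = 10            # time horizon (periods)

@lru_cache(maxsize=None)
def V(t, c):
    # Certain equivalent of the future deal flow in state (t, c) before seeing the deal:
    # V_c^t = E[ max(CE(X) + V_{c-1}^{t+1}, V_c^{t+1}) ].
    if t > T or c == 0:
        return 0.0
    accept_base = V(t + 1, c - 1)
    reject = V(t + 1, c)
    return sum(p * max(ce + accept_base, reject) for p, ce in DEAL_CES)

def threshold(t, c):
    # Accept a deal iff CE(X) >= R_c^t = V_c^{t+1} - V_{c-1}^{t+1}, the marginal value of a unit.
    return V(t + 1, c) - V(t + 1, c - 1)

print(round(V(1, 2), 2))                                      # value of the flow with capacity 2
print([round(threshold(t, 1), 2) for t in range(1, T + 1)])   # threshold falls as the deadline nears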


4.4 The Multiplicative Model

Here we consider situations in which the value to decision makers is modeled as a

multiplicative function of the deals accepted by them.

4.4.1 Representation of Deals

As above, for clarity, we illustrate the discrete representation of deals. A deal has n potential

outcomes for an investment of a fraction f of the decision makers’ initial wealth w0. The ith

outcome can be represented, in the context of investments, as a multiple of the decision

makers’ investment. Thus the ith outcome can be written as:

(1 + 𝑟𝑖)𝑓 ∙ 𝑊0 + (1 − 𝑓)𝑊0 = (1 + 𝑟𝑖𝑓)𝑊0

This outcome occurs with a probability of 𝑝𝑖. The deal may be represented by the following

tree:

We denote the deal (p_i, r_i) by X.

The mean of the deal is

E(X) = W_0 Σ_{i=1}^{n} p_i (1 + r_i f)

and the u-value is

u(X) = Σ_{i=1}^{n} p_i u((1 + r_i f) W_0)

The certain equivalent can be calculated as the inverse of the u-value.

[Figure 17 – Representation of deals in the multiplicative model: deal X yields outcome i, (1 + r_i f)W_0, with probability p_i, for i = 1, …, n.]


4.4.2 Model Layout

Where

• CM(X_n): the certain equivalent multiplier of deal X_n
• M_c^t: the certain equivalent multiplier of the future value of the deal stream at time t with capacity c left, before observing a specific deal
• M_c^t|X_n: M_c^t given that the deal available is X_n
• F(M_c^{t+1}, CM(X_n)): a function of the future value and the certain equivalent multiplier of X_n

4.5 Terms and Notation

Decision Analysis

PIBP(X_n) Personal Indifference Buying Price for a deal X_n. This is defined as the price b at which the decision maker is indifferent between buying deal X_n and keeping his current wealth.
PISP(X_n) Personal Indifference Selling Price for a deal X_n. This is defined as the price s at which the decision maker is indifferent between selling deal X_n and keeping it.
u(x) U-value of deal x.
Exponential u-curve: u(x) = 1 − e^{−x/ρ}, where ρ is a parameter of risk attitude denoted as the risk tolerance of the decision maker.
Power u-curve: u(x · w) = (x · w)^λ / λ, where λ is a parameter of risk attitude.
CE(X_n) Certain Equivalent of deal X_n. This is defined as the PISP of deal X_n.
CM(X_n) Certain Equivalent Multiplier of deal X_n. This is defined as the multiple for which the decision maker is indifferent between selling deal X_n and keeping it: CM(X_n) = CE(X_n) / W.

[Figure 18 – Multiplicative model layout: in state (t, c) with deal X_n on offer, the value multiplier is M_c^t|X_n · W_0; accepting yields M_{c−1}^{t+1} · CM(X_n) · W_0, rejecting yields M_c^{t+1} · W_0, and another alternative yields F(M, c, t, CM(X_n)) · W_0.]

Deal Flow Characterization

𝑇 Length of time horizon

𝑡 Current time

𝐶 Initial capacity (number of resources available)

𝑐 Capacity remaining (current number of resources available)

𝑋𝑛 A specific deal

𝑉𝑐𝑡 Certain equivalent of the future value of the deal stream at time t with capacity c.

𝑉𝑐𝑡|𝑋𝑛 Certain equivalent of the future value of the deal stream at time t with capacity c after observing a deal Xn

𝑀𝑐𝑡 Certain equivalent multiplier of future value of the deal stream at

time t with capacity c.

𝑀𝑐𝑡|𝑋𝑛 Certain equivalent multiplier of future value of the deal stream at

time t with capacity c after observing a deal Xn

𝑅𝑐𝑡 Marginal value of a resource unit at time t and capacity c. This is defined for the additive setting as:

R_c^t = V_c^{t+1} − V_{c-1}^{t+1}

or, in the multiplicative setting, as

R_c^t = M_c^{t+1} / M_{c-1}^{t+1}


Incremental Values/Multipliers

𝑖𝑉𝑐𝑡(𝑋𝑛) Incremental value of deal Xn at time t with capacity c. This is defined as the contribution of deal Xn to the future value of the deal stream at time t with capacity c.

𝑖𝑉𝑐𝑡(𝑋𝑛) = max (𝐶𝐸(𝑋𝑛) − 𝑅𝑐𝑡 , 0)

𝑖𝑉𝐼𝑐𝑡(𝑋𝑛) Incremental value of deal Xn with free information at time t with capacity c. This is defined as the contribution of deal Xn with free information to the future value of the deal stream at time t with capacity c.

iVI_c^t(X_n) = CE(iV(X_n | I_i))

iVoI_c^t(X_n) Incremental value of information for a deal X_n at time t with capacity c. This is defined as the PIBP of information on deal X_n at time t and capacity c.

𝑚 Monetary cost of information/control

𝑖𝑀𝑐𝑡(𝑋𝑛) Incremental multiplier of deal Xn at time t with capacity c. This is

defined as the contribution of deal Xn to the future value multiple of the deal stream at time t with capacity c.

iM_c^t(X_n) = max(CM(X_n) / R_c^t, 1)

𝑖𝑀𝐼𝑐𝑡(𝑋𝑛) Incremental multiplier of deal Xn with free information at time t with capacity c. This is defined as the contribution of deal Xn with free information to the future value multiple of the deal stream at time t with capacity c.

iMI_c^t(X_n) = CM(iM(X_n | I_i))

iMoI_c^t(X_n) Incremental multiplier of information for a deal X_n at time t with capacity c. This is defined as the PIBP of information on deal X_n at time t and capacity c.

𝑓 Fraction paid for information/control.


Chapter 5 – Step 1: Frame and Decision Basis

“Doubt, indulged and cherished, is in danger of

becoming denial; but if honest, and bent on

thorough investigation, it may soon lead to full

establishment of the truth.”

“Who never doubted, never half believed.

Where doubt is, there truth is - it is her

shadow.”

Ambrose Bierce

In previous chapters we discussed the underlying models and their appropriate risk attitude

curves. In Chapters 5, 6, 7, and 8 we analyze our proposed process.

Figure 19 - Step 1 of the Solution Process

5.1 Overview In this chapter we describe the first step in our three-step process. Here decision makers

formulate the situation by determining the frame and building the decision basis. Recall that

the decision basis includes decision makers’ preferences, information, and the decision


alternatives. The goal of this step is to formulate the deal flow. After completing this step,

decision makers should have clear frame boundaries and assessments of alternatives,

uncertainties, and preferences relevant to the problem.

This chapter is organized into four parts. In Section 2, we discuss framing both the

opportunities themselves and the deal flow as a whole. In Section 3, we assess the decision

basis beginning with the decision makers’ time and risk preferences. We then identify the

available alternatives to the opportunities and the deal flow. Finally, we assess the decision

makers’ information about the uncertainties and sketch a method to model them.

5.2 Framing The frame of the decision determines the boundary of the issues considered for analysis

now. In our process, framing is done on two levels. First, we frame the deals available to

decision makers, and then we frame the deal flow as a whole. The two levels of framing are

shown in Figure 20.

Figure 20 - Two Levels of Framing

Decision makers must determine the decisions and uncertainties that are to be considered

within the analysis. The related decisions are divided into three categories. The categories

are decisions taken as given, decisions to be considered now, and decisions that are delayed

until later. The decisions and uncertainties are modeled in a decision diagram. Decision

diagrams, first introduced in Howard & Matheson (1981), are used to model the relationships


between the elements affecting the decision and to communicate them between the

stakeholders.

5.2.1 Framing the Deal Flow In this step we define the boundaries of the decisions considered with regards to the deal

flow as a whole, in order to frame the flow itself. This includes deciding on the capacity

available, the deadline, and the alternatives available within the deal flow. For example,

decision makers may be offered information about or control of the entire deal stream. Also,

decision makers might be able to change their capacity and/or the deadline. The inclusion of

such decisions is accomplished while framing the deal flow.

5.2.2 Framing the Deals Decision makers must frame the deals available to them during each period. The goal here is

to determine the boundaries of the decisions considered within each deal. This framing

exercise will result in decision diagrams of the deals available to the decision makers.

We suggest using a generic (or template) decision diagram to represent each deal type. Here,

decision makers model their beliefs about each category of deal using a generic diagram that

captures a frame specific to that category. This way, they only need to update the diagram

with their beliefs about specific deals as they arrive. The following is an example of a generic

diagram for online consumer startups. In Appendix A2, we give a detailed description of the

diagram’s nodes. Richman (2009) vetted this diagram with approximately 20 Venture

Capitalists.


Figure 21 - Generic Diagram for Internet Consumer Startups

5.3 Basis for the Decisions The decision basis includes preferences, alternatives, and information. The following graph

summarizes the assessments needed for this step.

Figure 22 - Decision Basis

5.3.1 Preferences We begin with modeling the decision makers’ preferences. Specifically, we are concerned

with time and risk preferences.

Time Preference We assess the decision makers’ time preference as a discount rate. We represent all the cash

flows in their net present value (NPV) before the analysis. Note that the discount rate is used solely to model time preference and not risk preference.
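As a small illustration of this convention, the sketch below discounts an assumed cash-flow stream at an assumed rate; it only converts the flows to present value and leaves risk attitude entirely to the u-curve.

```python
def npv(cash_flows, discount_rate):
    """Net present value of end-of-period cash flows; the discount rate captures
    time preference only (risk preference is handled separately by the u-curve)."""
    return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Illustrative: a deal paying 10, 20, and 120 at the ends of periods 1-3, discounted at 12%.
print(round(npv([10.0, 20.0, 120.0], 0.12), 2))
```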

[Figure 21 nodes: an Invest? decision across the INITIAL, EXECUTION, RESULTS, and LIQUIDATION stages, with uncertainties including Market Size & Growth, Market Share & Growth, Technology, Team, Competitors, Possible Applications, Business Model, Barriers to Entry, Partnerships, Hiring, Revenue, Cost, Cash Balance, Future Financing, Dilution, Exit, Profit, Value, and Observables (Z).]


Risk Preference Here we assess the decision makers’ risk preference by assessing the parameters of the u-

curve in use. As discussed earlier, we require that the decision maker use either the

exponential or the power u-curve.

5.3.2 Alternatives Here we assess the alternatives available to the decision maker within the decisions we

included in the framing stage.

Deal Flow Alternatives Deal flow alternatives include decisions related to sourcing deals, prepaying for information,

control, and options, and changing the capacity and/or the deadline.

Deal-Specific Alternatives Deal-specific alternatives include, among many others, those related to accepting the deal

and buying information, control, or options on the deal.

5.3.3 Information In this step we assess the information of the decision maker and his beliefs about the deals

and the alternatives. The latter includes the accuracy of information and control alternatives.

Prior Deals To facilitate this step, we suggest modeling the decision makers’ beliefs about deal

categories using generic diagrams. We then assess decision makers’ belief about the

uncertainties and probabilities within each template. In the following tree we have three

categories, each with a probability 𝑝𝑖.


Figure 23 - Modeling the Prior Deals

We then assess decision makers’ beliefs about the information gathering activities, control,

sourcing deals, options, and the rest. For example, we assess the ability to discern and learn

about the opportunities as detectors with certain accuracies and costs. Along the same lines,

we assess the ability to influence the distribution of the opportunities as controllers with

certain accuracies and costs.

Prior Alternatives Here we model the decision maker’s beliefs about the alternatives available. For example,

this includes the specificity and sensitivity of detectors.


Chapter 6 – Step 2.1: Additive Valuation Funnel

“The most important questions of life are

indeed, for the most part, really only

problems of probability.”

“Probability theory is nothing but common sense reduced to calculation.”

Pierre Laplace

So far, we have the frame and the decision basis of the complete deal flow. In Chapters 6 and

7 we turn to the second step of our process, that is, building the valuation funnel.

Figure 24 - Step 2 of the Solution Process

6.1 Overview The second step of our template is to build the valuation funnel; in this chapter we build it for the additive setting. After this step, decision makers will have calculated the indifference


buying prices (IBP) of the deals, information, control, etc. These incremental values help the

decision maker make real-time decisions when the alternatives are available. In addition, the

resulting valuation funnel can be easily used to evaluate alternatives concerning the deal

flow as a whole. More on the application of the funnel is given in Chapter 8.

This chapter is organized into six parts. In Section 2 we give the basic problem setup, in which

we describe the model and define the main terms. In Section 3 we present the optimal

policy, characterize the values within the deal flow, and then return to characterizing the

optimal policy. In Section 4 we extend our main results to the long-run problem setup and

discuss the policy iteration algorithm used to solve for the results. Finally, in Section 5, we

study some extensions to the basic model. These extensions are a more flexible cost

structure, the use of the probability of knowing detectors, the option of reversing an

allocation decision, and finally the value of perfect hedging.

6.2 Basic Problem Structure The basic problem considered here concerns a decision maker who is offered a deal at the

start of each time period over a horizon of T periods. When offered a deal, the decision

maker can accept the deal, reject it, or seek information and then decide. The basic model

assumes independence over time and among deals.

When a decision maker is offered a deal 𝑋𝑛 she has one of three alternatives. If she directly

accepts the deal, she allocates one unit of her capacity (initial capacity 𝐶) and gets the deal’s

certain equivalent. If she rejects the deal, she keeps her capacity. Finally, she can seek

information at a cost 𝑚. If she seeks information, she will observe an indication 𝐼, which

updates her beliefs about the deal and will then allow her to decide whether or not to accept

the deal.

The following graph shows the structure of the dynamic program where 𝑉𝑐𝑡 is the certain

equivalent of the future value of the deal stream at time t with capacity c remaining before

observing a specific deal. 𝑉𝑐𝑡|𝑋𝑛 is the certain equivalent of the future value after observing

the deal 𝑋𝑛.


For a review of the research on the value of information, please refer to Howard (1967).

6.2.1 Definitions

Definition 6.2.1: Marginal Value of Capacity We denote the marginal value of a unit of capacity at (c,t) by 𝑅𝑐𝑡 and define this quantity

𝑅𝑐𝑡 = 𝑉𝑐𝑡+1 – 𝑉𝑐−1𝑡+1

Definition 6.2.2: Incremental Value of a Deal We denote the incremental value of a deal 𝑋𝑛 at (c,t) by 𝑖𝑉𝑐𝑡(𝑋𝑛) and define this quantity

𝑖𝑉𝑐𝑡(𝑋𝑛) = [𝐶𝐸(𝑋𝑛) − 𝑅𝑐𝑡]+

This is the indifference buying price (IBP) of the deal 𝑋𝑛at time 𝑡 and capacity 𝑐.

Definition 6.2.3: The Incremental Value of a Deal with Information We denote the incremental value of a deal 𝑋𝑛 with information at (c,t) by 𝑖𝑉𝐼𝑐𝑡(𝑋𝑛) and

define this quantity

iVI_c^t(X_n) = CE(iV_c^t(X_n | I_i))

This is the indifference buying price (IBP) of the deal 𝑋𝑛when offered information with

indication 𝐼𝑖 at time 𝑡 and capacity 𝑐.

Definition 6.2.4: The Incremental Value of Information We denote the incremental value of information on a deal 𝑋𝑛 at (c,t) as 𝑖𝑉𝑜𝐼𝑐𝑡(𝑋𝑛) and

define this quantity as the indifference buying price (IBP) of information on this deal at (c,t),

or

𝑖𝑉𝑜𝐼𝑐𝑡(𝑋𝑛) = 𝑖𝑉𝐼𝑐𝑡(𝑋𝑛)− 𝑖𝑉𝑐𝑡(𝑋𝑛)

Figure 25 - Basic Problem Structure. [Decision tree at V_c^t | X_n: Accept → V_{c-1}^{t+1} + CE(X_n); Reject → V_c^{t+1}; Seek info at cost m → observe indication I_i, then Accept → V_{c-1}^{t+1} + CE(X_n | I_i) − m or Reject → V_c^{t+1} − m.]


Example 6.2.1 We refer to the following simple example throughout this chapter. We consider the situation

facing a risk-averse decision maker in the following scenario: The decision maker exhibits

exponential utility u(x) = 1 − e^{−γx} with γ = 0.1. The decision maker has a time horizon of 50

periods (T=50) and can accept at most 4 opportunities (C=4). The deals have the following

structure (and differ in p):

Figure 26 - Example Deal Structure

In each time period, the decision maker can either get nothing or one of six possible deals.

Figure 27 - Example Deal Flow
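One way to read the recursion behind this example is as a backward induction over (t, c). The sketch below is a minimal Python rendering of that reading for the additive, exponential-u-curve setting without the information alternative; the deal payoffs and arrival probabilities are assumed for illustration, since the values in Figures 26 and 27 are not reproduced here.

```python
import numpy as np

RHO = 10.0                      # risk tolerance (gamma = 0.1 in Example 6.2.1)
u     = lambda x: 1.0 - np.exp(-x / RHO)
u_inv = lambda y: -RHO * np.log(1.0 - y)

def ce(values, probs):
    """Certain equivalent of a discrete gamble under the exponential u-curve."""
    return u_inv(np.dot(probs, u(np.asarray(values, float))))

# Assumed deal flow (the dissertation's Figure 26/27 values are not reproduced here):
# each period one of six binary deals arrives with probability 1/7, or nothing arrives.
succ_p   = [0.55, 0.60, 0.65, 0.70, 0.75, 0.80]      # deals differ only in p
deal_ce  = [ce([20.0, -5.0], [p, 1.0 - p]) for p in succ_p]
arrival  = np.full(7, 1.0 / 7.0)                     # six deals + "no deal"
arr_vals = deal_ce + [None]                          # None marks "no deal offered"

T, C = 50, 4
V = np.zeros((T + 2, C + 1))                         # V[t, c]; zero at the deadline
for t in range(T, 0, -1):
    for c in range(1, C + 1):
        stage = []
        for ce_n in arr_vals:
            reject = V[t + 1, c]
            accept = V[t + 1, c - 1] + ce_n if ce_n is not None else -np.inf
            stage.append(max(accept, reject))        # V_c^t | X_n
        V[t, c] = ce(stage, arrival)                 # roll back over the deal arrival

R = V[2, 4] - V[2, 3]                                # marginal value of capacity R_4^1
print(round(V[1, 4], 2), round(R, 2))
```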


6.2.2 The Value of Control The value of control can be studied along the same lines as the value of information. All of

our results below for the value of information can be duplicated for that of control. For a

more detailed study of the value of control, please refer to Matheson & Matheson (2005).

6.3 Main Results In this section we present the main results of this chapter. We present the optimal policy and

characterize the main values across the model parameters (c,t). Specifically, we characterize

the optimal policy, the value of the deal flow, the incremental value of capacity, the

incremental values of deals with and without information, and the buying price of

information.

6.3.1 Optimal Policy Given the scenario above, the decision maker’s optimal policy is defined in the following

proposition.

Proposition 6.3.1a: Optimal Information Gathering Policy When offered a deal 𝑋𝑛 the decision maker should buy information if and only if

𝑖𝑉𝐼𝑐𝑡(𝑋𝑛) ≥ 𝑖𝑉𝑐𝑡(𝑋𝑛) + 𝑚

Proposition 6.3.1b: Optimal Allocation Policy After information is received, the decision maker should accept the deal if and only if

𝐶𝐸(𝑋𝑛|𝐼𝑖) ≥ 𝑅𝑐𝑡

Otherwise, the deal is worth accepting without information if and only if

𝐶𝐸(𝑋𝑛) ≥ 𝑅𝑐𝑡
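Read as code, Propositions 6.3.1a and 6.3.1b reduce to a short decision rule; the sketch below assumes the quantities iVI_c^t(X_n), iV_c^t(X_n), R_c^t, and the posterior certain equivalent have already been computed from the valuation funnel.

```python
def deal_policy(iVI, iV, m, ce_deal, R_ct):
    """Proposition 6.3.1a: decide whether to buy information on the offered deal;
    otherwise accept or reject it directly."""
    if iVI >= iV + m:
        return "seek information"
    return "accept" if ce_deal >= R_ct else "reject"

def post_information_policy(ce_posterior, R_ct):
    """Proposition 6.3.1b: after observing indication I_i, accept iff CE(X_n | I_i) >= R_c^t."""
    return "accept" if ce_posterior >= R_ct else "reject"
```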

6.3.2 Characterizing the Certain Equivalent and Threshold Given the above scenario, we can find that the certain equivalent of the deal flow (𝑉𝑐𝑡) and

the marginal value of capacity (𝑅𝑐𝑡) change in intuitive ways with capacity (c) and time (t).

Proposition 6.3.2: Characterizing the Deal Flow Certain Equivalent

I. 𝑉𝑐𝑡 is non-decreasing in c for all t

II. 𝑉𝑐𝑡 is non-increasing in t for all c


In other words, the value of the deal flow increases as the capacity at hand increases and

decreases as the deadline approaches.

Example 6.3.1 Using the example above, Figure 28 is a graph of the deal flow value.

Figure 28 - Examples of Deal Flow Values

As stated in Proposition 6.3.2, the value of the deal flow decreases with time and increases

with capacity. This agrees with the intuition that more capacity and more time are desirable.

As time progresses, the chance of getting high-valued deals and hence of earning high

rewards in the future diminishes.

Proposition 6.3.3: Characterizing the Threshold (R_c^t)
I. R_c^t is non-increasing in c for all t
II. R_c^t is non-increasing in t for all c

Proposition 6.3.3 states that the marginal value of capacity decreases as capacity increases

and as the deadline approaches. Otherwise stated, the optimal policy becomes more lenient

with more capacity at hand and as we approach our deadline.

6.3.3 Characterizing Indifference Buying Prices Based on the above results, we characterize the way in which the indifference buying price of

a deal in and of itself and that of the deal with information 𝑖𝑉𝐼𝑐𝑡 changes with c and t. Then

we characterize the indifference buying price of information 𝑖𝑉𝑜𝐼𝑐𝑡.


Corollary 6.3.1: Characterizing IBP of Deals (𝑖𝑉𝑐𝑡(𝑋𝑛) and 𝑖𝑉𝐼𝑐𝑡(𝑋𝑛)) The IBP values of a deal 𝑋𝑛 with and without information exhibit the following two

properties:

I. iV_c^t(X_n) and iVI_c^t(X_n) are non-decreasing in c for all t
II. iV_c^t(X_n) and iVI_c^t(X_n) are non-decreasing in t for all c

The marginal value of capacity is non-increasing in time and capacity. From this, it

immediately follows that the incremental value of the deal n when the capacity is c at

time t is non-decreasing in both c and t. That is, the value of a deal increases as the

deadline approaches and/or the available capacity increases.

Proposition 6.3.4: Characterizing the IBP of Information (iVoI_c^t(X_n))

The IBP of information exhibits the following properties:

I. For a given value of c, the IBP of information increases with t and reaches a maximum when R_c^t = CE(X_n); it then decreases with increasing t until it converges to CE(X_n*) − CE(X_n).

II. For a given value of t, the IBP of information increases with c and reaches a maximum when R_c^t = CE(X_n); it then decreases with increasing c until it converges to CE(X_n*) − CE(X_n).

where 𝑋𝑛∗ is the deal with free information. The proposition above can be represented

graphically as follows. We define the value of information as 𝑉𝑜𝐼 = 𝐶𝐸(𝑋𝑛∗) − 𝐶𝐸(𝑋𝑛).

Figure 29 - Characterizing the IBP of Information
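For a stand-alone deal, the quantity CE(X_n*) can be computed by letting the decision maker act on the clairvoyant's report; a minimal sketch for a binary deal under the exponential u-curve, with assumed payoffs (not the dissertation's figures), is given below.

```python
import numpy as np

RHO = 10.0
u     = lambda x: 1.0 - np.exp(-x / RHO)
u_inv = lambda y: -RHO * np.log(1.0 - y)

def voi_free_clairvoyance(p, win, loss):
    """VoI = CE(X_n*) - CE(X_n) for a binary deal, assuming that with free
    clairvoyance the decision maker declines any revealed outcome worth less than zero."""
    ce_with    = u_inv(p * u(max(win, 0.0)) + (1.0 - p) * u(max(loss, 0.0)))
    ce_without = u_inv(p * u(win) + (1.0 - p) * u(loss))
    return ce_with - ce_without

print(round(voi_free_clairvoyance(p=0.65, win=20.0, loss=-5.0), 2))
```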

Example 6.3.2 Based on the example above, the following graph characterizes the IBP of information for a

specific deal.


Figure 30 - Example IBP Value

In Figure 30 we can see that the IBP of information increases with time and capacity until it

reaches a maximum when 𝑅𝑐𝑡 = 𝐶𝐸(𝑋𝑛). Before that point, the deal 𝑋𝑛 is not worth

accepting without information. Hence, the only alternative to buying information is to reject

the deal. After this point, the alternative to buying information is to accept the deal without

information. Hence, the value of information decreases as 𝑖𝑉(𝑋𝑛) increases. At the threshold

we have iV(X_n) = CE(X_n) and iV(X_n*) = CE(X_n*), so that the value of information within the

dynamic program ultimately converges to the value of information for a stand-alone deal.

Another interesting point illustrated in Figure 30 is that the value of information for a deal

within the deal flow can exceed the value of information for the stand-alone deal. The reason

for this fact is that the value of information is relative to the incremental values of the deal

with and without information. The incremental value of the deal without information might

equal zero even if the stand-alone value of the deal is positive. Hence, the difference

between the incremental values can be higher than that of the values of the stand-alone deal

with and without information. Note that this is not the case for the value of a deal. The value

of a deal within the deal flow can never exceed the value of the stand-alone deal.

6.3.4 Characterizing the Optimal Policy

Proposition 6.3.5: Characterization of the Optimal Policy The optimal policy for a given deal 𝑋𝑛 is characterized as follows:


I. For a given value of c, the optimal policy can only change over time from rejecting, to buying information, and finally to accepting.

II. For a given value of t, the optimal policy can only change over capacity from rejecting, to buying information, and finally to accepting.

Proposition 6.3.5 states that if a certain type of deal is to be accepted whenever the capacity

is c, then it must be accepted also for higher capacity levels. Similarly, if it is rejected at the

capacity level c, then it must be rejected for all lower capacity levels. Further information is

sought only for a bounded range of capacity levels. In other words, if it is optimal to buy

information for a deal 𝑋𝑛 and capacity c, then for higher capacity levels one would never

reject without information and for lower capacity levels would never accept without

information. The same behavior is also observed when the capacity is fixed and time

progresses. These statements are represented graphically below:

Figure 31 – Characterizing Optimal Policy

Example 6.3.3 Figure 32 shows the case at three units of capacity when the decision maker is offered a

detector with symmetric accuracy of 0.9 and costs two monetary units. By symmetric

accuracy, we mean that the detector indicates the probability of success and the probability

of failure with equal accuracy.
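The posterior certain equivalents underlying this kind of policy come from a Bayesian update of the success probability with the detector's likelihoods. The following is a minimal sketch for a binary deal and a symmetric detector of accuracy 0.9, with assumed success and failure payoffs (the dissertation's exact deal values are not reproduced here).

```python
import numpy as np

RHO = 10.0                                   # risk tolerance, gamma = 0.1
u     = lambda x: 1.0 - np.exp(-x / RHO)
u_inv = lambda y: -RHO * np.log(1.0 - y)

def posterior_success(p, accuracy, indication):
    """Bayes' rule for a symmetric detector: P(indication = 'success' | success) = accuracy,
    and P(indication = 'failure' | failure) = accuracy."""
    like_s = accuracy if indication == "success" else 1.0 - accuracy
    like_f = 1.0 - accuracy if indication == "success" else accuracy
    return p * like_s / (p * like_s + (1.0 - p) * like_f)

def ce_binary(p, win, loss):
    """Certain equivalent of a binary deal under the exponential u-curve."""
    return u_inv(p * u(win) + (1.0 - p) * u(loss))

p, acc, win, loss = 0.65, 0.9, 20.0, -5.0    # assumed payoffs; accuracy 0.9 as in Example 6.3.3
for ind in ("success", "failure"):
    q = posterior_success(p, acc, ind)
    print(ind, round(q, 3), round(ce_binary(q, win, loss), 2))
```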


Figure 32 – Example of Optimal Policy Over Time

The graph illustrates the elements of the proposition. As time progresses, the optimal action

changes in one direction: from rejecting to buying information and from buying information

to accepting. Note that the same pattern is observed over deals in this example; however,

this is not true in general.

6.3.5 Multiple Detectors Here we discuss the situation in which the decision maker is offered multiple irrelevant

detectors. The following proposition identifies the optimal detector. We then discuss

ordering detectors.

Proposition 6.3.6: Identifying the Optimal Detector Consider two detectors with incremental values of the deal with information iVI_1 and iVI_2 and

cost 𝑚1 and 𝑚2, respectively. Detector 1 will be optimal when:

𝑖𝑉𝐼1− 𝑖𝑉𝐼2 > 𝑚1 −𝑚2

Otherwise, detector 2 will be optimal.

This optimality is not myopic, that is, if the decision maker is offered the use of both, he/she

should not always start with the optimal detector.

Example 6.3.4: Multiple detectors To demonstrate the points above, we consider two detectors. Detector 1 has an accuracy

of 0.9 and costs 2 units of currency. Detector 2 has an accuracy of 0.8 and costs 1 unit of

currency. The following illustrates the case with capacity = 3 and deal 5. It shows the

difference between the 𝑖𝑉𝐼 values of both detectors.


Figure 33 - Example 6.3.4: The ordering of detectors is not myopic.

Figure 33 shows that for deal 5 (p = 0.80) from t = 40 onwards the difference between 𝑖𝑉𝐼1

and 𝑖𝑉𝐼2 is always greater than the difference between the detectors’ prices. This result

indicates that we should always choose Detector 1.

Figure 34 shows the optimal policy for capacity 3. The vertical axis is the probability of

technical success, whereas the horizontal axis is the time period. The ‘accept’, ‘use detector

1’, ‘use detector 2’, and ‘reject’ alternatives are color-coded.

Recall that after t=40, it was optimal to use Detector 1 if the decision maker had to choose

between the two detectors. When we allow the use of both detectors, the graph shows that

it is optimal to begin with Detector 2 (t=40-41). Thus, the optimality is not myopic.

Figure 34 - Example 6.3.4: The ordering of detectors is not myopic.

6.4 The Long-Run Problem Here we discuss the situation in which the decision maker is interested in situations with no

deadline. This is the case when the decision maker does not face a relevant limitation on


time. In the case of a movie producer, for example, the decision maker only considers the

value of time through discounting, but does not have a deadline imposed on allocating

his/her capacity.

As Howard & Matheson (1972) discussed, discounting in the risk-averse model causes

inconsistency. For this reason, we limit our discussion of the long-run problem to the risk-

neutral setting and leave the risk-averse model for future research.

6.4.1 Problem Structure and Definitions In many practical instances of this problem, the decision maker faces a large number of time

periods. Consequently, the computational complexity required to find an optimal policy

increases. Moreover, the dynamic nature of the optimal policy makes the storage and the

administration of the policy difficult. To overcome these difficulties, we look into the infinite-

horizon problem, i.e., one in which the deadline 𝑇 is infinity. The main virtue of the infinite-

horizon problem is that, under certain conditions, it admits a stationary optimal policy. This

policy is in turn nearly optimal for the finite horizon problem, with an error diminishing with

an increasing actual time horizon.

We reformulate our problem to introduce a discount factor 0 < 𝛿 < 1 in the value function

so that the maximum value is guaranteed to be finite and a stationary optimal policy exists.

Discounting ensures that the expected present value is finite under every policy.

For the infinite-horizon model, let 𝑉𝑐|𝑋𝑛 denote the maximum expected value when the

available capacity is c and the current deal offered is 𝑋𝑛. If 𝑐 ≥ 1 and this deal is accepted,

the decision maker collects its certain equivalent and moves on to the next period with

capacity 𝑐 − 1. If the deal is not accepted, then the capacity remains the same in the next

period. The third available action for a positive capacity is to investigate the deal further

(gather information) and base the decision to accept or reject it on the outcome of the

investigation. Figure 35 summarizes the evolution of the discounted model.


where δ is the discount factor.

Definition 6.4.1: Values in the Long-Run Problem We extend the definitions of the values from Section 6.3 to the long-run problem.

Marginal Value of Capacity 𝑅𝑐 = 𝑉𝑐 − 𝑉𝑐−1

Incremental Value of a Deal 𝑖𝑉𝑐(𝑋𝑛) = [𝐶𝐸(𝑋𝑛) − 𝛿𝑅𝑐]+

Incremental Value of a Deal with Information iVI_c(X_n) = CE(iV_c(X_n | I_i))

Incremental Value of Information 𝑖𝑉𝑜𝐼𝑐(𝑋𝑛) = 𝑖𝑉𝐼𝑐(𝑋𝑛) − 𝑖𝑉𝑐(𝑋𝑛)

6.4.2 Extension of the Results to the Long-Run Problem Here we characterize the problem parameters along similar lines as in Section 6.3. We find

that all the relations are maintained along the capacity dimension.

Proposition 6.4.1: Characterizing Long-Run Problem I. When offered the deal n, the decision maker should buy information if and only

if 𝑖𝑉𝐼𝑐(𝑋𝑛) ≥ 𝑖𝑉𝑐(𝑋𝑛) + 𝑚. After information is received, the decision maker should accept the deal if and only if 𝐶𝐸(𝑋𝑛|𝐼𝑖) ≥ 𝑅𝑐 . Otherwise, the deal is worth accepting without information if and only if 𝐶𝐸(𝑋𝑛) ≥ 𝑅𝑐.

II. V_c is non-decreasing in c.
III. R_c is non-increasing in c.
IV. iV_c(X_n) and iVI_c(X_n) are non-decreasing in c.
V. iVoI_c(X_n) is non-decreasing in c and reaches a maximum when R_c = CE(X_n);

then it decreases to 𝐶𝐸(𝑋𝑛∗)− 𝐶𝐸(𝑋𝑛), where 𝑋𝑛∗ is the deal with free information.

Figure 35 - Problem Structure with Infinite Horizon. [Decision tree at V_c | X_n: Accept → δV_{c-1} + E(X_n); Reject → δV_c; Seek info at cost m → observe I_i, then Accept → δV_{c-1} + E(X_n | I_i) − m or Reject → δV_c − m.]


VI. The optimal policy can only change over c from rejecting to buying information, and from buying information to accepting.

Policy improvement, iterative approximations algorithms, or a combination of the two can be

employed to find a stationary optimal or a near-optimal stationary policy. We detail a policy

improvement algorithm below.

6.4.3 Policy Improvement Algorithm Let a decision be a vector of actions, which determines whether to accept a deal of type n,

to reject such a deal, or to seek further information (denoted by A, R, and I, respectively) at

each capacity level and for each deal type. The present value of using the same decision D in

every period is 𝑉𝑐(𝐷)|𝑋𝑛 if initially the capacity is c and the first deal observed is of type n.

The maximum present value when deal n is observed at capacity c is 𝑉𝑐|𝑋𝑛. Also let 𝑉𝑐(𝐷) be

the expectation of 𝑉𝑐(𝐷)|𝑋𝑛 over all deal types. If 𝑝 = (𝑝𝑛) is the distribution of deal types,

then V_c(D) = Σ_n p_n (V_c(D)|X_n). The policy improvement algorithm starts with an arbitrary

decision D and computes 𝑉𝑐(𝐷)|𝑋𝑛 for every c and n.

We can start with the decision that consists of rejecting every deal at every capacity level,

since its value is already known to be zero. Alternatively, we can start with the decision to

accept every incoming deal. In each iteration, we look for a decision such that using this

decision in the very first period instead of the decision at hand returns a larger present value.

This algorithm ends after a finite number of iterations and outputs a stationary optimal

policy. This is the generic policy improvement algorithm, as introduced first by Howard

(1960).

When the option to seek further information is eliminated, we are left with a series of simple

stopping problems, i.e., problems in which there are two available actions, to continue or to

stop. This observation allows an efficient implementation of the policy improvement

algorithm, which terminates after at most Kc iterations if K is the total number of deal types

and c is the initial capacity.

When c=1, the problem is trivially a simple stopping problem where the certain equivalent

𝐸(𝑋𝑛) of the deal n is the reward from stopping when deal n is offered. If one chooses to

continue, then the capacity continues to be 1 and the cost of continuing is zero. When c >1,

the problem in the present period can be regarded as a simple stopping problem in which


the reward from stopping is the certain equivalent E(X_n) of the observed deal plus the maximum present value δ Σ_n p_n (V_{c−1}|X_n) at capacity c − 1. As shown in (Veinott, 2004), a

simple stopping problem with S states can be solved in at most S iterations with a policy

improvement algorithm that changes the action in at most one state in every iteration. In our

case, the number of states at each capacity level is the number of deal types, so solving the

corresponding simple stopping problem for a capacity level c requires at most K iterations.

Since the initial capacity is c, the algorithm should end in Kc iterations. Refer to Appendix 1

for the pseudo-code summarizing the algorithm.

When the deals are ordered in decreasing order of their certain equivalents, a threshold

policy is optimal. That is, deals with small indices will be accepted, whereas ones with higher

indices will be rejected. Therefore, if the action for a deal at a given capacity level is switched

to accepting, the action for all deals with indices that are higher than the index of this one

should be updated to accepting. Similarly, if rejecting a deal is optimal, it is optimal to reject

all deals at all smaller capacity levels. These principles can be exploited to simplify the

algorithm.

Policy Improvement Pseudo Code

D(c, n) = A for all c, n
V_0(D)|X_n = 0;  V_0(D) = 0
V_1(D)|X_n = E(X_n)
V_1(D) = Σ_n p_n (V_1(D)|X_n)

Iterations:
For 1 ≤ c ≤ C
    For 1 ≤ n ≤ K
        If E(X_n) + δ V_{c−1}(D) < δ V_c(D)
            D(c, n) = R
        End If
    Compute V_c(D)|X_n for all n by solving the system:
        If D(c, n) = A
            V_c(D)|X_n = E(X_n) + δ V_{c−1}(D)
        End If
        If D(c, n) = R
            V_c(D)|X_n = δ V_c(D)
        End If
        V_c(D) = Σ_n p_n (V_c(D)|X_n)
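A minimal, runnable rendering of this algorithm (without the information-gathering action, as in the pseudo code) is sketched below in Python; the deal probabilities, rewards, and discount factor in the usage line are illustrative assumptions rather than the dissertation's example values.

```python
import numpy as np

def policy_improvement(p, reward, delta, C, max_iter=1000):
    """Stationary policy for the long-run (infinite-horizon, risk-neutral)
    deal-flow problem without information gathering.

    p[n]      : probability that the deal offered in a period is of type n
    reward[n] : value E(X_n) collected if a type-n deal is accepted
    delta     : discount factor, 0 < delta < 1
    C         : initial capacity (number of resource units)

    Returns accept[c][n] (True = accept a type-n deal at capacity c) and V[c],
    the expected present value before a deal is observed."""
    p, reward = np.asarray(p, float), np.asarray(reward, float)
    K = len(p)
    V = np.zeros(C + 1)                      # V[0] = 0: no capacity left
    accept = np.zeros((C + 1, K), bool)

    for c in range(1, C + 1):                # one stopping problem per capacity level
        policy = np.ones(K, bool)            # start by accepting every deal
        for _ in range(max_iter):
            # policy evaluation: V_c(D) solves
            # V_c = sum_n p_n * [accept_n ? (r_n + delta*V_{c-1}) : delta*V_c]
            q = p[policy] @ (reward[policy] + delta * V[c - 1])
            Vc = q / (1.0 - delta * p[~policy].sum())
            # policy improvement: reject deal n iff r_n + delta*V_{c-1} < delta*V_c
            new_policy = reward + delta * V[c - 1] >= delta * Vc
            if np.array_equal(new_policy, policy):
                break
            policy = new_policy
        V[c], accept[c] = Vc, policy
    return accept, V

# Illustrative numbers only (not taken from the dissertation's example):
acc, V = policy_improvement(p=[0.2, 0.3, 0.5], reward=[10.0, 4.0, 1.0],
                            delta=0.95, C=3)
print(acc[1:], V[1:])
```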


6.5 Extensions In this section we extend our work to allow for flexibility in capturing different assumptions.

We study the optimal policy in four cases. First, we look into situations when gathering

information imposes time and capacity costs in addition to monetary costs. For example, the

time spent on gathering information about a specific deal reduces our ability to source more

deals. Similarly, the time a partner allocates to gathering information might decrease the

time she has to be on company boards and hence decreases the capacity available to the

firm. We use this structure in the first and second extensions only.

In the second extension, we consider situations where the decision maker may choose to

reverse a single accepting decision. This can be seen as having the option to continue with

the allocation if no better opportunity arises. Otherwise, the decision maker, at a price, may

choose to reclaim his resource unit and allocate it to the new opportunity.

In the third extension, we employ the Probability of Knowing structure for information. In

this setup, information gathering is modeled as an exercise in obtaining perfect information.

This structure is useful when modeling situations in which the decision maker is seeking

information on a distinction that is known by others. Additionally, this structure helps to

assess the relative value of information-gathering activities.

6.5.1 Multiple Cost Structures In this section we extend our work on the value of information to include a delay and a

capacity cost in addition to the monetary cost. We set up the problem, extend our optimal

policy results, and then follow with a discussion of the use of multiple detectors.

Problem Structure As an abstraction, we consider that information is offered at a cost of k resource units, d

delay units, and m monetary cost. A graphical depiction of the problem is shown below


Definition 6.5.1: Threshold with Multiple Cost Structures We generalize the threshold 𝑅𝑐𝑡 definition given in Section 6.3 to:

𝑅𝑐𝑡(𝑘,𝑑) = 𝑉𝑐𝑡+1 − 𝑉𝑐−𝑘𝑡+𝑑

Note that the threshold given earlier is now stated as 𝑅𝑐𝑡(1,1).

The Optimal Policy with Multiple Cost Structures

Proposition 6.5.1a: The Optimal Information-Gathering Policy with Multiple Cost Structures When offered a deal 𝑋𝑛, the decision maker should buy information if and only if

iVI_{c−k}^{t+d}(X_n) ≥ iV_c^t(X_n) + R_c^t(k, d + 1) + m

Proposition 6.5.1b: The Optimal Allocation Policy with Multiple Cost Structures After information is received, the decision maker should accept the deal if and only if

CE(X_n | I_i) ≥ R_{c−k}^{t+d}(1,1)

Otherwise, the deal is worth accepting without information if and only if

𝐶𝐸(𝑋𝑛) ≥ 𝑅𝑐𝑡(1,1)

Example 6.5.1 To better illustrate this example we changed the parameters of the problem; we have a

capacity of 8 resource units and a time horizon of 10 periods. Figure 37 shows the value of

the deal flow with perfect information (clairvoyance) at different cost structures.

Figure 36 - Problem Structure with Information at Multiple Cost Types. [Decision tree at V_c^t | X_n: Accept → V_{c-1}^{t+1} + CE(X_n); Reject → V_c^{t+1}; Seek info → observe I_i, then Accept → V_{c-1-k}^{t+1+d} + CE(X_n | I_i) − m or Reject → V_{c-k}^{t+1+d} − m.]


Figure 37 – Example 6.5.1: Value of the deal flow with clairvoyance at different cost structures

The red line represents the value of the original deal flow with no clairvoyance. The blue line

shows the value of the deal flow with clairvoyance offered at a cost of 3 monetary units. The

green line shows the value of the deal flow when clairvoyance is offered at 3 time units per

usage. Finally the black line shows the value when the cost of clairvoyance is 3 resource units

of capacity.

Note that in the beginning of the time horizon, the deal flow with clairvoyance at a capacity

cost is worse than that with clairvoyance at a monetary cost. This changes as time passes

when the capacity at hand exceeds the number of periods left.

Figure 38 shows the 𝑖𝑉𝑜𝐼 of clairvoyance in the different cases described above.


Figure 38 - Example 6.5.1: Incremental multiple of the deal with clairvoyance at different cost structures

As stated above, 𝑖𝑉𝐼 increases with time, as shown in the properties above. Note that the

green line (with delay cost) goes to zero at t = 9, because within this deal flow decision

makers cannot afford the delay cost, as there are only two units of time left.

Multiple Detectors We consider the case when the decision maker is offered two irrelevant detectors with

indications 𝐼1and 𝐼2 and costs (𝑘1,𝑑1,𝑚1)and (𝑘2,𝑑2,𝑚2), respectively.

Corollary 6.5.1: Identifying the Optimal Detector with Multiple Cost Structures Given the scenario above, Detector 1 will be optimal when:

𝑖𝑉𝐼1 − 𝑖𝑉𝐼2 ≥ 𝑅𝑐𝑡(𝑘2,𝑑2 + 1 ) − 𝑅𝑐𝑡 (𝑘1,𝑑1 + 1 ) + 𝑚1 −𝑚2

where iVI_1 = iVI_{c−k_1}^{t+d_1} over I_1, and iVI_2 = iVI_{c−k_2}^{t+d_2} over I_2.

Otherwise, Detector 2 will be optimal.

This analysis can be extended to more than two detectors. Also, as previously discussed, this

criterion is not myopic. If the decision maker is offered both detectors, she might still start

with the less ‘optimal’ detector.

6.5.2 Decision Reversibility In this section we consider the option of reversing decisions. For a background on options

within Decision Analysis please refer to Howard (1995).


Problem Setup and Definitions This section requires generalizing the definitions used in this chapter so far.

Definition 6.5.2: Values with Decision Reversibility We extend the definitions of the values from Section 6.3. The generalizations include adding

a placeholder for the deal with an option.

The Certain Equivalent of a Deal Flow with the Option to Reverse a Decision 𝑉𝑐𝑡(𝑍)|𝑋𝑛 is the certain equivalent of the future value of the deal at time t with remaining

capacity c after observing a deal Xn with an option to reverse an allocation to deal Z.

𝑉𝑐𝑡(𝑍) is the certain equivalent of the future value of the deal at time t with remaining

capacity c before observing a specific deal with an option to reverse an allocation to deal Z.

Threshold with an Option to Reverse a Decision 𝑅𝑐𝑡(𝑘,𝑑,𝑍) is the threshold with an option to reverse an allocation to deal Z.

𝑅𝑐𝑡(𝑘,𝑑,𝑍) = 𝑉𝑐𝑡+1(𝑍) − 𝑉𝑐−𝑘𝑡+1+𝑑(𝑍)

The Incremental Cost of Capacity 𝑖𝐶𝑐𝑡(𝑋𝑛,𝑍) is the incremental cost of capacity.

𝑖𝐶𝑐𝑡(𝑋𝑛,𝑍) = [𝑅𝑐𝑡(1,1,𝑍) − 𝐶𝐸(𝑋𝑛)]+

Definition 6.5.3: The Incremental Value of a Deal with an Option iVO_c^t(X_n, Z) = V_c^{t+1}(X_n) − V_{c−1}^{t+1}(Z)

The incremental value of a deal with the option to reverse an investment decision is defined

as the indifference buying price of the deal with the option and is equal to the contribution

of the deal with a free option to the value of the deal flow.

We limit our discussion to the option of reversing a single allocation decision at any time. The

decision maker can keep an option on one deal only. So, in effect, the decision maker has

three alternatives at each stage. The first alternative is to reject. The second is to accept

without buying an option and the third is to accept with an option to reverse the decision

later.

In general, we assume that the decision maker has previously bought an option on deal Z and

now he is offered a deal 𝑋𝑛 at time t with remaining capacity c. The first alternative is for the

decision maker to reject and not change anything. Another alternative is for him to accept


deal 𝑋𝑛 without buying an option so that he still has the option on Z. The third alternative is

for him to accept 𝑋𝑛 and buy an option. Here, since we allow one option only, he discards Z

and does not allocate more capacity. Figure 39 is a graphical description of the deal.

Optimal Policy with an Option to Reverse an Allocation Decision

Proposition 6.5.2: Optimal Allocation Policy with an Option When offered a deal 𝑋𝑛 with an option on deal 𝑍, the decision maker should accept the deal

and buy an option on it if and only if:

𝑖𝑉𝑂𝑐𝑡(𝑋𝑛,𝑍) ≥ 𝑖𝐶𝑐𝑡 (𝑋𝑛,𝑍) +𝑚 + 𝐶𝐸(𝑍)

Otherwise, the decision maker should accept if and only if:

𝐶𝐸(𝑋𝑛) > 𝑅𝑐𝑡(1,1,𝑍)

Therefore, the option is worth buying when the incremental value of the option is higher

than the sum of its price, the incremental cost of capacity, and the value of the deal

liquidated.

6.5.3 The Probability of Knowing Detectors In this section we consider the probability of knowing detectors. This is an alternative

method of modeling information-gathering activities defined by Howard (1998). We include

this procedure, as it allows us a clear way to value and order detectors. Here a detector is

modeled as a probability of obtaining clairvoyance versus no information. In this extension

we do not use the multiple cost structure. We set up the problem, extend our optimal policy

results, and then follow with a discussion of using multiple detectors.

Figure 39 - Problem Structure with an Option to Reverse an Allocation Decision. [Decision tree at V_c^t(Z) | X_n: Reject → V_c^{t+1}(Z); Accept without buying an option → V_{c-1}^{t+1}(Z) + CE(X_n); Accept and buy an option → V_c^{t+1}(X_n) + CE(X_n) − CE(Z) − m.]


Problem Structure The setup here differs from the basic one in the structure of the information. When the

decision maker seeks information, he either gets clairvoyance with probability 𝑝 or gets no

information. After getting the indication, the decision maker can then decide whether to

accept or reject the deal at hand. The following is a graphical representation of the problem.

If decision makers receive clairvoyance, they get the incremental value with perfect

information (iVPI) as

𝑖𝑉𝑃𝐼𝑐𝑡 (𝑋𝑛) = 𝐶𝐸(𝑖𝑉𝑐𝑡(𝑋𝑛|𝐼))

If they do not receive clairvoyance, the deal does not change.

The Optimal Policy with a Probability of Knowing Detectors

Proposition 6.5.3a: The Optimal Information Gathering Policy with a Probability of Knowing Detectors Given a detector defined as above with a probability of knowing p and price m, the decision

maker should buy information if and only if:

u(iVPI_c^t(X_n) − iV_c^t(X_n)) > u(m) / p

Figure 40 - Problem Structure with the Probability of Knowing Detectors. [Decision tree at V_c^t | X_n: Accept → V_{c-1}^{t+1} + CE(X_n); Reject → V_c^{t+1}; Seek info at cost m → clairvoyance with probability p (then Accept → V_{c-1}^{t+1} + CE(X_n | I_i) − m or Reject → V_c^{t+1} − m) or no clairvoyance (then Accept → V_{c-1}^{t+1} + CE(X_n) − m or Reject → V_c^{t+1} − m).]


Proposition 6.5.3b: The Optimal Allocation Policy with a Probability of Knowing Detectors If clairvoyance is received, the decision maker should accept the deal if and only if:

𝐶𝐸(𝑋𝑛|𝐼𝑖) ≥ 𝑅𝑐𝑡

Otherwise, if no clairvoyance is received or the decision maker did not buy information, then

the decision maker should accept the deal if and only if:

𝐶𝐸(𝑋𝑛) ≥ 𝑅𝑐𝑡
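As a small illustration, the buying criterion of Proposition 6.5.3a can be coded directly once iVPI_c^t(X_n) and iV_c^t(X_n) are available from the funnel; the sketch below uses the exponential u-curve, and the numbers in the usage line are illustrative assumptions.

```python
import numpy as np

RHO = 10.0
u = lambda x: 1.0 - np.exp(-x / RHO)

def buy_probability_of_knowing_detector(iVPI, iV, price, p_know):
    """Buy information iff u(iVPI - iV) > u(m) / p (Proposition 6.5.3a)."""
    return u(iVPI - iV) > u(price) / p_know

print(buy_probability_of_knowing_detector(iVPI=6.0, iV=2.0, price=2.0, p_know=0.7))
```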

Multiple Detectors with a Probability of Knowing Detectors Now say we have two irrelevant detectors with probability of clairvoyance p1 and p2 and

costs m1 and m2, respectively.

Corollary 6.5.3: Identifying the Optimal Detector with a Probability of Knowing Detectors Given the setup above, Detector 1 will be optimal when

u(m_1) / p_1 < u(m_2) / p_2

Otherwise Detector 2 will be optimal. In this setup, the optimality is myopic, so if we have

multiple irrelevant detectors we use them in increasing order of u(m)/p.


Chapter 7 – Step 2.2: The Multiplicative Valuation Funnel

“Doubt is not a pleasant condition, but

certainty is absurd.”

François-Marie Voltaire

As stated in the introduction, this chapter follows the structure of Chapter 6 and will contain

repeated information. The goal here is to have each chapter be independent of the other so

that readers can elect to read the chapter relevant to their context.

Figure 41 - Step 2 of the Solution Process

7.1 Overview The second step of our template is to build the valuation funnel; in this chapter we build it for the multiplicative setting. After this step, decision makers will have calculated the

indifference buying fractions of the deals, information, control, etc. These indifference

fractions help the decision maker to make real-time decisions when the alternatives are

available. In addition, the resulting valuation funnel can be easily used to evaluate

alternatives concerning the deal flow as a whole. More on the application of the funnel is

given in Chapter 8.


This chapter is organized into six parts. In Section 2 we give the basic problem structure, in

which we describe the model and define the main terms. In Section 3 we present the optimal

policy, characterize the values within the deal flow, and then return to characterizing the

optimal policy. In Section 4 we extend our main results to the long-run problem structure.

Finally, in Section 5, we study some extensions to the basic model. These extensions are a

more flexible cost structure, the use of the probability of knowing detectors, the option of

reversing an allocation decision, and finally the value of perfect hedging. Note that the results here differ from those in Chapter 6 most notably for the probability of knowing detectors.

7.2 Basic Problem Structure The basic problem considered here concerns a decision maker who is offered a deal at the

start of each time period over a horizon of T periods. When offered a deal, the decision

maker can accept the deal, reject it, or seek information and then decide. The basic model

assumes independence over time and among deals.

When a decision maker is offered a deal 𝑋𝑛 she has one of three alternatives. If she directly

accepts the deal, she allocates one unit of her capacity (with initial capacity 𝐶) and gets the

deal’s certain multiplier. If she rejects the deal, she keeps her capacity. Finally, she can seek

information at a cost of a fraction f of her wealth. If she seeks information, she will observe

an indication that updates her beliefs about the deal and then will decide whether or not to

accept the deal.

Figure 42 shows the structure of the dynamic program, where 𝑀𝑐𝑡 is the certain multiplier of

a future multiple of the deal stream at time t with capacity c remaining before observing a

specific deal. 𝑀𝑐𝑡|𝑋𝑛 is the certain multiplier of a future multiple after observing the deal.


For a review of research on the value of information, please refer to Howard (1967).

7.2.1 Definitions

Definition 7.2.1: The Marginal Multiple of Capacity We denote the marginal multiple of a unit of capacity at (c,t) by 𝑅𝑐𝑡, defined as

R_c^t = M_c^{t+1} / M_{c-1}^{t+1}

Definition 7.2.2: The Incremental Multiple of a Deal We denote the incremental multiple of a deal X_n at (c,t) by iM_c^t(X_n), defined as

iM_c^t(X_n) = max(CM(X_n) / R_c^t, 1)

This is the indifference buying fraction (IBF) of the deal 𝑋𝑛at time t and capacity c.

Definition 7.2.3: The Incremental Multiple of a Deal with Information We denote the incremental multiple of a deal 𝑋𝑛 with information at (c,t) by 𝑖𝑀𝐼𝑐𝑡(𝑋𝑛);

defined as

iMI_c^t(X_n) = CM(iM_c^t(X_n | I_i))

This is the indifference buying fraction (IBF) of the deal 𝑋𝑛 when offered information with

indication 𝐼𝑖 at time t and capacity c.

Definition 7.2.4: The Incremental Multiple of Information We denote the incremental multiple of information about a deal 𝑋𝑛 at (c,t) as 𝑖𝑀𝑜𝐼𝑐𝑡(𝑋𝑛),

defined as the IBF of information on this deal at (c,t), or

Figure 42 - Basic Problem Structure. [Decision tree at M_c^t | X_n: Accept → M_{c-1}^{t+1} · CM(X_n); Reject → M_c^{t+1}; Seek info at fraction f → observe I_i, then Accept → M_{c-1}^{t+1} · CM(X_n | I_i) · (1 − f) or Reject → M_c^{t+1} · (1 − f).]


iMoI_c^t(X_n) = iMI_c^t(X_n) / iM_c^t(X_n)

Example 7.2.1 We refer to the following simple example throughout this chapter. We consider the situation

facing a risk-averse decision maker with the following setup: The decision maker exhibits

power utility

u(x) = x^λ / λ

with λ = 0.1. The decision maker has a time horizon of 50 periods (T=50) and can accept at

most 4 opportunities (c=4). The deals have the following structure (and differ in p):

Figure 43 - Example Deal Structure

In each time period, the decision maker can either get nothing or one of six possible deals.

Figure 44 - Example Deal Flow
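As in Chapter 6, the multiplicative funnel can be computed by backward induction over (t, c), now rolling back certain equivalent multipliers instead of certain equivalents. The sketch below is a minimal Python rendering for the power u-curve without the information alternative; the deal structure and arrival probabilities are assumed for illustration, since Figures 43 and 44 are not reproduced here.

```python
import numpy as np

LAM = 0.1                                    # power u-curve parameter (Example 7.2.1)
u     = lambda m: m**LAM / LAM
u_inv = lambda y: (LAM * y) ** (1.0 / LAM)

def cm(multipliers, probs):
    """Certain-equivalent multiplier of a discrete gamble over wealth multipliers."""
    return u_inv(np.dot(probs, u(np.asarray(multipliers, float))))

# Assumed deal flow (Figure 43/44 values are not reproduced): six binary deals,
# each turning a fraction f = 0.2 of wealth into a 4x gain (success) or a total loss (failure).
succ_p  = [0.55, 0.60, 0.65, 0.70, 0.75, 0.80]
deal_cm = [cm([1.0 + 3.0 * 0.2, 1.0 - 0.2], [p, 1.0 - p]) for p in succ_p]
arrival = np.full(7, 1.0 / 7.0)              # six deals + "no deal"

T, C = 50, 4
M = np.ones((T + 2, C + 1))                  # M[t, c]; multiplier 1 at the deadline
for t in range(T, 0, -1):
    for c in range(1, C + 1):
        stage = [max(M[t + 1, c - 1] * cm_n, M[t + 1, c]) for cm_n in deal_cm]
        stage.append(M[t + 1, c])            # "no deal": keep M_c^{t+1}
        M[t, c] = cm(stage, arrival)         # roll back over the deal arrival

print(round(M[1, 4], 3), round(M[2, 4] / M[2, 3], 3))   # M_4^1 and threshold R_4^1
```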


7.2.2 The Multiple of Control The multiple of control can be studied along the same lines as the multiple of information. All

of our results below for the multiple of information can be duplicated for that of control. For

a more detailed study of the value of control, please refer to Matheson & Matheson (2005).

7.3 Main Results In this section we present the main results of this chapter. We present the optimal policy and

characterize the main values across the model parameters (c,t). Specifically, we characterize

the optimal policy, the multiple of the deal flow, the incremental multiple of capacity, the

incremental multiple of deals with and without information, and the buying fraction of

information.

7.3.1 Optimal Policy Given the setup above, the decision maker’s optimal policy is defined in the following

proposition.

Proposition 7.3.1a: The Optimal Information Gathering Policy When offered a deal 𝑋𝑛, the decision maker should buy information if and only if

iMI_c^t(X_n) ≥ iM_c^t(X_n) / (1 − f)

Proposition 7.3.1b: The Optimal Allocation Policy After information is received, the decision maker should accept the deal if and only if

𝐶𝑀(𝑋𝑛|𝐼𝑖) ≥ 𝑅𝑐𝑡

Otherwise, the deal is worth accepting without information if and only if

𝐶𝑀(𝑋𝑛) ≥ 𝑅𝑐𝑡

7.3.2 Characterizing the Certain Multiplier and Threshold Given the above structure, we can show that the certain multiplier of the deal flow (M_c^t) and the marginal multiple of capacity (R_c^t) change in intuitive ways with capacity (c) and time (t).

Proposition 7.3.2: Characterizing the Deal Flow Certain Multiplier
I. M_c^t is non-decreasing in c for all t
II. M_c^t is non-increasing in t for all c


In other words, the value of the deal flow increases as the capacity at hand increases and

decreases as the deadline approaches.

Example 7.3.1 Using the setup example above, Figure 45 is a graph of the deal flow multiple.

Figure 45 - Example Deal Flow Value

As stated in Proposition 7.3.2, the multiple of the deal flow decreases with time and

increases with capacity. These observations agree with the intuition that more capacity and

more time are desirable. As the time progresses, the chance of getting high-valued deals and

hence of earning high rewards in the future diminishes.

Proposition 7.3.3: Characterizing the Threshold
I. R_c^t is non-increasing in c for all t
II. R_c^t is non-increasing in t for all c

Proposition 7.3.3 states that the marginal multiple of capacity decreases as capacity

increases and as the deadline approaches. Otherwise stated, the optimal policy becomes

more lenient with more capacity at hand and as we approach our deadline.

7.3.3 Characterizing Indifference Buying Fractions Based on the above results, we characterize the way in which the indifference buying

fraction of a deal 𝑖𝑀𝑐𝑡 and that of the deal with information 𝑖𝑀𝐼𝑐𝑡 changes with c and t. Then

we characterize the indifference buying fraction of information 𝑖𝑀𝑜𝐼𝑐𝑡.

Corollary 7.3.1: Characterizing the IBF of Deals (iM_c^t(X_n) and iMI_c^t(X_n))
The IBF multiples of a deal X_n with and without information exhibit the following two properties:
I. iM_c^t(X_n) and iMI_c^t(X_n) are non-decreasing in c for all t.
II. iM_c^t(X_n) and iMI_c^t(X_n) are non-decreasing in t for all c.

The marginal multiple of capacity is non-increasing in time and capacity. From this, it

immediately follows that the incremental multiple of the deal 𝑋𝑛 when the capacity is c at

time t is non-decreasing in both c and t. That is, the multiple of a deal increases as the

deadline approaches and/or the available capacity increases.

Proposition 7.3.4: Characterizing the IBF of Information (iMoI_c^t(X_n))
The IBF of information exhibits the following properties:
I. For a given value of c, the IBF of information is increasing in t and reaches a maximum when R_c^t = CM(X_n); then it decreases in t until it converges at CM(X_n*)/CM(X_n).
II. For a given value of t, the IBF of information is increasing in c and reaches a maximum when R_c^t = CM(X_n); then it decreases in c until it converges at CM(X_n*)/CM(X_n).

where X_n* is the deal with free information. The proposition above can be represented graphically as follows. We define the multiple of information as MoI = CM(X_n*)/CM(X_n).

Figure 46 - Characterizing the IBF of Information

Example 7.3.2
Based on the example scenario described above, the following graph characterizes the IBF of information for a specific deal.


Figure 47 - Example IBF Value

In Figure 47 we can see that the IBF of information increases with time and capacity until it

reaches a maximum when 𝑅𝑐𝑡 = 𝐶𝑀(𝑋𝑛). Before that point, the deal n is not worth buying

without information. Hence, the only alternative to buying information is to reject the deal.

After this point, the alternative to buying information is to buy the deal without information.

Hence, the value of information decreases as 𝑖𝑀(𝑋𝑛) increases. At the threshold we have

𝑖𝑀(𝑋𝑛) = 𝐶𝑀(𝑋𝑛) and 𝑖𝑀(𝑋𝑛∗) = 𝐶𝑀(𝑋𝑛∗), so the value of information within the dynamic

program ultimately converges to the multiple of information for a stand-alone deal.

Another interesting point illustrated in Figure 47 is that the multiple of information for a deal

within the deal flow can exceed the multiple of information for the stand-alone deal. The

reason is that the multiple of information is defined relative to the incremental multiples of the deal

with and without information. The incremental multiple of the deal without information

might equal one even if the stand-alone multiple of the deal is larger than one. Hence, the

ratio of the incremental multiples can be higher than that of the multiples of the stand-alone

deal with and without information. Note that this is not the case for the multiple of a deal.

The multiple of a deal within the deal flow can never exceed the multiple of the stand-alone

deal.


7.3.4 Characterizing the Optimal Policy

Proposition 7.3.5: Characterization of the Optimal Policy
The optimal policy for a given deal X_n is characterized as follows:
I. For a given value of c, the optimal policy can only change over time from rejecting, to buying information, and finally to accepting.
II. For a given value of t, the optimal policy can only change over capacity from rejecting, to buying information, and finally to accepting.

Proposition 7.3.5 states that if a certain type of deal is to be accepted whenever the capacity

is c, then it must be accepted also for higher capacity levels. Similarly, if it is rejected at the

capacity level c, then it must be rejected for all lower capacity levels. Further information is

sought only for a bounded range of capacity levels. In other words, if it is optimal to buy

information for a deal 𝑋𝑛 and capacity c, then for higher capacity levels one would never

reject without information and for lower capacity levels one would never accept without

information. The same behavior is observed also when the capacity is fixed and time

progresses. These statements are represented graphically below:

Figure 48 – Characterization of Optimal Policy

Example 7.3.3
The following graph shows the case at three units of capacity when the decision maker is offered a detector with symmetric accuracy of 0.9 that costs a 5% fraction of current capacity. By symmetric accuracy, we mean that the detector indicates the probability of success and the probability of failure with equal accuracy.


Figure 49 - Example Optimal Policy Over Time

The graph illustrates the principles of the proposition. As time progresses, the optimal action

changes in one direction: from rejecting to buying information and from buying information

to accepting. Note that the same pattern is observed over deals in this example; however,

this is not true in general.

7.3.5 Multiple Detectors

Here we discuss the situation in which the decision maker is offered multiple irrelevant detectors. The following proposition identifies the optimal detector. We then discuss ordering detectors.

Proposition 7.3.6: Identifying the Optimal Detector
Consider two detectors with incremental multiples of the deal with information iMI_1 and iMI_2 and cost fractions f_1 and f_2, respectively. Detector 1 will be optimal when:

iMI_1 / iMI_2 > (1 − f_2) / (1 − f_1)

Otherwise, Detector 2 will be optimal.

This optimality is not myopic; that is, if the decision maker is offered the use of both detectors, he should not necessarily start with the one that is optimal for a single use.
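A sketch of the single-use comparison, under the assumption that iMI_1, iMI_2, f_1, and f_2 are already available from the funnel (the function is illustrative):

def better_single_detector(iMI1, iMI2, f1, f2):
    # Proposition 7.3.6: Detector 1 is optimal for a single use iff
    # iMI_1 / iMI_2 > (1 - f_2) / (1 - f_1); otherwise Detector 2 is.
    return 1 if iMI1 / iMI2 > (1.0 - f2) / (1.0 - f1) else 2

As Example 7.3.4 below illustrates, this comparison says nothing about which detector to use first when both are available.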

Example 7.3.4: Multiple Detectors
To illustrate the points above we consider two detectors. Detector 1 has an accuracy of 0.9 and a cost fraction of 5%. Detector 2 has an accuracy of 0.8 and a cost fraction of 1%. Figure 50 illustrates the case with capacity = 3 and deal 3. It shows the difference between the iMI values of both detectors.


Figure 50 - Example 7.3.4: The ordering of detectors is not myopic.

The graph above shows that for deal 4 (p = 0.65) from t=41 onwards the ratio between 𝑖𝑀𝐼1

and 𝑖𝑀𝐼2 is always greater than the ratio between the detectors’ costs. This indicates that

we should always choose Detector 1.

The following graph shows the optimal policy for capacity 3. The vertical axis is the

probability of technical success, whereas the horizontal axis is the time period. The ‘accept’,

‘use detector 1’, ‘use detector 2’, and ‘reject’ alternatives are color coded.

Recall that after t=41, it was optimal to use Detector 1 if the decision maker had to choose

between the two detectors. When we allow the use of both detectors, the graph below

shows that it is optimal to begin with Detector 2 (t>45). Thus, the optimality is not myopic.

Figure 51 - Example 7.3.4: The ordering of detectors is not myopic.


7.4 The Long-Run Problem

Here we discuss situations with no deadline, that is, cases in which the decision maker does not face relevant limitations on time. A movie producer, for example, considers the value of time only through discounting and does not have a deadline imposed on allocating his capacity.

As Howard & Matheson (1972) discussed, discounting in the risk-averse model causes

inconsistency. For this reason, we limit our discussion of the long-run problem to the risk-

neutral setting and leave the risk-averse model for future research.

7.4.1 Problem Structure and Definitions

In many practical instances of this problem, the decision maker faces a large number of time periods. Consequently, the computational complexity of finding an optimal policy increases. Moreover, the dynamic nature of the optimal policy makes storing and administering the policy difficult. To overcome these difficulties, we look into the infinite-horizon problem, i.e., one in which the deadline T is infinity. The main virtue of the infinite-horizon problem is that, under certain conditions, it admits a stationary optimal policy. This policy is in turn nearly optimal for the finite-horizon problem, with an error that diminishes as the actual time horizon grows.

We reformulate our problem to introduce a discount factor 0 < 𝛿 < 1 in the value function

so that the maximum value is guaranteed to be finite and a stationary optimal policy exists.

Discounting ensures that the expected present value is finite under every policy.

For the infinite-horizon model, let 𝑀𝑐|𝑋𝑛 denote the maximum expected multiple when the

available capacity is c and the current deal offered is 𝑋𝑛. If 𝑐 ≥ 1 and this deal is accepted,

the decision maker collects its certain multiplier and moves on to the next period with

capacity 𝑐 − 1. If the deal is not accepted, then the capacity remains the same in the next

period. The third available action for a positive capacity is to investigate the deal further

(gather information) and base the decision to accept or reject it on the outcome of the

investigation. Figure 52 summarizes the evolution of the discounted model.


In Figure 52, δ is the discount factor.

Definition 7.4.1: Multiples in the Long-Run Problem
We extend the definitions of the values from Section 7.3 to the long-run problem.

The Marginal Multiple of Capacity

R_c = M_c / M_{c-1}

The Incremental Multiple of a Deal

iM_c(X_n) = CM(X_n) / (δ R_c) ∨ 1

The Incremental Multiple of a Deal with Information

iMI_c(X_n) = CM(iM_c(X_n | I_i))

The Incremental Multiple of Information

iMoI_c(X_n) = iMI_c(X_n) / iM_c(X_n)

7.4.2 Extension of the Results to the Long-Run Problem

Here we characterize the problem parameters along similar lines as in Section 7.3. We find that all the relations are maintained along the capacity dimension.

Proposition 7.4.1: Characterizing the Long-Run Problem
I. When offered the deal n, the decision maker should buy information if and only if iMI_c(X_n) ≥ iM_c(X_n)/(1 − f). After information is received, the decision maker should buy the deal if and only if CM(X_n | I_i) ≥ R_c. Otherwise, the deal is worth buying without information if and only if CM(X_n) ≥ R_c.
II. M_c is non-decreasing in c.
III. R_c is non-increasing in c.

Figure 52 - Problem Structure with Infinite Horizon (branches: accept → δ·M_{c-1}·CM(X_n); reject → δ·M_c; seek information and then, given indication I_i, accept → δ·M_{c-1}·CM(X_n | I_i)·(1 − f) or reject → δ·M_c·(1 − f))


IV. iM_c(X_n) and iMI_c(X_n) are non-decreasing in c.
V. iMoI_c(X_n) is non-decreasing in c and reaches a maximum when R_c = CM(X_n); then it decreases in c until it converges at CM(X_n*)/CM(X_n), where X_n* is the deal with free information.
VI. The optimal policy can only change over c from rejecting to buying information, and from buying information to accepting.
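To make the stationary structure concrete, the sketch below runs a value-iteration-style computation for the risk-neutral long-run model using only the accept/reject branches of Figure 52 (the information branch is omitted for brevity). The deal model, the boundary multiple for exhausted capacity, and all numbers are placeholders, not the dissertation's calibration.

def long_run_multiples(cms, delta, C, M0=1.0, tol=1e-10):
    """M[c] approximates the long-run deal-flow multiple M_c (sketch).

    cms   : certain multipliers CM(X_n) of a set of equally likely deals
    delta : discount factor, 0 < delta < 1
    C     : maximum capacity; M0 is an assumed boundary multiple for c = 0
    Branches (Figure 52, without information):
        M_c | X_n = max(delta * M_{c-1} * CM(X_n), delta * M_c)
        M_c       = average over the deals of (M_c | X_n)
    Each capacity level is solved by fixed-point iteration, which converges
    because the reject branch enters with coefficient delta < 1.
    """
    M = [M0]
    for c in range(1, C + 1):
        x = M[c - 1]                                   # starting guess for M_c
        while True:
            x_new = sum(max(delta * M[c - 1] * cm, delta * x) for cm in cms) / len(cms)
            if abs(x_new - x) < tol:
                break
            x = x_new
        M.append(x_new)
    return M

# Illustrative use with made-up certain multipliers:
M = long_run_multiples(cms=[0.9, 1.0, 1.1, 1.3], delta=0.95, C=5)
R = [M[c] / M[c - 1] for c in range(1, len(M))]        # marginal multiples R_c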

7.5 Extensions

In this section we extend our work to allow for flexibility in capturing different assumptions. We study the optimal policy within three cases. First, we look into situations when gathering information imposes time and capacity costs in addition to monetary costs. For example, the time spent gathering information about a specific deal reduces our ability to source more deals. Similarly, the time a partner allocates to gathering information might decrease the time she has to serve on company boards and hence decreases the capacity available to the firm. We use this structure in the first and second extensions only.

In the second extension, we consider situations where the decision maker may choose to

reverse a single accepting decision. This can be seen as having the option to continue with

the allocation if no better opportunity arises. Otherwise, the decision maker, at a price, may

choose to reclaim his resource unit and allocate it to the new opportunity.

In the third extension, we employ the Probability of Knowing structure for information. In

this setup, information gathering is modeled as an exercise in obtaining perfect information.

This structure is useful when modeling situations where the decision maker is seeking

information on a distinction that is known by others. Additionally, this structure helps to

assess the relative value of information-gathering activities.

7.5.1 Multiple Cost Structures

In this section we extend our work on the value of information to include a delay and a capacity cost in addition to the monetary cost. We set up the problem, extend our optimal policy results, and then follow with a discussion of the use of multiple detectors.

Problem Structure
As an abstraction, we consider that information is offered at a cost of k resource units, d delay units, and a cost fraction f. A graphic representation of the problem is shown below.


Definition 7.5.1: Threshold with Multiple Cost Structures
We generalize the threshold R_c^t definition given in Section 7.3 to:

R_c^t(k,d) = M_c^{t+1} / M_{c-k}^{t+d}

Note that the threshold given earlier is now stated as R_c^t(1,1).
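In code, the generalized threshold is a direct lookup into the table of deal-flow multiples. The sketch assumes those multiples have already been computed and stored as M[t][c], which is a hypothetical layout rather than the dissertation's data structure:

def generalized_threshold(M, c, t, k=1, d=1):
    # Definition 7.5.1:  R_c^t(k, d) = M_c^{t+1} / M_{c-k}^{t+d}
    # With k = d = 1 this reduces to the original threshold R_c^t(1, 1).
    return M[t + 1][c] / M[t + d][c - k]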

The Optimal Policy with Multiple Cost Structures

Proposition 7.5.1a: The Optimal Information Gathering Policy with Multiple Cost Structures
When offered a deal X_n, the decision maker should buy information if and only if

iMI_{c-k}^{t+d}(X_n) ≥ iM_c^t(X_n) · R_c^t(k, d+1) / (1 − f)

Proposition 7.5.1b: The Optimal Allocation Policy with Multiple Cost Structures
After information is received, the decision maker should buy the deal if and only if

CM(X_n | I_i) ≥ R_{c-k}^{t+d}(1,1)

Otherwise, the deal is worth buying without information if and only if

CM(X_n) ≥ R_c^t

Example 7.5.1
To better illustrate this extension, we change the parameters of the problem: here we have a capacity of 8 resource units and a time horizon of 10 periods. The following graph shows the value of the deal flow with perfect information (clairvoyance) at different cost structures.

Figure 53 - Problem Structure with Information at Multiple Cost Types (branches: accept → M_{c-1}^{t+1}·CM(X_n); reject → M_c^{t+1}; seek information at cost (k, d, f) and then, given indication I_i, accept → M_{c-1-k}^{t+1+d}·CM(X_n | I_i)·(1 − f) or reject → M_{c-k}^{t+1+d}·(1 − f))


Figure 54 – Example 7.5.1: Multiples of the Deal Flow with Clairvoyance at Different Cost Structures

The red line represents the multiple of the original deal flow with no clairvoyance. The blue

line shows the multiple of the deal flow with clairvoyance offered at a cost fraction of 5%.

The green line shows the multiple of the deal flow when clairvoyance is offered at 3 time

units per usage. Finally the black line shows the multiple when the cost of clairvoyance is 3

resource units of capacity.

Note that in the beginning of the time horizon, the deal flow with clairvoyance at a capacity

cost is worse than that with clairvoyance at a cost fraction. This changes as time passes when

the capacity at hand exceeds the number of periods left.

Figure 55 shows the 𝑖𝑀𝑜𝐼 of clairvoyance in the different cases described above.

Figure 55 - Example 7.5.1: Incremental multiples of the deal with clairvoyance at different cost structures

Page 102: VALUATION OF FLEETING OPPORTUNITIESqz978wj4057...wide spectrum of classes and appreciate the insights of the full range of professors. I owe much gratitude to all the people who made

88

As the properties above state, iMoI increases with time. Note that the green line (with delay cost) goes to 1 (i.e., worthless) at time 10 because, within this deal flow, decision makers cannot afford the delay cost when only two units of time are left.

Multiple Detectors
We consider the case when the decision maker is offered two irrelevant detectors with indications I_1 and I_2 and costs (k_1, d_1, f_1) and (k_2, d_2, f_2), respectively.

Corollary 7.5.1: Identifying the Optimal Detector with Multiple Cost Structures
Given the setup above, Detector 1 will be optimal when:

iMI_1 / iMI_2 ≥ [R_c^t(k_2, d_2 + 1) / R_c^t(k_1, d_1 + 1)] · [(1 − f_2) / (1 − f_1)]

where iMI_1 = iMI_{c-k_1}^{t+d_1} | I_1 and iMI_2 = iMI_{c-k_2}^{t+d_2} | I_2. Otherwise, Detector 2 will be optimal.

This can be extended to more than two detectors. Also, as previously discussed, this criterion

is not myopic. If the decision maker is offered both detectors she might still start with the

less ‘optimal’ detector.

7.5.2 Decision Reversibility

In this section we consider the option of reversing decisions. For a background on options within Decision Analysis, please refer to Howard (1995).

Problem Setup and Definitions
This section requires generalizing the definitions used in this chapter so far.

Definition 7.5.2: Multiples with Decision Reversibility
We extend the definitions of the values from Section 7.3. The generalizations include adding a placeholder for the deal with an option.

The Certain Equivalent of the Deal Flow with an Option to Reverse a Decision
M_c^t(Z) | X_n is the certain multiplier of the future value of the deal flow at time t with remaining capacity c after observing a deal X_n, with an option to reverse an allocation to deal Z.

M_c^t(Z) is the certain multiplier of the future value of the deal flow at time t with remaining capacity c before observing a specific deal, with an option to reverse an allocation to deal Z.


Threshold with an Option to Reverse a Decision
R_c^t(k,d,Z) is the threshold with an option to reverse an allocation to deal Z.

R_c^t(k,d,Z) = M_c^{t+1}(Z) / M_{c-k}^{t+1+d}(Z)

The Incremental Cost Multiple of Capacity
iC_c^t(X_n,Z) is the incremental cost multiple of capacity.

iC_c^t(X_n,Z) = R_c^t(1,1,Z) / CM(X_n) ∨ 1

Definition 7.5.2: The Incremental Value of a Deal with an Option

iMO_c^t(X_n,Z) = M_c^{t+1}(X_n) / M_{c-1}^{t+1}(Z)

The incremental multiple of a deal with an option to reverse a decision is defined as the indifference buying fraction of the deal with the option and is equal to the contribution of the deal with a free option to the multiple of the deal flow.

We limit our discussion to the option of reversing a single accepting decision at any time. The

decision maker can keep an option on one deal only. So, in effect, the decision maker has

three alternatives at each stage. The first alternative is to wait. The second is to accept

without buying an option and the third is to accept with an option to reverse the decision

later.

In general, we assume that the decision maker has previously bought an option on a deal Z and

now he is offered a deal 𝑋𝑛 at time t with remaining capacity c. The first alternative is for the

decision maker to wait and not change anything. Another alternative is for him to accept 𝑋𝑛

without buying an option so he still has the option on Z. The third alternative is for him to

accept 𝑋𝑛 and buy an option. Here, since we allow one option only, he discards Z and does

not allocate more capacity. The following is a graphical description of the deal.


The Optimal Policy with an Option to Reverse an Allocation Decision

Proposition 7.5.2: The Optimal Allocation Policy with an Option
When offered a deal X_n with an option on deal Z, the decision maker should accept the deal and buy an option on it if and only if:

iMO_c^t(X_n,Z) ≥ iC_c^t(X_n,Z) · CM(Z) / (1 − f)

Otherwise, the decision maker should accept if and only if:

CM(X_n) > R_c^t(1,1,Z)

So the option is worth buying when the incremental multiple of the deal with the option exceeds the product of the incremental cost multiple of capacity, the multiple of the liquidated deal, and the price factor 1/(1 − f).
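A sketch of Proposition 7.5.2 as a decision rule, assuming the funnel supplies iMO_c^t(X_n, Z), iC_c^t(X_n, Z), CM(Z), CM(X_n), the threshold R_c^t(1,1,Z), and the option price fraction f (all names are illustrative):

def option_policy(iMO, iC, CM_Z, CM_Xn, R_Z, f):
    # Accept X_n and move the option to it (discarding the option on Z) iff
    # iMO_c^t(X_n, Z) >= iC_c^t(X_n, Z) * CM(Z) / (1 - f).
    if iMO >= iC * CM_Z / (1.0 - f):
        return "accept X_n and buy an option on it"
    # Otherwise accept X_n (keeping the option on Z) iff CM(X_n) > R_c^t(1,1,Z).
    if CM_Xn > R_Z:
        return "accept X_n without an option"
    return "wait"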

7.5.3 The Probability of Knowing Detectors

In this section we consider the probability of knowing detectors. This is an alternative method of modeling information-gathering activities defined by Howard (1998). We include this procedure because it allows us a clear way to value and order detectors. Here a detector is modeled as a probability of obtaining clairvoyance versus no information. In this extension we do not use the multiple cost structure. We set up the problem, extend our optimal policy results, and then follow with a discussion of using multiple detectors.

Problem Structure
The setup here differs from the basic one in the structure of information. When the decision maker seeks information, he either gets clairvoyance with probability p or gets no information. After getting the indication, the decision maker can then decide whether to accept or reject the deal at hand. The following is a graphical representation of the problem.

Figure 56 - Problem Structure with an Option to Reverse an Allocation Decision (alternatives for M_c^t(Z) | X_n: wait, accept without an option, and accept with an option; branch values shown: M_c^{t+1}(Z); M_{c-1}^{t+1}(X_n)·CM(X_n); M_c^{t+1}(X_n)·CM(X_n)/CM(Z)·(1 − f))


If the decision maker receives clairvoyance, he gets the incremental multiple of the deal with perfect information (iMPI), defined as

iMPI_c^t(X_n) = CM(iM_c^t(X_n | I))

Note that if the decision maker does not receive clairvoyance, the deal does not change.

The Optimal Policy with a Probability of Knowing Detectors

Proposition 7.5.3a: The Optimal Information Gathering Policy with a Probability of Knowing Detectors
Given a detector defined as above with a probability of knowing p and cost fraction f, the decision maker should buy information if and only if:

u(iMPI_c^t(X_n) / iM_c^t(X_n)) − 1 ≥ [u(1 / (1 − f)) − 1] / p

Proposition 7.5.3b: The Optimal Allocation Policy with a Probability of Knowing Detectors
If clairvoyance is obtained, the decision maker should buy the deal if and only if:

CM(X_n | I_i) ≥ R_c^t

Otherwise, if no clairvoyance is obtained or the decision maker did not buy information, then the decision maker should buy the deal if and only if:

CM(X_n) ≥ R_c^t

Figure 57 - Problem Structure with Probability of Knowing Detectors (seek information: with probability p clairvoyance is obtained and the accept/reject branches pay M_{c-1}^{t+1}·CM(X_n | I_i)·(1 − f) and M_c^{t+1}·(1 − f); otherwise the branches pay M_{c-1}^{t+1}·CM(X_n)·(1 − f) and M_c^{t+1}·(1 − f); without seeking information, accept pays M_{c-1}^{t+1}·CM(X_n) and reject pays M_c^{t+1})

Multiple Detectors with a Probability of Knowing Detectors
Now say we have two irrelevant detectors with probabilities of clairvoyance p_1 and p_2 and cost fractions f_1 and f_2, respectively.

Corollary 7.5.3: Identifying the Optimal Detector with a Probability of Knowing Detectors
Given the situation above, Detector 1 will be optimal when

[u(1 / (1 − f_1)) − 1] / p_1 < [u(1 / (1 − f_2)) − 1] / p_2

Otherwise Detector 2 will be optimal. In this setup, the optimality is myopic. So if we have multiple irrelevant detectors, we use them in increasing order of [u(1/(1 − f)) − 1] / p.

As stated earlier, the corollary here differs from its parallel in the additive setting.
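The sketch below applies the buying criterion and the myopic ordering for probability-of-knowing detectors. The u-curve is passed in as a function; the power form u(x) = x**0.5 used in the example is only a placeholder, and the detector numbers are made up.

def knowing_index(u, p, f):
    # Cost-per-knowing-probability index from Corollary 7.5.3:
    # (u(1 / (1 - f)) - 1) / p, where a smaller index is better.
    return (u(1.0 / (1.0 - f)) - 1.0) / p

def should_buy_knowing_information(u, iMPI, iM, p, f):
    # Proposition 7.5.3a: buy information iff
    # u(iMPI_c^t(X_n) / iM_c^t(X_n)) - 1 >= (u(1 / (1 - f)) - 1) / p.
    return u(iMPI / iM) - 1.0 >= knowing_index(u, p, f)

u = lambda x: x ** 0.5          # placeholder u-curve (illustrative exponent)
detectors = [{"p": 0.6, "f": 0.05}, {"p": 0.3, "f": 0.01}]
# Because the ordering is myopic here, detectors are simply used in
# increasing order of their index:
ordered = sorted(detectors, key=lambda d: knowing_index(u, d["p"], d["f"]))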


Chapter 8 – Step 3: Funnel Application and Examples

“Sometimes it's the smallest decisions that can change your life forever.”

Keri Russell

In the final step of the process, we discuss how to apply the results of the valuation funnel and work through some examples in detail.

Figure 58 - Step 3 of the Solution Process

8.1 Overview

The third step of our template is to apply the results of the valuation funnel. By the end of this step, the decision maker can use the results of the valuation funnel to evaluate meta-decisions that relate to the process itself; additionally, the results provide the optimal policy regarding real-time decisions.

This chapter is organized into six parts. In Sections 8.2 and 8.3 we discuss how the valuation funnel is applied to meta-decisions and to real-time decisions. In Section 8.4 we outline the example setup. In Section 8.5 we give real-time application examples, specifically discussing when the decision maker should accept or reject a deal and when to buy an additional alternative. In Section 8.6 we give examples of meta-decision applications.


8.2 Application of the Funnel

In this section we discuss how to apply the funnel. The application process depends on the framing of the decision. As discussed in Section 5.2, framing is done on two levels: the first relates to the deal flow and the second to the deals as they arrive. Hence, we apply the funnel on two different levels; the first is to evaluate meta-decisions, that is, decisions that affect the deal flow as a whole. The second is to evaluate real-time decisions, that is, decisions that occur during the process and are particular to specific deals.

Figure 59 – Two Levels of Funnel Application

8.3 Two Types of Decisions: Meta and Real-Time

Meta-decisions include decisions to set the timeline, the capacity level, the deal sourcing, and the information sources available. In all of these contexts, the decision is not limited to a specific deal; rather, it relates to the future deal flow as a whole.

The funnel can be used to evaluate meta-decisions through the following steps:
1. Model the meta-alternatives
2. Build the Valuation Funnel for each alternative
3. Choose the one with the highest future deal flow value
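A sketch of these three steps, where funnel_value stands for whatever routine builds the valuation funnel for a modeled alternative and returns the value of the future deal flow (both arguments are assumed interfaces, not the dissertation's code):

def choose_meta_alternative(alternatives, funnel_value):
    # Step 1 is assumed done: 'alternatives' maps a name to its modeled deal flow.
    # Step 2: build the valuation funnel for each alternative.
    values = {name: funnel_value(model) for name, model in alternatives.items()}
    # Step 3: choose the alternative with the highest future deal flow value.
    best = max(values, key=values.get)
    return best, values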

In Section 8.6 we give three policy evaluation examples. The first is concerned with choosing

the focus of a Venture Capital firm. The second and third examples discuss issues around

hiring a partner for the VC firm.

Real-time decisions include accepting a deal, buying information, buying control, and buying

options. These are decisions that must be applied as the deals arrive.


The results of the funnel provide a guide to evaluating deals. The funnel gives the incremental values of the alternatives concerning a specific deal. Specifically, the funnel can be used to evaluate real-time decisions through the following steps:

1. Update the generic diagrams to incorporate beliefs about the specific deal

2. Refer to the funnel results to find the incremental value of the deal with the different

alternatives

3. Choose the alternative with the highest incremental value

For example, consider a deal offered at (c,t) for which the decision makers have three

alternatives, namely, accept, reject, and buy information. The decision maker first updates

the generic diagram with her beliefs. She then assesses the detector’s accuracy and cost.

Finally, she refers to the funnel results to find the incremental value of the deal and of the

detector. Now she chooses the alternative with the highest incremental value.

8.4 Setup Examples

In this chapter we consider the following example of Saad, the owner of DA Ventures, a venture capital firm. Saad's firm focuses on early-stage technology startups. For this round, they have the following parameters:

- They can invest in 9 companies at most (c = 9)
- They have 50 periods to receive proposals (T = 50)
- They have a 10% chance, at each period, of receiving no proposal
- They follow the delta property and their risk tolerance is $100 million (γ = 0.01)

The firm classifies the startups using four uncertainties, namely, hiring success, technical

success, customer acquisition, and dilution effect. For simplicity, the first three uncertainties

are modeled as binary uncertainties, where failure in any one of them indicates the failure of the venture.

The first, hiring success, pertains to the startup’s success in attaining the skills needed on the

team. The second uncertainty, technical success, models the startup’s success in developing

the technology needed for the venture, given that they have the needed skills on their team.

The third uncertainty, customer acquisition, models the success of the startup in acquiring

customers, given that they have the needed skills on their team and that their technology

works. The fourth uncertainty, dilution effect, models the stake the VC firm has in the


startup, given its success. This uncertainty is broken down into three degrees, namely, high,

medium, and low dilution effect.

The firm is considering two categories of technology startups, hardware and software as a

service (SAAS). The following tree represents their beliefs about hardware startups:

Figure 60 - Application Example: Hardware Deals

The firm believes that any hardware startup proposal they receive is equally likely to be one

of twenty-seven different deals. The deals are equally probable combinations of the possible

values of the three uncertainties, p1, p2, and p3, where p1 = [0.2, 0.4, 0.6], p2 = [0.15, 0.3,

0.45], and p3 = [0.75, 0.85, 0.95].

The following tree represents their beliefs about software as a service deals:

Figure 61 - Application Example: Software as Service Deals

Figure 60 data (hardware early-stage startup, $ millions): hiring success with probability p1, then technical success with probability p2, then customer acquisition with probability p3; given full success, the liquidity (dilution) effect is Low (0.15) with payoff 300, Medium (0.7) with payoff 200, or High (0.15) with payoff 100; customer-acquisition failure pays −15, technical failure pays −10, and hiring failure pays −0.5.

Figure 61 data (software as a service early-stage startup, $ millions): the same tree structure, with liquidity-effect payoffs of 500 (Low, 0.15), 300 (Medium, 0.7), and 100 (High, 0.15); customer-acquisition failure pays −20, technical failure pays −5, and hiring failure pays −0.1.


The firm believes that any software as a service startup proposal they receive is equally likely

to be one of twenty-seven different deals. The deals are equally probable combinations of

the possible values of the three uncertainties, p1, p2, and p3, where p1 = [0.4, 0.6, 0.8], p2 =

[0.5, 0.65, 0.8], and p3 = [0, 0.2, 0.4].
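For a single proposal, the payoff lottery can be read directly off the trees above, and, since the firm follows the delta property, its certain equivalent follows from the standard exponential u-curve with the setup's risk-aversion coefficient γ = 0.01 (dollar amounts in millions). The sketch below instantiates one of the twenty-seven hardware deals; the chosen (p1, p2, p3) combination is just an example.

import math

GAMMA = 0.01      # risk-aversion coefficient from the setup ($ amounts in millions)

def startup_outcomes(p1, p2, p3, payoffs):
    # Probability/payoff pairs for one early-stage deal; 'payoffs' holds
    # (low, medium, high dilution, CA failure, technical failure, hiring failure).
    low, med, high, ca_fail, tech_fail, hire_fail = payoffs
    return [
        (1 - p1,               hire_fail),
        (p1 * (1 - p2),        tech_fail),
        (p1 * p2 * (1 - p3),   ca_fail),
        (p1 * p2 * p3 * 0.15,  low),
        (p1 * p2 * p3 * 0.70,  med),
        (p1 * p2 * p3 * 0.15,  high),
    ]

def certain_equivalent(outcomes, gamma=GAMMA):
    # Delta property (exponential u-curve): CE = -(1/gamma) * ln E[exp(-gamma * x)]
    return -math.log(sum(p * math.exp(-gamma * x) for p, x in outcomes)) / gamma

HARDWARE_PAYOFFS = (300.0, 200.0, 100.0, -15.0, -10.0, -0.5)   # Figure 60
SAAS_PAYOFFS     = (500.0, 300.0, 100.0, -20.0, -5.0, -0.1)    # Figure 61

# One of the 27 equally likely hardware deals:
ce = certain_equivalent(startup_outcomes(p1=0.4, p2=0.3, p3=0.85, payoffs=HARDWARE_PAYOFFS))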

With the above structure, we apply the funnel to real-time decisions and meta-decisions. For

real-time decisions we consider:

1. Buying Information

2. Accepting Deals

For meta-decisions we consider the following three situations:

1. Situation 1: Choosing the Focus of the Firm

2. Situation 2: Hiring Partners: Evaluating Synergies in Their Skills

3. Situation 3: Hiring Partners: Evaluating Their Skills

8.5 Real-Time Application Examples

8.5.1 Real-Time Setup Example

Here we consider a situation in which Saad has 4 resource units left and is at t = 40 (10 steps away from the deadline). We would like to see which deals Saad should buy information on and when he would accept the deals. In this example we assume information is offered at a cost of m = 8.

As stated before, we have to look at the incremental values of the deals under the different alternatives and then choose the alternative with the highest incremental value. The situation is described in Figure 62.

Figure 62 - Example 8.4: Real-Time Decisions Structure


8.5.2 Should Saad Buy Information?

To answer this question we refer to Proposition 6.3.1a, which states that the decision maker should buy information if and only if:

iVI_c^t(X_n) ≥ iV_c^t(X_n) + m

This is equivalent to buying information if and only if:

iVoI_c^t(X_n) ≥ m

Thus, we find the incremental value of information for the different deals and compare them

to the cost of information. Saad buys information when the incremental value of information

is higher than its cost. The example is illustrated in Figure 63.

Figure 63 - Example 8.4: Should Saad Buy Information?

Figure 63 shows that Saad should not pay 8 monetary units for information on any deal at

time t=40. This analysis allows Saad to negotiate with the information provider. Saad may

choose to inform her that if she provides information at m=6 instead of 8 then he will be 50%

likely to buy it from her. At m=8, however, he will never buy information, regardless of the

deal at hand.
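The comparison behind Figure 63 and the negotiation argument can be written down directly; ivoi_by_deal is assumed to hold the funnel's incremental values of information at (c = 4, t = 40), one entry per deal type:

def information_purchases(ivoi_by_deal, m):
    # Buy information on a deal exactly when its incremental value of
    # information is at least the price m (Proposition 6.3.1a restated).
    return {deal: ivoi >= m for deal, ivoi in ivoi_by_deal.items()}

def highest_worthwhile_price(ivoi_by_deal):
    # The largest incremental value of information across the deal types is
    # the highest price at which buying information can ever be worthwhile,
    # which is the number Saad can use when negotiating with the provider.
    return max(ivoi_by_deal.values())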

8.5.3 Should Saad Invest in the Deals?

To answer this question we refer to Proposition 6.3.1b, which states that the decision maker should accept the deal with information if and only if CE(X_n | I_i) ≥ R_c^t and should accept it without information if and only if CE(X_n) ≥ R_c^t. This is equivalent to stating that iVI_c^t(X_n) > 0 or iV_c^t(X_n) > 0.


Since we found above that Saad will not be buying information for any of the deals, we limit

our consideration to deals without information. We refer to the funnel results and find the

incremental values of the deals without information. If the incremental value is positive, then

we invest in the deal; otherwise, we reject it. The example is illustrated in Figure 64.

Figure 64 - Example 8.4: Should Saad Invest in the Deals?

The graph above shows that Saad should invest if the deal is deal 4, deal 5, or deal 6.

Otherwise, Saad should reject the deal.

8.6 Examples of Meta-Decision Applications

8.6.1 Situation 1: Choosing the Focus of the Firm

Saad is deciding where to focus his firm. He can focus on hardware startups, software startups, or a combination of both. We study this in the case of the basic deal flow and of the deal flow with information.

The Basic Case
Figure 65 shows the difference between the value of the process focusing solely on hardware startups and that focusing on software startups.


Figure 65 – Example 8.5: Should Saad Focus on Hardware or Software?

From Figure 65 we can see that, regardless of the current stage of the process, the firm will

always prefer to focus on hardware startups over SAAS startups.

Figure 66 shows the possible combinations of the two categories. It is clearly shown that the

inclusion of SAAS hurts the value of the process.

Figure 66 – Example 8.5: What Combination of HW and SW Should Saad Focus on?

Information Effect
Next, we study the effect of available information. The following graph shows the difference between the two deal flows when we have free clairvoyance.



Figure 67 - Example 8.5: With Information, Should Saad Focus on Hardware or Software?

It is worth noting how the optimal focus shifts for capacity levels of 4 and 1 units. For the first proposals, it is better to focus on SAAS; this optimality later shifts back to hardware startups. To understand this shift, note that while SAAS is less optimal, it has better prospects. With information, the firm can identify the better deals. Since, early in the process and for a deal flow with few resource units, the firm can afford to wait for better deals to arrive, it is worth using SAAS and waiting for the better deals.

Figure 68 shows the possible combinations of the two categories.

Figure 68 - Example 8.5: With Information, What Combination of HW and SW Should Saad Focus on?

In the beginning of the process the firm benefits from having a pure SAAS focus; towards the

end the optimality shifts to a pure hardware focus. Figure 68 shows how the optimality shifts

in the middle of the time horizon.



8.6.2 Situation 2: Hiring Partners: Evaluating Synergies in Their Skills

For the sake of clarity, we reduce the time horizon available here to 20 instead of 50.

Rana, an aspiring engineer, wants to join DA Ventures. Saad believes that Rana can add value to the firm in two different ways through her network: she can recruit great talent to the startups and she can attract more proposals. Here we consider the question of evaluating the synergies in these skills. We model Rana's skills as:

- Recruiting talent: modeled as control over the hiring uncertainty by multiplying its odds by 2 (sketched in code below)
- Attracting more proposals: modeled as increasing the time horizon from 20 to 40
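A small sketch of the two modeling ideas, kept separate from the funnel itself: doubling the odds of hiring success maps a probability p to 2p/(1 + p), and the synergy question below is simply the sign of V3 − (V1 + V2), where V1, V2, and V3 are the values defined in the next paragraph.

def doubled_odds_probability(p):
    # Control over the hiring uncertainty: odds p/(1-p) are doubled,
    # so the controlled probability is 2p / (1 + p).
    return 2.0 * p / (1.0 + p)

def synergy(v1, v2, v3):
    # Positive when the combined skills are worth more than the sum of the
    # separate contributions (synergetic); negative when they substitute.
    return v3 - (v1 + v2)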

Let V1 and V2 be the value added by controlling hiring and increasing the time horizon,

respectively. Let V3 be the value added by the combination of both the control and the

increase in the time horizon. The following graph shows the difference between V3 and

(V1+V2).

Figure 69 – Example 8.5: Are the Partner’s Skills Complementary or Synergetic?

Note that the difference is negative for low capacity levels and thus indicates that the skills

can substitute for each other. However, for capacity levels higher than 5, the difference is

positive, indicating synergy between the skills. When we have more resource units at hand,

increasing the time horizon allows us more opportunity to apply the control. With few

resource units, the time horizon is already long enough to maximize the benefit of the

control available.



8.6.3 Situation 3: Hiring Partners: Evaluating Their Skills

Here we consider Maha, a professor of software engineering. Maha has offered to join DA Ventures to start a focus on SAAS. Maha requested that the organization put one third of its deal sourcing activities into SAAS. DA Ventures now needs to evaluate how much value the addition of Maha will bring to the firm. The decision makers believe that she will be able to

increase the number of proposals they receive by 20% (so T = 60), will increase their capacity

by one unit (so c = 10), and will be able to gather information about the technical and

customer acquisition uncertainties within the SAAS proposals. The firm, however, is not sure

about the accuracy of Maha’s information and hence not sure how to value Maha’s input to

the firm. To solve this, we model Maha’s information-gathering activities as detectors with

different levels of accuracy and then find the minimum accuracy needed for Maha to be

useful.

The following graph shows the value of the deal flow without Maha (black lines) and with

Maha at different levels of accuracy (acc = 0.55, 0.6, 0.65, 0.7).

Figure 70 – Example 8.5: How Good Must the Partner's Information be to Justify Hiring her?

In the graph above we see that the firm has to believe that Maha has an accuracy of at least

0.7 to be worthwhile. Recall that SAAS is less optimal than hardware, so including SAAS

reduces the value of the deal flow to the firm. Hence, Maha's information on SAAS has to be accurate enough to offset that decrease in value.
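The accuracy question can be framed as a simple scan over candidate detector accuracies. Here, value_with_maha stands for the funnel computation with T = 60, c = 10, and one third of sourcing in SAAS; it is only stubbed out with a made-up monotone toy, since the real computation is the funnel itself.

def minimum_useful_accuracy(value_with_maha, value_without, accuracies=(0.55, 0.60, 0.65, 0.70)):
    # Return the smallest candidate accuracy at which adding Maha does not
    # reduce the value of the deal flow; None if no candidate is high enough.
    for acc in accuracies:
        if value_with_maha(acc) >= value_without:
            return acc
    return None

# Toy stand-in for the funnel (made up, monotone in accuracy):
toy_value = lambda acc: 40 + 60 * (acc - 0.5)
print(minimum_useful_accuracy(toy_value, value_without=52.0))   # -> 0.7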



Chapter 9 – Conclusions and Future Research

“Life isn't about finding yourself. Life is about

creating yourself.”

“When I was young I observed that nine out of

every ten things I did were failures, so I did ten

times more work.”

George Bernard Shaw

9.1 Conclusions

Our dissertation is motivated by the problem of the Venture Capitalist. We abstracted the problem in the form of the Fleeting Opportunities structure. In this formulation, the decision maker has flows of opportunities (deals) that arise over time. He may only accept a limited number of deals and has to decide immediately how to react to each deal as it arrives. We limited our discussion to situations where the distribution over deals does not change over time and where the deals are considered irrelevant. Additionally, we limited the decision maker's risk attitude to exhibit either constant absolute risk aversion or constant relative risk aversion.

In the fleeting opportunities setup, we generalized the current dynamic programming structures to include risk aversion and information gathering, among other extensions. In this way, our structure better captures the complexities of the underlying situation. We introduced new definitions and principles for the application of the power u-curve. Additionally, we characterized the value of information and the optimal policy, thus allowing the decision maker to develop an intuitive understanding of the deal flow.

We employed Decision Analysis methodologies in solving this problem. The result is a valuation template for the fleeting opportunities problem. The decision maker may follow the template to evaluate the alternatives presented by the deal flow and the alternatives presented by specific deals as they arrive. We call for the development of valuation templates in other fields and, more generally, for the development of other DA methodologies in an effort to standardize Decision Analysis.

9.2 Future Research

In the future we intend to extend this work to improve the application of the template to the fleeting opportunities problem. We also intend to define valuation templates in different application areas. We identify six dimensions of future research.

9.2.1 Power U-Curve

We are working on extending the work on the power u-curve to include risk sharing, risk scaling, and hedging. Our study of the risk sharing and scaling properties of the power u-curve will follow those of the exponential u-curve studied by Howard (1998). In hedging, we are working on extending the work by Seyller (2008) to the multiplicative setting.

9.2.2 Random Capacity Requirements

We would like to extend the work of Kleywegt & Papastavrou (2001) on requests with random capacity sizes and include risk aversion, information, control, and options. This will improve the applicability of the process in two ways. First, it will allow us to model requests with different requirements that are not always known ahead of time. Second, it allows us to model the capacity cost in a more consistent way. For example, we will not have to charge the same capacity requirement for information as for a request.

9.2.3 Relevance

Our model now assumes that requests are irrelevant given their arrival time. We aim to change the underlying process to follow the Markov property. This, again, will improve the extent to which the process models the actual requests.

Another dimension of relevance is that between requests within or across different input types. That is, a more comprehensive model would allow the value of a specific deal type to be affected by the number of the other types at hand. This extension will allow us to optimize and diversify the portfolio.


9.2.4 Options and Liquidity

We have described an option that allows us to reverse an investment decision. We would also like to study the option to reverse a rejection decision, in other words, to put items on hold.

We would also like to extend the investment reversal option to include liquidity considerations. By modeling the evolution of deals as time passes, we can include liquidity as a future decision.

9.2.5 Learning

While we model information-gathering activities, we do not model how decision makers improve their ability to gather information. By modeling how the uncertainties in the requests resolve, we can allow the deal flow to update the accuracy of the detectors and thus model learning.

9.2.6 Decision Analysis Methodologies

We would like to follow our method in this dissertation and design other DA methodologies, specifically templates for valuation, for areas other than the ones discussed here. Of special interest to us are decisions relating to renewable energy projects and to Islamic financing products. We believe these two areas are understudied; thus, an introduction of a valuation template is likely to have an impact on their current practices.

“Be happy for this moment. This moment is

your life.”

Omar Khayyam


Bibliography

Abbas, A. E. (2003). Entropy Methods in Decision Analysis. Dissertation submitted to the Department of Management Science & Engineering of Stanford University.

Abbas, A. E. (2007). Invariant Utility Functions and Certain Equivalent Transformations.

Decision Analysis , 4 (1), 17-31.

Arrow, K. J. (1971). Essays in the Theory of Risk Bearing. Amsterdam, Holland: Markham

Publishing Co.

Barron, A. R., & Cover, T. M. (1988). A bound on the financial value of information. IEEE

Transactions on Information Theory , 34 (5).

Bayes, T. (1763). An Essay Towards Solving a Problem in the Doctrine of Chances. Biometrika

, 45, 293-315.

Bernoulli, D. (1954). Exposition of a New Theory on the Measurement of Risk. Econometrica, 22 (1), 23-36.

Bickel, E. (2008). The relationship between perfect and imperfect information in a two-action

risk-sensitive problem. Decision Analysis , 5 (3), 116-128.

Boulis, A., & Srivastava, M. (2004). Node-Level Energy Management for Sensor Networks in

the Presence of Multiple Applications. Wireless Networks , 10 (6), 737-746.

Copeland, T. E., Koller, T., & Murrin, J. (2005). Valuation : Measuring and Managing the Value

of Companies (4 ed.). Wiley.

Cornell, B. (1993). Corporate Valuation: Tools for Effective Appraisal and Decision-Making.

McGraw-Hill.

Cover, T. M., & Thomas, J. A. (1991). Elements of Information Theory. New York: Wiley.

Damodaran, A. (2002). Investment Valuation: Tools and Techniques for Determining the

Value of Any Asset. New York: John Wiley and Sons.

Damodaran, A. (n.d.). Probabilistic Approaches: Scenario Analysis, Decision Trees and

Simulations. Unpublished .

Damodaran, A. (2001). The Dark Side of Valuation. FT Press.


Delquié, P. (2008). The Value of Information and Intensity of Preference. Decision Analysis , 5

(3), 129-139.

Derman, C., Lieberman, G. J., & Ross, S. M. (1972). A Sequential Stochastic Assignment

Problem. Management Science (18), 349-355.

Feller, J. (2002). Pricing of Multidimensional Resources in Revenue Management

(Multidimensional Dynamic Stochastic Knapsack Problem). Operations Research Proceedings,

(pp. 407-413). Klagenfurt.

Fried, V. H., & Hisrich, R. D. (1988). Venture capital research: Past, present and future.

Entrepreneurship Theory and Practice , 13, 15-28.

Gilbert, J., & Mosteller, F. (1966). Recognizing the maximum of a sequence. Journal of

American Statistics Association (61), 35-73.

Gould, J. P. (1974). Risk, stochastic preference, and the value of information. Journal

Economic Theory (8), 64-84.

Hilton, R. W. (1981). Determinants of information value: Synthesizing some general results.

Management Science (27), 57–64.

Howard, R. A. (1965). Bayesian Decision Models for Systems Engineering. IEEE Transactions

on Systems Science and Cybernetics , 1 (1), 36-40.

Howard, R. A. (1966a). Decision Analysis: Applied Decision Theory. In D. B. Hertz, & J. Melese

(Ed.), Proceedings of the Fourth International Conference on Operational Research (pp. 55-

71). New York: Wiley-Interscience.

Howard, R. A. (1970). Decision Analysis: Perspectives on Inference, Decision and

Experimentation. In R. A. Howard, & J. E. Matheson (Eds.), Readings on the Principles and

Applications of Decision Analysis (Vol. 2, pp. 821-834). Menlo Park, CA: Strategic Decisions

Group.

Howard, R. A. (1988). Decision Analysis: Practice and Promise. Management Science (38),

679-695.

Howard, R. A. (1960). Dynamic Programming and Markov Processes. Cambridge, MA: The

MIT Press.


Howard, R. A. (1992). In Praise of the Old Time Religion. Utility Theories: Measurements and

Applications , 27-55.

Howard, R. A. (1966b). Information Value Theory. IEEE Transactions on Systems Science and

Cybernetics , 2 (1), 22-26.

Howard, R. A. (1989). Knowledge Maps. Management Science , 35 (8), 903-922.

Howard, R. A. (1980). On Making Life and Death Decisions. In R. A. Howard, & J. E. Matheson

(Eds.), Readings on the Principles and Applications of Decision Analysis (Vol. 2, pp. 483-506).

Menlo Park, CA: Strategic Decisions Group.

Howard, R. A. (1995). Options. Wise Choices: Games, Decisions, and Negotiations. a

symposium in honor of Howard Raiffa. Harvard Buisness School.

Howard, R. A. (2004). Speaking of decisions: Precise decision language. Decision Analysis , 1

(2), 71-78.

Howard, R. A. (1968). The foundations of decision analysis. IEEE Transactions on Systems

Science and Cybernetics , 4 (3), 211–219.

Howard, R. A. (2007). The Foundations of Decisions Analysis Revisited. In W. Edwards (Ed.),

Advances in Decision Analysis from Foundations to Applications. Cambridge, UK: Cambridge

University Press.

Howard, R. A. (1998). The Fundamentals of Decision Analysis, manuscript in progress.

Stanford, CA: Unpublished.

Howard, R. A. (1967). Value of information lotteries. IEEE Transactions on Systems Science

and Cybernetics , 3 (1), 54–60.

Howard, R. A., & Abbas, A. E. (2011E). Foundations of Decision Analysis. Prentice Hall.

Howard, R. A., & Matheson, J. E. (1981). Influence Diagrams. In R. A. Howard, & J. E.

Matheson (Eds.), Readings on the Principles and Applications of Decision Analysis (Vol. 2, pp.

719-762). Menlo Park, CA: Strategic Decisions Group.

Howard, R. A., & Matheson, J. (1972). Risk-sensitive Markov Decision Processes.

Management Science (18), 356-369.

Jaynes, E. T. (2003). Probability Theory: the Logic of Science. England: Cambridge University

Press.


Jennergren, L. P. (2002). A Tutorial on the McKinsey Model for Valuation of Companies.

SSE/EFI Working Paper Series in Business Administration No. 1998:1 .

Keeney, R. L. (1982). Decision Analysis: an Overview. Operations Research , 30, 803-838.

Keeney, R. L., & Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value

Trade-Offs. New York: Wiley.

Kellerer, H., Pferschy, U., & Pisinger, D. (2004). Knapsack Problems. Berlin: Springer.

Kelly, J. L. (1956). A new interpretation of information rate. Bell System Technical Journal , 35,

917-926.

Kemmerer, B., Mishra, S., & Shenoy, P. P. (Unpublished). Bayesian Causal Maps as Decision

Aids in Venture Capital Decision Making: Methods and Applications. Unpublished.

Kleinberg, R. D. (2005). A Multiple-Choice Secretary Algorithm with Applications to Online

Auctions. Proceedings of Symposium on Discrete Algorithms (16), 630–631.

Kleywegt, A. J. (1996). Dynamic and Stochastic Models with Freight Distribution Applications.

Ph.D. Dissertation. Boston, MA: Massachusetts Institute of Technology.

Kleywegt, A. J., & Papastavrou, J. D. (1998). The Dynamic and Stochastic Knapsack Problem. Operations Research (46), 17-35.

Kleywegt, A. J., & Papastavrou, J. D. (2001). The Dynamic and Stochastic Knapsack Problem

with Random Sized Items. Operations Research , 49 (1), 26-41.

Laplace, P. S. (1902). A Philosophical Essay on Probability. New York: John Wiley.

MacMillan, I. C., Zemann, L., & Subbanarasimha, P. N. (1987). Criteria distinguishing

successful from unsuccessful ventures in the venture screening process. Journal of Business

Venturing , 2, 123–137.

MacMillan, I., Siegel, R., & Narasimha, P. N. (1985). Criteria Used By Venture Capitalists To

Evaluate New Venture Proposals. Journal of Business Venturing , 1 (1), 119-128.

MacQueen, J., & Miller, R. J. (1960). Optimal Persistence Policies. Operations Research (8),

362–380.

Matheson, D. (1983). Managing the Corporate Business Portfolio. In R. A. Howard, & J. E.

Matheson (Eds.), Readings on the Principles and Applications of Decision Analysis, (Vol. 1, pp.

719-762). Menlo Park, CA: Strategic Decisions Group.


Matheson, D., & Matheson, J. (2005). Describing and Valuing Interventions That Observe or

Control Decision Situations. Decision Analysis , 2 (3), 165–181.

Matheson, D., & Matheson, J. E. (1998). The Smart Organization: Creating Value Through

Strategic R&D. Massachusetts: Harvard Business School Press.

Matheson, J. E., & Howard, R. A. (1968). An Introduction to Decision Analysis. In R. A.

Howard, & J. E. Matheson (Eds.), Readings on the Principles and Applications of Decision

Analysis (Vol. 1, pp. 17-55). Menlo Park, CA: Strategic Decisions Group.

McNamee, P., & Celona, J. (1990). Decision Analysis for the Professional. Redwood City, CA:

Scientific Press.

Papastavrou, J. D., Rajagopalan, S. H., & Kleywegt, A. J. (1996). The Dynamic and Stochastic

Knapsack Problem with Deadlines. Management Science, 42, 1706-1718.

Pratt, J. W. (1964). Risk Aversion in the Small and in the Large. Econometrica , 32 (2), 122-

136.

Pratt, J. W., Raiffa, H., & Schlaifer, R. (1964). The Foundations of Decision under Uncertainty:

an Elementary Exposition. Journal of the American Statistical Association , 59, 353-375.

Raiffa, H. (1968). Decision Analysis: Introductory Lectures on Choice under Uncertainty.

Reading, Massachusetts: Addison-Wesley.

Richman, J. (2009). An Analysis of Decision-Making in Venture Capital. Stanford, CA:

Unpublished undergraduate honors thesis.

Ross, K. W., & Tsang, D. (1989). The Stochastic Knapsack Problem. IEEE Transactions on

Communications (34), 47-53.

Sahlman, W. A. (1986, May-June). Aspects of Financial Contracting in Venture Capital.

Harvard Business Review .

Seyller, T. C. (2008). The Value of Hedging. Stanford, CA: Dissertation submitted to the

Department of Management Science & Engineering of Stanford University.

Shachter, R. D. (1986). Evaluating Influence Diagrams. Operations Research , 34 (Nov.-Dec.

1986), 871-882.

Shachter, R. D. (1988). Probabilistic Inference and Influence Diagrams. Operations Research ,

36 (July-Aug. 1988), 589-605.


Shepherd, D. A., & Zacharakis, A. L. (1999). Conjoint Analysis: a New Methodological

Approach for Researching the Decision Policies of Venture Capitalists. Venture Capital - An

International Journal of Entrepreneurial Finance , 1 (3), 197 - 217.

Shepherd, D. A., & Zacharakis, A. L. (2002). Venture Capitalists' Expertise: A Call for Research

into Decision Aids and Cognitive Feedback. Journal of Business Venturing , 17 (1), 1-20.

Spetzler, C. S., & Staël von Holstein, A. S. (1975). Probability Encoding in Decision Analysis.

Management Science , 22 (3), 340-358.

Thorp, E. O. (1997). The Kelly Criterion in Blackjack, Sports Betting, and the Stock Market. The

10th International Conference on Gambling and Risk Taking.

Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases.

Science , 185 (4157), 1124-1131.

Tyebjee, T., & Bruno, A. (1984). A Model of Venture Capitalist Investment Activity.

Management Science , 30 (9), 1051-1066.

Van Slyke, R., & Young, Y. (2000). Finite Horizon Stochastic Knapsacks with Applications to

Yield Management. Operations Research , 48 (1 ), 155-172.

Veinott, A. F. (2004). Lecture Notes in Dynamic Programming. Stanford, CA: Stanford

University Bookstore.

Von Neumann, J., & Morgenstern, O. (1947). Theory of Games and Economic Behavior.

Princeton, New Jersey: Princeton University Press.

Wells, W. (1973). Venture Capital Decision-making. Thesis. Carnegie-Mellon University.

Zacharakis, A. L., & Meyer, G. D. (1998). A Lack of Insight: Do Venture Capitalists Really

Understand their own Decision Process? Journal of Business Venturing , 13 (1), 57-76.

Zacharakis, A. L., & Meyer, G. D. (2000). The Potential of Actuarial Decision Models: Can They

Improve the Venture Capital Investment Decision? Journal of Business Venturing, 15(1), 323-346.

Zacharakis, A. L., & Shepherd, D. A. (2001). The Nature of Information and Overconfidence on

Venture Capitalists' Decision Making. Journal of Business Venturing , 16 (4), 311-332.


Appendix A1 – Proofs

A1.1 Chapter 3 Proofs

Proposition 3.2.1

Paying a buying-price fraction F_b of wealth for the deal changes the certain equivalent of the decision maker's prospects to

W_0 · CM · (1 - F_b)

For the decision maker to be indifferent between the deal and his/her initial wealth, the buying price must satisfy

W_0 · CM · (1 - F_b) = W_0

1 - F_b = 1/CM

→ F_b = (CM - 1)/CM

Proposition 3.2.2

Along similar lines, the selling price must satisfy

W_0 · CM = W_0 · (1 + F_s)

CM = 1 + F_s

→ F_s = CM - 1
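The two fractions above are simple functions of the certain multiplier CM. Below is a minimal Python sketch of the computation; the function names and the sample CM value are illustrative assumptions, not taken from the dissertation.

    def buying_price_fraction(cm):
        # Fraction of wealth F_b at which the decision maker is indifferent:
        # W0 * CM * (1 - F_b) = W0  =>  F_b = (CM - 1) / CM
        return (cm - 1.0) / cm

    def selling_price_fraction(cm):
        # Fraction of wealth F_s at which selling the deal is a wash:
        # W0 * CM = W0 * (1 + F_s)  =>  F_s = CM - 1
        return cm - 1.0

    if __name__ == "__main__":
        cm = 1.25  # illustrative certain multiplier of the deal
        print(buying_price_fraction(cm))   # 0.2
        print(selling_price_fraction(cm))  # 0.25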

Proposition 3.2.3

Let (p_i, r_i) with an investment fraction f_1 be the current deal, with CEM = CM_1. Let the new deal with CEM = CM_2 that the decision maker is offered be represented by (q_j, s_j) with an investment fraction f_2. Let UM_1 and UM_2 be the u-multipliers of the two deals, respectively.

The prospects the decision maker faces with the two deals are

(1 + r_i f_1)(1 + s_j f_2) W_0

The u-multiplier of the combined deal can be found as follows:

UM(combined deal) = Σ_{i=1}^{n} Σ_{j=1}^{m} p_i (1 + r_i f_1)^λ · q_j (1 + s_j f_2)^λ

UM(combined deal) = {Σ_{i=1}^{n} p_i (1 + r_i f_1)^λ} · {Σ_{j=1}^{m} q_j (1 + s_j f_2)^λ}

UM(combined deal) = UM_1 · UM_2

CM(combined deal) = [UM(combined deal)]^{1/λ} = [UM_1 · UM_2]^{1/λ} = CM_1 · CM_2

For the decision maker to be indifferent between buying the new deal and keeping his/her current prospects, the buying price must satisfy

W_0 · CM_1 = W_0 · CM_1 · CM_2 · (1 - F_b)

1 = CM_2 (1 - F_b)

→ F_b = (CM_2 - 1)/CM_2

Proposition 3.2.4

Before obtaining information, the decision maker has certain equivalent

W_0 · CM_1

and after obtaining information and paying a fraction F_b, has certain equivalent

W_0 · CM_2 · (1 - F_b)

For the decision maker to be indifferent between the two states, the buying price must satisfy

W_0 · CM_1 = W_0 · CM_2 · (1 - F_b)

CM_1 = CM_2 (1 - F_b)

→ F_b = 1 - CM_1/CM_2

In terms of UM we have

1 - F_b = CM_1/CM_2

(1 - F_b)^λ = UM_1/UM_2

→ F_b = 1 - (UM_1/UM_2)^{1/λ}
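As a numerical companion to Proposition 3.2.4, the following Python sketch computes the u-multiplier of a discrete deal and the fraction of wealth worth paying to move from the deal without information (UM_1) to the deal with information (UM_2). The power u-curve and all numbers are illustrative assumptions.

    def u_multiplier(probs, returns, lam):
        # UM = sum_i p_i * (1 + r_i)^lambda for a discrete deal (p_i, r_i)
        return sum(p * (1.0 + r) ** lam for p, r in zip(probs, returns))

    def info_buying_fraction(um1, um2, lam):
        # F_b = 1 - (UM1 / UM2)^(1/lambda), from Proposition 3.2.4
        return 1.0 - (um1 / um2) ** (1.0 / lam)

    if __name__ == "__main__":
        lam = 0.5  # illustrative risk-preference exponent
        um1 = u_multiplier([0.6, 0.4], [0.30, -0.10], lam)  # deal as-is
        um2 = u_multiplier([0.6, 0.4], [0.30, 0.00], lam)   # deal after information
        print(info_buying_fraction(um1, um2, lam))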


Proposition 3.2.5

The value of control can be related directly to the value of information above. Consider the case in which the wizard (perfect control) changes the deal from CM_1 to CM_2. The proof then follows along the same lines as above.

Proposition 3.2.6

Consider a deal x ~ (p_i, r_i). From the Pratt approximation we have

CE(X) ≈ E(X) - (1/2) r(E(X)) · VAR(X)

CM ≈ E(X)/W_0 - (1/2) r(E(X)) · VAR(X)/W_0     (1)

Note that

E(X)/W_0 = E[(1 + r_i) W_0]/W_0 = E(1 + r_i)     (2)

and the risk-aversion coefficient at E(X) is

r(E(X)) = (1 - λ) / [W_0 (1 + E(r_i))]     (3)

Finally,

VAR(X) = E(X²) - E(X)²
       = E[(1 + r_i)² W_0²] - E[(1 + r_i) W_0]²
       = E(1 + 2 r_i + r_i²) W_0² - E(1 + r_i)² W_0²
       = [E(r_i²) - E(r_i)²] W_0² = VAR(r_i) W_0²

VAR(X)/W_0 = VAR(r_i) W_0     (4)

Substituting (2), (3), and (4) back into (1) gives

CM ≈ E(1 + r_i) - (1/2) · (1 - λ) / [W_0 (1 + E(r_i))] · VAR(r_i) W_0

CM ≈ E(1 + r_i) - (1/2) · (1 - λ) · VAR(r_i) / (1 + E(r_i))

Let R_i = 1 + r_i, so that E(R_i) = E(1 + r_i) and VAR(R_i) = VAR(r_i). Then

→ CM ≈ E(R_i) - (1/2) (1 - λ) VAR(R_i)/E(R_i)
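A quick numerical check of this approximation, with illustrative numbers; the exact value uses CM = UM^{1/λ} as defined earlier in this appendix.

    def exact_cm(probs, returns, lam):
        # CM = (sum_i p_i * (1 + r_i)^lambda)^(1/lambda)
        um = sum(p * (1.0 + r) ** lam for p, r in zip(probs, returns))
        return um ** (1.0 / lam)

    def pratt_cm(probs, returns, lam):
        # CM ~ E(R) - 0.5 * (1 - lambda) * VAR(R) / E(R), with R = 1 + r
        mean_R = sum(p * (1.0 + r) for p, r in zip(probs, returns))
        var_R = sum(p * (1.0 + r) ** 2 for p, r in zip(probs, returns)) - mean_R ** 2
        return mean_R - 0.5 * (1.0 - lam) * var_R / mean_R

    if __name__ == "__main__":
        probs, returns, lam = [0.5, 0.5], [0.25, -0.05], 0.3
        print(exact_cm(probs, returns, lam), pratt_cm(probs, returns, lam))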

Proposition 3.2.7

Consider an investor who invests a small fraction f of his wealth in a deal x ~ (p_i, r_i):

UM = Σ_{i=1}^{n} p_i (1 + f r_i)^λ

dUM/df = Σ_{i=1}^{n} p_i λ r_i (1 + f r_i)^{λ-1}

d²UM/df² = Σ_{i=1}^{n} p_i λ (λ - 1) r_i² (1 + f r_i)^{λ-2}

The second-order Taylor expansion around f = 0 is

UM ≈ 1 + λ f Σ_{i=1}^{n} p_i r_i + λ (λ - 1) (f²/2) Σ_{i=1}^{n} p_i r_i²

UM ≈ 1 + λ f E(r_i) + λ (λ - 1) (f²/2) E(r_i²)

Corollary 3.2.1

The optimal fraction f* that maximizes the approximate u-multiplier from Proposition 3.2.7 satisfies

dUM/df = 0 → λ E(r_i) + λ (λ - 1) f* E(r_i²) = 0

→ f* = E(r_i) / [(1 - λ) E(r_i²)]

Corollary 3.2.2

With the approximation from Proposition 3.2.7, the maximum fraction f_m, above which the decision maker is better off not investing, is found by setting UM = 1:

1 + λ E(r_i) f_m + λ (λ - 1) E(r_i²) (f_m)²/2 = 1

λ E(r_i) f_m + λ (λ - 1) E(r_i²) (f_m)²/2 = 0

E(r_i) f_m + (λ - 1) E(r_i²) (f_m)²/2 = 0     (λ ≠ 0)

f_m [E(r_i) + (λ - 1) E(r_i²) f_m / 2] = 0

→ f_m = 0, or f_m = 2 E(r_i) / [(1 - λ) E(r_i²)]
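A small sketch of both fractions for a discrete deal; the inputs are illustrative assumptions.

    def optimal_fraction(probs, returns, lam):
        # f* = E(r) / ((1 - lambda) * E(r^2)), from Corollary 3.2.1
        mean_r = sum(p * r for p, r in zip(probs, returns))
        mean_r2 = sum(p * r * r for p, r in zip(probs, returns))
        return mean_r / ((1.0 - lam) * mean_r2)

    def max_fraction(probs, returns, lam):
        # f_m = 2 * E(r) / ((1 - lambda) * E(r^2)), from Corollary 3.2.2
        return 2.0 * optimal_fraction(probs, returns, lam)

    if __name__ == "__main__":
        probs, returns, lam = [0.6, 0.4], [0.40, -0.30], 0.2
        print(optimal_fraction(probs, returns, lam), max_fraction(probs, returns, lam))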

Proposition 3.2.8

Consider a deal x ~ (p_i, r_i):

UM = Σ_{i=1}^{n} p_i (1 + r_i)^λ

d^{(j)}UM/dλ^{(j)} = Σ_{i=1}^{n} p_i (1 + r_i)^λ [ln(1 + r_i)]^j

d^{(j)}UM(λ = 0)/dλ^{(j)} = Σ_{i=1}^{n} p_i [ln(1 + r_i)]^j = E{[ln(1 + r_i)]^j}

Since UM(λ = 0) = 1, the Taylor expansion around λ = 0 is

→ UM ≈ 1 + Σ_{j=1}^{n} (λ^j / j!) E{[ln(R_i)]^j}


A1.2 Chapter 6 Proofs

A1.2.1 Section 6.3 Main Results

For convenience, we reproduce the recursion here. It can be rewritten as

V_c^t | X_n = { [V_{c-1}^{t+1} + CE(X_n)] ∨ V_c^{t+1} ∨ [V_{c-1}^{t+1} - m + CE(CE(X_n | I_i) ∨ R_c^t)] }     (1)

Proposition 6.3.1a: Optimal Information Gathering Policy

When offered a deal X_n, the decision maker should buy information if and only if

iVI_c^t(X_n) ≥ iV_c^t(X_n) + m

Proposition 6.3.1b: Optimal Allocation Policy

After information is received, the decision maker should accept the deal if and only if

CE(X_n | I_i) ≥ R_c^t

Otherwise, the deal is worth accepting without information if and only if

CE(X_n) ≥ R_c^t

(Figure: decision tree at state (c, t) given deal X_n. Accept yields V_{c-1}^{t+1} + CE(X_n); Reject yields V_c^{t+1}; Seek info at cost m yields, after indication I_i, either V_{c-1}^{t+1} + CE(X_n | I_i) - m on accepting or V_c^{t+1} - m on rejecting.)

Using the definition of the incremental value of the deal with information, recursion (1) simplifies so that the information alternative is worth V_c^{t+1} + iVI_c^t(X_n) - m.

We seek information if and only if

V_c^{t+1} + iVI_c^t(X_n) - m ≥ {[V_{c-1}^{t+1} + CE(X_n)] ∨ V_c^{t+1}}

iVI_c^t(X_n) ≥ {(CE(X_n) - R_c^t) ∨ 0} + m

So,

iVI_c^t(X_n) ≥ iV_c^t(X_n) + m

After information is received, the deal is worth accepting if and only if

V_{c-1}^{t+1} + CE(X_n | I_i) - m ≥ V_c^{t+1} - m

CE(X_n | I_i) ≥ V_c^{t+1} - V_{c-1}^{t+1}

CE(X_n | I_i) ≥ R_c^t

Otherwise, we prefer to take the deal without information. We buy the deal if and only if

V_{c-1}^{t+1} + CE(X_n) ≥ V_c^{t+1}

or CE(X_n) ≥ R_c^t
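The policy above can be computed by backward recursion. The sketch below assumes a risk-neutral decision maker (so every CE is an expectation) purely to keep the code short; the deal types, parameters, and numbers are illustrative assumptions, not from the dissertation.

    # Backward recursion for V_c^t. 'deals' is a list of (prob, prior CE,
    # indications), where indications is a list of (prob, posterior CE).
    def solve(deals, C, T, m):
        # V[c][t] for c = 0..C and t = 0..T+1, with V[c][T+1] = 0 and V[0][t] = 0.
        V = [[0.0] * (T + 2) for _ in range(C + 1)]
        for t in range(T, -1, -1):
            for c in range(1, C + 1):
                R = V[c][t + 1] - V[c - 1][t + 1]          # threshold R_c^t
                total = 0.0
                for q, ce, indications in deals:
                    accept = V[c - 1][t + 1] + ce
                    reject = V[c][t + 1]
                    info = V[c - 1][t + 1] - m + sum(
                        p * max(ce_post, R) for p, ce_post in indications)
                    total += q * max(accept, reject, info)
                V[c][t] = total
        return V

    if __name__ == "__main__":
        # One illustrative deal type: prior CE of 1.0, and an indication that
        # resolves it to 3.0 or -1.0 with equal probability.
        deals = [(1.0, 1.0, [(0.5, 3.0), (0.5, -1.0)])]
        V = solve(deals, C=3, T=5, m=0.2)
        print(V[3][0])

The same structure carries through when the expectation is replaced by the certain equivalent of the decision maker's u-curve.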

Proposition 6.3.2: Characterizing V

1. V_c^t is non-decreasing in c for all t
2. V_c^t is non-increasing in t for all c

Proposition 6.3.2 Statement 1

We again write the recursion as

V_c^t | X_n = { [V_{c-1}^{t+1} + CE(X_n)] ∨ V_c^{t+1} ∨ [V_{c-1}^{t+1} - m + CE(CE(X_n | I_i) ∨ R_c^t)] }     (1)

We prove the statement by induction. Take t = T; (1) becomes

V_c^T | X_n = { [V_{c-1}^{T+1} + CE(X_n)] ∨ V_c^{T+1} ∨ [V_{c-1}^{T+1} - m + CE(CE(X_n | I_i) ∨ R_c^T)] }

Since V_c^{T+1} = 0 for all c, this equation reduces to

V_c^T | X_n = { CE(X_n) ∨ 0 ∨ CE[{CE(X_n | I_i) ∨ R_c^T}] - m }     for c ≥ 1

V_0^T | X_n = 0     for c = 0

Hence V is non-decreasing in c at t = T.

Now assume the statement is true for t = k, that is,

V_c^k ≥ V_{c-1}^k for all c ≥ 1     (2)

and prove that it is true for t = k - 1, that is,

V_c^{k-1} ≥ V_{c-1}^{k-1} for all c ≥ 1     (3)

where

V_c^{k-1} | X_n = { [V_{c-1}^k + CE(X_n)] ∨ V_c^k ∨ [V_{c-1}^k - m + CE(CE(X_n | I_i) ∨ R_c^{k-1})] }

and

V_{c-1}^{k-1} | X_n = { [V_{c-2}^k + CE(X_n)] ∨ V_{c-1}^k ∨ [V_{c-2}^k - m + CE(CE(X_n | I_i) ∨ R_{c-1}^{k-1})] }

But since V_c^k is non-decreasing in c, by (2) we have

V_{c-1}^{k-1} | X_n ≤ { [V_{c-1}^k + CE(X_n)] ∨ V_c^k ∨ [V_{c-1}^k - m + CE(CE(X_n | I_i) ∨ R_c^{k-1})] } = V_c^{k-1} | X_n

Hence,

V_{c-1}^{k-1} | X_n ≤ V_c^{k-1} | X_n for all X_n

meaning (3) is true, namely

V_{c-1}^{k-1} ≤ V_c^{k-1}

And, finally, by induction,

V_{c-1}^t ≤ V_c^t for all c ≥ 1 and all t

Proposition 6.3.2 Statement 2

This statement follows directly from the recursion. We have

V_c^t | X_n = { V_{c-1}^{t+1} + CE(X_n) ∨ V_c^{t+1} ∨ V_{c-1}^{t+1} - m + CE({CE(X_n | I_i) ∨ R_c^t}) }

Hence

V_c^t | X_n ≥ V_c^{t+1} for all X_n and c

leading to

V_c^t ≥ V_c^{t+1} for all c

So V_c^t is non-increasing in t for all c.

Proposition 6.3.3: Characterizing R

1. R_c^t is non-increasing in c for all t
2. R_c^t is non-increasing in t for all c

Proposition 6.3.3 Statement 1

We prove this by induction. Take the case t = T; the statement becomes

R_c^T ≥ R_{c+1}^T

This is true since

R_c^T = V_c^{T+1} - V_{c-1}^{T+1} = 0 for all c ≥ 1

Hence R_c^T is non-increasing in c at t = T.

Now, assume this is true for t = k; let's prove it for t = k - 1. So we know that

R_c^k ≥ R_{c+1}^k for all c ≥ 1     (1)

→ V_c^{k+1} - V_{c-1}^{k+1} ≥ V_{c+1}^{k+1} - V_c^{k+1}

2 V_c^{k+1} ≥ V_{c+1}^{k+1} + V_{c-1}^{k+1} for all c at t = k + 1

For the statement to be true at t = k - 1, the following must hold:

R_c^{k-1} ≥ R_{c+1}^{k-1} for all c     (2)

2 V_c^k ≥ V_{c+1}^k + V_{c-1}^k for all c at t = k     (3)


For each X_n, we can rewrite V_c^k as

V_c^k | X_n = { CE(X_n) + V_{c-1}^{k+1} ∨ V_c^{k+1} ∨ CE({CE(X_n | I_i) + V_{c-1}^{k+1} ∨ V_c^{k+1}} - m) }

Now, define Q_c^k as

Q_c^k(X_n) = CE({CE(X_n | I_i) ∨ R_c^k} - m)

So we have

V_c^k | X_n = V_{c-1}^{k+1} + {CE(X_n) ∨ R_c^k ∨ Q_c^k(X_n)}

Now, rewrite (3) for each X_n as

2 {CE(X_n) ∨ R_c^k ∨ Q_c^k(X_n)} + 2 V_{c-1}^{k+1} ≥ {CE(X_n) ∨ R_{c+1}^k ∨ Q_{c+1}^k(X_n)} + V_c^{k+1} + {CE(X_n) ∨ R_{c-1}^k ∨ Q_{c-1}^k(X_n)} + V_{c-2}^{k+1}     (4)

which is equivalent to

R_c^k - R_{c-1}^k + {CE(X_n) ∨ R_{c+1}^k ∨ Q_{c+1}^k(X_n)} - 2 {CE(X_n) ∨ R_c^k ∨ Q_c^k(X_n)} + {CE(X_n) ∨ R_{c-1}^k ∨ Q_{c-1}^k(X_n)} ≤ 0     (5)

Note that from (1) we have

𝑅𝑐−1𝑘 ≥ 𝑅𝑐𝑘 ≥ 𝑅𝑐+1𝑘 (6)

This directly leads to:

𝑄𝑐−1𝑘 (𝑋𝑛) ≥ 𝑄𝑐𝑘(𝑋𝑛) ≥ 𝑄𝑐+1𝑘 (𝑋𝑛) (7)

From (6), we know that if

CE(X_n) ≥ R_c^k

then the deal cannot be rejected at any higher c. The same goes for buying information: if

CE(X_n) ≥ Q_c^k(X_n)

then information will not be bought at any higher c.

We first prove the following relation:


Q_{c-1}^k(X_n) - R_{c-1}^k ≤ Q_c^k(X_n) - R_c^k     (8)

LHS = CE({CE(X_n | I_i) ∨ R_{c-1}^k} - m) - R_{c-1}^k
    = CE([CE(X_n | I_i) - R_{c-1}^k]^+) - m
    ≤ CE([CE(X_n | I_i) - R_c^k]^+) - m
    = CE({CE(X_n | I_i) ∨ R_c^k} - m) - R_c^k = RHS

Note that (8) means that if rejecting a deal was better than getting information on it for a given c, then the same holds for any lower c.

In the following, we drop (Xn) from the Q term for clarity.

After we reject cases that contradict (6), (7), or (8), we consider the following 10 cases:

Maximum Term

Case (c+1,t) (c,t) (c-1,t)

1 R R R

2 Q R R

3 Q Q R

4 Q Q Q

5 CE(Xn) R R

6 CE(Xn) Q R

7 CE(Xn) Q Q

8 CE(Xn) CE(Xn) R

9 CE(Xn) CE(Xn) Q

10 CE(Xn) CE(Xn) CE(Xn)

CASE 1:

(5) = 𝑅𝑐𝑘 − 𝑅𝑐−1𝑘 + 𝑅𝑐+1𝑘 − 2 𝑅𝑐𝑘 + 𝑅𝑐−1𝑘 = 𝑅𝑐+1𝑘 − 𝑅𝑐𝑘

but 𝑅𝑐+1𝑡 ≤ 𝑅𝑐𝑡 by the induction assumption, (1),→ (5) ≤ 0

CASE 2:

(5) = 𝑅𝑐𝑘 − 𝑅𝑐−1𝑘 + 𝑄𝑐+1𝑘 − 2 𝑅𝑐𝑘 + 𝑅𝑐−1𝑘 = 𝑄𝑐+1𝑘 − 𝑅𝑐𝑘

by (7), (5) ≤ 𝑄𝑐𝑘 − 𝑅𝑐𝑘


by the case assumption, (5) ≤ 0

CASE 3:

(5) = 𝑅𝑐𝑘 − 𝑅𝑐−1𝑘 + 𝑄𝑐+1𝑘 − 2𝑄𝑐𝑘 + 𝑅𝑐−1𝑘 = 𝑅𝑐𝑘 + 𝑄𝑐+1𝑘 − 2𝑄𝑐𝑘 = {𝑅𝑐𝑘 − 𝑄𝑐𝑘} + {𝑄𝑐+1𝑘 − 𝑄𝑐𝑘}

from the case induction assumption, (1), 𝑄𝑐+1𝑘 ≤ 𝑄𝑐𝑘

and from the case assumption,𝑅𝑐𝑘 ≤ 𝑄𝑐𝑘 → (5) ≤ 0

CASE 4:

(5) = 𝑅𝑐𝑘 − 𝑅𝑐−1𝑘 + 𝑄𝑐+1𝑘 − 2 𝑄𝑐𝑘 + 𝑄𝑐−1𝑘

by (8), we have 𝑄𝑐−1𝑘 (𝑋𝑛)− 𝑅𝑐−1𝑘 ≤ 𝑄𝑐𝑘(𝑋𝑛) − 𝑅𝑐𝑘, so

(5) ≤ 𝑅𝑐𝑘 + 𝑄𝑐+1𝑘 − 2 𝑄𝑐𝑘 + 𝑄𝑐𝑘 − 𝑅𝑐𝑘 = 𝑄𝑐+1𝑘 − 𝑄𝑐𝑘

by the induction assumption, (1),𝑄𝑐+1𝑘 ≤ 𝑄𝑐𝑘 → (5) ≤ 0

CASE 5:

(5) = 𝑅𝑐𝑘 − 𝑅𝑐−1𝑘 + 𝐶𝐸(𝑋𝑛) − 2 𝑅𝑐𝑘 + 𝑅𝑐−1𝑘 = 𝐶𝐸(𝑋𝑛) − 𝑅𝑐𝑘

by the case assumption, 𝐶𝐸(𝑋𝑛) ≤ 𝑅𝑐𝑘 → (5) ≤ 0

CASE 6:

(5) = 𝑅𝑐𝑘 − 𝑅𝑐−1𝑘 + 𝐶𝐸(𝑋𝑛) − 2 𝑄𝑐𝑘 + 𝑅𝑐−1𝑘 = 𝑅𝑐𝑘 + 𝐶𝐸(𝑋𝑛)− 2 𝑄𝑐𝑘

by the case assumption, 𝐶𝐸(𝑋𝑛),𝑅𝑐𝑘 ≤ 𝑄𝑐𝑘 → (5) ≤ 0

CASE 7:

(5) = 𝑅𝑐𝑘 − 𝑅𝑐−1𝑘 + 𝐶𝐸(𝑋𝑛) − 2 𝑄𝑐𝑘 + 𝑄𝑐−1𝑘

by (8), we have 𝑄𝑐−1𝑘 (𝑋𝑛)− 𝑅𝑐−1𝑘 ≤ 𝑄𝑐𝑘(𝑋𝑛)− 𝑅𝑐𝑘, 𝑠𝑜

(5) ≤ 𝑅𝑐𝑘 + 𝐶𝐸(𝑋𝑛)− 2 𝑄𝑐𝑘 + 𝑄𝑐𝑘 − 𝑅𝑐𝑘 = 𝐶𝐸(𝑋𝑛)− 𝑄𝑐𝑘

by the case assumption, 𝐶𝐸(𝑋𝑛) ≤ 𝑄𝑐𝑘 → (5) ≤ 0

CASE 8:

(5) = 𝑅𝑐𝑘 − 𝑅𝑐−1𝑘 + 𝐶𝐸(𝑋𝑛) − 2 𝐶𝐸(𝑋𝑛) + 𝑅𝑐−1𝑘 = 𝑅𝑐𝑘 − 𝐶𝐸(𝑋𝑛)

by the case assumption, 𝑅𝑐𝑘 ≤ 𝐶𝐸(𝑋𝑛) → (5) ≤ 0

CASE 9:


(5) = 𝑅𝑐𝑘 − 𝑅𝑐−1𝑘 + 𝐶𝐸(𝑋𝑛) − 2 𝐶𝐸(𝑋𝑛) + 𝑄𝑐−1𝑘 = 𝑅𝑐𝑘 − 𝐶𝐸(𝑋𝑛) + 𝑄𝑐−1𝑘 − 𝑅𝑐−1𝑘

by (8), we have 𝑄𝑐−1𝑘 (𝑋𝑛)− 𝑅𝑐−1𝑘 ≤ 𝑄𝑐𝑘(𝑋𝑛)− 𝑅𝑐𝑘, 𝑠𝑜

(5) ≤ 𝑅𝑐𝑘 − 𝐶𝐸(𝑋𝑛) + 𝑄𝑐𝑘 − 𝑅𝑐𝑘 = 𝑄𝑐𝑘 − 𝐶𝐸(𝑋𝑛)

by the case assumption, 𝑄𝑐𝑘 ≤ 𝐶𝐸(𝑋𝑛) → (5) ≤ 0

CASE 10:

(5) = 𝑅𝑐𝑘 − 𝑅𝑐−1𝑘 + 𝐶𝐸(𝑋𝑛) − 2 𝐶𝐸(𝑋𝑛) + 𝐶𝐸(𝑋𝑛) = 𝑅𝑐𝑘 − 𝑅𝑐−1𝑘

by the induction assumption, (1), 𝑅𝑐𝑘 ≤ 𝑅𝑐−1𝑘 → (5) ≤ 0

So since (5) is true for every Xn, we know that (3) is true and:

𝑅𝑐𝑘−1 ≥ 𝑅𝑐+1𝑘−1 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 ≥ 1 (1)

Finally, by induction, we know that this is true for all t, or

𝑅𝑐𝑡 ≥ 𝑅𝑐+1𝑡 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 ≥ 1 𝑎𝑡 𝑎𝑛𝑦 𝑡

Proposition 6.3.3 Statement 2

We prove this by induction.

Take the case when t = T, the statement becomes

𝑅𝑐𝑇−1 ≥ 𝑅𝑐𝑇

This is true since

𝑅𝑐𝑇 = 𝑉𝑐𝑇+1 − 𝑉𝑐−1𝑇+1 = 0 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 ≥ 1

However, since V is non decreasing in c, we have:

𝑅𝑐𝑇−1 = 𝑉𝑐𝑇 − 𝑉𝑐−1𝑇 ≥ 0

Hence 𝑅𝑐𝑇 is non increasing in t at t = T-1

Assume this is true for t=k and show this statement is true for t=k-1

So we know:

𝑅𝑐𝑘−1 ≥ 𝑅𝑐𝑘 (1)

Meaning:

𝑉𝑐𝑘 − 𝑉𝑐−1𝑘 ≥ 𝑉𝑐𝑘+1 − 𝑉𝑐−1𝑘+1 (2)


And we want to show

𝑅𝑐𝑘−2 ≥ 𝑅𝑐𝑘−1 (3)

Alternatively,

𝑉𝑐𝑘−1 − 𝑉𝑐−1𝑘−1 ≥ 𝑉𝑐𝑘 − 𝑉𝑐−1𝑘 (4)

We rewrite 𝑉𝑐𝑡 for each 𝑋𝑛 as:

𝑉𝑐𝑡|𝑋𝑛 = {𝐶𝐸(𝑋𝑛) + 𝑉𝑐−1𝑡+1 ∨ 𝑉𝑐𝑡+1 ∨

𝐶𝐸({𝐶𝐸(𝑋𝑛|𝐼𝑖) + 𝑉𝑐−1𝑡+1 ∨ 𝑉𝑐𝑡+1} −𝑚)}

Recall that 𝑄𝑐𝑡 is defined as

𝑄𝑐𝑡(𝑋𝑛) = 𝐶𝐸({𝐶𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐𝑡} −𝑚)

So, we have:

𝑉𝑐𝑡|𝑋𝑛 = 𝑉𝑐−1𝑡+1 + {𝐶𝐸(𝑋𝑛) ∨ 𝑅𝑐𝑡 ∨ 𝑄𝑐𝑡(𝑋𝑛)}

Now, rewrite (4) for each Xn as:

𝑉𝑐−1𝑘 +{𝐶𝐸(𝑋𝑛) ∨ 𝑅𝑐𝑘−1 ∨ 𝑄𝑐𝑘−1(𝑋𝑛)}

−𝑉𝑐−2𝑘 −�𝐶𝐸(𝑋𝑛) ∨ 𝑅𝑐−1𝑘−1 ∨ 𝑄𝑐−1𝑘−1(𝑋𝑛)�

−𝑉𝑐−1𝑘+1 −{𝐶𝐸(𝑋𝑛) ∨ 𝑅𝑐𝑘 ∨ 𝑄𝑐𝑘(𝑋𝑛)}

+ 𝑉𝑐−2𝑘+1 +�𝐶𝐸(𝑋𝑛) ∨ 𝑅𝑐−1𝑘 ∨ 𝑄𝑐−1𝑘 (𝑋𝑛)� ≥ 0 (5)

Which is equivalent to

𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 −{𝐶𝐸(𝑋𝑛) ∨ 𝑅𝑐𝑘−1 ∨ 𝑄𝑐𝑘−1(𝑋𝑛)}

+�𝐶𝐸(𝑋𝑛) ∨ 𝑅𝑐−1𝑘−1 ∨ 𝑄𝑐−1𝑘−1(𝑋𝑛)�

+ {𝐶𝐸(𝑋𝑛) ∨ 𝑅𝑐𝑘 ∨ 𝑄𝑐𝑘(𝑋𝑛)}

− �𝐶𝐸(𝑋𝑛) ∨ 𝑅𝑐−1𝑘 ∨ 𝑄𝑐−1𝑘 (𝑋𝑛)� ≤ 0 (5)

Note that from statement 1 we have

𝑅𝑐𝑡 ≥ 𝑅𝑐+1𝑡 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑡 𝑎𝑛𝑑 𝑐 ≥ 1 (6)

Leading to:

𝑄𝑐𝑡(𝑋𝑛) ≥ 𝑄𝑐+1𝑡 (𝑋𝑛)


𝑓𝑜𝑟 𝑎𝑙𝑙 𝑡 𝑎𝑛𝑑 𝑐 ≥ 1 (7)

Also, (1) gives us

𝑄𝑐𝑘−1(𝑋𝑛) ≥ 𝑄𝑐𝑘(𝑋𝑛) 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 ≥ 1 (8)

In similar lines as in statement 1, we prove:

𝑄𝑐𝑘−1(𝑋𝑛) − 𝑅𝑐𝑘−1 ≤ 𝑄𝑐𝑘(𝑋𝑛) − 𝑅𝑐𝑘 (9)

𝐿𝐻𝑆 = 𝐶𝐸�{𝐶𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐𝑘−1} −𝑚� − 𝑅𝑐𝑘−1

= 𝐶𝐸�[𝐶𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐𝑘−1]+� − 𝑚

By the induction assumption,

𝐿𝐻𝑆 ≤ 𝐶𝐸�[𝐶𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐𝑘]+� − 𝑚

= 𝐶𝐸�{𝐶𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐𝑘} −𝑚� − 𝑅𝑐𝑘 = 𝑅𝐻𝑆

Recall, from the proof of statement 1, that:

𝑄𝑐−1𝑡 (𝑋𝑛) − 𝑅𝑐−1𝑡 ≤ 𝑄𝑐𝑡(𝑋𝑛) − 𝑅𝑐𝑡 (10)

In the following, we drop (Xn) from the Q term for clarity.

In the same manner as in the proof for statement 3, we reject all the cases that contradict

statements (6), (7), and (8) and end up with the following 20 cases.

Maximum Term

Case (c,k-1) (c-1,k-1) (c,k) (c-1,k)

1 R R R R
2 R R Q R
3 R R Q Q
4 R R CE(Xn) R
5 R R CE(Xn) Q
6 R R CE(Xn) CE(Xn)
7 Q R Q R
8 Q R Q Q
9 Q R CE(Xn) R
10 Q R CE(Xn) Q
11 Q R CE(Xn) CE(Xn)
12 Q Q Q Q
13 Q Q CE(Xn) Q
14 Q Q CE(Xn) CE(Xn)
15 CE(Xn) R CE(Xn) R
16 CE(Xn) R CE(Xn) Q
17 CE(Xn) R CE(Xn) CE(Xn)
18 CE(Xn) Q CE(Xn) Q
19 CE(Xn) Q CE(Xn) CE(Xn)
20 CE(Xn) CE(Xn) CE(Xn) CE(Xn)

Now, let us consider the cases and evaluate relation (5):

CASE 1: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝑅𝑐𝑘 − 𝑅𝑐−1𝑘 − 𝑅𝑐𝑘−1 + 𝑅𝑐−1𝑘−1 = 𝑅𝑐𝑘 − 𝑅𝑐𝑘−1

but, from (1),𝑅𝑐𝑘 ≤ 𝑅𝑐𝑘−1 → (5) ≤ 0 CASE 2: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝑄𝑐𝑘 − 𝑅𝑐−1𝑘 − 𝑅𝑐𝑘−1 + 𝑅𝑐−1𝑘−1 = 𝑄𝑐𝑘 − 𝑅𝑐𝑘−1

From the case we know: 𝑅𝑐𝑘−1 ≥ 𝑄𝑐𝑘−1 From (8) we have: 𝑄𝑐𝑘−1 ≥ 𝑄𝑐𝑘

thus, (5) ≤ 0 CASE 3: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝑄𝑐𝑘 − 𝑄𝑐−1𝑘 − 𝑅𝑐𝑘−1 + 𝑅𝑐−1𝑘−1 = 𝑅𝑐−1𝑘 + 𝑄𝑐𝑘 − 𝑄𝑐−1𝑘 − 𝑅𝑐𝑘−1

Note that, as with case 2, 𝑅𝑐𝑘−1 ≥ 𝑄𝑐𝑘 And by the case assumption, 𝑅𝑐−1𝑘 ≤ 𝑄𝑐−1𝑘 → (5) ≤ 0

CASE 4: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛) − 𝑅𝑐−1𝑘 − 𝑅𝑐𝑘−1 + 𝑅𝑐−1𝑘−1 = 𝐶𝐸(𝑋𝑛)− 𝑅𝑐𝑘−1

By the induction assumption (1), (5) = 𝐶𝐸(𝑋𝑛) − 𝑅𝑐𝑘−1 ≤ 𝐶𝐸(𝑋𝑛) − 𝑅𝑐𝑘

By the case assumption, 𝐶𝐸(𝑋𝑛) ≤ 𝑅𝑐𝑘 → (5) ≤ 0

CASE 5: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛) − 𝑄𝑐−1𝑘 − 𝑅𝑐𝑘−1 + 𝑅𝑐−1𝑘−1 = 𝑅𝑐𝑘 − 𝑅𝑐𝑘−1

By the induction assumption, 𝑅𝑐𝑘−1 ≥ 𝑅𝑐𝑘 → (5) ≤ 0

CASE 6: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛) − 𝐶𝐸(𝑋𝑛) − 𝑅𝑐𝑘−1 + 𝑅𝑐−1𝑘−1 = 𝑅𝑐−1𝑘 − 𝑅𝑐𝑘−1

(5) = 𝑅𝑐−1𝑘 − 𝑅𝑐𝑘−1 + 𝐶𝐸(𝑋𝑛) − 𝐶𝐸(𝑋𝑛) = �𝑅𝑐−1𝑘 − 𝐶𝐸(𝑋𝑛)� + {𝐶𝐸(𝑋𝑛) − 𝑅𝑐𝑘−1}

But by the case assumptions, we have 𝑅𝑐−1𝑘 ≤ 𝐶𝐸(𝑋𝑛) 𝑎𝑛𝑑

𝐶𝐸(𝑋𝑛) ≤ 𝑅𝑐𝑘−1 → (5) ≤ 0 CASE 7:

(5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝑄𝑐𝑘 − 𝑅𝑐−1𝑘 − 𝑄𝑐𝑘−1 + 𝑅𝑐−1𝑘−1 = 𝑄𝑐𝑘 − 𝑄𝑐𝑘−1 by (8), 𝑄𝑐𝑘 ≤ 𝑄𝑐𝑘−1 → (5) ≤ 0

CASE 8: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝑄𝑐𝑘 − 𝑄𝑐−1𝑘 − 𝑄𝑐𝑘−1 + 𝑅𝑐−1𝑘−1 =

�𝑅𝑐−1𝑘 − 𝑄𝑐−1𝑘 � + {𝑄𝑐𝑘 − 𝑄𝑐𝑘−1} By the case assumption, 𝑅𝑐−1𝑘 ≤ 𝑄𝑐−1𝑘 and by (8), 𝑄𝑐𝑘 ≤ 𝑄𝑐𝑘−1 → (5) ≤ 0

CASE 9:


(5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛) − 𝑅𝑐−1𝑘 − 𝑄𝑐𝑘−1 + 𝑅𝑐−1𝑘−1 = 𝐶𝐸(𝑋𝑛) − 𝑄𝑐𝑘−1 by the case assumption, 𝑄𝑐𝑘−1 ≥ 𝐶𝐸(𝑋𝑛) → (5) ≤ 0

CASE 10: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛)− 𝑄𝑐−1𝑘 − 𝑄𝑐𝑘−1 + 𝑅𝑐−1𝑘−1

= {𝑅𝑐−1𝑘 − 𝑄𝑐−1𝑘 } + {𝐶𝐸(𝑋𝑛) −𝑄𝑐𝑘−1} by the case assumptions, we have:

𝑅𝑐−1𝑘 ≤ 𝑄𝑐−1𝑘 and 𝐶𝐸(𝑋𝑛) ≤ 𝑄𝑐𝑘−1 → (5) ≤ 0 CASE 11:

(5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛) − 𝐶𝐸(𝑋𝑛) − 𝑄𝑐𝑘−1 + 𝑅𝑐−1𝑘−1 = 𝑅𝑐−1𝑘 − 𝑄𝑐𝑘−1 in similar manner to CASE 6, we add and subtrac𝑡 𝐶𝐸(𝑋𝑛)

(5) = �𝑅𝑐−1𝑘 − 𝐶𝐸(𝑋𝑛)� + {𝐶𝐸(𝑋𝑛)− 𝑄𝑐𝑘−1} by the case assumptions we have:

𝑅𝑐−1𝑘 ≤ 𝐶𝐸(𝑋𝑛) 𝑎𝑛𝑑 𝐶𝐸(𝑋𝑛) ≤ 𝑄𝑐𝑘−1 → (5) ≤ 0

CASE 12: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝑄𝑐𝑘 − 𝑄𝑐−1𝑘 − 𝑄𝑐𝑘−1 + 𝑄𝑐−1𝑘−1 =

��𝑄𝑐−1𝑘−1 − 𝑅𝑐−1𝑘−1� − �𝑄𝑐−1𝑘 − 𝑅𝑐−1𝑘 �� + {𝑄𝑐𝑘 − 𝑄𝑐𝑘−1}

by (8), 𝑄𝑐𝑘 ≤ 𝑄𝑐𝑘−1 by (9), 𝑄𝑐−1𝑘−1 − 𝑅𝑐−1𝑘−1 ≤ 𝑄𝑐−1𝑘 − 𝑅𝑐−1𝑘 → (5) ≤ 0

CASE 13: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛) −𝑄𝑐−1𝑘 − 𝑄𝑐𝑘−1 + 𝑄𝑐−1𝑘−1

= �𝑄𝑐−1𝑘−1 − 𝑅𝑐−1𝑘−1� − � 𝑄𝑐−1𝑘 − 𝑅𝑐−1𝑘 � + {𝐶𝐸(𝑋𝑛)− 𝑄𝑐−1𝑘 }

by the case assumption we have 𝑄𝑐−1𝑘 ≥ 𝐶𝐸(𝑋𝑛)

and by (9), 𝑄𝑐−1𝑘−1 − 𝑅𝑐−1𝑘−1 ≤ 𝑄𝑐−1𝑘 − 𝑅𝑐−1𝑘 → (5) ≤ 0

CASE 14: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛) − 𝐶𝐸(𝑋𝑛) − 𝑄𝑐𝑘−1 + 𝑄𝑐−1𝑘−1

= �𝑄𝑐−1𝑘−1 − 𝑅𝑐−1𝑘−1� − � 𝑄𝑐𝑘−1 − 𝑅𝑐−1𝑘 � by (9), 𝑄𝑐−1𝑘−1 − 𝑅𝑐−1𝑘−1 ≤ 𝑄𝑐−1𝑘 − 𝑅𝑐−1𝑘 𝑠𝑜,

(5) = �𝑄𝑐−1𝑘−1 − 𝑅𝑐−1𝑘−1� − � 𝑄𝑐𝑘−1 − 𝑅𝑐−1𝑘 � ≤ �𝑄𝑐−1𝑘 − 𝑅𝑐−1𝑘 � − � 𝑄𝑐𝑘−1 − 𝑅𝑐−1𝑘 �

= 𝑄𝑐−1𝑘 − 𝑄𝑐𝑘−1 add and subtract 𝐶𝐸(𝑋𝑛)

𝑄𝑐−1𝑘 − 𝐶𝐸(𝑋𝑛) + 𝐶𝐸(𝑋𝑛) − 𝑄𝑐𝑘−1 by the case assumptions,

𝑄𝑐−1𝑘 ≤ 𝐶𝐸(𝑋𝑛), 𝑎𝑛𝑑 𝐶𝐸(𝑋𝑛) ≤ 𝑄𝑐𝑘−1 → (5) ≤ 0

CASE 15: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛) − 𝑅𝑐−1𝑘 − 𝐶𝐸(𝑋𝑛) + 𝑅𝑐−1𝑘−1 = 0

CASE 16: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛) − 𝑄𝑐−1𝑘 − 𝐶𝐸(𝑋𝑛) + 𝑅𝑐−1𝑘−1 = 𝑅𝑐−1𝑘 − 𝑄𝑐−1𝑘

by the case assumption,𝑅𝑐−1𝑘 ≤ 𝑄𝑐−1𝑘 → (5) ≤ 0 CASE 17:

(5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛) − 𝐶𝐸(𝑋𝑛) − 𝐶𝐸(𝑋𝑛) + 𝑅𝑐−1𝑘−1 = 𝑅𝑐−1𝑘 − 𝐶𝐸(𝑋𝑛)


and by the case assumption,𝑅𝑐−1𝑘 ≤ 𝐶𝐸(𝑋𝑛) → (5) ≤ 0 CASE 18: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛) − 𝑄𝑐−1𝑘 − 𝐶𝐸(𝑋𝑛) + 𝑄𝑐−1𝑘−1

= �𝑄𝑐−1𝑘−1 − 𝑅𝑐−1𝑘−1� − � 𝑄𝑐−1𝑘 − 𝑅𝑐−1𝑘 � by (9), 𝑄𝑐−1𝑘−1 − 𝑅𝑐−1𝑘−1 ≤ 𝑄𝑐−1𝑘 − 𝑅𝑐−1𝑘 → (5) ≤ 0

CASE 19: (5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛)− 𝐶𝐸(𝑋𝑛) − 𝐶𝐸(𝑋𝑛) + 𝑄𝑐−1𝑘−1

= 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 − 𝐶𝐸(𝑋𝑛) + 𝑄𝑐−1𝑘−1 by (9), 𝑄𝑐−1𝑘−1 − 𝑅𝑐−1𝑘−1 ≤ 𝑄𝑐−1𝑘 − 𝑅𝑐−1𝑘

so, (5) ≤ 𝑅𝑐−1𝑘 − 𝐶𝐸(𝑋𝑛) + 𝑄𝑐−1𝑘 − 𝑅𝑐−1𝑘 = 𝑄𝑐−1𝑘 − 𝐶𝐸(𝑋𝑛) by the case assumption,

𝑄𝑐−1𝑘 ≤ 𝐶𝐸(𝑋𝑛) , → (5) ≤ 0 CASE 20:

(5) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 + 𝐶𝐸(𝑋𝑛) − 𝐶𝐸(𝑋𝑛) − 𝐶𝐸(𝑋𝑛) + 𝐶𝐸(𝑋𝑛) = 𝑅𝑐−1𝑘 − 𝑅𝑐−1𝑘−1 by the induction assumption, (1),

𝑅𝑐−1𝑘−1 ≥ 𝑅𝑐−1𝑘 → (5) ≤ 0 So, (5) ≤ 0 is true for all Xn and hence must be true for the certain equivalent over Xn. So, (3)

is true, or 𝑅𝑐𝑘−1 ≥ 𝑅𝑐𝑘

Finally, by induction, we know that 𝑅𝑐𝑡−1 ≥ 𝑅𝑐𝑡 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 𝑎𝑡 𝑎𝑛𝑦 𝑡

Corollary 6.3.1: Characterizing iV and iVI

The incremental values of a deal X_n with and without information exhibit the following two properties:

I. iV_c^t(X_n) and iVI_c^t(X_n) are non-decreasing in c for all t
II. iV_c^t(X_n) and iVI_c^t(X_n) are non-decreasing in t for all c

Corollary 6.3.1 for iV By definition of iV, we have

𝑖𝑉𝑐𝑡(𝑋𝑛) = {𝐶𝐸(𝑋𝑛)− 𝑅𝑐𝑡 ∨ 0}

Consider the first term of the max relation:

𝐶𝐸(𝑋𝑛) − 𝑅𝑐𝑡

Since CE(Xn) is neither a function of t nor c, 𝑖𝑉𝑐𝑡(𝑋𝑛) changes with c and t in opposite

direction as that of 𝑅𝑐𝑡 . The second term of the max relation is zero which does not change

with either. Hence, 𝑖𝑉𝑐𝑡(𝑋𝑛) changes in the opposite direction of 𝑅𝑐𝑡.

Thus, the incremental value is non decreasing in c for all t and non decreasing in t for all c.

Corollary 6.3.1 for iVI By definition we have


𝑖𝑉𝐼𝑐𝑡(𝑋𝑛) = 𝐶𝐸�𝑖𝑉𝑐𝑡(𝑋𝑛|𝐼𝑖)�

Note that the indication is not a function of the state (c,t) and hence 𝑖𝑉𝐼𝑐𝑡 follows 𝑖𝑉𝑐𝑡and is

also non decreasing in c for all t and non decreasing in t for all c.

Proposition 6.3.4: Characterizing the IBP of Information (iVoI)

The IBP of information exhibits the following properties:

I. For a given c, the IBP of information is increasing in t, reaches a maximum when R_c^t = CE(X_n), and then decreases in t until it converges to CE(X_n*) - CE(X_n).
II. For a given t, the IBP of information is increasing in c, reaches a maximum when R_c^t = CE(X_n), and then decreases in c until it converges to CE(X_n*) - CE(X_n).

Here X_n* is the deal with free information.

Recall that the IBP of information is defined as:

𝑖𝑉𝑜𝐼𝑐𝑡(𝑋𝑛) = 𝑖𝑉𝐼𝑐𝑡(𝑋𝑛)− 𝑖𝑉𝑐𝑡(𝑋𝑛)

𝑖𝑉𝑜𝐼𝑐𝑡(𝑋𝑛) = 𝐶𝐸(𝐶𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐𝑡 ∨ 0) − (𝐶𝐸(𝑋𝑛) − 𝑅𝑐𝑡 ∨ 0) (1)

We take two cases, namely, as 𝐶𝐸(𝑋𝑛) relates to 𝑅𝑐𝑡

CASE1: 𝐶𝐸(𝑋𝑛) ≤ 𝑅𝑐𝑡

(1) reduces to

𝑖𝑉𝑜𝐼𝑐𝑡(𝑋𝑛) = 𝐶𝐸(𝐶𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐𝑡 ∨ 0) − 0

So 𝑖𝑉𝑜𝐼𝑐𝑡(𝑋𝑛) moves in the opposite direction as 𝑅𝑐𝑡

Thus, 𝑖𝑉𝑜𝐼𝑐𝑡(𝑋𝑛) is increasing in t and c when 𝐶𝐸(𝑋𝑛) ≤ 𝑅𝑐𝑡

CASE2: 𝐶𝐸(𝑋𝑛) ≥ 𝑅𝑐𝑡

(1) reduces to

𝑖𝑉𝑜𝐼𝑐𝑡(𝑋𝑛) =

𝐶𝐸(𝐶𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐𝑡 ∨ 0) − 𝐶𝐸(𝑋𝑛) + 𝑅𝑐𝑡

𝑖𝑉𝑜𝐼𝑐𝑡(𝑋𝑛) = 𝐶𝐸(𝐶𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐𝑡)− 𝐶𝐸(𝑋𝑛)

So 𝑖𝑉𝑜𝐼𝑐𝑡(𝑋𝑛) moves in the same direction as 𝑅𝑐𝑡

Thus, 𝑖𝑉𝑜𝐼𝑐𝑡(𝑋𝑛) is decreasing in t and c when 𝐶𝐸(𝑋𝑛) ≥ 𝑅𝑐𝑡


Now, since 𝑖𝑉𝑜𝐼𝑐𝑡(𝑋𝑛) increases in case1 and then decreases in case2, we can see that it

reaches a maximum when 𝑅𝑐𝑡 = 𝐶𝐸(𝑋𝑛)

Finally, we study the convergence of the term. At 𝑡 = 𝑇, (1) reduces to:

𝑖𝑉𝑜𝐼𝑐𝑡(𝑋𝑛) = 𝐶𝐸(𝐶𝐸(𝑋𝑛|𝐼𝑖) ∨ 0) − 𝐶𝐸(𝑋𝑛)

But, the value of the deal with free information outside the funnel, 𝐶𝐸(𝑋𝑛∗), equals:

𝐶𝐸(𝑋𝑛∗) = 𝐶𝐸(𝐶𝐸(𝑋𝑛|𝐼𝑖) ∨ 0)

Thus, 𝑖𝑉𝑜𝐼𝑐𝑡 = 𝐶𝐸(𝑋𝑛∗) − 𝐶𝐸(𝑋𝑛)

The same is true when c = C.

Proposition 6.3.5: Characterization of Optimal Policy

The optimal policy for a given deal X_n is characterized as follows:

I. For a given c, the optimal policy can only change over time from rejecting, to buying information, and finally to accepting.

II. For a given t, the optimal policy can only change over capacity from rejecting, to buying information, and finally to accepting.

Proposition 6.3.5 Statement 1

We follow the notation used in Proposition 6.3.3. Namely, define Q_c^t(X_n) as

Q_c^t(X_n) = CE(CE(X_n | I_i) ∨ R_c^t) - m

Now, we have

𝑉𝑐𝑡|𝑋𝑛 = 𝑉𝑐−1𝑡+1 + {𝑅𝑐𝑡 ∨ 𝑄𝑐𝑡(𝑋𝑛) ∨ 𝐶𝐸(𝑋𝑛)}

To prove statement 1, we need to show that for a given c:

• If 𝐶𝐸(𝑋𝑛) ≥ {𝑅𝑐𝑡,𝑄𝑐𝑡(𝑋𝑛)}, then this is true for any 𝑡∗ > 𝑡

• If 𝑄𝑐𝑡(𝑋𝑛) ≥ 𝑅𝑐𝑡 , then this is true for any 𝑡∗ > 𝑡

To prove the first condition, we note that 𝑅𝑐𝑡 is decreasing in t for a given c. Hence, if the first

condition is true at any given t, then it must be true for larger values of t. Now, from the

definition of 𝑄𝑐𝑡(𝑋𝑛) we see that it moves in the direction of 𝑅𝑐𝑡. Thus, 𝑄𝑐𝑡(𝑋𝑛) will decrease

in t and the first condition is true.

To prove the second condition we note that it is sufficient to prove

𝑄𝑐𝑡(𝑋𝑛)− 𝑅𝑐𝑡 ≥ 𝑄𝑐𝑡−1(𝑋𝑛)− 𝑅𝑐𝑡−1


We rewrite this as

𝐶𝐸(𝐶𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐𝑡)− 𝑚 − 𝑅𝑐𝑡 ≥

𝐶𝐸(𝐶𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐𝑡−1)− 𝑚 − 𝑅𝑐𝑡−1

𝐶𝐸(𝐶𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐𝑡) ≥ 𝐶𝐸(𝐶𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐𝑡−1)

iVI_c^t(X_n) ≥ iVI_c^{t-1}(X_n)

which we know is true from Corollary 6.3.1.

Proposition 6.3.5 Statement 2

This can be proven along the same lines as Statement 1 by interchanging the roles of c and t.

Proposition 6.3.6: Identifying Optimal Detector

Consider two detectors with incremental values of the deal with information iVI_1 and iVI_2 and costs m_1 and m_2, respectively. Detector 1 will be optimal when:

𝑖𝑉𝐼1− 𝑖𝑉𝐼2 > 𝑚1 −𝑚2

Otherwise, detector 2 will be optimal.

This optimality is not myopic, that is, if the decision maker is offered the use of both, he/she

should not always start with the optimal detector.

let 𝐼𝑖1, 𝐼𝑖2 be the indications associated with detectors 1 and 2 respectively. The alternative of

buying information is worth

�𝑉𝑐−1𝑡+1 −𝑚𝑗 + 𝐶𝐸�𝐶𝐸�𝑋𝑛�𝐼𝑖𝑗� ∨ 𝑅𝑐𝑡��

where j is the number of the detector. For detector 1 to be preferable over detector 2, the

following needs to be true.

�𝑉𝑐−1𝑡+1 − 𝑚1 + 𝐶𝐸�𝐶𝐸�𝑋𝑛�𝐼𝑖1� ∨ 𝑅𝑐𝑡�� ≥ �𝑉𝑐−1𝑡+1 − 𝑚2 + 𝐶𝐸�𝐶𝐸�𝑋𝑛�𝐼𝑖2� ∨ 𝑅𝑐𝑡��

�−𝑚1 + 𝐶𝐸�𝐶𝐸�𝑋𝑛�𝐼𝑖1� ∨ 𝑅𝑐𝑡�� ≥ �−𝑚2 + 𝐶𝐸�𝐶𝐸�𝑋𝑛�𝐼𝑖2� ∨ 𝑅𝑐𝑡��

�−𝑚1 + 𝐶𝐸�(𝐶𝐸�𝑋𝑛�𝐼𝑖1� − 𝑅𝑐𝑡) ∨ 0�+ 𝑅𝑐𝑡� ≥ �−𝑚2 + 𝐶𝐸�(𝐶𝐸�𝑋𝑛�𝐼𝑖2� − 𝑅𝑐𝑡) ∨ 0� + 𝑅𝑐𝑡�

�−𝑚1 + 𝐶𝐸�𝑖𝑉𝑐𝑡�𝑋𝑛�𝐼𝑖1��� ≥ �−𝑚2 + 𝐶𝐸�𝑖𝑉𝑐𝑡�𝑋𝑛�𝐼𝑖2���


𝑙𝑒𝑡 𝑖𝑉𝑜𝐼𝑐𝑡𝑗(𝑋𝑛) = 𝐶𝐸�𝑖𝑉𝑐𝑡�𝑋𝑛�𝐼𝑖𝑗��

−𝑚1 + 𝑖𝑉𝑜𝐼𝑐𝑡1(𝑋𝑛) ≥ −𝑚2 + 𝑖𝑉𝑜𝐼𝑐𝑡2(𝑋𝑛)

𝑖𝑉𝑜𝐼𝑐𝑡1(𝑋𝑛)− 𝑖𝑉𝑜𝐼𝑐𝑡2(𝑋𝑛) ≥ 𝑚1 −𝑚2

𝑖𝑉𝑜𝐼1− 𝑖𝑉𝑜𝐼2 ≥ 𝑚1 −𝑚2
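The resulting detector-selection rule is a one-line comparison; a minimal sketch with illustrative names and numbers:

    def prefer_detector_1(ivoi_1, ivoi_2, m1, m2):
        # Detector 1 is preferred when its extra incremental value of
        # information covers its extra cost: iVoI1 - iVoI2 >= m1 - m2.
        return ivoi_1 - ivoi_2 >= m1 - m2

    if __name__ == "__main__":
        print(prefer_detector_1(ivoi_1=0.8, ivoi_2=0.5, m1=0.4, m2=0.2))  # True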

A1.2.2 Section 6.4 The Long-Run Problem

Proposition 4.1: Characterizing the Long-Run Problem

Here we characterize the problem parameters along the same lines as in Section 6.3. We find that all the relations are maintained along the capacity dimension.

I. When offered a deal X_n, the decision maker should buy information if and only if iVI_c(X_n) ≥ iV_c(X_n) + m. Otherwise, the deal is worth buying without information if and only if CE(X_n) ≥ R_c.
II. V_c is non-decreasing in c.
III. R_c is non-increasing in c.
IV. iV_c(X_n) and iVI_c(X_n) are non-decreasing in c.
V. iVoI_c(X_n) is increasing in c, reaches a maximum when R_c = CE(X_n), and then decreases in c until it converges to CE(X_n*) - CE(X_n), where X_n* is the deal with free information.
VI. The optimal policy can only change over c from rejecting, to buying information, and finally to accepting.

Proposition 4.1 Statement 1

The recursion can be rewritten as

(Figure: decision tree for the long-run problem given deal X_n. Accept yields δV_{c-1} + CE(X_n); Reject yields δV_c; Seek info at cost m yields, after indication I_i, either δV_{c-1} + CE(X_n | I_i) - m on accepting or δV_c - m on rejecting.)

V_c | X_n = { [δV_{c-1} + CE(X_n)] ∨ δV_c ∨ [δV_{c-1} - m + CE(CE(X_n | I_i) ∨ R_c)] }     (1)

Using the definition of the incremental value of the deal with information, the recursion simplifies so that the information alternative is worth δV_c + iVI_c(X_n) - m.

We seek information when

δV_c + iVI_c(X_n) - m ≥ {[δV_{c-1} + CE(X_n)] ∨ δV_c}

iVI_c(X_n) ≥ {(CE(X_n) - R_c) ∨ 0} + m

So,

iVI_c(X_n) ≥ iV_c(X_n) + m

Otherwise, we prefer to take the deal without information. We buy the deal if

δV_{c-1} + CE(X_n) ≥ δV_c

or CE(X_n) ≥ R_c

Proposition 4.1 Statements 2 and 3

We prove the two properties in two steps. First, we prove that they hold for the finite-horizon case when we introduce discounting. Then, we use successive approximations to show that the infinite-horizon V converges to that of the finite horizon as we push the deadline T to the limit.

Step 1-1: Characterizing V

We prove this by induction. Take t = T, (1) becomes

𝑉𝑐𝑇|𝑋𝑛 = �[𝛿𝑉𝑐−1𝑇+1 + 𝐸(𝑋𝑛)] ∨ 𝛿𝑉𝑐𝑇+1 ∨

�𝛿𝑉𝑐−1𝑇+1 − 𝑚 + 𝐸[𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐𝑇]��

Since 𝑉𝑐𝑇+1 = 0 for all c, this equation reduces to



V_c^T | X_n = { E(X_n) ∨ 0 ∨ E[{E(X_n | I_i) ∨ R_c^T}] - m }     for c ≥ 1

V_0^T | X_n = 0     for c = 0

Hence it is non decreasing in c at t = T

Now, assume true for t = k, or

𝑉𝑐𝑘 ≥ 𝑉𝑐−1𝑘 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 ≥ 1 (2)

Now, prove that it is true for t = k-1, or

𝑉𝑐𝑘−1 ≥ 𝑉𝑐−1𝑘−1 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 ≥ 1 (3)

Where:

𝑉𝑐𝑘−1|𝑋𝑛 = ��𝛿𝑉𝑐−1𝑘 + 𝐸(𝑋𝑛)� ∨ 𝛿𝑉𝑐𝑘 ∨

�𝛿𝑉𝑐−1𝑘 − 𝑚 + 𝐸[𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐𝑘−1]��

And

𝑉𝑐−1𝑘−1|𝑋𝑛 = ��𝛿𝑉𝑐−2𝑘 + 𝐸(𝑋𝑛)� ∨ 𝛿𝑉𝑐−1𝑘 ∨

�𝛿𝑉𝑐−2𝑘 − 𝑚 + 𝐸�𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐−1𝑘−1���

But since 𝑉𝑐𝑘 is non decreasing in c, (2), we have

𝑉𝑐−1𝑘−1|𝑋𝑛 ≤ ��𝛿𝑉𝑐−1𝑘 + 𝐸(𝑋𝑛)� ∨ 𝛿𝑉𝑐𝑘 ∨

�𝛿𝑉𝑐−1𝑘 − 𝑚 + 𝐸[𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐𝑘−1]�� = 𝑉𝑐𝑘−1|𝑋𝑛

Hence,

𝑉𝑐−1𝑘−1�𝑋𝑛 ≤ 𝑉𝑐𝑘−1�𝑋𝑛 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑋𝑛

Meaning (3) is true, namely

𝑉𝑐−1𝑘−1 ≤ 𝑉𝑐𝑘−1

And, finally, by induction

𝑉𝑐−1𝑡 ≤ 𝑉𝑐𝑡 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 ≥ 1 𝑎𝑛𝑑 𝑎𝑙𝑙 𝑡


Step 1-2: Characterizing R

We prove this by induction. Take the case when t = T, the statement becomes

𝑅𝑐𝑇 ≥ 𝑅𝑐+1𝑇

This is true since

𝑅𝑐𝑇 = 𝛿𝑉𝑐𝑇+1 − 𝛿𝑉𝑐−1𝑇+1 = 0 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 ≥ 1

Hence 𝑅𝑐𝑇 is non increasing in c at t = T

Now, assume this is true for t = k, lets prove it for t = k – 1.

So we know that

𝑅𝑐𝑘 ≥ 𝑅𝑐+1𝑘 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 ≥ 1 (1)

→ 𝛿𝑉𝑐𝑘+1 − 𝛿𝑉𝑐−1𝑘+1 ≥ 𝛿𝑉𝑐+1𝑘+1 − 𝛿𝑉𝑐𝑘+1

2𝑉𝑐𝑘+1 ≥ 𝑉𝑐+1𝑘+1 + 𝑉𝑐−1𝑘+1

𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 𝑎𝑡 𝑡 = 𝑘 + 1

For the statement to be true at t = k – 1, the following must be true

𝑅𝑐𝑘−1 ≥ 𝑅𝑐+1𝑘−1 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 (2)

2𝑉𝑐𝑘 ≥ 𝑉𝑐+1𝑘 + 𝑉𝑐−1𝑘 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 𝑎𝑡 𝑡 = 𝑘 (3)

For each Xn, we can rewrite 𝑉𝑐𝑘 as:

𝑉𝑐𝑘|𝑋𝑛 = �𝐶𝐸(𝑋𝑛) + 𝛿𝑉𝑐−1𝑘+1 ∨ 𝛿𝑉𝑐𝑘+1 ∨

𝐶𝐸�{𝐶𝐸(𝑋𝑛|𝐼𝑖) + 𝛿𝑉𝑐−1𝑘+1 ∨ 𝛿𝑉𝑐𝑘+1} −𝑚��

Now, define 𝑄𝑐𝑘 as

𝑄𝑐𝑘(𝑋𝑛) = 𝐶𝐸�{𝐶𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐𝑘} −𝑚�

So, we have:

𝑉𝑐𝑘|𝑋𝑛 = 𝛿𝑉𝑐−1𝑘+1 + {𝐶𝐸(𝑋𝑛) ∨ 𝑅𝑐𝑘 ∨ 𝑄𝑐𝑘(𝑋𝑛)}

Now, rewrite (3) for each Xn as:

2 {CE(X_n) ∨ R_c^k ∨ Q_c^k(X_n)} + 2 δV_{c-1}^{k+1} ≥ {CE(X_n) ∨ R_{c+1}^k ∨ Q_{c+1}^k(X_n)} + δV_c^{k+1} + {CE(X_n) ∨ R_{c-1}^k ∨ Q_{c-1}^k(X_n)} + δV_{c-2}^{k+1}     (4)

which is equivalent to

R_c^k - R_{c-1}^k + {CE(X_n) ∨ R_{c+1}^k ∨ Q_{c+1}^k(X_n)} - 2 {CE(X_n) ∨ R_c^k ∨ Q_c^k(X_n)} + {CE(X_n) ∨ R_{c-1}^k ∨ Q_{c-1}^k(X_n)} ≤ 0     (5)

And the rest of the proof follows directly from the case without discounting.

Step 2: Iterative Approximations

Here we prove the infinite-horizon case by iterative approximations. Define the operator ℋ as

ℋ(V_c | X_n) = { [δV_{c-1} + E(X_n)] ∨ δV_c ∨ (δV_{c-1} - m + E[E(X_n | I_i) ∨ R_c]) }

If we take the input of the first iteration to be 0, then the first iterate of ℋ is the terminal-stage value V_c^T. Following the same methodology as above, we can show that the infinite-horizon V_c converges to the limit of the finite-horizon values, and the properties we proved for the finite horizon extend to the infinite horizon.
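A sketch of the successive-approximation scheme, again assuming risk neutrality (expectation in place of CE) and taking R_c = δ(V_c - V_{c-1}) as in Statement 1; the deal data and all numbers are illustrative assumptions.

    # Successive approximations for the long-run (discounted) values V_c.
    # Deal types are (prob, mean, indications), indications = [(prob, posterior mean)].
    def long_run_values(deals, C, delta, m, tol=1e-9, max_iter=10000):
        V = [0.0] * (C + 1)                      # starting point of the iteration
        for _ in range(max_iter):
            V_new = [0.0] * (C + 1)
            for c in range(1, C + 1):
                R = delta * (V[c] - V[c - 1])    # long-run threshold R_c
                total = 0.0
                for q, mean, indications in deals:
                    accept = delta * V[c - 1] + mean
                    reject = delta * V[c]
                    info = delta * V[c - 1] - m + sum(
                        p * max(post, R) for p, post in indications)
                    total += q * max(accept, reject, info)
                V_new[c] = total
            if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
                return V_new
            V = V_new
        return V

    if __name__ == "__main__":
        deals = [(1.0, 0.5, [(0.5, 2.0), (0.5, -0.5)])]
        print(long_run_values(deals, C=3, delta=0.9, m=0.1))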

Proposition 4.1 Statement 4 By definition, we have

𝑖𝑉𝑐(𝑋𝑛) = [𝐸(𝑋𝑛) − 𝑅𝑐]+

So, 𝑖𝑉𝑐 is inversely related to 𝑅𝑐 and hence 𝑖𝑉𝑐 is non decreasing in c

Similarly, we have

𝑖𝑉𝐼𝑐(𝑋𝑛) = 𝐸[(𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐 ∨ 0)] = 𝐸[𝑖𝑉𝑐(𝑋𝑛|𝐼𝑖)]

So, 𝑖𝑉𝐼𝑐 moves along the same direction as 𝑖𝑉𝑐 and hence is also non decreasing in c

Proposition 4.1 Statement 5 Recall that the IBP of information is defined as:

𝑖𝑉𝑜𝐼𝑐(𝑋𝑛) = 𝑖𝑉𝐼𝑐(𝑋𝑛) − 𝑖𝑉𝑐(𝑋𝑛)

𝑖𝑉𝑜𝐼𝑐(𝑋𝑛) =

𝐸(𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐 ∨ 0) − (𝐸(𝑋𝑛) − 𝑅𝑐 ∨ 0) (1)


We take two cases as 𝐸(𝑋𝑛) relates to 𝑅𝑐

CASE1: 𝐸(𝑋𝑛) ≤ 𝑅𝑐

(1) reduces to

𝑖𝑉𝑜𝐼𝑐(𝑋𝑛) = 𝐸(𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐 ∨ 0) − 0

So 𝑖𝑉𝑜𝐼𝑐(𝑋𝑛) moves in the opposite direction as 𝑅𝑐

Thus, 𝑖𝑉𝑜𝐼𝑐(𝑋𝑛) is increasing in c when 𝐸(𝑋𝑛) ≤ 𝑅𝑐

CASE2: 𝐸(𝑋𝑛) ≥ 𝑅𝑐

(1) reduces to

𝑖𝑉𝑜𝐼𝑐(𝑋𝑛) =

𝐸(𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐 ∨ 0) − 𝐸(𝑋𝑛) + 𝑅𝑐

𝑖𝑉𝑜𝐼𝑐(𝑋𝑛) = 𝐸(𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐) − 𝐸(𝑋𝑛)

So 𝑖𝑉𝑜𝐼𝑐(𝑋𝑛) moves in the same direction as 𝑅𝑐

Thus, 𝑖𝑉𝑜𝐼𝑐(𝑋𝑛) is decreasing in t and c when 𝐸(𝑋𝑛) ≥ 𝑅𝑐

Now, since 𝑖𝑉𝑜𝐼𝑐(𝑋𝑛) increases in case1 and then decreases in case2, we can see that it

reaches a maximum when 𝑅𝑐 = 𝐸(𝑋𝑛)

Finally, we study the convergence of the term. At 𝑐 = 𝐶, (1) reduces to:

𝑖𝑉𝑜𝐼𝐶(𝑋𝑛) = 𝐸(𝐸(𝑋𝑛|𝐼𝑖) ∨ 0) − 𝐸(𝑋𝑛)

But, the value of the deal with free information outside the funnel, 𝐸(𝑋𝑛∗), equals:

𝐸(𝑋𝑛∗) = 𝐸(𝐸(𝑋𝑛|𝐼𝑖) ∨ 0)

Thus, 𝑖𝑉𝑜𝐼𝐶 = 𝐸(𝑋𝑛∗) − 𝐸(𝑋𝑛)

Proposition 4.1 Statement 6 Define 𝑄𝑐(𝑋𝑛) as

Q_c(X_n) = E(E(X_n | I_i) ∨ R_c) - m

Now, we have

𝑉𝑐|𝑋𝑛 = 𝑉𝑐−1 + {𝑅𝑐 ∨ 𝑄𝑐(𝑋𝑛) ∨ 𝐸(𝑋𝑛)}

To prove this statement, we need to show that:


• If 𝐸(𝑋𝑛) ≥ {𝑅𝑐 ,𝑄𝑐(𝑋𝑛)}, then this is true for any 𝑐∗ > 𝑐

• If 𝑄𝑐(𝑋𝑛) ≥ 𝑅𝑐 , then this is true for any 𝑐∗ > 𝑐

To prove the first condition, we note that 𝑅𝑐 is decreasing in c. Hence, if the first condition is

true at any given c, then it must be true for larger values of c. Now, from the definition of

𝑄𝑐(𝑋𝑛) we see that it moves in the direction of 𝑅𝑐. Thus, 𝑄𝑐(𝑋𝑛) will decrease in c and the

first condition is true.

To prove the second condition we note that it is sufficient to prove

𝑄𝑐(𝑋𝑛) − 𝑅𝑐 ≥ 𝑄𝑐−1(𝑋𝑛)− 𝑅𝑐−1

We rewrite this as

𝐸(𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐) − 𝑚 −𝑅𝑐 ≥

𝐸(𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐−1) − 𝑚 −𝑅𝑐−1

𝐸(𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐) ≥ 𝐸(𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐−1)

𝑖𝑉𝐼𝑐(𝑋𝑛) ≥ 𝑖𝑉𝐼𝑐−1(𝑋𝑛)

which we know is true from Proposition 4.1 Statement 4.

A1.2.3 Section 6.5.1 Extensions – Multiple Cost Structures

The problem with different cost structures is shown in the figure below.

Proposition 6.5.1a: Optimal Information Gathering Policy with Multiple Cost Structures When offered a deal 𝑋𝑛 the decision maker should buy information if and only if

𝑖𝑉𝐼𝑐−𝑘𝑡+𝑑(𝑋𝑛) ≥ 𝑖𝑉𝑐𝑡 (𝑋𝑛) + 𝑅𝑐𝑡(𝑘,𝑑 + 1) + 𝑚

(Figure: decision tree with multiple cost structures. Accept yields V_{c-1}^{t+1} + CE(X_n); Reject yields V_c^{t+1}; Seek info at cost m yields, after indication I_i, either V_{c-1-k}^{t+1+d} + CE(X_n | I_i) - m on accepting or V_{c-k}^{t+1+d} - m on rejecting.)


Proposition 6.5.1b: Optimal Allocation Policy with Multiple Cost Structures After information is received, the decision maker should accept the deal if and only if

𝐶𝐸(𝑋𝑛|𝐼𝑖) ≥ 𝑅𝑐−𝑘𝑡+𝑑(1,1)

Otherwise, the deal is worth accepting without information if and only if

𝐶𝐸(𝑋𝑛) ≥ 𝑅𝑐𝑡

We first write the recursion as:

𝑉𝑐𝑡|𝑋𝑛 = {𝑉𝑐−1𝑡+1 + 𝐶𝐸(𝑋𝑛) ∨ 𝑉𝑐𝑡+1 ∨ [𝑉𝑐−𝑘𝑡+1+𝑑 − 𝑚 + 𝐶𝐸({𝐶𝐸(𝑋𝑛|𝐼𝑖) ∨ 𝑅𝑐𝑡 (𝑘,𝑑 + 1)})]}

where

𝑅𝑐𝑡(𝑘,𝑑 + 1) = 𝑉𝑐𝑡+1 − 𝑉𝑐−𝑘𝑡+1+𝑑

This reduces to

𝑉𝑐𝑡|𝑋𝑛 = 𝑉𝑐𝑡+1 + �𝐶𝐸(𝑋𝑛)− 𝑅𝑐𝑡 (1,1) ∨ 0

∨ �𝑉𝑐−𝑘𝑡+1+𝑑 − 𝑉𝑐𝑡+1 − 𝑚 + 𝐶𝐸({𝐶𝐸(𝑋𝑛|𝐼𝑖) − 𝑅𝑐𝑡(𝑘,𝑑 + 1) ∨ 0})��

𝑉𝑐𝑡|𝑋𝑛 = 𝑉𝑐𝑡+1 + {𝑖𝑉𝑐𝑡 (𝑋𝑛) ∨ 𝑉𝑐−𝑘𝑡+1+𝑑 − 𝑉𝑐𝑡+1 − 𝑚 + 𝑖𝑉𝑜𝐼𝑐−𝑘𝑡+𝑑 (𝑋𝑛)}

𝑉𝑐𝑡|𝑋𝑛 = 𝑉𝑐𝑡+1 + {𝑖𝑉𝑐𝑡 (𝑋𝑛) ∨ 𝑖𝑉𝑜𝐼𝑐−𝑘𝑡+𝑑 (𝑋𝑛)− 𝑅𝑐𝑡(𝑘,𝑑 + 1) −𝑚}

Hence we seek information when

𝑖𝑉𝑜𝐼𝑐−𝑘𝑡+𝑑 (𝑋𝑛)− 𝑅𝑐𝑡(𝑘,𝑑 + 1) −𝑚 ≥ 𝑖𝑉𝑐𝑡 (𝑋𝑛)

𝑖𝑉𝑜𝐼𝑐−𝑘𝑡+𝑑 (𝑋𝑛) ≥ 𝑖𝑉𝑐𝑡 (𝑋𝑛) + 𝑅𝑐𝑡(𝑘,𝑑 + 1) + 𝑚

After information is received, the decision maker should accept if and only if:

𝑉𝑐−1−𝑘𝑡+1+𝑑 + 𝐶𝐸(𝑋𝑛|𝐼𝑖) −𝑚 ≥ 𝑉𝑐−𝑘𝑡+1+𝑑 − 𝑚

𝐶𝐸(𝑋𝑛|𝐼𝑖) ≥ 𝑉𝑐−𝑘𝑡+1+𝑑 − 𝑉𝑐−1−𝑘𝑡+1+𝑑

𝐶𝐸(𝑋𝑛|𝐼𝑖) ≥ 𝑅𝑐−𝑘𝑡+𝑑(1,1)

If information is not obtained, the decision maker should accept if and only if:

𝑉𝑐−1𝑡+1 + 𝐶𝐸(𝑋𝑛) ≥ 𝑉𝑐𝑡+1

𝐶𝐸(𝑋𝑛) ≥ 𝑅𝑐𝑡 (1,1)


Corollary 6.5.1: Identifying Optimal Detector with Multiple Cost Structures

Given the setup above, detector 1 will be optimal when

iVI_1 - iVI_2 ≥ R_c^t(k_2, d_2 + 1) - R_c^t(k_1, d_1 + 1) + m_1 - m_2

where iVI_1 = iVI_{c-k_1}^{t+d_1} over I^1 and iVI_2 = iVI_{c-k_2}^{t+d_2} over I^2. Otherwise, detector 2 will be optimal.

From the above, the value of the information alternative for detector 1, A_1, is

A_1 = V_c^{t+1} + iVoI_1 - R_c^t(k_1, d_1 + 1) - m_1

We define A_2 in the same manner.

So, we prefer detector 1 over detector 2 when 𝐴1 ≥ 𝐴2. Or

𝑉𝑐𝑡+1 + 𝑖𝑉𝑜𝐼1− 𝑅𝑐𝑡(𝑘1,𝑑1 + 1) −𝑚1 ≥ 𝑉𝑐𝑡+1 + 𝑖𝑉𝑜𝐼2 − 𝑅𝑐𝑡(𝑘2,𝑑2 + 1) −𝑚2

𝑖𝑉𝑜𝐼1− 𝑖𝑉𝑜𝐼2 ≥ 𝑅𝑐𝑡(𝑘2,𝑑2 + 1 ) − 𝑅𝑐𝑡 (𝑘1,𝑑1 + 1 ) + 𝑚1 −𝑚2

A1.2.4 Section 6.5.2 Extensions – Decision Reversibility

The problem setup is represented below.

Proposition 6.5.2: Optimal Allocation Policy with an Option When offered a deal 𝑋𝑛 with an option on deal 𝑍, the decision maker should accept the deal

𝑋𝑛 and buy an option on it if and only if:

𝑖𝑉𝑂𝑐𝑡(𝑋𝑛,𝑍) ≥ 𝑖𝐶𝑐𝑡 (𝑋𝑛,𝑍) +𝑚 + 𝐶𝐸(𝑍)

Otherwise, the decision maker should accept if and only if:

𝐶𝐸(𝑋𝑛) > 𝑅𝑐𝑡(1,1,𝑍)

(Figure: decision tree for deal X_n with an option on deal Z. Accepting X_n and buying the option is worth V_c^{t+1}(X_n) + CE(X_n) - CE(Z) - m; accepting without the option is worth V_{c-1}^{t+1}(Z) + CE(X_n); rejecting is worth V_c^{t+1}(Z).)

where

𝑖𝑉𝑂𝑐𝑡(𝑋𝑛,𝑍) = 𝑉𝑐𝑡+1(𝑋𝑛)− 𝑉𝑐−1𝑡+1(𝑍)

𝑖𝐶𝑐𝑡(𝑋𝑛,𝑍) = [𝑅𝑐𝑡(1,1,𝑍) − 𝐶𝐸(𝑋𝑛)]+

We have the following recursion

𝑉𝑐𝑡(𝑍)|𝑋𝑛 = {(𝑉𝑐𝑡+1(𝑋𝑛) + 𝐶𝐸(𝑋𝑛) − 𝐶𝐸(𝑍) −𝑚) ∨ �𝑉𝑐−1𝑡+1(𝑍) + 𝐶𝐸(𝑋𝑛)� ∨ �𝑉𝑐𝑡+1(𝑍)�}

𝑉𝑐𝑡(𝑍)|𝑋𝑛 − 𝑉𝑐−1𝑡+1(𝑍) − 𝐶𝐸(𝑋𝑛)

= {(𝑉𝑐𝑡+1(𝑋𝑛) − 𝑉𝑐−1𝑡+1(𝑍) − 𝐶𝐸(𝑍) −𝑚) ∨ (0)

∨ �𝑉𝑐𝑡+1(𝑍)−𝑉𝑐−1𝑡+1(𝑍) − 𝐶𝐸(𝑋𝑛)�}

𝑉𝑐𝑡(𝑍)|𝑋𝑛 − 𝑉𝑐−1𝑡+1(𝑍) − 𝐶𝐸(𝑋𝑛)

= {(𝑉𝑐𝑡+1(𝑋𝑛)− 𝑉𝑐−1𝑡+1(𝑍) − 𝐶𝐸(𝑍) −𝑚) ∨ (0) ∨ �𝑅𝑐𝑡(1,1,𝑍) − 𝐶𝐸(𝑋𝑛)�}

𝑉𝑐𝑡(𝑍)|𝑋𝑛 − 𝑉𝑐−1𝑡+1(𝑍) − 𝐶𝐸(𝑋𝑛) = {(𝑉𝑐𝑡+1(𝑋𝑛) − 𝑉𝑐−1𝑡+1(𝑍) − 𝐶𝐸(𝑍) −𝑚) ∨ 𝑖𝐶𝑐𝑡(𝑋𝑛,𝑍)}

So we buy the option when:

(𝑉𝑐𝑡+1(𝑋𝑛)− 𝑉𝑐−1𝑡+1(𝑍) − 𝐶𝐸(𝑍) −𝑚) ≥ 𝑖𝐶𝑐𝑡(𝑋𝑛,𝑍)

�𝑉𝑐𝑡+1(𝑋𝑛)− 𝑉𝑐−1𝑡+1(𝑍)� ≥ 𝑖𝐶𝑐𝑡(𝑋𝑛,𝑍) + 𝐶𝐸(𝑍) +𝑚

𝑖𝑉𝑂𝑐𝑡(𝑋𝑛,𝑍) ≥ 𝑖𝐶𝑐𝑡 (𝑋𝑛,𝑍) +𝑚 + 𝐶𝐸(𝑍)

Otherwise, the deal is worth accepting if and only if

V_c^{t+1}(Z) ≤ V_{c-1}^{t+1}(Z) + CE(X_n)

CE(X_n) ≥ R_c^t(1,1,Z)

A1.2.5 Section 6.5.3 Extensions – Probability of Knowing Detectors

For convenience, we present the structure of the problem with probability-of-knowing detectors.

Recursion Equation Define the following terms as before

𝑖𝑉𝑐𝑡 (𝑋𝑛) = [𝐶𝐸(𝑋𝑛) − 𝑅𝑐𝑡]+

𝑖𝑉𝑜𝑃𝐼𝑐𝑡(𝑋𝑛) = 𝐶𝐸(𝑖𝑉𝑐𝑡(𝑋𝑛|𝐼𝑖))

By subtracting V_c^{t+1} from all the end terms, the recursion above reduces as shown below.

(Figure: decision trees for the probability-of-knowing detector, which delivers clairvoyance with probability p. Before reduction the end values are: Accept, V_{c-1}^{t+1} + CE(X_n); Reject, V_c^{t+1}; Seek info, then with clairvoyance on I_i either V_{c-1}^{t+1} + CE(X_n | I_i) - m or V_c^{t+1} - m, and with no clairvoyance either V_{c-1}^{t+1} + CE(X_n) - m or V_c^{t+1} - m. After subtracting V_c^{t+1}, the end values become CE(X_n) - R_c^t and 0 for Accept and Reject, and CE(X_n | I_i) - R_c^t - m, CE(X_n) - R_c^t - m, or -m on the Seek info branch.)

Proposition 6.5.3a: Optimal Information Gathering Policy with Probability of Knowing Detectors Given a detector defined as above with a probability of knowing p and price m, the decision

maker should buy information if and only if:

u(iVoPI_c^t(X_n) - iV_c^t(X_n)) > u(m)/p

We seek information when seeking information provides a higher value than not. We state

this relationship in terms of u-values.

𝑢�𝑖𝑉𝑐𝑡(𝑋𝑛)� ≤ 𝑝 𝑢(𝑖𝑉𝑜𝑃𝐼𝑐𝑡(𝑋𝑛) −𝑚) + (1 − 𝑝)𝑢(𝑖𝑉𝑐𝑡(𝑋𝑛) −𝑚)

For simplicity, we drop the terms in iV and iVoI

1 − 𝑒−𝛾(𝑖𝑉) ≤ 𝑝�1 − 𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑚)� + (1 − 𝑝)(1 − 𝑒−𝛾(𝑖𝑉−𝑚))

𝑒−𝛾(𝑖𝑉) ≥ 𝑝 𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑚) + (1 − 𝑝)𝑒−𝛾(𝑖𝑉−𝑚)

1 ≥ 𝑝 𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑖𝑉−𝑚) + (1 − 𝑝)𝑒𝛾(𝑚)

𝑒−𝛾𝑚 ≥ 𝑝 𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑖𝑉) + (1 − 𝑝)

(Figure: the reduced tree. Not seeking information is worth iV_c^t(X_n); seeking information is worth iVoPI_c^t(X_n) - m with probability p and iV_c^t(X_n) - m with probability 1 - p.)

e^{-γm} - 1 ≥ p e^{-γ(iVoPI - iV)} - p

1 - e^{-γm} ≤ p (1 - e^{-γ(iVoPI - iV)})

u(iVoPI_c^t(X_n) - iV_c^t(X_n)) ≥ u(m)/p
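A small sketch of this buy-information test under the exponential u-curve u(x) = 1 - e^{-γx} used in the derivation; the numbers are illustrative assumptions.

    import math

    def u(x, gamma):
        # Exponential u-curve used in the derivation above.
        return 1.0 - math.exp(-gamma * x)

    def should_buy_information(ivopi, iv, m, p, gamma):
        # Buy information iff u(iVoPI - iV) >= u(m)/p.
        return u(ivopi - iv, gamma) >= u(m, gamma) / p

    if __name__ == "__main__":
        print(should_buy_information(ivopi=1.2, iv=0.4, m=0.1, p=0.6, gamma=0.5))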

Proposition 6.5.3b: Optimal Allocation Policy with Probability of Knowing Detectors If clairvoyance is received, the decision maker should accept the deal if and only if:

𝐶𝐸(𝑋𝑛|𝐼𝑖) ≥ 𝑅𝑐𝑡

Otherwise, if no clairvoyance is received or the decision maker did not buy information; then

the decision maker should accept the deal if and only if:

𝐶𝐸(𝑋𝑛) ≥ 𝑅𝑐𝑡

The proof for this follows that of the earlier propositions.

Corollary 6.5.3: Identifying Optimal Detector with Probability of Knowing Detectors

Given the setup above, detector 1 will be optimal when

u(m_1)/p_1 < u(m_2)/p_2

Otherwise, detector 2 will be optimal. In this setup, the optimality is myopic: if we have multiple irrelevant detectors, we use them in increasing order of u(m)/p.


Directly from Proposition 6.1 we have that the benefit of a detector is inversely related to u(m)/p.

In the case with two detectors, we have the following setup

(Figure: decision tree for using detector 1 and then detector 2. Accept and Reject are worth CE(X_n) - R_c^t and 0. Seeking information from detector 1 at cost m_1 gives clairvoyance with probability p_1, with end values CE(X_n | I_i) - R_c^t - m_1 or -m_1; if detector 1 yields no clairvoyance, detector 2 can be used at cost m_2, giving clairvoyance with probability p_2 and end values CE(X_n | I_i) - R_c^t - m_1 - m_2, or otherwise CE(X_n) - R_c^t - m_1 - m_2 or -m_1 - m_2. In u-value terms, (1) = iVoPI_c^t(X_n) - m_1, (2) = iVoPI_c^t(X_n) - m_1 - m_2, and (3) = iV_c^t(X_n) - m_1 - m_2.)

To prove the myopic feature, we reduce the structure above to the following. Note that if a detector does not satisfy

u(iVoPI - iV) ≥ u(m)/p

then it is useless. Hence we consider the case where both detectors satisfy this condition, which reduces the recursion to the tree below.

(Figure: the reduced recursion when both detectors are worth using, with detector 1 used first and detector 2 used only if detector 1 yields no clairvoyance; end values are V_{c-1}^{t+1} + CE(X_n | I_i) - m_1 or V_c^{t+1} - m_1 after clairvoyance from detector 1, and V_{c-1}^{t+1} + CE(X_n | I_i) - m_1 - m_2, V_{c-1}^{t+1} + CE(X_n) - m_1 - m_2, or V_c^{t+1} - m_1 - m_2 after using detector 2.)

So, the value of the deal flow when using detector 1 before detector 2 is:

𝑢(𝑉𝑐𝑡|𝑋𝑛 − 𝑉𝑐𝑡+1)

= 𝑝1 𝑢(𝑖𝑉𝑜𝑃𝐼𝑐𝑡(𝑋𝑛) −𝑚1) + (1 − 𝑝1)[ 𝑝2 𝑢(𝑖𝑉𝑜𝑃𝐼𝑐𝑡(𝑋𝑛) −𝑚1 −𝑚2)

+ (1 − 𝑝2)𝑢(𝑖𝑉𝑐𝑡(𝑋𝑛)−𝑚1 −𝑚2)

Again, we drop the terms in iVoI and iV for clarity to get

𝑢(𝑉𝑐𝑡|𝑋𝑛 − 𝑉𝑐𝑡+1)

= 𝑝1𝑢(𝑖𝑉𝑜𝑃𝐼 − 𝑚1) + (1 − 𝑝1)[𝑝2𝑢(𝑖𝑉𝑜𝑃𝐼 − 𝑚1 −𝑚2)

+ (1 − 𝑝2)𝑢(𝑖𝑉 − 𝑚1 −𝑚2)]

After much algebra, this reduces to

𝑢(𝑉𝑐𝑡|𝑋𝑛 − 𝑉𝑐𝑡+1)

= 1 − 𝑝1𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑚1) − (𝑝2 − 𝑝1𝑝2)𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑚1−𝑚2)

− (1 − 𝑝1)(1 − 𝑝2)𝑒−𝛾(𝑖𝑉−𝑚1−𝑚2)

(Figure: u-value tree for case (I), detector 1 used before detector 2. It is worth iVoPI_c^t(X_n) - m_1 with probability p_1, and otherwise iVoPI_c^t(X_n) - m_1 - m_2 with probability p_2 or iV_c^t(X_n) - m_1 - m_2 with probability 1 - p_2.)

Denote this case by (I) and the case with detector 2 before detector 1 by II.

So in order to have detector 1 before detector 2 we must satisfy

𝑢(𝐼) > 𝑢(𝐼𝐼)

Hence,

1 - p_1 e^{-γ(iVoPI - m_1)} - (p_2 - p_1 p_2) e^{-γ(iVoPI - m_1 - m_2)} - (1 - p_1)(1 - p_2) e^{-γ(iV - m_1 - m_2)}
    ≥ 1 - p_2 e^{-γ(iVoPI - m_2)} - (p_1 - p_1 p_2) e^{-γ(iVoPI - m_1 - m_2)} - (1 - p_1)(1 - p_2) e^{-γ(iV - m_1 - m_2)}

After canceling repeated terms, this inequality reduces to

−𝑝1𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑚1) − 𝑝2𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑚1−𝑚2) ≥ −𝑝2𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑚2) − 𝑝1𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑚1−𝑚2)

Or

𝑝1𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑚1) + 𝑝2𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑚1−𝑚2) ≤ 𝑝2𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑚2) + 𝑝1𝑒−𝛾(𝑖𝑉𝑜𝑃𝐼−𝑚1−𝑚2)

We multiply throughout by 𝑒𝛾(𝑖𝑉𝑜𝑃𝐼) to get

𝑝1𝑒𝛾(𝑚1) + 𝑝2𝑒𝛾(𝑚1+𝑚2) ≤ 𝑝2𝑒𝛾(𝑚2) + 𝑝1𝑒𝛾(𝑚1+𝑚2)

Now we multiply throughout by 𝑒−𝛾(𝑚1+𝑚2) to get

𝑝1𝑒−𝛾(𝑚2) + 𝑝2 ≤ 𝑝2𝑒−𝛾(𝑚1) + 𝑝1

𝑝1𝑒−𝛾(𝑚2) − 𝑝1 ≤ 𝑝2𝑒−𝛾(𝑚1) − 𝑝2

𝑝1(𝑒−𝛾(𝑚2) − 1) ≤ 𝑝2(𝑒−𝛾(𝑚1) − 1)

𝑝1𝑢(𝑚2) ≥ 𝑝2𝑢(𝑚1)

And finally,

u(m_1)/p_1 ≤ u(m_2)/p_2
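The myopic ordering can therefore be implemented by sorting detectors on u(m)/p; a minimal sketch with illustrative detectors:

    import math

    def u(x, gamma):
        return 1.0 - math.exp(-gamma * x)

    def order_detectors(detectors, gamma):
        # Sort detectors (name, cost m, probability of knowing p) so that the
        # one with the smallest u(m)/p is used first, per Corollary 6.5.3.
        return sorted(detectors, key=lambda d: u(d[1], gamma) / d[2])

    if __name__ == "__main__":
        detectors = [("A", 0.10, 0.5), ("B", 0.05, 0.2), ("C", 0.20, 0.9)]
        print([name for name, m, p in order_detectors(detectors, gamma=1.0)])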


A1.3 Chapter 7 Proofs

A1.3.1 Section 7.3 Main Results

For convenience, we reproduce the recursion here. It can be rewritten as

M_c^t | X_n = { [M_{c-1}^{t+1} · CM(X_n)] ∨ M_c^{t+1} ∨ [M_{c-1}^{t+1} · (1 - f) · CM(CM(X_n | I_i) ∨ R_c^t)] }     (1)

Proposition 7.3.1a: Optimal Information Gathering Policy

When offered a deal X_n, the decision maker should buy information if and only if

iMI_c^t(X_n) ≥ iM_c^t(X_n) / (1 - f)

Proposition 7.3.1b: Optimal Allocation Policy

After information is received, the decision maker should accept the deal if and only if

CM(X_n | I_i) ≥ R_c^t

Otherwise, the deal is worth accepting without information if and only if

CM(X_n) ≥ R_c^t

(Figure: decision tree at state (c, t) given deal X_n, in multiples of W_0. Accept yields M_{c-1}^{t+1} · CM(X_n) · W_0; Reject yields M_c^{t+1} · W_0; Seek info/control at a wealth fraction f yields, after indication I_i, either M_{c-1}^{t+1} · CM(X_n | I_i) · (1 - f) · W_0 on accepting or M_c^{t+1} · (1 - f) · W_0 on rejecting.)

Using the definition of the incremental value of information, the recursion simplifies so that the information alternative is worth M_c^{t+1} · iMoI_c^t(X_n) · (1 - f).

We seek information when

M_c^{t+1} · iMoI_c^t(X_n) · (1 - f) ≥ {[M_{c-1}^{t+1} · CM(X_n)] ∨ M_c^{t+1}}

iMoI_c^t(X_n) ≥ {(CM(X_n)/R_c^t) ∨ 1} / (1 - f)

So,

iMoI_c^t(X_n) ≥ iM_c^t(X_n) / (1 - f)

After receiving information, the deal is worth accepting if and only if

M_{c-1}^{t+1} · CM(X_n | I_i) · (1 - f) · W_0 ≥ M_c^{t+1} · (1 - f) · W_0

M_{c-1}^{t+1} · CM(X_n | I_i) ≥ M_c^{t+1}

CM(X_n | I_i) ≥ R_c^t

Otherwise, we prefer to take the deal without information if and only if

M_{c-1}^{t+1} · CM(X_n) · W_0 ≥ M_c^{t+1} · W_0

M_{c-1}^{t+1} · CM(X_n) ≥ M_c^{t+1}

CM(X_n) ≥ R_c^t
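The one-step decision implied by Propositions 7.3.1a and 7.3.1b can be sketched as follows, taking the CM over indications to be an expectation (a risk-neutral simplification) and using illustrative numbers.

    def decide(cm, cm_posts, probs, R, f):
        # One-step policy for the multiplicative model, where buying
        # information costs a wealth fraction f.  Relative to rejecting,
        # iM = (CM/R) v 1 and iMoI = CM over indications of (CM(X|I) v R)/R,
        # here approximated by an expectation.  All names are illustrative.
        iM = max(cm / R, 1.0)
        iMoI = sum(p * max(cm_i, R) for p, cm_i in zip(probs, cm_posts)) / R
        if iMoI * (1.0 - f) >= iM:
            return "seek information"
        return "accept" if cm >= R else "reject"

    if __name__ == "__main__":
        print(decide(cm=1.05, cm_posts=[1.4, 0.8], probs=[0.5, 0.5], R=1.1, f=0.02))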

Proposition 7.3.2: Characterizing the Deal Flow Certain Multiplier (M_c^t)

1. 𝑀𝑐𝑡 is non decreasing in c for all t

2. 𝑀𝑐𝑡 is non increasing in t for all c

Proposition 7.3.2 Statement 1

We prove this by induction. Take t = T; (1) becomes

M_c^T | X_n = { [M_{c-1}^{T+1} · CM(X_n)] ∨ M_c^{T+1} ∨ [M_{c-1}^{T+1} · (1 - f) · CM(CM(X_n | I_i) ∨ R_c^T)] }

Since M_c^{T+1} = 1 for all c, this equation reduces to

M_c^T | X_n = { CM(X_n) ∨ 1 ∨ CM[{CM(X_n | I_i) ∨ R_c^T}] · (1 - f) }     for c ≥ 1

M_0^T | X_n = 1     for c = 0

Hence M is non-decreasing in c at t = T.

Now assume the statement is true for t = k, that is,

M_c^k ≥ M_{c-1}^k for all c ≥ 1     (2)

and prove that it is true for t = k - 1, that is,

M_c^{k-1} ≥ M_{c-1}^{k-1} for all c ≥ 1     (3)

where

M_c^{k-1} | X_n = { [M_{c-1}^k · CM(X_n)] ∨ M_c^k ∨ [M_{c-1}^k · (1 - f) · CM(CM(X_n | I_i) ∨ R_c^{k-1})] }

and

M_{c-1}^{k-1} | X_n = { [M_{c-2}^k · CM(X_n)] ∨ M_{c-1}^k ∨ [M_{c-2}^k · (1 - f) · CM(CM(X_n | I_i) ∨ R_{c-1}^{k-1})] }

But since M_c^k is non-decreasing in c, by (2) we have

M_{c-1}^{k-1} | X_n ≤ { [M_{c-1}^k · CM(X_n)] ∨ M_c^k ∨ [M_{c-1}^k · (1 - f) · CM(CM(X_n | I_i) ∨ R_c^{k-1})] } = M_c^{k-1} | X_n

Hence,

M_{c-1}^{k-1} | X_n ≤ M_c^{k-1} | X_n for all X_n

meaning (3) is true, namely

M_{c-1}^{k-1} ≤ M_c^{k-1}

And, finally, by induction,

M_{c-1}^t ≤ M_c^t for all c ≥ 1 and all t

Proposition 7.3.2 Statement 2

This statement follows directly from the recursion. We have

M_c^t | X_n = { M_{c-1}^{t+1} · CM(X_n) ∨ M_c^{t+1} ∨ M_{c-1}^{t+1} · (1 - f) · CM({CM(X_n | I_i) ∨ R_c^t}) }

Hence

M_c^t | X_n ≥ M_c^{t+1} for all X_n and c

leading to

M_c^t ≥ M_c^{t+1} for all c

So M_c^t is non-increasing in t for all c.

Proposition 7.3.3: Characterizing the Threshold (R_c^t)

1. R_c^t is non-increasing in c for all t
2. R_c^t is non-increasing in t for all c


Proposition 7.3.3 Statement 1

We prove this by induction. Take the case t = T; the statement becomes

R_c^T ≥ R_{c+1}^T

This is true since

R_c^T = M_c^{T+1} / M_{c-1}^{T+1} = 1 for all c ≥ 1

Hence R_c^T is non-increasing in c at t = T.

Now, assume this is true for t = k; let's prove it for t = k - 1. So we know that

R_c^k ≥ R_{c+1}^k for all c ≥ 1     (1)

→ M_c^{k+1} / M_{c-1}^{k+1} ≥ M_{c+1}^{k+1} / M_c^{k+1}

(M_c^{k+1})² ≥ M_{c+1}^{k+1} · M_{c-1}^{k+1} for all c at t = k + 1

For the statement to be true at t = k - 1, the following must hold:

R_c^{k-1} ≥ R_{c+1}^{k-1} for all c     (2)

(M_c^k)² ≥ M_{c+1}^k · M_{c-1}^k for all c at t = k     (3)

For each X_n, we can rewrite M_c^k as

M_c^k | X_n = { CM(X_n) · M_{c-1}^{k+1} ∨ M_c^{k+1} ∨ CM({CM(X_n | I_i) · M_{c-1}^{k+1} ∨ M_c^{k+1}} · (1 - f)) }

Now, define Q_c^k as

Q_c^k(X_n) = CM({CM(X_n | I_i) ∨ R_c^k} · (1 - f))

So we have

M_c^k | X_n = M_{c-1}^{k+1} · {CM(X_n) ∨ R_c^k ∨ Q_c^k(X_n)}

Now, rewrite (3) for each X_n as

{CM(X_n) ∨ R_c^k ∨ Q_c^k(X_n)}² · (M_{c-1}^{k+1})² ≥ {CM(X_n) ∨ R_{c+1}^k ∨ Q_{c+1}^k(X_n)} · M_c^{k+1} · {CM(X_n) ∨ R_{c-1}^k ∨ Q_{c-1}^k(X_n)} · M_{c-2}^{k+1}     (4)

which is equivalent to

(R_c^k / R_{c-1}^k) · {CM(X_n) ∨ R_{c+1}^k ∨ Q_{c+1}^k(X_n)} · {CM(X_n) ∨ R_{c-1}^k ∨ Q_{c-1}^k(X_n)} / {CM(X_n) ∨ R_c^k ∨ Q_c^k(X_n)}² ≤ 1     (5)

Note that from (1) we have

𝑅𝑐−1𝑘 ≥ 𝑅𝑐𝑘 ≥ 𝑅𝑐+1𝑘 (6)

This directly leads to:

𝑄𝑐−1𝑘 (𝑋𝑛) ≥ 𝑄𝑐𝑘(𝑋𝑛) ≥ 𝑄𝑐+1𝑘 (𝑋𝑛) (7)

From (6), we know that if

CM(X_n) ≥ R_c^k

then the deal cannot be rejected at any higher c. The same goes for buying information: if

CM(X_n) ≥ Q_c^k(X_n)

then information will not be bought at any higher c.

We first prove the following relation:

Q_{c-1}^k(X_n) / R_{c-1}^k ≤ Q_c^k(X_n) / R_c^k     (8)

LHS = CM({CM(X_n | I_i) ∨ R_{c-1}^k} · (1 - f)) / R_{c-1}^k
    = CM({CM(X_n | I_i)/R_{c-1}^k ∨ 1} · (1 - f))
    ≤ CM({CM(X_n | I_i)/R_c^k ∨ 1} · (1 - f))
    = CM({CM(X_n | I_i) ∨ R_c^k} · (1 - f)) / R_c^k = RHS

Note that (8) means that if rejecting a deal was better than getting information on it for a given c, then the same holds for any lower c.

In the following, we drop (Xn) from the Q term for clarity.


After we reject cases that contradict (6), (7), or (8), we consider the following 10 cases:

Maximum Term

Case (c+1,t) (c,t) (c-1,t)

1 R R R
2 Q R R
3 Q Q R
4 Q Q Q
5 CM(Xn) R R
6 CM(Xn) Q R
7 CM(Xn) Q Q
8 CM(Xn) CM(Xn) R
9 CM(Xn) CM(Xn) Q
10 CM(Xn) CM(Xn) CM(Xn)

CASE 1:

(5) = (R_c^k / R_{c-1}^k) · R_{c+1}^k · R_{c-1}^k / (R_c^k)² = R_{c+1}^k / R_c^k

but R_{c+1}^k ≤ R_c^k by the induction assumption, (1) → (5) ≤ 1

CASE 2:

(5) = (R_c^k / R_{c-1}^k) · Q_{c+1}^k · R_{c-1}^k / (R_c^k)² = Q_{c+1}^k / R_c^k

by (7), (5) ≤ Q_c^k / R_c^k; by the case assumption, (5) ≤ 1

CASE 3:

(5) = (R_c^k / R_{c-1}^k) · Q_{c+1}^k · R_{c-1}^k / (Q_c^k)² = R_c^k · Q_{c+1}^k / (Q_c^k)² = (R_c^k / Q_c^k) · (Q_{c+1}^k / Q_c^k)

from the induction assumption, (1), Q_{c+1}^k ≤ Q_c^k, and from the case assumption, R_c^k ≤ Q_c^k → (5) ≤ 1

CASE 4:

(5) = (R_c^k / R_{c-1}^k) · Q_{c+1}^k · Q_{c-1}^k / (Q_c^k)²

by (8), we have Q_{c-1}^k(X_n) / R_{c-1}^k ≤ Q_c^k(X_n) / R_c^k, so

(5) ≤ (R_c^k / R_c^k) · Q_{c+1}^k · Q_c^k / (Q_c^k)² = Q_{c+1}^k / Q_c^k

by the induction assumption, (1), Q_{c+1}^k ≤ Q_c^k → (5) ≤ 1

CASE 5:

(5) = (R_c^k / R_{c-1}^k) · CM(X_n) · R_{c-1}^k / (R_c^k)² = CM(X_n) / R_c^k

by the case assumption, CM(X_n) ≤ R_c^k → (5) ≤ 1

CASE 6:

(5) = (R_c^k / R_{c-1}^k) · CM(X_n) · R_{c-1}^k / (Q_c^k)² = R_c^k · CM(X_n) / (Q_c^k)²

by the case assumption, CM(X_n), R_c^k ≤ Q_c^k → (5) ≤ 1

CASE 7:

(5) = (R_c^k / R_{c-1}^k) · CM(X_n) · Q_{c-1}^k / (Q_c^k)²

by (8), we have Q_{c-1}^k(X_n) / R_{c-1}^k ≤ Q_c^k(X_n) / R_c^k, so

(5) ≤ (R_c^k / R_c^k) · CM(X_n) · Q_c^k / (Q_c^k)² = CM(X_n) / Q_c^k

by the case assumption, CM(X_n) ≤ Q_c^k → (5) ≤ 1

CASE 8:

(5) = (R_c^k / R_{c-1}^k) · CM(X_n) · R_{c-1}^k / (CM(X_n))² = R_c^k / CM(X_n)

by the case assumption, R_c^k ≤ CM(X_n) → (5) ≤ 1

CASE 9:

(5) = (R_c^k / R_{c-1}^k) · CM(X_n) · Q_{c-1}^k / (CM(X_n))²

by (8), we have Q_{c-1}^k(X_n) / R_{c-1}^k ≤ Q_c^k(X_n) / R_c^k, so

(5) ≤ R_c^k · (CM(X_n) / (CM(X_n))²) · (Q_c^k / R_c^k) = Q_c^k / CM(X_n)

by the case assumption, Q_c^k ≤ CM(X_n) → (5) ≤ 1

CASE 10:

(5) = (R_c^k / R_{c-1}^k) · CM(X_n) · CM(X_n) / (CM(X_n))² = R_c^k / R_{c-1}^k

by the induction assumption, (1), R_c^k ≤ R_{c-1}^k → (5) ≤ 1

So since (5) is true for every Xn, we know that (3) is true and:

𝑅𝑐𝑘−1 ≥ 𝑅𝑐+1𝑘−1 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 ≥ 1 (1)

Finally, by induction, we know that this is true for all t, or

𝑅𝑐𝑡 ≥ 𝑅𝑐+1𝑡 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 ≥ 1 𝑎𝑡 𝑎𝑛𝑦 𝑡

𝑅𝑐𝑘−1 ≥ 𝑅𝑐+1𝑘−1 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 ≥ 1 (1)

Finally, by induction, we know that this is true for all t, or

𝑅𝑐𝑡 ≥ 𝑅𝑐+1𝑡 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑐 ≥ 1 𝑎𝑡 𝑎𝑛𝑦 𝑡

Proposition 7.3.3 Statement 2 We prove this by induction.

Take the case when t = T; the statement becomes

$R_c^{T-1} \geq R_c^T$

This is true since

$R_c^T = M_c^{T+1}/M_{c-1}^{T+1} = 0$ for all $c \geq 1$

while, since M is non-decreasing in c, we have:

$R_c^{T-1} = M_c^T/M_{c-1}^T \geq 1$

Hence $R_c^t$ is non-increasing in t at t = T - 1.

Assume this is true for t = k and show the statement is true for t = k - 1.

So we know:

$R_c^{k-1} \geq R_c^k$  (1)

Meaning:

$M_c^k/M_{c-1}^k \geq M_c^{k+1}/M_{c-1}^{k+1}$  (2)

And we want to show

$R_c^{k-2} \geq R_c^{k-1}$  (3)

Alternatively,

$M_c^{k-1}/M_{c-1}^{k-1} \geq M_c^k/M_{c-1}^k$  (4)

As with Proposition 7.3.3 Statement 1, for each $X_n$ we can rewrite $M_c^t$ as:

$M_c^t|X_n = \{CM(X_n) \cdot M_{c-1}^{t+1} \vee M_c^{t+1} \vee CM(\{CM(X_n|I_i) \cdot M_{c-1}^{t+1} \vee M_c^{t+1}\} \cdot (1-f))\}$

Recall that $Q_c^t$ is defined as

$Q_c^t(X_n) = CM(\{CM(X_n|I_i) \vee R_c^t\} \cdot (1-f))$

So, we have:

$M_c^t|X_n = M_{c-1}^{t+1} \cdot \{CM(X_n) \vee R_c^t \vee Q_c^t(X_n)\}$

Now, rewrite (4) for each $X_n$ as:

$\frac{M_{c-1}^k \cdot \{CM(X_n) \vee R_c^{k-1} \vee Q_c^{k-1}(X_n)\} \cdot M_{c-2}^{k+1} \cdot \{CM(X_n) \vee R_{c-1}^k \vee Q_{c-1}^k(X_n)\}}{M_{c-2}^k \cdot \{CM(X_n) \vee R_{c-1}^{k-1} \vee Q_{c-1}^{k-1}(X_n)\} \cdot M_{c-1}^{k+1} \cdot \{CM(X_n) \vee R_c^k \vee Q_c^k(X_n)\}} \geq 1$

which is equivalent to

$\frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{\{CM(X_n) \vee R_c^k \vee Q_c^k(X_n)\} \cdot \{CM(X_n) \vee R_{c-1}^{k-1} \vee Q_{c-1}^{k-1}(X_n)\}}{\{CM(X_n) \vee R_{c-1}^k \vee Q_{c-1}^k(X_n)\} \cdot \{CM(X_n) \vee R_c^{k-1} \vee Q_c^{k-1}(X_n)\}} \leq 1$  (5)

Note that from Proposition 7.3.3 Statement 1 we have

$R_c^t \geq R_{c+1}^t$ for all t and $c \geq 1$  (6)

leading to:

$Q_c^t(X_n) \geq Q_{c+1}^t(X_n)$ for all t and $c \geq 1$  (7)

Also, (1) gives us

$Q_c^{k-1}(X_n) \geq Q_c^k(X_n)$ for all $c \geq 1$  (8)

Along the same lines as in Proposition 7.3.3 Statement 1, we prove:

$Q_c^{k-1}(X_n)/R_c^{k-1} \leq Q_c^k(X_n)/R_c^k$  (9)

$LHS = CM(\{CM(X_n|I_i) \vee R_c^{k-1}\} \cdot (1-f))/R_c^{k-1} = CM(\{CM(X_n|I_i)/R_c^{k-1} \vee 1\} \cdot (1-f))$

By the induction assumption,

$LHS \leq CM(\{CM(X_n|I_i)/R_c^k \vee 1\} \cdot (1-f)) = CM(\{CM(X_n|I_i) \vee R_c^k\} \cdot (1-f))/R_c^k = RHS$

Recall, from the proof of Proposition 7.3.3 Statement 1, that:

$Q_{c-1}^t(X_n)/R_{c-1}^t \leq Q_c^t(X_n)/R_c^t$  (10)

In the following, we drop (Xn) from the Q term for clarity.

In the same manner as in the proof of Proposition 7.3.3 Statement 1, we reject all the cases that contradict statements (6), (7), and (8) and end up with the following 20 cases (each row lists which term attains the maximum at each state):

Case   (c, k-1)   (c-1, k-1)   (c, k)     (c-1, k)
 1     R          R            R          R
 2     R          R            Q          R
 3     R          R            Q          Q
 4     R          R            CM(Xn)     R
 5     R          R            CM(Xn)     Q
 6     R          R            CM(Xn)     CM(Xn)
 7     Q          R            Q          R
 8     Q          R            Q          Q
 9     Q          R            CM(Xn)     R
10     Q          R            CM(Xn)     Q
11     Q          R            CM(Xn)     CM(Xn)
12     Q          Q            Q          Q
13     Q          Q            CM(Xn)     Q
14     Q          Q            CM(Xn)     CM(Xn)
15     CM(Xn)     R            CM(Xn)     R
16     CM(Xn)     R            CM(Xn)     Q
17     CM(Xn)     R            CM(Xn)     CM(Xn)
18     CM(Xn)     Q            CM(Xn)     Q
19     CM(Xn)     Q            CM(Xn)     CM(Xn)
20     CM(Xn)     CM(Xn)       CM(Xn)     CM(Xn)


Now, let us consider the cases and evaluate relation (5):

CASE 1:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{R_c^k \cdot R_{c-1}^{k-1}}{R_{c-1}^k \cdot R_c^{k-1}} = \frac{R_c^k}{R_c^{k-1}}$

But, from (1), $R_c^k \leq R_c^{k-1}$, so $(5) \leq 1$.

CASE 2:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{Q_c^k \cdot R_{c-1}^{k-1}}{R_{c-1}^k \cdot R_c^{k-1}} = \frac{Q_c^k}{R_c^{k-1}}$

From the case we know $R_c^{k-1} \geq Q_c^{k-1}$, and from (8) we have $Q_c^{k-1} \geq Q_c^k$; thus $(5) \leq 1$.

CASE 3:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{Q_c^k \cdot R_{c-1}^{k-1}}{Q_{c-1}^k \cdot R_c^{k-1}} = \frac{R_{c-1}^k}{Q_{c-1}^k} \cdot \frac{Q_c^k}{R_c^{k-1}}$

Note that, as with Case 2, $R_c^{k-1} \geq Q_c^k$, and by the case assumption $R_{c-1}^k \leq Q_{c-1}^k$, so $(5) \leq 1$.

CASE 4:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot R_{c-1}^{k-1}}{R_{c-1}^k \cdot R_c^{k-1}} = \frac{CM(X_n)}{R_c^{k-1}}$

By the case assumption at (c, k-1), $CM(X_n) \leq R_c^{k-1}$, so $(5) \leq 1$.

CASE 5:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot R_{c-1}^{k-1}}{Q_{c-1}^k \cdot R_c^{k-1}} = \frac{R_{c-1}^k}{Q_{c-1}^k} \cdot \frac{CM(X_n)}{R_c^{k-1}}$

By the case assumptions, $R_{c-1}^k \leq Q_{c-1}^k$ and $CM(X_n) \leq R_c^{k-1}$, so $(5) \leq 1$.

CASE 6:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot R_{c-1}^{k-1}}{CM(X_n) \cdot R_c^{k-1}} = \frac{CM(X_n)}{R_c^{k-1}} \cdot \frac{R_{c-1}^k}{CM(X_n)}$

By the case assumptions, $R_{c-1}^k \leq CM(X_n)$ and $CM(X_n) \leq R_c^{k-1}$, so $(5) \leq 1$.

CASE 7:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{Q_c^k \cdot R_{c-1}^{k-1}}{R_{c-1}^k \cdot Q_c^{k-1}} = \frac{Q_c^k}{Q_c^{k-1}}$

By (8), $Q_c^k \leq Q_c^{k-1}$, so $(5) \leq 1$.

CASE 8:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{Q_c^k \cdot R_{c-1}^{k-1}}{Q_{c-1}^k \cdot Q_c^{k-1}} = \frac{R_{c-1}^k}{Q_{c-1}^k} \cdot \frac{Q_c^k}{Q_c^{k-1}}$

By the case assumption, $R_{c-1}^k \leq Q_{c-1}^k$, and by (8), $Q_c^k \leq Q_c^{k-1}$, so $(5) \leq 1$.

CASE 9:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot R_{c-1}^{k-1}}{R_{c-1}^k \cdot Q_c^{k-1}} = \frac{CM(X_n)}{Q_c^{k-1}}$

By the case assumption, $Q_c^{k-1} \geq CM(X_n)$, so $(5) \leq 1$.

CASE 10:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot R_{c-1}^{k-1}}{Q_{c-1}^k \cdot Q_c^{k-1}} = \frac{R_{c-1}^k}{Q_{c-1}^k} \cdot \frac{CM(X_n)}{Q_c^{k-1}}$

By the case assumptions, we have $R_{c-1}^k \leq Q_{c-1}^k$ and $CM(X_n) \leq Q_c^{k-1}$, so $(5) \leq 1$.

CASE 11:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot R_{c-1}^{k-1}}{CM(X_n) \cdot Q_c^{k-1}} = \frac{R_{c-1}^k}{Q_c^{k-1}}$

In a similar manner to Case 6, we multiply and divide by $CM(X_n)$:

$(5) = \frac{R_{c-1}^k}{CM(X_n)} \cdot \frac{CM(X_n)}{Q_c^{k-1}}$

By the case assumptions, we have $R_{c-1}^k \leq CM(X_n)$ and $CM(X_n) \leq Q_c^{k-1}$, so $(5) \leq 1$.

CASE 12:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{Q_c^k \cdot Q_{c-1}^{k-1}}{Q_{c-1}^k \cdot Q_c^{k-1}}$

By (8), $Q_c^k \leq Q_c^{k-1}$, and by (9), $Q_{c-1}^{k-1}/R_{c-1}^{k-1} \leq Q_{c-1}^k/R_{c-1}^k$, so $(5) \leq 1$.

CASE 13:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot Q_{c-1}^{k-1}}{Q_{c-1}^k \cdot Q_c^{k-1}}$

By the case assumption we have $CM(X_n) \leq Q_c^{k-1}$, and by (9), $Q_{c-1}^{k-1}/R_{c-1}^{k-1} \leq Q_{c-1}^k/R_{c-1}^k$, so $(5) \leq 1$.

CASE 14:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot Q_{c-1}^{k-1}}{CM(X_n) \cdot Q_c^{k-1}} = \left(\frac{Q_{c-1}^{k-1}}{R_{c-1}^{k-1}}\right) \Big/ \left(\frac{Q_c^{k-1}}{R_{c-1}^k}\right)$

By (9), $Q_{c-1}^{k-1}/R_{c-1}^{k-1} \leq Q_{c-1}^k/R_{c-1}^k$, so

$(5) \leq \left(\frac{Q_{c-1}^k}{R_{c-1}^k}\right) \Big/ \left(\frac{Q_c^{k-1}}{R_{c-1}^k}\right) = \frac{Q_{c-1}^k}{Q_c^{k-1}}$

Multiplying and dividing by $CM(X_n)$:

$(5) \leq \frac{Q_{c-1}^k}{CM(X_n)} \cdot \frac{CM(X_n)}{Q_c^{k-1}}$

By the case assumptions, $Q_{c-1}^k \leq CM(X_n)$ and $CM(X_n) \leq Q_c^{k-1}$, so $(5) \leq 1$.

CASE 15:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot R_{c-1}^{k-1}}{R_{c-1}^k \cdot CM(X_n)} = 1$

CASE 16:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot R_{c-1}^{k-1}}{Q_{c-1}^k \cdot CM(X_n)} = \frac{R_{c-1}^k}{Q_{c-1}^k}$

By the case assumption, $R_{c-1}^k \leq Q_{c-1}^k$, so $(5) \leq 1$.

CASE 17:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot R_{c-1}^{k-1}}{CM(X_n) \cdot CM(X_n)} = \frac{R_{c-1}^k}{CM(X_n)}$

By the case assumption, $R_{c-1}^k \leq CM(X_n)$, so $(5) \leq 1$.

CASE 18:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot Q_{c-1}^{k-1}}{Q_{c-1}^k \cdot CM(X_n)} = \frac{Q_{c-1}^{k-1}}{R_{c-1}^{k-1}} \cdot \frac{R_{c-1}^k}{Q_{c-1}^k}$

By (9), $Q_{c-1}^{k-1}/R_{c-1}^{k-1} \leq Q_{c-1}^k/R_{c-1}^k$, so $(5) \leq 1$.

CASE 19:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot Q_{c-1}^{k-1}}{CM(X_n) \cdot CM(X_n)} = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{Q_{c-1}^{k-1}}{CM(X_n)}$

By (9), $Q_{c-1}^{k-1}/R_{c-1}^{k-1} \leq Q_{c-1}^k/R_{c-1}^k$, so

$(5) \leq \frac{R_{c-1}^k}{CM(X_n)} \cdot \frac{Q_{c-1}^k}{R_{c-1}^k} = \frac{Q_{c-1}^k}{CM(X_n)}$

By the case assumption, $Q_{c-1}^k \leq CM(X_n)$, so $(5) \leq 1$.

CASE 20:

$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CM(X_n) \cdot CM(X_n)}{CM(X_n) \cdot CM(X_n)} = \frac{R_{c-1}^k}{R_{c-1}^{k-1}}$

By the induction assumption (1), $R_{c-1}^{k-1} \geq R_{c-1}^k$, so $(5) \leq 1$.

So, (5) $\leq 1$ is true for all $X_n$ and hence must be true for the certain equivalent over $X_n$. So, (3) is true, or $R_c^{k-1} \geq R_c^k$.

Finally, by induction, we know that $R_c^{t-1} \geq R_c^t$ for all c at any t.

Corollary 7.3.1: Characterizing the IBF of Deals ($iM_c^t(X_n)$ and $iMI_c^t(X_n)$) The IBF multiples of a deal $X_n$ with and without information exhibit the following two properties:

I. $iM_c^t(X_n)$ and $iMI_c^t(X_n)$ are non-decreasing in c for all t
II. $iM_c^t(X_n)$ and $iMI_c^t(X_n)$ are non-decreasing in t for all c

Recall the definition of $iM_c^t(X_n)$:

$iM_c^t(X_n) = \{CM(X_n)/R_c^t \vee 1\}$

From the definition we see that $iM_c^t(X_n)$ moves in the opposite direction of $R_c^t$, and thus it is non-decreasing in c for all t and non-decreasing in t for all c.

Recall that $iMI_c^t(X_n) = CM(iM_c^t(X_n|I_i))$. Note that the indication is not a function of the state (c, t), and hence $iMI_c^t$ follows $iM_c^t$ and is also non-decreasing in c for all t and non-decreasing in t for all c.

Proposition 7.3.4: Characterizing the IBF of Information ($iMoI_c^t(X_n)$) The IBF of information exhibits the following properties:

I. For a given c, the IBF of information is increasing in t, reaches a maximum when $R_c^t = CM(X_n)$, and then decreases in t until it converges to $CM(X_n^*)/CM(X_n)$.
II. For a given t, the IBF of information is increasing in c, reaches a maximum when $R_c^t = CM(X_n)$, and then decreases in c until it converges to $CM(X_n^*)/CM(X_n)$.

Recall that the IBF of information is defined as:

$iMoI_c^t(X_n) = iMI_c^t(X_n)/iM_c^t(X_n)$

$iMoI_c^t(X_n) = CM(CM(X_n|I_i)/R_c^t \vee 1)/(CM(X_n)/R_c^t \vee 1)$  (1)

We take two cases, according to how $CM(X_n)$ relates to $R_c^t$.

CASE 1: $CM(X_n) \leq R_c^t$

(1) reduces to $iMoI_c^t(X_n) = CM(CM(X_n|I_i)/R_c^t \vee 1)$

So $iMoI_c^t(X_n)$ moves in the opposite direction of $R_c^t$.

Thus, $iMoI_c^t(X_n)$ is increasing in t and c when $CM(X_n) \leq R_c^t$.

CASE 2: $CM(X_n) \geq R_c^t$

(1) reduces to $iMoI_c^t(X_n) = CM(CM(X_n|I_i)/R_c^t \vee 1)/(CM(X_n)/R_c^t)$

$iMoI_c^t(X_n) = CM(CM(X_n|I_i) \vee R_c^t)/CM(X_n)$

So $iMoI_c^t(X_n)$ moves in the same direction as $R_c^t$.

Thus, $iMoI_c^t(X_n)$ is decreasing in t and c when $CM(X_n) \geq R_c^t$.

Now, since $iMoI_c^t(X_n)$ increases in Case 1 and then decreases in Case 2, we can see that it reaches a maximum when $R_c^t = CM(X_n)$.

Finally, we study the convergence of the term. At t = T, (1) reduces to:

$iMoI_c^T(X_n) = CM(CM(X_n|I_i) \vee 1)/CM(X_n)$

But the value of the deal with free information outside the funnel, $CM(X_n^*)$, equals:

$CM(X_n^*) = CM(CM(X_n|I_i) \vee 1)$

Thus, $iMoI_c^T = CM(X_n^*)/CM(X_n)$.

The same is true when c = C.
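The hump-shaped behaviour described above can be checked numerically. The sketch below is an illustration only: it treats CM(.) as an expectation, uses a hypothetical binary deal with perfect indications, and evaluates the IBF of information for a grid of threshold values R; the peak occurs at R = CM(Xn) and the value settles at CM(Xn*)/CM(Xn) as R approaches 1.

# Sketch of Proposition 7.3.4: iMoI = iMI / iM as a function of the threshold R,
# for a hypothetical binary deal and a risk-neutral CM (plain expectation).
deals = [(0.5, 3.0), (0.5, 0.4)]                        # (probability, CM(Xn | Ii)) after information
cm_deal = sum(p * m for p, m in deals)                  # CM(Xn) = 1.7
cm_free_info = sum(p * max(m, 1.0) for p, m in deals)   # CM(Xn*) = CM(CM(Xn|Ii) v 1) = 2.0

def iMoI(R):
    iM = max(cm_deal / R, 1.0)                          # iM_c^t(Xn)
    iMI = sum(p * max(m / R, 1.0) for p, m in deals)    # iMI_c^t(Xn)
    return iMI / iM

for R in [4.0, 3.0, 2.0, 1.7, 1.3, 1.0]:
    print(f"R = {R:4.1f}   iMoI = {iMoI(R):.3f}")       # peaks at R = CM(Xn) = 1.7
print("limit CM(Xn*)/CM(Xn) =", round(cm_free_info / cm_deal, 3))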

Proposition 7.3.5: Characterizing the Optimal Policy The optimal policy for a given deal $X_n$ is characterized as follows:

I. For a given c, the optimal policy can only change over time from rejecting, to buying information, and finally to accepting.
II. For a given t, the optimal policy can only change over capacity from rejecting, to buying information, and finally to accepting.

Proposition 7.3.5 Statement 1 Recall

$M_c^t|X_n = M_{c-1}^{t+1} \cdot \{CM(X_n) \vee R_c^t \vee Q_c^t(X_n)\}$

where $Q_c^t(X_n) = CM(\{CM(X_n|I_i) \vee R_c^t\} \cdot (1-f))$

Proposition 7.3.3 states:

$R_{c-1}^t \geq R_c^t \geq R_{c+1}^t$

meaning $Q_{c-1}^t(X_n) \geq Q_c^t(X_n) \geq Q_{c+1}^t(X_n)$

So if $CM(X_n) \geq R_c^t$, then we cannot reject the deal for any higher c. The same goes for buying information: if $CM(X_n) \geq Q_c^t(X_n)$, then we cannot accept information for any higher c.

We also showed

$Q_{c-1}^t(X_n)/R_{c-1}^t \leq Q_c^t(X_n)/R_c^t$

This means that if buying information is better than rejecting for a given c, then it will be the case for any higher c.

Proposition 7.3.5 Statement 2 The proof follows along the same lines as Statement 1.

Proposition 7.3.6: Identifying the Optimal Detector Given the setup above, detector 1 will be optimal when:

$\frac{iMI_1}{iMI_2} > \frac{1-f_2}{1-f_1}$

Otherwise, detector 2 will be optimal.

Let $I_i^1$ and $I_i^2$ be the indications associated with detectors 1 and 2, respectively. The alternative of buying information is worth

$M_{c-1}^{t+1} \cdot (1-f_j) \cdot CM(CM(X_n|I_i^j) \vee R_c^t)$

where j is the number of the detector. For detector 1 to be preferable to detector 2, the following needs to be true:

$M_{c-1}^{t+1} \cdot (1-f_1) \cdot CM(CM(X_n|I_i^1) \vee R_c^t) > M_{c-1}^{t+1} \cdot (1-f_2) \cdot CM(CM(X_n|I_i^2) \vee R_c^t)$

$CM(CM(X_n|I_i^1) \vee R_c^t) \cdot (1-f_1) > CM(CM(X_n|I_i^2) \vee R_c^t) \cdot (1-f_2)$

$CM(CM(X_n|I_i^1)/R_c^t \vee 1) \cdot R_c^t \cdot (1-f_1) > CM(CM(X_n|I_i^2)/R_c^t \vee 1) \cdot R_c^t \cdot (1-f_2)$

$CM(iM_c^t(X_n|I_i^1)) \cdot (1-f_1) > CM(iM_c^t(X_n|I_i^2)) \cdot (1-f_2)$

Let $iMI_j(X_n) = CM(iM_c^t(X_n|I_i^j))$. Then

$iMI_1(X_n) \cdot (1-f_1) > iMI_2(X_n) \cdot (1-f_2)$

$\frac{iMI_1}{iMI_2} > \frac{1-f_2}{1-f_1}$
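As a small numerical illustration of this condition (with hypothetical cost fractions and information multiples that are not from the dissertation), a detector charging f1 = 10% must deliver an information multiple at least (1 - f2)/(1 - f1) ≈ 1.056 times that of a detector charging f2 = 5% to be preferred:

# Sketch of Proposition 7.3.6 with made-up detector parameters.
f1, f2 = 0.10, 0.05
iMI1, iMI2 = 1.40, 1.30                     # hypothetical information multiples of the two detectors
prefer_detector_1 = iMI1 / iMI2 > (1 - f2) / (1 - f1)
print(round((1 - f2) / (1 - f1), 3))        # 1.056
print(prefer_detector_1)                    # True, since 1.40/1.30 ~= 1.077 > 1.056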

A1.3.2 Section 7.4 The Long-Run Problem

[Decision tree for the long-run problem: when offered deal $X_n$, the decision maker can accept (endpoint $\delta M_{c-1} \cdot E(X_n) \cdot W_0$), reject (endpoint $\delta M_c \cdot W_0$), or seek information at cost fraction f and then, given indication $I_i$, accept (endpoint $\delta M_{c-1} \cdot E(X_n|I_i) \cdot W_0 \cdot (1-f)$) or reject (endpoint $\delta M_c \cdot W_0 \cdot (1-f)$); the node value is $M_c|X_n \cdot W_0$.]

Proposition 7.4.1: Characterizing the Long-Run Problem I. When offered deal n, the decision maker should buy information if and only if $iMI_c(X_n) \geq iM_c(X_n)/(1-f)$. After information is received, the decision maker should buy the deal if and only if $CM(X_n|I_i) \geq R_c$. Otherwise, the deal is worth buying without information if and only if $CM(X_n) \geq R_c$.

II. $M_c$ is non-decreasing in c.
III. $R_c$ is non-increasing in c.
IV. $iM_c(X_n)$ and $iMI_c(X_n)$ are non-decreasing in c.
V. $iMoI_c(X_n)$ is non-decreasing in c and reaches a maximum when $R_c = CM(X_n)$, then it decreases to $CM(X_n^*)/CM(X_n)$, where $X_n^*$ is the deal with free information.
VI. The optimal policy can only change over c from rejecting to buying information, and from buying information to accepting.

Proposition 7.4.1 Statement 1 The recursion can be rewritten as:

$M_c|X_n = \{[\delta M_{c-1} \cdot CM(X_n)] \vee \delta M_c \vee [\delta M_{c-1} \cdot (1-f) \cdot CM(CM(X_n|I_i) \vee R_c)]\}$  (1)

Using the definition of the incremental value of the deal with information, the recursion simplifies to a choice among accepting ($\delta M_{c-1} \cdot CM(X_n)$), rejecting ($\delta M_c$), and seeking information ($\delta M_c \cdot iMI_c(X_n) \cdot (1-f)$).

We seek information when:

$\delta M_c \cdot iMI_c(X_n) \cdot (1-f) \geq \{[\delta M_{c-1} \cdot CM(X_n)] \vee \delta M_c\}$

$iMI_c(X_n) \geq \left\{\frac{CM(X_n)}{R_c} \vee 1\right\}/(1-f)$

So,

$iMI_c(X_n) \geq iM_c(X_n)/(1-f)$

Otherwise, we prefer to go with the deal without information. We buy the deal if

$\delta M_{c-1} \cdot CM(X_n) \geq \delta M_c$

or $CM(X_n) \geq R_c$.

Proposition 7.4.1 Statements 2 and 3 We prove the two properties in two steps. First, we prove that they are true for the finite-horizon case when we introduce discounting. Then, we use successive approximations to show that the infinite-horizon M converges to that of the finite horizon as we push the deadline T to the limit.

Step 1-1: Characterizing M

We prove this by induction. Take t = T; (1) becomes

$M_c^T|X_n = \{[\delta M_{c-1}^{T+1} \cdot E(X_n)] \vee \delta M_c^{T+1} \vee [\delta M_{c-1}^{T+1} \cdot (1-f) \cdot E(E(X_n|I_i) \vee R_c^T)]\}$

Since $M_c^{T+1} = 1$ for all c, this equation reduces to

$M_c^T|X_n = \{E(X_n) \vee 1 \vee E(\{E(X_n|I_i) \vee R_c^T\}) \cdot (1-f)\}$ for $c \geq 1$

$M_0^T|X_n = 0$ for c = 0

Hence it is non-decreasing in c at t = T.

Now, assume it is true for t = k, or

$M_c^k \geq M_{c-1}^k$ for all $c \geq 1$  (2)

Now, prove that it is true for t = k - 1, or

$M_c^{k-1} \geq M_{c-1}^{k-1}$ for all $c \geq 1$  (3)

where:

$M_c^{k-1}|X_n = \{[\delta M_{c-1}^k \cdot E(X_n)] \vee \delta M_c^k \vee [\delta M_{c-1}^k \cdot (1-f) \cdot E(E(X_n|I_i) \vee R_c^{k-1})]\}$

and

$M_{c-1}^{k-1}|X_n = \{[\delta M_{c-2}^k \cdot E(X_n)] \vee \delta M_{c-1}^k \vee [\delta M_{c-2}^k \cdot (1-f) \cdot E(E(X_n|I_i) \vee R_{c-1}^{k-1})]\}$

But since $M_c^k$ is non-decreasing in c, (2), we have

$M_{c-1}^{k-1}|X_n \leq \{[\delta M_{c-1}^k \cdot E(X_n)] \vee \delta M_c^k \vee [\delta M_{c-1}^k \cdot (1-f) \cdot E(E(X_n|I_i) \vee R_c^{k-1})]\} = M_c^{k-1}|X_n$

Hence,

$M_{c-1}^{k-1}|X_n \leq M_c^{k-1}|X_n$ for all $X_n$

meaning (3) is true, namely

$M_{c-1}^{k-1} \leq M_c^{k-1}$

And, finally, by induction,

$M_{c-1}^t \leq M_c^t$ for all $c \geq 1$ and all t.

Step 1-2: Characterizing R

We prove this by induction. Take the case when t = T; the statement becomes

$R_c^T \geq R_{c+1}^T$

This is true since

$R_c^T = \delta M_c^{T+1}/\delta M_{c-1}^{T+1} = 1$ for all $c \geq 1$

Hence $R_c^T$ is non-increasing in c at t = T.

Now, assume this is true for t = k; let us prove it for t = k - 1.

So we know that

$R_c^k \geq R_{c+1}^k$ for all $c \geq 1$  (1)

$\Rightarrow \delta M_c^{k+1}/\delta M_{c-1}^{k+1} \geq \delta M_{c+1}^{k+1}/\delta M_c^{k+1}$

$(M_c^{k+1})^2 \geq M_{c+1}^{k+1} \cdot M_{c-1}^{k+1}$ for all c at t = k + 1

For the statement to be true at t = k - 1, the following must be true:

$R_c^{k-1} \geq R_{c+1}^{k-1}$ for all c  (2)

$(M_c^k)^2 \geq M_{c+1}^k \cdot M_{c-1}^k$ for all c at t = k  (3)

For each $X_n$, we can rewrite $M_c^k$ as:

$M_c^k|X_n = \{CM(X_n) \cdot \delta M_{c-1}^{k+1} \vee \delta M_c^{k+1} \vee CM(\{CM(X_n|I_i) \cdot \delta M_{c-1}^{k+1} \vee \delta M_c^{k+1}\} \cdot (1-f))\}$

Now, define $Q_c^k$ as

$Q_c^k(X_n) = CM(\{CM(X_n|I_i) \vee R_c^k\} \cdot (1-f))$

So, we have:

$M_c^k|X_n = \delta M_{c-1}^{k+1} \cdot \{CM(X_n) \vee R_c^k \vee Q_c^k(X_n)\}$

Now, rewrite (3) for each $X_n$ as:

$\{CM(X_n) \vee R_c^k \vee Q_c^k(X_n)\}^2 \cdot (\delta M_{c-1}^{k+1})^2 \geq \{CM(X_n) \vee R_{c+1}^k \vee Q_{c+1}^k(X_n)\} \cdot \delta M_c^{k+1} \cdot \{CM(X_n) \vee R_{c-1}^k \vee Q_{c-1}^k(X_n)\} \cdot \delta M_{c-2}^{k+1}$  (4)

which is equivalent to

$\frac{R_c^k}{R_{c-1}^k} \cdot \frac{\{CM(X_n) \vee R_{c+1}^k \vee Q_{c+1}^k(X_n)\} \cdot \{CM(X_n) \vee R_{c-1}^k \vee Q_{c-1}^k(X_n)\}}{\{CM(X_n) \vee R_c^k \vee Q_c^k(X_n)\}^2} \leq 1$  (5)

The rest of the proof follows directly from the case without discounting.

Step 2: Iterative Approximations

Here we prove the infinite-horizon case by iterative (successive) approximations.

Define the operator $\mathcal{H}$ as

$\mathcal{H}(M_c|X_n) = \{[\delta M_{c-1} \cdot E(X_n)] \vee \delta M_c \vee [\delta M_{c-1} \cdot (1-f) \cdot E(E(X_n|I_i) \vee R_c)]\}$

If we take the input of the first iteration to be 0, then the successive iterates of $\mathcal{H}$ reproduce the finite-horizon values $M_c^T$, and the solution of the fixed-point relation of $\mathcal{H}$ above is the infinite-horizon $M_c$. Following the same methodology as above, we can show that the infinite-horizon $M_c$ converges to that of the finite horizon, and the properties we proved for the finite horizon extend to the infinite horizon.
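The successive-approximation argument can be illustrated with a small numerical sketch (an illustration under simplifying assumptions, not the dissertation's implementation): starting from M_c = 0 and repeatedly applying the operator H with a hypothetical discount factor, a risk-neutral CM, a single made-up binary deal, and the boundary assumption M_0 = 1, the iterates converge to the long-run multiples M_c, from which R_c follows.

# Sketch of the long-run fixed point of the operator H by successive approximations.
delta, f, C = 0.95, 0.05, 3                  # discount factor, cost fraction, capacity (made up)
deals = [(0.5, 3.0), (0.5, 0.4)]             # (probability, CM(Xn | Ii)) for a hypothetical deal
cm_deal = sum(p * m for p, m in deals)       # CM(Xn)

M = [1.0] + [0.0] * C                        # first iterate: M_0 = 1 (assumed boundary), M_c = 0
for _ in range(1000):                        # apply the operator H repeatedly
    prev = M[:]
    for c in range(1, C + 1):
        R_c = prev[c] / prev[c - 1] if prev[c - 1] > 0 else 0.0
        accept = delta * prev[c - 1] * cm_deal
        reject = delta * prev[c]
        seek   = delta * prev[c - 1] * (1 - f) * sum(p * max(m, R_c) for p, m in deals)
        M[c] = max(accept, reject, seek)

R = [M[c] / M[c - 1] for c in range(1, C + 1)]
print("M_c:", [round(v, 3) for v in M])
print("R_c:", [round(v, 3) for v in R])      # should be non-increasing in c (Proposition 7.4.1, III)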

Proposition 7.4.1 Statement 4 By definition, we have

$iM_c(X_n) = [E(X_n)/R_c \vee 1]$

So, $iM_c$ is inversely related to $R_c$, and hence $iM_c$ is non-decreasing in c.

Similarly, we have

$iMI_c(X_n) = E[E(X_n|I_i)/R_c \vee 1] = E[iM_c(X_n|I_i)]$

So, $iMI_c$ moves in the same direction as $iM_c$ and hence is also non-decreasing in c.

Proposition 7.4.1 Statement 5 Recall that the IBF of information is defined as:

$iMoI_c(X_n) = iMI_c(X_n)/iM_c(X_n)$

$iMoI_c(X_n) = E(E(X_n|I_i)/R_c \vee 1)/(E(X_n)/R_c \vee 1)$  (1)

We take two cases, according to how $E(X_n)$ relates to $R_c$.

CASE 1: $E(X_n) \leq R_c$

(1) reduces to $iMoI_c(X_n) = E(E(X_n|I_i)/R_c \vee 1)$

So $iMoI_c(X_n)$ moves in the opposite direction of $R_c$.

Thus, $iMoI_c(X_n)$ is increasing in c when $E(X_n) \leq R_c$.

CASE 2: $E(X_n) \geq R_c$

(1) reduces to $iMoI_c(X_n) = E(E(X_n|I_i)/R_c \vee 1) \cdot R_c/E(X_n)$

$iMoI_c(X_n) = E(E(X_n|I_i) \vee R_c)/E(X_n)$

So $iMoI_c(X_n)$ moves in the same direction as $R_c$.

Thus, $iMoI_c(X_n)$ is decreasing in c when $E(X_n) \geq R_c$.

Now, since $iMoI_c(X_n)$ increases in Case 1 and then decreases in Case 2, we can see that it reaches a maximum when $R_c = E(X_n)$.

Finally, we study the convergence of the term. At c = C, (1) reduces to:

$iMoI_C(X_n) = E(E(X_n|I_i) \vee 1)/E(X_n)$

But the value of the deal with free information outside the funnel, $E(X_n^*)$, equals:

$E(X_n^*) = E(E(X_n|I_i) \vee 1)$

Thus, $iMoI_C = E(X_n^*)/E(X_n)$.

Proposition 7.4.1 Statement 6 Define $Q_c(X_n)$ as

$Q_c(X_n) = E(E(X_n|I_i) \vee R_c) \cdot (1-f)$

Now, we have

$M_c|X_n = M_{c-1} \cdot \{R_c \vee Q_c(X_n) \vee E(X_n)\}$

To prove this statement, we need to show that:

• If $E(X_n) \geq \{R_c, Q_c(X_n)\}$, then this is true for any $c^* > c$
• If $Q_c(X_n) \geq R_c$, then this is true for any $c^* > c$

To prove the first condition, we note that $R_c$ is decreasing in c. Hence, if the first condition is true at any given c, then it must be true for larger values of c. Now, from the definition of $Q_c(X_n)$ we see that it moves in the direction of $R_c$. Thus, $Q_c(X_n)$ will decrease in c and the first condition is true.

To prove the second condition, we note that it is sufficient to prove

$Q_c(X_n)/R_c \geq Q_{c-1}(X_n)/R_{c-1}$

We rewrite this as

$E(E(X_n|I_i) \vee R_c) \cdot (1-f)/R_c \geq E(E(X_n|I_i) \vee R_{c-1}) \cdot (1-f)/R_{c-1}$

$E(E(X_n|I_i)/R_c \vee 1) \geq E(E(X_n|I_i)/R_{c-1} \vee 1)$

$\Rightarrow iMI_c(X_n) \geq iMI_{c-1}(X_n)$

which we know is true from Proposition 7.4.1 Statement 4.


A1.3.3 Section 7.5.1 Extensions – Multiple Cost Structures

Proposition 7.5.1a: Optimal Information-Gathering Policy with Multiple Cost Structures When offered a deal $X_n$, the decision maker should buy information if and only if

$iMI_{c-k}^{t+d}(X_n) \geq iM_c^t(X_n) \cdot \frac{R_c^t(k, d+1)}{1-f}$

Proposition 7.5.1b: Optimal Allocation Policy with Multiple Cost Structures After information is received, the decision maker should buy the deal if and only if

$CM(X_n|I_i) \geq R_{c-k}^{t+d}(1,1)$

Otherwise, the deal is worth buying without information if and only if

$CM(X_n) \geq R_c^t$

The recursion is reproduced here:

[Decision tree with multiple cost structures: accept ($M_{c-1}^{t+1} \cdot CM(X_n) \cdot W_0$), reject ($M_c^{t+1} \cdot W_0$), or seek information, which costs the fraction f and consumes k units of capacity and d periods of delay; given indication $I_i$, accept ($M_{c-1-k}^{t+1+d} \cdot CM(X_n|I_i) \cdot W_0 \cdot (1-f)$) or reject ($M_{c-k}^{t+1+d} \cdot W_0 \cdot (1-f)$). The node value is $M_c^t|X_n \cdot W_0$.]

Proposition 7.5.1a We first write the recursion as:

$M_c^t|X_n = \left\{M_{c-1}^{t+1} \cdot CM(X_n) \vee M_c^{t+1} \vee M_{c-k}^{t+1+d} \cdot (1-f) \cdot CM\!\left(\frac{CM(X_n|I_i)}{R_{c-k}^{t+d}(1,1)} \vee 1\right)\right\}$

where

$R_c^t(k, d+1) = M_c^{t+1}/M_{c-k}^{t+1+d}$

This reduces to

$M_c^t|X_n = M_c^{t+1}\left\{\frac{CM(X_n)}{R_c^t} \vee 1 \vee \frac{M_{c-k}^{t+1+d}}{M_c^{t+1}} \cdot (1-f) \cdot CM\!\left(\frac{CM(X_n|I_i)}{R_{c-k}^{t+d}(1,1)} \vee 1\right)\right\}$

$M_c^t|X_n = M_c^{t+1}\left\{iM_c^t(X_n) \vee \frac{M_{c-k}^{t+1+d}}{M_c^{t+1}} \cdot (1-f) \cdot iMI_{c-k}^{t+d}(X_n)\right\}$

$M_c^t|X_n = M_c^{t+1}\left\{iM_c^t(X_n) \vee \frac{iMI_{c-k}^{t+d}(X_n)}{R_c^t(k, d+1)} \cdot (1-f)\right\}$

Hence we seek information when

$\frac{iMI_{c-k}^{t+d}(X_n)}{R_c^t(k, d+1)} \cdot (1-f) \geq iM_c^t(X_n)$

$\Rightarrow iMI_{c-k}^{t+d}(X_n) \geq iM_c^t(X_n) \cdot \frac{R_c^t(k, d+1)}{1-f}$

After information is received, the deal is worth accepting if and only if:

$M_{c-1-k}^{t+1+d} \cdot CM(X_n|I_i) \cdot W_0 \cdot (1-f) \geq M_{c-k}^{t+1+d} \cdot W_0 \cdot (1-f)$

$M_{c-1-k}^{t+1+d} \cdot CM(X_n|I_i) \geq M_{c-k}^{t+1+d}$

$\Rightarrow CM(X_n|I_i) \geq R_{c-k}^{t+d}(1,1)$
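As a small hypothetical illustration of the two tests above (all numbers are made up, and CM is taken as an expectation), suppose information gathering consumes k = 1 unit of capacity and d = 2 periods of delay; the generalized threshold R_c^t(k, d+1) is then computed from the value multiples and plugged into Proposition 7.5.1a:

# Hypothetical illustration of Propositions 7.5.1a/b; the M values below are made up.
f, k, d = 0.05, 1, 2                 # information cost fraction, capacity use, delay (assumed)
M_c_t1    = 2.40                     # M_c^{t+1}
M_ck_t1d  = 2.10                     # M_{c-k}^{t+1+d}
M_c1k_t1d = 1.80                     # M_{c-1-k}^{t+1+d}

R_c_kd1 = M_c_t1 / M_ck_t1d          # R_c^t(k, d+1)      ~= 1.143
R_after = M_ck_t1d / M_c1k_t1d       # R_{c-k}^{t+d}(1,1) ~= 1.167 (accept-after-information threshold)

iM, iMI = 1.10, 1.45                 # hypothetical deal multiples without / with information
buy_information = iMI >= iM * R_c_kd1 / (1 - f)      # Proposition 7.5.1a test
print(buy_information)               # True: 1.45 >= 1.10 * 1.143 / 0.95 ~= 1.323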

Corollary 7.5.1: Identifying the Optimal Detector with Multiple Cost Structures Given the setup above, detector 1 will be optimal when:

$\frac{iMI_1}{iMI_2} \geq \frac{R_c^t(k_2, d_2+1)}{R_c^t(k_1, d_1+1)} \cdot \frac{1-f_2}{1-f_1}$

where $iMI_1 = iMI_{c-k_1}^{t+d_1}$ over $I^1$ and $iMI_2 = iMI_{c-k_2}^{t+d_2}$ over $I^2$. Otherwise, detector 2 will be optimal.

From Proposition 7.5.1a we have that the value multiple of the information alternative for detector 1, $A_1$, is worth:

$A_1 = \frac{M_c^{t+1} \cdot iMI_1}{R_c^t(k_1, d_1+1)} \cdot (1-f_1)$

We define $A_2$ in the same manner.

So, we prefer detector 1 over detector 2 when $A_1 > A_2$, or

$\frac{M_c^{t+1} \cdot iMI_1}{R_c^t(k_1, d_1+1)} \cdot (1-f_1) \geq \frac{M_c^{t+1} \cdot iMI_2}{R_c^t(k_2, d_2+1)} \cdot (1-f_2)$

$\Rightarrow \frac{iMI_1}{iMI_2} \geq \frac{R_c^t(k_2, d_2+1)}{R_c^t(k_1, d_1+1)} \cdot \frac{1-f_2}{1-f_1}$

A1.3.4 Section 7.5.2 Extensions – Decision Reversibility The recursion is reproduced here:

[Decision tree with an option on deal Z: accept $X_n$ and buy the option (endpoint $M_c^{t+1}(X_n) \cdot CM(X_n) \cdot W_0 \cdot (1-f)/CM(Z)$), accept without the option (endpoint $M_{c-1}^{t+1}(Z) \cdot CM(X_n) \cdot W_0$), or reject (endpoint $M_c^{t+1}(Z) \cdot W_0$); the node value is $M_c^t(Z)|X_n \cdot W_0$.]

Proposition 7.5.2: Optimal Allocation Policy with an Option When offered a deal $X_n$ with an option on deal $Z$, the decision maker should accept the deal $X_n$ and buy an option on it if and only if:

$iMO_c^t(X_n, Z) \geq iC_c^t(X_n, Z) \cdot \frac{CM(Z)}{1-f}$

Otherwise, the decision maker should accept if and only if:

$CM(X_n) > R_c^t(1,1,Z)$

We have the following recursion:

$M_c^t(Z)|X_n = \left\{\frac{M_c^{t+1}(X_n) \cdot CM(X_n)}{CM(Z)} \cdot (1-f) \vee M_{c-1}^{t+1}(Z) \cdot CM(X_n) \vee M_c^{t+1}(Z)\right\}$

Dividing through by $M_{c-1}^{t+1}(Z) \cdot CM(X_n)$, we buy the option when:

$\frac{M_c^{t+1}(X_n)}{M_{c-1}^{t+1}(Z) \cdot CM(Z)} \cdot (1-f) \geq iC_c^t(X_n, Z)$

$\frac{M_c^{t+1}(X_n)}{M_{c-1}^{t+1}(Z)} \geq iC_c^t(X_n, Z) \cdot \frac{CM(Z)}{1-f}$

$\Rightarrow iMO_c^t(X_n, Z) \geq iC_c^t(X_n, Z) \cdot \frac{CM(Z)}{1-f}$

Otherwise, the deal is worth accepting if and only if:

$M_{c-1}^{t+1}(Z) \cdot CM(X_n) \cdot W_0 \geq M_c^{t+1}(Z) \cdot W_0$

$CM(X_n) \geq \frac{M_c^{t+1}(Z)}{M_{c-1}^{t+1}(Z)}$

$CM(X_n) \geq R_c^t(1,1,Z)$

A1.3.5 Section 7.5.3 Extensions – Probability of Knowing Detectors The recursion is reproduced here.

Recursion Equation Define the following terms as before:

$iM_c^t(X_n) = \left\{\frac{CM(X_n)}{R_c^t} \vee 1\right\}$

$iMPI_c^t(X_n) = CM(iM_c^t(X_n|I_i))$

[Decision tree with a probability-of-knowing detector: not seeking information leads to accept ($M_{c-1}^{t+1} \cdot CM(X_n) \cdot W_0$) or reject ($M_c^{t+1} \cdot W_0$). Seeking information costs the fraction f and yields clairvoyance with probability p, after which, given indication $I_i$, the decision maker accepts ($M_{c-1}^{t+1} \cdot CM(X_n|I_i) \cdot (1-f) \cdot W_0$) or rejects ($M_c^{t+1} \cdot (1-f) \cdot W_0$); with probability 1 - p there is no clairvoyance and the decision maker accepts ($M_{c-1}^{t+1} \cdot CM(X_n) \cdot (1-f) \cdot W_0$) or rejects ($M_c^{t+1} \cdot (1-f) \cdot W_0$). The node value is $M_c^t|X_n \cdot W_0$.]

By dividing all the end terms by $M_c^{t+1} \cdot W_0$, the recursion above reduces to:

[The same tree with node value $M_c^t|X_n/M_c^{t+1}$ and endpoints $CM(X_n)/R_c^t$ and 1 without information, and $CM(X_n|I_i) \cdot (1-f)/R_c^t$, $CM(X_n) \cdot (1-f)/R_c^t$, and $(1-f)$ after seeking information.]

And finally reduces to:

[A choice between $iM_c^t(X_n)$ (not seeking information) and the information alternative, which is worth $(1) = iMPI_c^t(X_n) \cdot (1-f)$ with probability p and $iM_c^t(X_n) \cdot (1-f)$ with probability 1 - p.]

Proposition 7.5.3a: Optimal Information-Gathering Policy with Probability of Knowing Detectors Given a detector defined as above with a probability of knowing p and cost fraction f, the decision maker should buy information if and only if:

$u\!\left(\frac{iMPI_c^t(X_n)}{iM_c^t(X_n)}\right) - 1 \geq \frac{u\!\left(\frac{1}{1-f}\right) - 1}{p}$

Proposition 7.5.3b: Optimal Allocation Policy with Probability of Knowing Detectors If clairvoyance is received, the decision maker should buy the deal if and only if:

$CM(X_n|I_i) \geq R_c^t$

Otherwise, if no clairvoyance is received or the decision maker did not buy information, then the decision maker should buy the deal if and only if:

$CM(X_n) \geq R_c^t$

We seek information when seeking information provides higher value than not. We state this relationship in terms of u-values:

$u(iM_c^t(X_n)) \leq p\, u(iMPI_c^t(X_n) \cdot (1-f)) + (1-p)\, u(iM_c^t(X_n) \cdot (1-f))$

For simplicity, we drop the arguments of iM and iMPI:

$(iM)^\lambda \leq p\,(iMPI \cdot (1-f))^\lambda + (1-p)(iM \cdot (1-f))^\lambda$

$\left(\frac{iM}{1-f}\right)^\lambda \leq p\,(iMPI)^\lambda + (1-p)(iM)^\lambda$

$\left(\frac{iM}{1-f}\right)^\lambda \leq p\,(iMPI)^\lambda - p\,(iM)^\lambda + (iM)^\lambda = p[(iMPI)^\lambda - (iM)^\lambda] + (iM)^\lambda$

$\left(\frac{1}{1-f}\right)^\lambda \leq p\left[\left(\frac{iMPI}{iM}\right)^\lambda - 1\right] + 1$

$\left(\frac{1}{1-f}\right)^\lambda - 1 \leq p\left[\left(\frac{iMPI}{iM}\right)^\lambda - 1\right]$

$\Rightarrow \left(\frac{iMPI}{iM}\right)^\lambda - 1 \geq \frac{F^\lambda - 1}{p}$, where $F = 1/(1-f)$.

Corollary 7.5.3: Identifying the Optimal Detector with Probability of Knowing Detectors Given the setup above, detector 1 will be optimal when

$\frac{u\!\left(\frac{1}{1-f_1}\right) - 1}{p_1} < \frac{u\!\left(\frac{1}{1-f_2}\right) - 1}{p_2}$

Otherwise, detector 2 will be optimal. In this setup, this optimality is myopic. So if we have multiple irrelevant detectors, we use them in increasing order of $[u(1/(1-f)) - 1]/p$ (that is, in decreasing order of their benefit).

Directly from Proposition 7.5.3a we have that the benefit of detectors is inversely related to $[u(1/(1-f)) - 1]/p$.

In the case with two detectors, we have the following setup (after dividing through by $W_0$):

[Two-detector decision tree: detector 1 is used first with cost fraction $f_1$ and probability of knowing $p_1$; if no clairvoyance is received, detector 2 (cost fraction $f_2$, probability of knowing $p_2$) may be used next, so the endpoints reached after both detectors carry the factor $(1-f_1)(1-f_2)$.]

In the same manner as before, we reduce this to the following:

[The same tree reduced by dividing through by $M_c^{t+1}$: the no-information endpoints become $CM(X_n)/R_c^t$ and 1; the detector-1 endpoints become $CM(X_n|I_i)/(R_c^t F_1)$, $iMPI_c^t(X_n)/F_1$, and $1/F_1$; the detector-2 endpoints become $CM(X_n|I_i)/(R_c^t F_1 F_2)$, $CM(X_n)/(R_c^t F_1 F_2)$, and $1/(F_1 F_2)$, where $F = 1/(1-f)$, $(1) = iMPI_c^t(X_n)/(F_1 F_2)$, and $(2) = iM_c^t(X_n)/(F_1 F_2)$.]

Note that if a detector does not satisfy

$\left(\frac{iMPI}{iM}\right)^\lambda - 1 \geq \frac{F^\lambda - 1}{p}$

then it is useless. Hence we consider the case where both detectors satisfy the above equation. This reduces the recursion to:

[The reduced two-detector tree in which clairvoyance, whenever received, is acted upon: with probability $p_1$ the value is $iMPI_c^t(X_n)/F_1$; otherwise, with probability $p_2$ it is $(1) = iMPI_c^t(X_n)/(F_1 F_2)$ and with probability $1-p_2$ it is $(2) = iM_c^t(X_n)/(F_1 F_2)$, where $F = 1/(1-f)$.]

So, the value of the deal flow when using detector 1 before detector 2 is:

$u(M_c^t|X_n/M_c^{t+1}) = p_1\, u(iMPI_c^t(X_n)/F_1) + (1-p_1)\left[p_2\, u(iMPI_c^t(X_n)/(F_1 F_2)) + (1-p_2)\, u(iM_c^t(X_n)/(F_1 F_2))\right]$

Again, we drop the arguments of iMPI and iM for clarity to get

$u(M_c^t|X_n/M_c^{t+1}) = p_1\, u(iMPI/F_1) + (1-p_1)\left[p_2\, u(iMPI/(F_1 F_2)) + (1-p_2)\, u(iM/(F_1 F_2))\right]$

$u(M_c^t|X_n/M_c^{t+1}) = p_1\left(\frac{iMPI}{F_1}\right)^\lambda + (1-p_1)\left[p_2\left(\frac{iMPI}{F_1 F_2}\right)^\lambda + (1-p_2)\left(\frac{iM}{F_1 F_2}\right)^\lambda\right]$

$u(M_c^t|X_n/M_c^{t+1}) = p_1\left(\frac{iMPI}{F_1}\right)^\lambda + p_2\left(\frac{iMPI}{F_1 F_2}\right)^\lambda - p_1 p_2\left(\frac{iMPI}{F_1 F_2}\right)^\lambda + \left(\frac{iM}{F_1 F_2}\right)^\lambda - p_1\left(\frac{iM}{F_1 F_2}\right)^\lambda - p_2\left(\frac{iM}{F_1 F_2}\right)^\lambda + p_1 p_2\left(\frac{iM}{F_1 F_2}\right)^\lambda$

Denote this case by (I) and the case with detector 2 before detector 1 by (II). In order to use detector 1 before detector 2, we must satisfy

$u(I) - u(II) \geq 0$

Hence, after eliminating canceling terms,

$p_1\left(\frac{iMPI}{F_1}\right)^\lambda - p_2\left(\frac{iMPI}{F_2}\right)^\lambda + (p_2 - p_1)\left(\frac{iMPI}{F_1 F_2}\right)^\lambda \geq 0$

We multiply throughout by $\left(\frac{F_1 F_2}{iMPI}\right)^\lambda$ to get

$p_1 (F_2)^\lambda - p_2 (F_1)^\lambda + (p_2 - p_1) \geq 0$

And finally,

$\frac{\left(\frac{1}{1-f_2}\right)^\lambda - 1}{p_2} \geq \frac{\left(\frac{1}{1-f_1}\right)^\lambda - 1}{p_1}$
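A minimal sketch of this ordering rule, assuming a power u-curve $u(x) = x^\lambda$ and hypothetical detector parameters (all values below are made up): detectors are applied in increasing order of the index $(F^\lambda - 1)/p$, where $F = 1/(1-f)$.

# Ordering hypothetical detectors by the index (F**lam - 1) / p, per Corollary 7.5.3;
# the detector with the smallest index is used first.  Power u-curve u(x) = x**lam assumed.
lam = 0.6
detectors = {"detector 1": (0.10, 0.7),   # (cost fraction f, probability of knowing p) - made up
             "detector 2": (0.04, 0.3),
             "detector 3": (0.08, 0.5)}

def index(f, p):
    F = 1.0 / (1.0 - f)
    return (F ** lam - 1.0) / p

order = sorted(detectors, key=lambda name: index(*detectors[name]))
for name in order:
    print(name, round(index(*detectors[name]), 4))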


Appendix A2 – A Generic Decision Diagram Example

A2.1 Introduction Within a Venture Capital context, early stage decisions are difficult to make for several

reasons. The uncertainties are rather complex and span different areas. To account for

biases, firms resort to the use of due diligence lists. The lists fall short of specifically

accounting for uncertainties and the connections between them. Decision diagrams

accurately capture the effects of uncertainties, but they are not commonplace, due to the

perceived difficulty of applying them.

This appendix gives a sample generic decision diagram for Venture Capitalists evaluating an

Internet consumer business startup. We faced two challenges when developing the diagram.

The first is the trade-off between simplicity and generality. The second is to define the nodes

so as to pass the clarity test. As stated in Chapter 5, Richman (2009) vetted this model with

20 Venture Capitalists from Silicon Valley.

The rest of the appendix is organized as follows. In Section A2.2, we present the generic diagram and discuss its setup and frame. We elaborate on the different nodes in detail in Section A2.3. We discuss how to incorporate a higher level of detail into the diagram in Section A2.4. In Section A2.5 we present some possible extensions.

A2.2 Generic Diagram Setup and Frame

Decision Maker Our decision maker is a potential investor in the company. His/her involvement is limited to

funding and influencing the hiring of the management team. The decision maker is taken to

have no influence on the business decisions of the company.

Frame Some firms only evaluate companies in specific areas or those that are recommended by a

specific group. The frame of the decision allows the decision maker to filter the opportunities

under discussion based on her preferences. Hence, outside of the diagram, the frame should

be well defined within the firm to incorporate their policies.


Setup The diagram presented below gives a high-level representation to allow for a quick analysis. More detail can be added for the specific problem under consideration. We incorporate time evolution by representing each node as a time series. We also allow feedback through the 'Observation' node. This node captures all that is observable, and it feeds, with a single-step delay, into some of the other nodes in the diagram.

We divide the nodes into five groups, namely, Initial Analysis, Execution, Results, Liquidation,

and Additional Considerations. The grouping is based on the chronological order of

assessments and exists solely to simplify the representation of the diagram.

We adopt the notation discussed in Howard (1989) to represent uncertainties, decisions, and

evocative nodes. We represent delay with a ‘Z’ on the arrow. The color-coding is there for

ease of identification only.

Diagram

Figure 71 - Example of a Generic Decision Diagram

[The diagram shows the 'Invest?' decision and nodes grouped under INITIAL, EXECUTION, RESULTS, and LIQUIDATION: Market Size & Growth, Technology, Team, Possible Applications, Competitors, Barriers to Entry, Business Model, Market Share & Growth, Hiring, Partnerships, Revenue, Cost, Profit, Cash Balance, Future Financing, Exit, Dilution, Value, and an Observables node that feeds back with a single-step delay (marked 'Z').]


where

Figure 72 - Generic Diagram Node Key

[Node key: uncertainty, decision, value, and observation nodes; arrow key: possible dependence, optional assessment, and feedback assessment (marked 'Z').]

The ‘Invest’ decision node represents the alternatives available to the decision maker. These

include the decision whether to invest or not, and if so, how much to invest and on what

conditions. The conditions that can be made are on the choice of the team and on the

milestones set on future investment decisions. The decision node is also a time series.

Invest The decision maker decides how much to invest, if any, and on what conditions. The

conditions include influencing the company’s team and setting conditions for future

investments if appropriate.

A2.3 Generic Diagram Node Definitions

First Step: Initial Analysis Here we consider the aspects of the investment that are usually first presented to the

decision makers. Here we consider three nodes, namely, Team, Market Size and Growth, and

Technology. The following is a detailed discussion of the nodes.

Team Given a set of characteristics X, Y, and Z that the firm deems important in the team, this node

gives the distribution over the decision maker’s belief of whether his/her partners will find

the team to be X, Y, and Z and to what degree. This node is conditioned on the decision

maker’s investment decision.



The management should consider the characteristics that matter most for the business at

hand. An example might be their technical know-how in the case of a proprietary business.

This distinction is influenced by the ‘Invest’ decision node. Some of the main considerations

for assessing the team, important at this node, are their capabilities, prior experience,

education, and personalities.

Market Size and Growth This node gives a joint distribution over the size and growth of the product’s addressable

market. All the potential applications of the product should be considered when assessing

the market size and growth. Also, the possibility of applications newer than those currently

considered should be included. The timing of the company and the current trends are

important considerations when assessing the size and growth of the available market.

Technology This node gives a distribution over the features of the technology to be developed. This is

conditioned on the Team node.

Failure to develop the technology is incorporated as a probability of failure within this node.

Hiring, especially hiring difficulties, should be considered when assessing how the technology evolves over time, so we added it as an optional node to emphasize its importance. The

conditioning on the team includes the know-how and training of the team members.

Other considerations include the state of the art of the technology used in the product, the

relevant patents, and the scalability and maintainability of the technology. The code

platform, and whether it is open-source or not, is relevant to how the team develops the

technology.

Second Step: Execution Here we consider the company’s business model, the competitive landscape, and the

company’s market share and number of users.

Competitors This node gives a distribution over the set of possible actions by competitors. The distinction

over competitors evolves with time given the observable aspects of the company. This node

also includes the possibility of new competitors emerging. Barriers to entry should be

considered when assessing this node. Barriers to entry include patents and the cost and


delay in customer acquisition. We also included this as an optional node to emphasize its

importance.

Business Model This node gives a distribution over the possible variations of the business model that the

company will apply to its customers given the actions of its competitors.

There are uncertainties around people’s acceptance of the business model and of the prices

the company can charge. This node also considers the effectiveness of the management

team in evolving their pricing to suit the market. The following are some of the

considerations around the Business Model.

Market Share & Growth This node gives the distribution over the company’s market share and its growth rate. This is

conditioned on the business model, team, technology, and competitors.

The market share node can be substituted with a node for the number of users, which will

also be conditioned on the Market Size. This node will be the most difficult to assess for

many reasons. For one, it is conditioned on several other nodes and the assessments

required might be too many for the decision maker to comprehend. This node is a good

example of where the inclusion of a more detailed layer is important.

Third Step: Results This section tracks the financial results of the company. It includes three nodes: revenue, cost, and profit.

Revenue This node gives the distribution over the company’s revenue given the business model,

market size & growth, and market share & growth.

Cost This node gives the distribution over the company’s cost given the team, market size &

growth, and market share & growth. When assessing cost, the decision maker should

consider hiring costs. The uncertainties around the variable and fixed costs incurred by the

company should also be included here. Other considerations include the costs to acquire and

maintain a customer.


Profit This node is a deterministic function of the company’s profit given its revenue and costs.

Fourth Step: Liquidation

Exit This node gives the distribution over the possible exit scenarios for the company given its

level of profit. This node can be conditioned on the market share and growth instead of

profit for early-stage exit. The exit strategy includes the variations on the exit terms

acceptable to the company.

Future Financing This node gives the distribution over the available future financing (in dollars) given the

manner in which the company evolves. Macroeconomic considerations such as the

availability of capital and the credit market should be included when assessing this

distinction. The team’s financial strength and its financial network are also relevant to the

availability of future financing.

Cash Balance This node gives the distribution over the cash balance of the company conditioned on its

profit and future financing. The cash balance affects the valuation and exit strategy available

to the company. If the company is pressed for cash, they will be at a disadvantage when

negotiating the valuation.

Dilution This node gives the distribution over the effects of future financing on the investor’s share of

the company. This node reflects the price of the future financing obtained by the company.

Value and Feedback In this section, we discuss the value of this company to the decision maker. We also describe

the feedback in the diagram through the observation node.

Value This node gives the distribution of the decision maker’s financial return (in dollars) from this

company, which is conditioned on the company’s profit, cash balance, and exit strategy. It is

also conditioned on the dilution incurred on the decision maker’s interest in the company.


This node also includes the assessed value (in dollars) of the decision maker’s partnerships in

other firms.

Partnerships This node gives the distribution over the possible relationship scenarios with other firms

through this company and their assessed value (in dollars).

Observations This is a deterministic node that captures all the observable factors within this diagram. In

order to simplify this diagram we omitted the arrows from the observable nodes to the

observations node. Any node that can be observed after time as passed is connected to this

node. In this way, this node gives a summary report on all that was observable about the

evolution of the company.

Feedback is modeled by conditioning some of the distinctions to the observations node in

the previous time period. We can extend this view by conditioning the nodes on different

subsets of the observations by having multiple observation nodes that are connected to

different distinctions.

A2.4 Deeper Layers The diagram can be made more comprehensive and/or specific by adding new layers of

detail. We recommend the firm uses the first layer described above for quick analysis and

develops a second layer for in-depth analysis. The second layer can also cater to the specific

preferences of the firm. A third layer can be added to include the specifics of the venture

under consideration.

In the following we discuss a more detailed representation of the relationship between

competition and the market share. Such a representation can be used to build a more

detailed decision diagram.


Competitors The competitive landscape can be represented in the following Venn diagram:

Figure 73 - Venn Diagram of the Competitive Landscape

We represent two metrics: the overlap between our own addressable markets and those of

each competitor, and the competitor’s captured market from each overlap. In the diagram

above, the background orange box is the company’s addressable market. We represent how

close the offering of a competitor is to that of the company by the overlap of their

addressable markets. The silver box, for example, shows the overlap between competitor C

and the company. We can see that competitor C’s offering is distant from that of A, as its

addressable markets do not overlap. Within each of the competitors’ addressable markets

we represent their market share within that overlap with an orange box outline. This metric

represents the strength of the competitor’s market presence.

Following this representation we can elaborate the competitors’ node in the decision

diagram above into the cluster shown in Figure 74.



Figure 74 - Cluster Diagram of the Competitors’ Node

[The cluster comprises Actions by Current Competitors, Actions by New Competitors, Competitors’ Addressable Market, and Competitors’ Captured Market, feeding the Competitors node, with Barriers to Entry and Observables as related nodes.]

Market Share Here we follow the same representation as that shown in Figure 73. We note three distinct

areas. First, there is an area of the market that is only addressed by us. This includes any

markets created by our company. The second type is the share of the market that is

addressed by other competitors but still not captured by anyone. For a growing market, this

includes the customers who are yet not aware of the offerings. It also includes new entrants

to the market (teenagers above a certain age, etc.). The third type, the most common, is the

market that is addressable by our company’s product but is already captured by another

competitor. Figure 75 illustrates the different types.

Figure 75 - Types of Market Share

[The figure marks the three areas (Area 1, Area 2, and Area 3) on the competitive landscape of Competitors A, B, and C.]

The market share can be represented by a decision diagram that includes the conversion

ratios of each of these areas. The following diagram uses only one conversion ratio for each


area. However, one can elaborate the diagram even further by specifying a conversion ratio

to each specific combination of the competitors.

Figure 76 - Example of an internal diagram for Market Share

[The internal diagram links Technology, Team, Competitors, Business Model, and Conversion Cost to the conversion ratios from Areas 1, 2, and 3, which together determine Market Share.]

The conversion cost node represents the cost incurred by a customer to switch from a

competitor’s solution to that of our company. This cost includes the financial price and the

time delay.
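One possible way to operationalize this internal diagram is sketched below. This is a hypothetical illustration (it is not part of the vetted model, and all numbers and the single-ratio-per-area simplification are assumptions): the expected market share is the sum, over the three areas of Figure 75, of each area's size weighted by its assessed conversion ratio.

# Hypothetical sketch: expected market share from the three areas of Figure 75,
# each with its own assessed conversion ratio.  All numbers are made up.
areas = {                      # fraction of the addressable market in each area
    "area 1 (only we address)":          0.20,
    "area 2 (addressed, not captured)":  0.35,
    "area 3 (captured by a competitor)": 0.45,
}
conversion = {                 # assessed conversion ratio per area (e.g., reflecting conversion cost)
    "area 1 (only we address)":          0.60,
    "area 2 (addressed, not captured)":  0.30,
    "area 3 (captured by a competitor)": 0.10,
}
market_share = sum(areas[a] * conversion[a] for a in areas)
print(round(market_share, 3))  # 0.20*0.60 + 0.35*0.30 + 0.45*0.10 = 0.27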

A2.5 Extensions In this section we discuss two extensions to the investment problem that are based on the

generic diagram. The first relates to the incorporation of future decisions, while the second

addresses the study of investment milestones.

Future Decisions It is common in VC practice to divide the investment in a venture into multiple phases. In this

way, the decision makers are able to track the progress of the company before committing

further funds. This strategy can be analyzed as an options problem. Consider the base case

in which the investor is asked to commit, in a single decision, all the funds needed to take the

company to an exit. The decision maker finds value in the option to delay part of the

investment to a later period, thereby obtaining some information about the venture before

committing the rest of the funds. Investment rounds can be thought of as options on future

investment decisions and their value can be calculated by following the method in Howard

(1995).


Milestones Venture firms usually set certain milestones with each investment phase. These milestones

are commonly set on the basis of common practice or to alleviate a certain concern. As an

extension to this work, milestones can be studied as imperfect tests. Strict milestones can be

represented as tests with high specificity and low sensitivity. Relaxed milestones are the

opposite; they have low specificity and high sensitivity. The tradeoffs between the specificity

and sensitivity of the test might shed some light on the factors that come into play in

designing milestones.


Appendix A3 – Venture Capital Valuation In this appendix we give a brief introduction to Venture Capital valuation in the literature for

practitioners and from direct interviews. From the literature, we give an overview of

valuation models and then focus on Venture Capital valuation as discussed by the HBR article

The Venture Capital Method. We also interviewed several Venture Capitalists in Silicon

Valley. While this overview does not rise to meet academic requirements, we believe the

methods we assessed give an interesting insight on actual Venture Capitalists’ decision

making.

A3.1 Literature Review In the following we give a high-level overview of valuation models and then focus on the

valuation within a Venture Capital context.

A3.1.1 Valuation Overview We consider three, not mutually exclusive, general frameworks for valuation. First, and most

notable, is the discounted cash flow model. The second is the relative valuation and the third

is the option pricing valuation framework. We refer the interested reader to Damodaran

(2001) and (2002), Copeland et al. (2005), and Cornell (1993).

The discounted cash flow framework, in short, is based on adjusting the estimated value of

future cash flow projections downwards by an appropriate discount rate. The discount rate

should reflect the characteristics of the investment, including, mainly, its ‘riskiness.’ The

relative valuation model derives the value of an investment from the known market value of

comparable investments. In this framework, the analyst uses specific, standard measures of

value, including the price/earning ratio. The option pricing framework is similar to the DA

understanding of options and I will use it when applicable.

In general, the valuation of a firm considers the following aspects. First, the projections of

the future cash flow of the company are set. These projections are point estimates of the

future that are rationalized through a deterministic model. At the end of the projection

period, a terminal value for the investment is projected. Then the appropriate discounted


cash flow model is chosen given the specifics of the investment. After that, the discount rate

appropriate for this investment is determined, taking into account its individual

characteristics. Finally, other characteristics of the investment that are not taken into

account by the projected cash flows are considered. These characteristics may include the

marketability (liquidity) of the investment, the managerial flexibility, and the control rights

associated with the investment.

Relative valuation determines the value of an asset by comparing it to similar assets that

have an established market price. Similarity is considered largely in terms of risk and return.

Other factors include sector, liquidity, seasonality, and business models.

Option pricing models define the investments in terms of options and then evaluate the

options in terms of market prices. These are best suited to specific investment opportunities

that have embedded or real options.

A3.1.2 The Venture Capital Method

Overview The Venture Capital Method (VCM) was introduced by Sahlman (1986) and focuses on

determining the appropriate discount rate to be used by the investor and the best way to use

this discount rate to determine what percentage the investor acquires. The latter part is

mainly algebraic manipulation that makes sure that the investor’s share in the company is

sufficient to entail a satisfactory return and/or degree of control with a minimum of risk.
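To make that algebra concrete, the following is a hedged worked sketch with hypothetical numbers (not taken from Sahlman's paper): the required ownership share is the future value of the investment, compounded at the investor's required rate of return, divided by the projected terminal value of the company at exit.

# Hypothetical illustration of the Venture Capital Method's share algebra.  All numbers are made up.
investment   = 2_000_000      # dollars invested today
discount     = 0.50           # required annual rate of return
years        = 5              # years until the projected exit
terminal_val = 40_000_000     # projected terminal value of the company at exit

required_share = investment * (1 + discount) ** years / terminal_val
print(f"required ownership share: {required_share:.1%}")   # 2M * 1.5^5 / 40M ~= 38.0%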

Projections for Future Cash Flow VCM avoids this part of the analysis by assuming that the investor takes future cash flow as a

given and actually recommends asking the entrepreneur to give his estimate based on the

best-case scenario.

Setting the Discount Rate The main part of VCM is devoted to determining the appropriate discount rate to apply to

the risky investment. VCM suggests the following components that determine the discount

rate:

Base Rate of Return: This is the base rate available from risk-free investments. It

compensates for inflation and is usually set by the government bonds.


Risk Premium: Here the VC considers a premium for both unsystematic risk and systematic

risk. Unsystematic risk, by the VCM definition, is the non-market risk, which includes all the

uncertainties that are specific to the investment and are irrelevant to the market conditions.

VCM is based on the premise that the investor can diversify such risks and hence should not

require a premium for taking on such risks. Systematic risk is the market risk, which includes

all the uncertainties that are relevant to market conditions. Startup-like investments are

usually highly vulnerable to market conditions, and therefore VCM assigns them a high risk

parameter.

Liquidity Premium: The marketability of equity in startups and privately held businesses is

limited due to a number of reasons, including legal restrictions. VCM accounts for this

limitation by requiring a higher discount rate.

Value Added Premium: This premium is relevant when the investors are actively engaged in

the business. VCM suggests requiring an increase in the discount rate that accounts for the

time the investors spend with the company.

Cash Flow Adjustment: Here the VCM takes into account the prior experience of the

investor. VCM adjusts the terminal value of the investment by taking into account how

similar investments evolve. To illustrate, VCM states that the investor usually faces three

prospects, namely, success, lateral movement, and loss. Here, ‘success’ means that the

company will meet or exceed its expectations; ‘lateral movement’ means that the investor

will only be able to retrieve his investment; and, finally, ‘loss’ means that the company is

liquidated. Under VCM, the investor treats the projections as the expected value over the three prospects.
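The rate build-up and the three-prospect adjustment amount to a short calculation; the component rates, probabilities, and values below are assumptions chosen only to illustrate the mechanics, not rates recommended by VCM.

# Illustrative VCM-style discount-rate build-up and cash flow adjustment.

base_rate          = 0.04   # assumed risk-free (government bond) rate
systematic_premium = 0.16   # assumed premium for market (systematic) risk
liquidity_premium  = 0.10   # assumed premium for limited marketability
value_added        = 0.05   # assumed premium for the investor's time

discount_rate = base_rate + systematic_premium + liquidity_premium + value_added

# Cash flow adjustment: treat the entrepreneur's projection as the 'success'
# outcome and take the mean over the three prospects.
projected_terminal_value = 100.0   # $M, entrepreneur's best-case projection
prospects = {                      # prospect: (probability, terminal value $M)
    "success": (0.3, projected_terminal_value),
    "lateral": (0.4, 10.0),        # investor only recovers the investment
    "loss":    (0.3, 0.0),         # company is liquidated
}
expected_terminal_value = sum(p * v for p, v in prospects.values())
print(round(discount_rate, 2), round(expected_terminal_value, 1))  # 0.35 34.0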

Estimating the Terminal Value
VCM suggests that the investor use the price-to-earnings (PER) ratio to estimate the value of

the investment. The PER that is applied to a given investment can be approximated from

similar publicly traded firms. VCM also suggests checking the following points when

estimating the PER for the investment (taken directly from the paper):

1. Are the company’s revenue forecasts consistent with:

a. The overall industry projections?

b. The level of entry and competition?

c. The internal ability of the company to sustain the growth rate?


2. Are the company’s margin forecasts consistent with:

a. The level of entry barriers now and expected in the future?

b. The relative bargaining power of suppliers and customers?

c. The threat of substitutes?

d. The intensity and form of current and projected competition?

3. Is the terminal valuation, given agreement on terminal sales and margin forecasts,

consistent with the current level of valuations in the market for:

a. Liquidation?

b. Initial public offering?

c. Acquisition by another company?

It should be noted, however, that VCM does not suggest a framework for incorporating these

questions into the estimation of the PER.
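Putting the pieces together, the sketch below shows the algebraic step mentioned in the overview: discounting a PER-based terminal value and backing out the ownership fraction the investor must acquire. The PER, earnings projection, investment, rate, and holding period are all assumed for illustration.

# VCM-style share calculation (all inputs are illustrative assumptions).

per               = 15.0   # price-to-earnings ratio from comparable public firms
terminal_earnings = 8.0    # $M, projected earnings in the exit year
investment        = 5.0    # $M invested today
discount_rate     = 0.35   # e.g. built up as in the previous sketch
years_to_exit     = 5

terminal_value = per * terminal_earnings                          # value at exit
present_value  = terminal_value / (1 + discount_rate) ** years_to_exit

# Ownership fraction required so that the investor's share of the terminal
# value returns the investment compounded at the required rate.
required_fraction = investment * (1 + discount_rate) ** years_to_exit / terminal_value
# Equivalently: required_fraction = investment / present_value

print(round(present_value, 2), round(required_fraction, 3))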

A5.1.3 The First Chicago Method (FCM)
The VCM paper also describes a ‘state-of-the-art’ method that differs from the one above in that it explicitly accounts for the different scenarios discussed in the ‘Cash Flow Adjustment’ step. Therefore, with FCM the investor lays down the cash

flow projections associated with the different scenarios and their respective probabilities. As

expected, the investor will reduce the required discount rate after explicitly accounting for

some of the risk associated with the investment.

FCM allows the investor to account for some of the specifics of the company. For example,

with FCM the investor can differentiate between companies with different liquidation rates

in the loss scenario.
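A minimal sketch of the FCM calculation follows, with hypothetical scenario cash flows and probabilities; the discount rate is lower than in the VCM sketches above because scenario risk is now modeled explicitly.

# First Chicago Method sketch: probability-weighted present values across
# explicit scenarios (all numbers are illustrative assumptions).

discount_rate = 0.20   # lower than the single-scenario VCM rate

def present_value(cash_flows, rate):
    """PV of cash flows received at the end of years 1..n."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

scenarios = {   # scenario: (probability, cash flows to the investor, $M)
    "success": (0.3, [0.0, 0.0, 0.0, 0.0, 60.0]),
    "lateral": (0.4, [0.0, 0.0, 5.0, 0.0, 0.0]),
    "loss":    (0.3, [0.0, 1.0, 0.0, 0.0, 0.0]),   # partial liquidation value
}

fcm_value = sum(p * present_value(cfs, discount_rate)
                for p, cfs in scenarios.values())
print(round(fcm_value, 2))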

A5.2 The ‘Real VC’ View
The following is a summary of the thoughts of Venture Capitalists based on conversations

with a number of representative VC. The discussions covered four aspects: their preferences, their investment profile, due diligence, and valuation.

Preferences
Investors are either market first or team first.


Market First
These VC believe that great markets are the key to success and that such markets can accommodate bad teams and command great talent.

Team First
These VC believe that great people are the key to success. In their belief, great people can do

great things, create great ideas, and drive great markets.

Investments Profile
VC focus on investments that can go public (IPO); such investments should satisfy the following criteria after about five years. The investments should have the potential to achieve

revenues between $50M and $100M. They should also exhibit annual growth rates of 50%-

100%. In this view, the investment decisions are binary; deals either fit the profile or they do

not.

VC classify opportunities into three categories: those with IPO potential, those with acquisition potential, and those that will be “out”. The classifications are based on the annual

revenues and growth rates of the opportunities. The following graph shows the different

possible scenarios:

Figure 76 - Venture Capital Opportunity Classifications. The figure plots annual revenue (with reference levels at $50M and $100M) against annual growth rate (with reference levels at 50% and 100%); arrows show the passage of time along three trajectories labeled IPO, Acquisition, and Out. Only firms that are believed to follow the green curve are considered as investment options by VC.
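The profile above reduces to a simple threshold test. The sketch below encodes one plausible reading of it; the revenue and growth thresholds come from the figures quoted in this section, while the rule separating "acquisition potential" from "out" is a simplification rather than a rule stated by the interviewed VC.

# A rough threshold test for the investment profile described above.
# The $50M revenue and 50% growth thresholds come from the text; the rule
# separating "acquisition potential" from "out" is a simplification.

def classify(projected_revenue_m, annual_growth):
    """Classify an opportunity roughly five years out."""
    if projected_revenue_m >= 50 and annual_growth >= 0.5:
        return "IPO potential"          # fits the profile: a candidate investment
    if projected_revenue_m >= 50 or annual_growth >= 0.5:
        return "acquisition potential"
    return "out"

print(classify(80, 0.7))   # IPO potential
print(classify(60, 0.2))   # acquisition potential
print(classify(10, 0.1))   # out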


Due Diligence
The goal of this part of the process is to decide whether a certain investment fits the VC’s

criteria or not. They do this by studying the market, management team, value proposition,

and company’s competence. VC leverage their area expertise and connections to be able to

assess the value of the deals and add value to the ones in which they decide to invest.

Valuation
The VC we interviewed did not put much emphasis on valuation. In their view, if they believe

an investment has what it takes to reach the upper-right quadrant of Figure 76, then the amount of investment required is irrelevant. To decide on the investment, they consider three factors.

First, they consider the amount of investment the company needs to achieve its goals and

mitigate the risks across the multiple financing rounds. Second, they set the valuation in a way that allows them to acquire a large enough percentage of the company to justify their commitment. Finally, they require the option to maintain the same percentage in future rounds.
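The third factor reduces to a small piece of arithmetic: to keep the same ownership percentage, the investor funds that percentage of each new round. The ownership figure and round size below are assumed.

# Pro-rata arithmetic for maintaining an ownership percentage across a new
# financing round (illustrative numbers).

current_fraction = 0.20   # investor currently owns 20% (assumed)
new_round_size   = 10.0   # $M raised in the next round (assumed)

# Buying the same fraction of the newly issued shares (i.e. funding that
# fraction of the new money) keeps the investor's ownership unchanged.
pro_rata_investment = current_fraction * new_round_size
print(pro_rata_investment)  # 2.0 ($M)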

A5.3 Summary
The section above is intended to provide context for the environment in which VC make their decisions and the factors involved. This description provides the background against which DA can be considered as a tool for decision-making.