Designing and Managing Online Service Quality and Product Mix
Under Uncertain Demand and Fixed Short-term Capacity♦
Ravi Bapna
Paulo Goes
Department of Operations and Information Management School of Business Administration
368 Fairfield Road, U-41 IM University of Connecticut
Storrs, CT 06269
Alok Gupta*
Information and Decision Sciences Department Carlson School of Management
University of Minnesota 3-365 Carlson School of Management
321 - 19th Avenue South Minneapolis, MN 55455
(Under Revision, May 2001)
Subject Classification: Information systems, management: of quality-of-service oriented network services. Games/group decisions, bidding/auctions: to reveal valuations of one-time services. Programming, integer, heuristics: for a non-standard knapsack formulation.
♦ This research was supported in part by TECI - the Treibick Electronic Commerce Initiative, OPIM/SBA, University of Connecticut. Third author's research is supported by NSF CAREER grant # IIS-0092780.
* Author names are in alphabetical order.
Abstract
In this paper we explore the problem of maximizing revenue for an Internet service firm
that operates in a market of unique short-lived products that have uncertain demand. We
look at the short-term problem where the firm has a capacity constraint and has to choose
an appropriate product mix such that customers' desired quality-of-service requirements
are fulfilled. For example, converging digital technologies of broadcasting and telephony
on the Internet require a certain level of service quality for acceptable performance.
Internet content providers offering webcasts of special events are examples of such firms
in business-to-consumer markets. Companies that deploy commercial video-on-demand
servers in high-bandwidth digital cable markets are examples of such firms in business-
to-business markets. We model such firms as fixed capacity servers that make pricing
and product mix decisions to maximize revenue while satisfying service quality
constraints. Our model integrates auction theory, to facilitate dynamic price-discovery,
with the knapsack problem. This results in a non-standard knapsack formulation. We
analyze the economic incentives of customers and present solution techniques ranging
from pseudo-polynomial optimal computations to a class of heuristics that perform well
in computational experiments. We provide both the theoretical worst-case bounds and
the computational performance of our heuristics.
1. Introduction
The exponential growth in the number of Internet users has provided a large and
accessible customer base for firms. An increasingly diverse range of products and
services are now being sold electronically. While immense strides have been made in
developing and utilizing Internet technology, the analysis and understanding of the
economic impact of its operations are still nascent.
The creation of mercantile processes for real-time unique (one-time) services is
potentially one of the largest areas of Internet expansion that has been overlooked.
Interactive webcasts of concerts, high profile interviews and sporting events such as
international soccer and cricket matches or championship boxing are examples of such
services in the business-to-consumer space. In an environment where servers can
simultaneously support only a finite number of connections, these services are usually
provided free with no quality-of-service (QoS) guarantees and often delivered extremely
poorly with frequent breakdowns and overloads. We believe that such problems largely
stem from the fact that there are no established mercantile processes that rationally
allocate the provider’s resources for the delivery of services to customers with guaranteed
quality.
Due to the unique, one-time nature of the events under consideration and since transmission
rights on the Internet have to be obtained from the primary holder of the media rights, the
provider typically does not face market competition for the delivery of such services.
Consumers can potentially have the option of purchasing their services at different quality
levels, such as text only, video only, video and sound, etc. However, due to lack of
proper mercantile processes (for example, see Petrie and Wiggins, 1997), different levels
of services are not available to consumers. Instead, a single (best effort) level of service is
the only option available even when the desired quality of service could potentially be
delivered in a given service class. Further, the uncertainty in demand for such services
coupled with the lack of economic incentives at the providers’ end makes it difficult to
have enough server capacity to serve all customers at the desired service level.
The issue of optimally managing the server capacity, or the “content shelf-space” as it
is referred to in the industry, is also very important for firms in the business of developing
and deploying video-on-demand (VOD) servers in digital cable markets. Firms, in these
markets, serve as intermediaries between content providers such as ESPN and Disney,
and consumers in specialized, high-bandwidth digital cable markets. Given broadband
delivery technologies, the VOD server’s capacity is the critical link in maintaining the
desirable service quality level. For example, wwww.divatv.com has operationalized such
a delivery mechanism as the number of 3.4 Mbps MPEG2 streams that can be delivered
with less than 0.8 second latency by each of their servers. This is required for acceptable
audio-video quality. Such firms currently lack the ability to make short-term dynamic
pricing contracts with the content-providers that could enhance their returns by optimally
managing the VOD server capacity.
In this paper we design and evaluate a mercantile process that can be used by real-
time service providers offering unique, one-time digital products on the Internet and by
VOD server providers. The suggested process jointly incorporates:
(a) a demand collection mechanism that allows the customers (consumers or other
firms) to bid for the services;
(b) a pricing mechanism that determines the final prices at each service level;
(c) a capacity allocation mechanism that allocates the necessary server resources to
customers in each service level;
(d) the determination of the service mix to be provided with guaranteed quality levels
at the server side; and
(e) an optimization mechanism to maximize the total revenue from the available
service mix for a given server capacity.
Specifically, we design a revenue maximizing pricing scheme that allocates the
available server capacity among various competing services requiring different quality
metrics. Note that in an environment where QoS is not important simply having a larger
customer base maximizes the revenue. This is the approach taken by almost all Internet
content providers at present. However, Internet services that require dedicated resources
with various quality requirements cannot be priced in a similar fashion. Such a pricing
scheme works in environments where there is little variability in resource requirements,
such as the telephone industry.
We formulate such a resource allocation problem as a knapsack problem with an
additional constraint that imposes the structure of an auction-based pricing mechanism.
As a result of this formulation, the contribution of an individual item to the knapsack is
no longer static and independent of the other items. While knapsack problems have been
applied to a wide variety of scenarios such as capital budgeting, cargo loading, and
cutting stock, to our knowledge, ours is the first application that integrates the knapsack
problem with a dynamic resource allocation while maximizing a provider’s revenue.
It is important to note that in this paper we do not explicitly consider the data
transmission delays over the network. Our analysis and proposed process focuses on the
server side. As pointed out in Petrie and Wiggins (1997), the lack of appropriate
mercantile processes is the single most important limiting factor in making large amounts
of bandwidth readily available. We posit that the first fundamental step towards
delivering guaranteed QoS Internet services is to provide economic incentives and
rationale for service providers to make appropriate amounts of bandwidth and server
capacity available. From a consumer’s perspective, if enough capacity is available their
QoS requirement can be satisfied. Thus, in our models we require capacity to be reserved
for each customer, thereby ensuring that consumers receive the promised QoS from the
service providers’ end.
Although in this paper we concentrate on the server’s side, there are obvious
applications of our research for pricing network streams, especially in the cases such as
last-mile ATM connection pricing. From a broader perspective, an underlying tenet of
our research is the answer to the question, “can bandwidth be reserved on the Internet?”
The advent of ATM and RSVP (see Zhang et al, 1993) protocols answers the
aforementioned question in the affirmative only from a technical perspective; these
technologies will be implemented in future realizations of the Internet such as Internet2.
We are seeking economic rationale and appropriate mercantile processes for such
services. In the interest of clear exposition of the ideas presented in this paper we
concentrate on looking at a server as a constrained resource that has to provide a
multitude of services with guaranteed QoS.
The major contributions of our work are threefold. Firstly, we introduce and model a
novel mercantile process that maximizes the revenues of content providers who provide
the webcast of unique, real-time services, with unknown demand. The model has
applicability to other similar service environments such as VOD server capacity
management. Secondly, while applying auction-based approaches to the proposed
process, we derive new theoretical results in the area of multi-item auctions without using
the usual "known value distribution" assumption as is standard in the game theoretic
treatment of auctions. Finally, we propose a unique knapsack formulation to the revenue
maximization problem and develop a host of fast solution procedures for this modified
knapsack problem that make our approach well suited for online environments.
The paper is organized as follows. In Section 2, a review of key related literature
from economics is presented. In Section 3, we present our model formulation, followed
by Section 4 where we develop a portfolio of solution procedures. Section 5 presents
results from extensive simulations. Finally, Section 6 concludes with discussion on
extensions and further research.
2. Background
There are three primary types of economic resource allocation mechanisms: (i)
capacity allocation mechanisms, (ii) posted price mechanisms, and (iii) auctions and
negotiations. Capacity allocation mechanisms usually are the most efficient mechanisms
if the type of individual customers, and thus their needs, can be identified by the
controlling entity. In general, posted price mechanisms can be considered as the
mathematical dual of capacity allocation mechanisms (Greenwood and McAfee, 1991).
Under this mechanism the general distribution of customer types is known, but individual
customer type is not identifiable. In other words, the demand curve is known. Even
though some recent research has been devoted to developing posted price mechanisms
that can dynamically compute the prices based on changing demand (see, for example,
Gupta, Stahl, and Whinston, 1997), these mechanisms are more effective when a
relatively long-term demand-trend is available.
In this paper, we use mechanisms based on auctions and negotiations because we are
considering the allocation of resources for dynamic and unique one-time products and
services and their associated demands. The demand for such products may not be
assessable in advance and thus computing posted prices may be extremely difficult.
Typically, ignorance of what price to post is a reason for negotiating or holding an
auction. Rothkopf and Harstad (1994) provide a behavioral reason for holding auctions
by asserting that one of the critical reasons for the use of bidding is that the formality of
the auction process provides legitimacy in a way that the other economic means cannot.
Wang (1993) compares auctions with posted prices in a simplified setting under the
assumptions of the independent private values model. Her central result is that auctions
are preferable if the marginal revenue curve is steep. The global steepness of the marginal
revenue curve is found to coincide with the dispersion around the mean for a number of
standard distributions. This confirms the intuition that the more dispersed the value of an
object or a service to the potential buyers the more auctions are preferred.
The nature of the Internet-based services under consideration suggests that
individuals’ valuations of webcast events are likely to be highly dispersed. For instance,
a die-hard fan of a particular rock star or a sport such as cricket, would have a different
valuation for a particular webcast than a moderately interested individual whose valuation
in turn would be entirely different from a person who is nonchalant about the events
under consideration. This distribution of valuations might be expected to change
drastically across time and events. An increase in dispersion of these values might occur
if service quality differentiation is present. For instance, the monetary valuation of a
moderate fan -- who does not want to spoil a working day at the office, preferring a
textual ticker tape relaying running commentary -- would not be equal to that of the die-
hard fan for whom nothing but live audio and video would suffice. Technologically,
there is little doubt the provider can offer this kind of flexibility in service definition. The
question is can we structure efficient customer-driven mechanisms that integrate the
issues of pricing, quality and service-mix? We answer this question in the next section.
3. Model Formulation
3.1 Assumptions and Scope
We first present a general model of allocating capacity under an uncertain, widely
dispersed, and dynamic demand structure where the content provider's objective is to
maximize its revenue. Note that while the same webcast event might be available later, its
product and demand characteristics will be different due to change in at least one of the
service dimensions such as live versus rebroadcast or time and day.
We assume that customers have a value for service that is unknown to the provider.
Further, we assume that the provider will use some price-setting mechanism and some
customers may be excluded from receiving the service based on the prices they are
willing to pay. We first examine the general model in detail and subsequently discuss
specific price setting mechanisms and their properties.
Let there be i = 1, …, I consumers in the market for j = 1,…,J different services
offered by a provider. Let Vij denote customer i’s valuation for service j. We assume that
consumers' valuations Vij are independently distributed.
Let C represent the total bandwidth, or capacity, and let Cj be the capacity consumed
by service j. Finally, let xij represent the decision variables where xij = 1 if customer i
receives service j, and xij = 0 otherwise.
3.2 The General Model
We assume that, given a price-setting mechanism and the negligible marginal cost of
offering the type of digital services under consideration, a provider’s objective is to
maximize revenue. Let pij be the price charged to customer i for a particular service j,
where pij is unknown and determined by the price-setting mechanism. The capacity
allocation, revenue maximization problem is formulated as
Maximize  z = Σi Σj pij xij        (1)
subject to  pij xij ≤ Vij,  ∀ i = 1,…,I,  ∀ j = 1,…,J        (2)
Σi Σj Cj xij ≤ C        (3)
xij ∈ {0, 1}        (4)
3.3 Model Characteristics
Note that this is a special case of the general 0-1 multiple products Knapsack problem
with an additional constraint (2) that represents the participation constraint. Equation 2
ensures that xij = 1 if and only if the price pij charged to a customer i for service j, is less
than or equal to her value for that service Vij. Equation 3 is a typical Knapsack capacity
constraint.
A unique feature of this model, which disallows the application of conventional
optimization procedures, is that it requires a price setting mechanism. Atypically, there
are two unknown quantities xij and pij in the revenue maximizing objective function
represented by equation (1). Furthermore, the customer valuations Vij are private
information. Therefore, to satisfy the participation constraint, the provider has to create a
mechanism that reveals this information.
Given the one-time nature of the services and the associated unknown and widely
dispersed demand for such services a posted pricing mechanism may not be optimal
(Wang, 1993). Furthermore, since there is a need to reveal the customer’s private
valuations Vij, an auction mechanism seems to be an appropriate choice for price setting.
In an auction, a customer i will bid Bij ≤ Vij, where Bij is customer i's declared bid for
service j.
3.4 Auction Mechanisms
In our search for an optimal auction mechanism we restrict our attention to a special
class of such mechanisms: the direct revelation mechanisms. In direct revelation
mechanisms bidders are asked to announce their valuations directly and the seller
commits herself to using rules for allocating the object and charging the buyers. The
direct revelation mechanisms ensure both that the buyers will be willing to participate and
that each will find it in his interest to announce his true valuation. Nobel laureate
William Vickrey (1961) proposed one such mechanism. In his seminal work he noted
that when a second-price auction is used, that is, the high bidder wins but pays only the
price of the second highest bidder, each bidder has a dominant strategy of bidding his true
valuation.
In this section we evaluate two types of auctions as price-setting mechanisms for our
general model: (i) Second price auctions, known to be incentive compatible in many
instances under certain conditions (Vickrey, 1961), and (ii) English auctions, the most
popular and well known auction mechanism for the single item case.
3.4.1 Second Price Auction as a Price Setting Mechanism
Here we consider a multi-unit analogue of the second-price sealed-bid Vickrey
auction for single items (Vickrey,1961) (hereafter referred to as the Multiple Vickrey
Auction, or the MVA). The Vickrey auction adopts a uniform pricing scheme in which
each accepted customer is charged a price equal to the value of the highest rejected
customer for a particular service j. Let Vj represent the list of values for service j ordered
such that V1j ≥ V2j ≥ … ≥ VIj. Further, let Bj represent the list of bids for service j ordered
such that B1j ≥ B2j ≥ … ≥ BIj.
Under the MVA pricing mechanism, the general model (1)-(4) can be formulated as:
Maximize  z = Σi Σj pj xij        (1A)
subject to  pj = min { Bi+1,j | xij > 0 }        (2A)
Σi Σj Cj xij ≤ C        (3)
xij ∈ {0, 1}        (4)
The objective function is the same as in the general case except that we drop the
subscript i from pij because MVA is a uniform pricing mechanism. The participation
constraint (2A) ensures that the price accepted for a particular service class is equal to the
largest rejected bid for that class - hence the term Bi+1,j. This equality introduces
dependencies between the bids of different individuals in the form of positive network
externalities. Each new lower value customer who is accepted into a particular service
class lowers the price for all previously accepted customers of that class. Hence, the
consumer’s surplus (Vij - pj) becomes a non-decreasing quantity. Additionally, because
we charge uniformly the price of the last rejected bid for a particular service class, the
impact of a Vickrey based pricing scheme on consumer surplus could be significant and
may derive competitive advantage for any provider adopting it.
Now consider the implementation of an MVA mechanism in which each customer i
is only interested in one service level j. Within this service level, each customer can only
bid for one product. The special pricing structure thus obtained has incentive
compatibility property for customers as noted by Vickrey (1961). That is, truth telling
(bidding one’s true valuation of the product) is a dominant strategy on the part of the
consumers. Because of this desirable property, MVA has been suggested as a price
setting mechanism for online environments by several researchers recently (for example,
see MacKie-Mason and Varian, 1995, and Lazar and Semret, 1999). However, to
successfully implement such a system, the impact on a provider's revenues has to be
carefully considered. Such analysis has been largely ignored in the literature. Theorem 1
provides an important result regarding the revenue curve of a seller under the MVA.
Theorem 1: The revenue curve generated by the Multiple-item Vickrey Auction is non-
monotonic.
Proof of Theorem 1
To prove Theorem 1, it is sufficient to show that there exists an instance for which a
content provider realizes less revenue by selling Q+1 items than by selling Q items.
Consider only one level of service requiring one unit of capacity, a total of N = 4 units
to be sold, and a final sorted bid sequence of B, B/2, B/5, B/11, B/11.
The provider receives a total revenue of B/2 if Q = 1, 2B/5 for Q = 2, 3B/11 for Q = 3,
and so forth. The provider's revenue thus decreases as the number of units sold increases
from one to three. Therefore, the MVA does not necessarily generate a non-decreasing
revenue curve.
Q.E.D.
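The bid sequence in the proof can be checked numerically. The sketch below (Python; the concrete value B = 110 is our choice, made only to keep the arithmetic integral) computes the MVA revenue curve for that sequence and shows that it is not monotonic:

```python
def mva_revenue(sorted_bids, q):
    """MVA revenue from selling q units: each of the q winners pays the
    highest rejected bid, i.e. the (q+1)-th bid in the sorted sequence."""
    return q * sorted_bids[q]  # sorted_bids is descending and 0-indexed

B = 110  # any positive value works; 110 makes B/2, B/5, B/11 integers
bids = [B, B // 2, B // 5, B // 11, B // 11]  # the proof's sorted sequence

revenues = [mva_revenue(bids, q) for q in range(1, 5)]
# q = 1..4 gives 55, 44, 30, 40: revenue falls and then rises again,
# so selling more units can yield strictly less revenue
```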
Theorem 1 indicates that it is unlikely that any revenue-maximizing content provider
would directly use the MVA. In other words, the provider will not commit to selling the
maximum possible units of service. Instead, the provider may choose not to utilize all its
available capacity. In our model where several services share a common capacity, the
provider does not have to commit to a specific number of acceptances for any service.
Below we develop a modified version of the MVA in which product-mix is decided based
on bids and the resulting revenue.
In the rest of this section, for expository purposes, we present some theoretical results
and solution procedures for the case where there is just one service. We therefore drop
the subscript j in our analysis. In Section 4 we consider the general case of multiple
services and associated theoretical results, heuristics, and computational procedures.
Example 1 illustrates the non-monotonic nature of the provider’s revenue curve.
Numerical example 1: Assume there is just one service class, a total capacity of 18 and
we receive the following 10 bids - each requiring the same 2 units of capacity - arranged
in descending order in Table 1. Assume a reservation price of $1.
Insert Table 1 Here
Row 2 of the table has the corresponding revenues that a provider would obtain if that
customer was the last to be accepted under the MVA scheme and row 3 has the
corresponding marginal revenues (MR). MR is the impact on a provider's profit if a
particular bid is accepted. As the table indicates, selling 8 units at a price of 1.9
maximizes the revenue.
As shown in Table 1, a simple scan procedure that stops at the first negative swing in the
marginal revenue is not appropriate for determining the optimal number of items to be sold.
While a single scan of the entire total revenue curve will result in an optimal decision for
one service, when there is more than one service such an approach requires evaluating an
exponential number of combinations. Instead, we make the following key observation,
which will help in creating an efficient method for finding an optimal solution with
multiple services.
Observation 1: If providing service to N' ≤ N consumers maximizes a provider's revenue
for a given service under the MVA, it does not imply that the provider will be willing to
provide service to any number of consumers between 1 and N'.
For example, in Table 1 it is not in the provider's best interest to offer service to all
customer counts between 1 and 8. It is in the provider's interest to provide service to
customers 1, 3, 4, 5, 6, or 8, those that have a non-negative marginal revenue
contribution. The service provider will never choose to provide service to only the first 2
or the first 7 customers, even if there is available capacity. In these cases the provider's
revenue is less than that obtained from providing service to just 1 or 6 customers,
respectively. An alternative way to characterize this property is that customer 2 will
receive service if and only if customer 3 receives the service, and customer 7 will receive
the service if and only if customer 8 receives the service. This motivates us to introduce a
simple and intuitive approach that bundles the critical customers together when some
customers cannot independently receive a service because of a given bid structure. Table
2 shows the two bundles that could be created for the example presented earlier and the
associated marginal revenues after creating these bundles.
Insert Table 2 Here
Bundling of customers implies that either all of the bundled customers are considered
for service provision or none. Note that bundling customers does not change the
characteristics of the optimal solution. On the other hand, bundling customers provides a
way to create a marginal revenue structure such that as soon as the first negative marginal
revenue is encountered, a simple scanning algorithm can stop and find the optimal
solution. Such an algorithm (procedure rev_max) is presented in Illustration 1.
Insert Illustration 1 here
We define the type of customer bundles created in Table 2 as the minimal cardinality
bundle. It is a bundle that starts at the first customer index where the customer's marginal
revenue becomes less than or equal to zero and ends at the first customer index where the
marginal revenue becomes positive. Illustration 2 provides pseudo code for procedure
bundle, which creates the minimal cardinality bundles as part of data preprocessing.
Insert Illustration 2 here.
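Since the illustrations themselves are not reproduced here, the following Python sketch is our reconstruction of what procedures bundle and rev_max compute: it builds minimal cardinality bundles from the marginal-revenue sequence and then scans bundles in order, stopping at the first bundle whose marginal revenue contribution is not strictly positive. The bid values in the usage example are hypothetical, not those of Table 1.

```python
def mva_revenues(bids, reserve_price):
    """Revenue from selling q = 1..N units under the MVA: q times the
    highest rejected bid (the reserve price when no bid is rejected)."""
    b = sorted(bids, reverse=True)
    n = len(b)
    return [q * (b[q] if q < n else reserve_price) for q in range(1, n + 1)]

def bundle(revenues):
    """Minimal cardinality bundles: a bundle opens at the first customer
    whose marginal revenue is <= 0 and closes at the first customer whose
    marginal revenue turns positive again; other customers stand alone."""
    mr = [r - p for r, p in zip(revenues, [0.0] + revenues[:-1])]
    bundles, i = [], 0
    while i < len(mr):
        j = i
        while mr[j] <= 0 and j < len(mr) - 1:
            j += 1  # extend the bundle until marginal revenue is positive
        bundles.append((i, j))  # customers i..j (0-indexed) as one bundle
        i = j + 1
    return bundles

def rev_max(bids, reserve_price, capacity, units_per_customer):
    """Scan bundles in order; stop at the first bundle whose marginal
    revenue contribution is not strictly positive or that exceeds capacity."""
    revenues = mva_revenues(bids, reserve_price)
    accepted = 0
    for lo, hi in bundle(revenues):
        gain = revenues[hi] - (revenues[lo - 1] if lo > 0 else 0.0)
        if gain <= 0 or (hi + 1) * units_per_customer > capacity:
            break
        accepted = hi + 1
    return accepted, (revenues[accepted - 1] if accepted else 0.0)

# Hypothetical descending bids; customers 2 and 3 form a bundle because
# customer 2 alone has zero marginal revenue
q, rev = rev_max([10, 4, 2, 1.9, 1.85], reserve_price=1, capacity=18,
                 units_per_customer=2)
# q == 4, rev == 7.4: the scan stops at the final negative-MR customer
```

Because every surviving bundle has strictly positive marginal revenue, a single left-to-right scan that stops at the first non-positive bundle recovers the optimum, which is the property Proposition 1 establishes.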
Proposition 1, below, proves that minimum cardinality bundling is an optimal
bundling scheme for a single service class.
Proposition 1: A bundling strategy that creates a minimal cardinality bundle, such that
the bundle has a strictly positive marginal revenue contribution, is optimal for a single
service class.
Proof: See the Appendix
We can now formalize the results regarding capacity utilization in the following
proposition and associated corollaries.
Proposition 2: The revenue maximization problem, where prices are determined using
the Multiple Vickrey Auction, yields optimal capacity allocation.
Proof: See the Appendix
Corollary 2.1: In capacity unconstrained environments the MVA along with the revenue
maximization problem allocates capacity based on consumers' marginal values derived
from the usage of the capacity.
Corollary 2.2: Under the MVA, it is not optimal to always use all the available capacity.
In the next subsection we provide a comparative analysis of revenue between
MVA and the most commonly known auction mechanism -- the English auction (or open-
bid ascending auction).
3.4.2 English Auction as a Price Setting Mechanism
In this auction mechanism each accepted customer is charged the price he bids.
Additionally, the auction is progressive, that is, the provider solicits successively higher
bids until no bidder is willing to make a better bid. Hence, we further qualify the
definition of Bij in the case of an English auction as the final bid of customer i for service
j. We also assume that there is a minimum increment amount, denoted by α, imposed by
the provider for each successive bid that a participant may make. Rothkopf and Harstad
(1994) have considered the role of discrete bid levels in oral auctions for a single object.
However, they use assumptions regarding distribution of bidders to assess the expected
revenue and auction efficiency and derive results for some special cases. As they note,
the auction literature ignores the discrete nature of allowable bids and the sequential
nature of bidding. To our knowledge, ours are the first set of results for multiple-item
English auctions that are purely based on the discrete and sequential nature of bids. We
derive powerful and intuitive results for the winning bid sequence and for the lower and
upper bounds on providers' revenue. These results are completely independent of any
distributional assumptions.
First we introduce the model (ENAP), which is a special case of the general revenue
maximization model (1-4) for our knapsack problem with multiple-item English auction
as a price setting mechanism:
Maximize  z = Σi Σj Bij xij        (1B)
subject to  Σi Σj Cj xij ≤ C        (3)
xij ∈ {0, 1}        (4)
Here, the objective function is modified with the final bid Bij substituting for the price
pij, representing the discriminatory pricing mechanism. The solution to ENAP will
determine the optimal product mix for the provider by determining a set of transaction
prices at the level of (or marginally better than) the best price acceptable to any losing
bidder.
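Once the final bids are known, ENAP is a standard 0-1 knapsack: each accepted bid (i, j) contributes Bij to the objective and consumes Cj units of capacity. A minimal sketch of the pseudo-polynomial dynamic program is given below (Python; the bid data are hypothetical and capacities are assumed integral):

```python
def enap(final_bids, capacity):
    """0-1 knapsack DP: final_bids is a list of (B_ij, C_j) pairs;
    best[c] is the maximum revenue achievable with capacity c."""
    best = [0.0] * (capacity + 1)
    for value, weight in final_bids:
        # iterate capacity downward so each bid is used at most once
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Three hypothetical final bids (revenue, capacity units) with capacity 3:
# taking the 6- and 4-revenue bids fills the capacity for revenue 10
revenue = enap([(6, 2), (5, 2), (4, 1)], capacity=3)
```

The DP runs in O(nC) time for n bids and integer capacity C, i.e., it is pseudo-polynomial in the capacity.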
To understand how the revenue generated is bounded, we present the following
observations and propositions. For expositional clarity we drop the subscript j, i.e., we
treat the available capacity as divisible into a maximum of N items for sale.
Observation 2: The maximum bid a person having a value of Vi will make is (Vi − α),
where α represents the bid increment. Bidding higher than that would result in a non-
positive surplus.
Observation 3: A bidder i will bid higher than the current maximum bid (Vc) if and only
if (Vi − α) > Vc ≥ (Bi + Nα), where Bi was person i's last bid.
In other words, a bidder i will bid higher only if his true value is higher than the current
maximum bid and there are at least N bids higher than his last bid. This is true since, if
there are fewer than N bids higher than his last bid, he may receive the item at his last
bid Bi and so has no incentive to bid higher.
Observation 4: The winning bids will have the following structure: BN+1+α, BN+1+2α, …,
BN+1+Nα, where BN+1 ≤ VN+1 − α is the last bid posted by customer (N+1).
The (N+1)th customer is defined as the marginal customer, i.e., the customer with the
highest bid among the bidders not receiving the item.
Proposition 3: The upper bound on the revenue generated by the English auction is
NVN+1 + αN(N−1)/2.
Proof: By Observation 2, the person having a value of VN+1 (the marginal customer) will
have a maximum bid BN+1 = VN+1 − α. Using Observation 4, the revenue in this best-case
scenario is RBC = VN+1 + (VN+1 + α) + … + (VN+1 + (N−1)α). Upon simplification we obtain
RBC = NVN+1 + αN(N−1)/2.
Proposition 4: The lower bound on the revenue generated by the English auction is
NVN+1 − αN(N+1)/2.
Proof: In the worst case the last bid (BN+1) by the bidder having a value of VN+1 would be
VN+1 − (N+1)α. The worst case happens when the marginal customer does not get the
chance to bid up to his highest admissible bid. More specifically, in the worst case the
bidding sequence is such that BN+1 + Nα = VN+1 − α; to top the current winning bids the
marginal customer would have to bid his true valuation. However, since this results in a
non-positive surplus, he has no incentive to do so. Using Observation 4, the bids by the
last N persons having a value greater than or equal to VN+1 would be VN+1 − Nα,
VN+1 − (N−1)α, …, VN+1 − α. Thus the revenue in the worst-case scenario is
RWC = NVN+1 − (α + 2α + … + Nα). Upon simplification we obtain
RWC = NVN+1 − αN(N+1)/2.
Figure 1 provides an example comparison of the MVA and the English auction for a case
where 4 bidders vie for 3 units of service. For the English auction, a minimum
bid increment of α = 1 is assumed. The top-most box represents the true
valuations of these bidders and the bottom-most box represents the decision under the
MVA structure. The values in the left and right boxes are derived from propositions 3 and
4, which in turn are based on observations 1-4, regarding the best and worst case revenue
scenarios for the provider. To compare the two auction mechanisms under consideration
from a revenue perspective we present below the general form of the revenue obtained
under the MVA scheme.
Insert figure 1 here.
Proposition 5: The total revenue generated by a Vickrey auction of N items, where the
top N bids receive the items at the price BN+1, Bi ≥ Bi+1, and BN+1 = VN+1, is N*VN+1.
Thus, RV ∈ [RWC, RBC].
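As an illustration (ours, with hypothetical valuations), the bounds of Propositions 3 and 4 and the Vickrey revenue of Proposition 5 can be computed directly:

```python
# Hypothetical illustration of Propositions 3-5: revenue bounds for an English
# auction of N identical units with bid increment alpha, versus the MVA/Vickrey revenue.
def revenue_bounds(values, n_units, alpha):
    """values: bidders' true valuations (any order); returns (RWC, RBC)."""
    v = sorted(values, reverse=True)
    v_marg = v[n_units]                                               # V_{N+1}
    r_best = n_units * v_marg + n_units * (n_units - 1) * alpha / 2   # Proposition 3
    r_worst = n_units * v_marg - n_units * (n_units + 1) * alpha / 2  # Proposition 4
    return r_worst, r_best

values, n_units, alpha = [20, 15, 12, 10], 3, 1   # 4 bidders, 3 units, alpha = 1
r_worst, r_best = revenue_bounds(values, n_units, alpha)
r_vickrey = n_units * sorted(values, reverse=True)[n_units]           # Proposition 5
print(r_worst, r_vickrey, r_best)                 # 24.0 30 33.0
assert r_worst <= r_vickrey <= r_best
```

With these numbers the Vickrey revenue 30 lies strictly inside the English-auction band [24, 33], as Proposition 5 states.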
While on average the two auction mechanisms may generate comparable revenues,
the MVA implementation proposed here has a significant advantage: it does not
discriminate among the winners with respect to the price they pay. This makes the
mechanism more attractive for bidders and, as Vickrey (1961) noted, reduces the
probability that a bidder's own bid will affect the price she pays, making truthful
revelation of values more likely. Additionally, since the capacity utilization decision
is based on the marginal customer's valuation, it will be more efficient than the
ascending auction. Finally, the implementation costs of the MVA are significantly lower
than those of the English auction.
In the next section we expand the discussion to multiple service classes and examine
the consequent impact on the solution methodology. A portfolio of solution
methodologies developed provides a decision-maker with a wide-ranging choice of
solution techniques.
4. Solution Methodologies
In a single service environment procedure rev_max along with procedure bundle can
be used to construct an algorithm to optimally solve all cases. However, when capacity
has to be allocated among several different services, the knapsack structure of the
problem becomes apparent. The problem in the multiple-service case can be seen as a
product-mix problem where a decision-maker has to decide which services to offer,
how many customers receive each service, and at what price.
Before we discuss the solution techniques, it is useful to review the preprocessing
steps, i.e., collecting, sorting, and bundling of the bids in each service class. The idea
behind applying these preprocessing steps is that if it is not optimal to consider a bid in a
certain service class when that class is, hypothetically, the only service class, then that bid
will not be considered when multiple service classes are being offered. Since the
computational costs of solving the knapsack problem are dependent on the number of
bids, preprocessing helps reduce such costs. Thus, an essential step is to apply
procedure rev_max and procedure bundle to each individual service class to obtain a
revised, truncated bid matrix. The combined effect of bundling and truncation is a
ranked ĩ × j matrix that contains 'bundles' derived from the individual bids.5 An overall
view of the pre-processing and bundling process is shown in Figure 2.
Insert figure 2 here
It is well known that the knapsack problem is NP-hard. However, in our
framework it is reasonable to expect that the number of consumers bidding will far
outnumber the number of different service classes that a provider offers, i.e., I >> J.
Combining this with the special structure of the bid (knapsack elements) values in each
service class allows us to construct some fast solution techniques, including a pseudo-
polynomial optimal solution technique.
Proposition 6: Given that I >> J, the MVA pricing structure allows the knapsack
problem to be solved optimally in O(I^J) time, where I represents the total number of
consumers for all services and J represents the total number of service classes.
Proof: See the Appendix
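The exhaustive search behind Proposition 6 (see procedure optimal_vicknap in the Appendix) can be sketched in Python as follows; the data are hypothetical, and we assume that serving i customers of a class yields revenue of i times the (i+1)-th highest bid in that class:

```python
# Exhaustive search over the number of bids accepted per class (cf. Proposition 6).
# Assumes each class's bids are sorted non-increasing; serving i customers of class j
# yields revenue i * bids[j][i] (the (i+1)-th highest bid sets the uniform price).
from itertools import product

def optimal_vicknap(bids_by_class, weights, capacity):
    best = 0.0
    ranges = [range(len(b)) for b in bids_by_class]    # i_j = 0 .. I_j - 1
    for counts in product(*ranges):                    # O(I^J) combinations
        used = sum(i * w for i, w in zip(counts, weights))
        if used <= capacity:                           # knapsack feasibility
            rev = sum(i * b[i] for i, b in zip(counts, bids_by_class))
            best = max(best, rev)
    return best

# Two classes, unit weights, capacity 3 (hypothetical numbers)
print(optimal_vicknap([[12, 10, 4.5, 4], [6, 6, 4]], [1, 1], 3))   # 18
```

Here the optimum serves 1 customer of the first class (price 10) and 2 of the second (price 4 each), for revenue 18.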
While we have presented at the outset a pseudo-polynomial mechanism for optimally
computing the product mix and prices, the computational costs of such a technique may
be unacceptable for real-time applications in electronic markets. Therefore, we further
explore heuristics in order to provide a decision-maker with a portfolio of fast solution
techniques that trade off accuracy and computational time.
4.1 Heuristic Solution Techniques
We begin by considering the most popular approach to knapsack problems, namely
the greedy algorithm, which chooses elements in non-increasing value-to-weight
(bid-to-capacity) ratio (Martello and Toth, 1989). This heuristic exploits the fact
that the solution x of the continuous relaxation of the problem has only one fractional
variable; to obtain a feasible solution we simply set this fractional variable to 0. It
is well known that in the worst case this procedure can be arbitrarily bad. For
instance, suppose we have two objects, a with a value and weight of 1 and b with a value
and weight of 100, and a knapsack capacity of 100. Since both objects have the same
value-to-weight ratio, the greedy algorithm might arbitrarily choose object a first,
leaving no room for b and yielding a greedy knapsack value z' (the value obtained by the
heuristic) that is 100 times worse than the optimal. We can reduce this worst-case
performance ratio to ½ by choosing the greater of z' and z(x = {b}), the value of the
critical object considered alone.
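The two-object example can be replayed in a few lines (a sketch with our numbers, not the paper's code):

```python
# Greedy by value-to-weight ratio on the two-object example: both objects tie at
# ratio 1, greedy may pick object a first, and b (weight 100) then no longer fits.
items = [(1, 1), (100, 100)]        # (value, weight) for objects a and b
capacity = 100
z_greedy, used = 0, 0
for value, weight in items:         # assumed tie-breaking order: a before b
    if used + weight <= capacity:
        z_greedy, used = z_greedy + value, used + weight
best_single = max(v for v, w in items if w <= capacity)
print(z_greedy, max(z_greedy, best_single))   # 1 100
```

Taking the better of the plain greedy value and the best single item recovers the optimum here, which is what bounds the worst case at ½.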
Under our formulation, we have to operationalize the value-to-weight ratio of bids
that are not independent of each other, unlike in traditional knapsack settings. For
example, with MVA each accepted lower bid lowers the price for all preceding higher
bids and hence exhibits positive network externalities. The marginal congestion it causes
is the counter-acting negative network externality. Since the impact of each bid is not
solely dependent upon the bid itself, as discussed earlier, we consider the Marginal
Revenue (MR) contribution of each bid, which is essentially the gain in a provider's profit
if a bid is accepted. Once computed for all the bids, a revenue-maximizing provider
would be concerned with accepting only those bids that have a strictly positive MR
contribution.
Let us refer back to numerical example 1. A typical greedy procedure in a typical
knapsack setting would ignore customer 2’s bid and move on to customer 3. However,
the MVA structure imposes an additional ordering constraint that forbids us from doing
just that. To smooth out the effect of non-monotonic MR values of individual bids, we
create bundles of individual bids, thereby providing a positive MR contribution
whenever possible. The value of creating bundles is amplified in the multiple-service
case when applying Greedy or Greedy-like procedures to solve the knapsack problem.
Creating the bundles during preprocessing reduces the computational load of Greedy,
which would otherwise have to do a complete search in each service class every time a
negative MR value is encountered.
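A minimal sketch (ours) of the MR computation and bundling for the bids of numerical example 1; it reproduces the MR and bundle rows of Tables 1 and 2:

```python
def marginal_revenues(bids):
    """bids: non-increasing list B_1 >= B_2 >= ...; serving i customers yields
       revenue i * bids[i] (0-indexed: the (i+1)-th highest bid sets the price)."""
    mrs, r_max = [], 0.0
    for i in range(1, len(bids)):
        r = i * bids[i]
        mrs.append(r - r_max)        # MR relative to the best revenue so far
        r_max = max(r_max, r)
    return mrs

def make_bundles(mrs):
    """Close a bundle at each positive MR; a trailing run with no positive MR is
       truncated. Returns (bundle_MR, bundle_size) pairs, as in Table 2."""
    bundles, size = [], 0
    for mr in mrs:
        size += 1
        if mr > 0:                   # the closing element's MR is the bundle's MR
            bundles.append((mr, size))
            size = 0
    return bundles

bids = [12, 10, 4.5, 4, 3.5, 3, 2.5, 2, 1.9, 1]        # numerical example 1
mrs = [round(m, 6) for m in marginal_revenues(bids)]
print(mrs)                # the MR row of Table 1: 10, -1, 2, 2, 1, 0, -1, 0.2, -6.2
print(make_bundles(mrs))  # bundle MRs 10, 2, 2, 1, 0.2 with sizes 1, 2, 1, 1, 3
```

Note that a bundle's MR equals the MR of its closing element, since the running maximum revenue stays flat over a run of non-positive MRs; customer 9's terminal negative run is truncated entirely.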
For completeness, the pseudo-code for the greedy procedure that fills a knapsack of
capacity C, taking into account the presence of multiple service classes, is presented in
Illustration 3. The basic input for this procedure is the ranked ĩ × j matrix created
after preprocessing. In the illustration, j̄ represents a vector indicating the number of
bids accepted for each service class j.
It is instructive to pause and differentiate between a general knapsack greedy
algorithm and the implementation of greedy under our formulation. Let a current element
be defined as the smallest indexed element in a service class that has not been included in
the knapsack. Then the Greedy procedure in our formulation selects the largest marginal
revenue from the current elements in all the service classes. This is, again, due to the
dependency of bids in a service class and the resulting restriction that a higher indexed
element in a service class cannot be included unless all the lower indexed elements have
been included. Therefore, as opposed to the greedy implementation in a general knapsack
problem, where all the elements are considered in non-increasing value-to-weight order,
in our framework the highest available MR-to-capacity ratios in all the service classes
are considered in each step. In other words, in each step there are at most J candidate
elements, from which the one with the highest MR-to-capacity ratio is included in the
knapsack.
Insert Illustration 3 Here.
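A sketch (ours) of this class-wise greedy; the bundle data below correspond to numerical example 2 (Table 4), with unit weights and capacity 9:

```python
# Class-wise greedy: at each step, take the head bundle with the highest
# MR-to-capacity ratio among all classes; within a class, order is fixed.
def greedy_mva(bundles_by_class, weights, capacity):
    """bundles_by_class[j]: list of (MR, size) bundles; returns (revenue, counts)."""
    heads = [0] * len(bundles_by_class)      # next unconsidered bundle per class
    counts = [0] * len(bundles_by_class)     # customers accepted per class
    revenue = 0.0
    while True:
        best_j, best_ratio = None, 0.0
        for j, bundles in enumerate(bundles_by_class):
            if heads[j] < len(bundles):
                mr, size = bundles[heads[j]]
                need = size * weights[j]
                if need <= capacity and mr / need > best_ratio:
                    best_j, best_ratio = j, mr / need
        if best_j is None:                   # nothing fits, or no positive MR left
            return revenue, counts
        mr, size = bundles_by_class[best_j][heads[best_j]]
        revenue += mr
        capacity -= size * weights[best_j]
        counts[best_j] += size
        heads[best_j] += 1

# Bundled MRs of numerical example 2 (Table 4): (MR, bundle size) per class
A = [(9.0, 1), (1.0, 3), (2.5, 1), (2.5, 1), (2.5, 1)]
B = [(6.0, 1), (2.0, 1), (1.0, 1), (0.5, 2), (1.9, 1), (0.6, 2)]
C = [(3.0, 1), (1.0, 1), (0.5, 1), (1.0, 2), (1.1, 1), (1.1, 1), (1.1, 1)]
print(greedy_mva([A, B, C], [1, 1, 1], 9))   # (23.5, [1, 3, 5])
```

The result matches the example: 1 customer in A, 3 in B, 5 in C, for revenue 23.5.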
Let x̄g denote the solution obtained from procedure greedy. The question arises:
can we improve upon procedure greedy? While the worst-case scenario is well
known, it is unlikely to occur in most practical situations. However, there are certain
tough cases where a greedy procedure will not perform well. Next, we illustrate these
tough cases both analytically and numerically, and based on their characteristics we
provide an improvement of the Greedy technique.
4.2 Worst Case Scenarios
Without loss of generality, suppose there are two service classes, each with an
individual capacity requirement of 1. The tough cases will occur if the bid pattern
presented in table 3a is observed for any δ > 2ε. The associated MR that the procedure
greedy would utilize are presented in Table 3b.
Insert Tables 3a and 3b Here.
In this case, if we have N units of capacity then the optimal solution is NK/2, whereas
the greedy solution is K + (N-1)δ, which tends towards K as δ → 0. Thus the greedy
performance ratio is bounded by 2/N.
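With hypothetical numbers K = 10, ε = 0.1, δ = 0.3 (so that δ > 2ε), and N = 4 units of capacity, the gap is easy to reproduce:

```python
# Tough case of Table 3a: class 1 bids K, K, K/2+eps, K/2, ...; class 2 bids all delta.
K, eps, delta, N = 10.0, 0.1, 0.3, 4          # hypothetical values with delta > 2*eps
class1 = [K, K, K / 2 + eps] + [K / 2] * N
# MVA revenue from serving i class-1 customers is i * class1[i] (0-indexed price)
optimal = max(i * class1[i] for i in range(1, N + 1))   # serve N class-1 customers
greedy = K + (N - 1) * delta   # after MR = K, greedy's next class-1 MR is 2*eps < delta
print(optimal, round(greedy, 6))   # 20.0 10.9; the ratio approaches 2/N as delta -> 0
```

The optimum N*K/2 = 20 comes from serving all N class-1 customers at the uniform price K/2, while greedy locks in the single MR of K and then fills the remaining capacity with δ bids.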
What if we were to utilize a procedure where, after running greedy, we modify
the greedy solution by including one more element in each class than greedy chose,
and then run greedy with the remaining capacity? It is straightforward to show that the
worst-case bound of this procedure improves to 3/N (consider the bid sequence K, K/2,
K/3, K/3, ... for the first service class). Let us call this procedure greedy_plus_1.6
Intuitively, the procedure greedy_plus_n can be derived from the worst-case
behavior of greedy approaches to the knapsack problem described above. In such cases, n
is the number of items that are included beyond the best solution obtained so far. The
last item(s) may also have been arbitrarily left out in preference for a bid with an
equivalent MR-to-weight ratio but a higher value. The procedure greedy_plus_n
essentially iterates through the various service classes, fixing the number of bids
accepted for class j, and then re-applying procedure greedy with the reduced capacity
to the remaining classes (j̃ - j), until there is no improvement. Illustration 4 presents
the pseudo-code for procedure greedy_plus_n and Proposition 7 presents its worst-case
time complexity.
Insert Illustration 4 Here.
Proposition 7: In the worst case, greedy_plus_n takes O(I^3) time, where I is the total
number of consumers who place a bid.
Proof: see the Appendix.
To illustrate the solution using greedy_plus_n, consider the following example:
Numerical Example 2: The provider offers three different service classes - A, B,
and C. Assume, for expositional simplicity, that all classes have an equal weight of
unity. The total available capacity is 9 units. Table 4 displays the complete solution
process, including the starting (sorted) bids, the MRs, the ranked and bundled ĩ × j
matrix with MRs, the greedy solution, and the solutions achieved by the application of
greedy_plus_1. In Table 4, the notation a^b is used to denote a bundle with
value-to-weight ratio a and cardinality b. The plain shaded portions of the table
indicate the consumers chosen by greedy, while diagonally shaded areas indicate the
elements that are fixed during the application of greedy_plus_1.
Insert Table 4 Here
To start with, 4 bundles are created: 1 for service A, 2 for service B, and 1 for
service C. The application of greedy results in choosing 1 customer for service A, 3 for
service B, and 5 for service C, with the picking order A1-B1-C1-B2-B3-C2-C3-C4-C5.
This yields a total revenue of 23.5. In our implementation, heuristic greedy_plus_n is
applied to all the service classes based on the solution obtained in the previous step.
Therefore, in this case, greedy_plus_1 is applied to each service class one by one.
First, we take the number of customers chosen in service class A by greedy, add one
additional customer, and reduce the available capacity by the space taken up by these
elements. Then, greedy is applied to the rest of the customers with the adjusted
knapsack capacity. The same process is repeated for service classes B and C. In this
case, customers 1 and 2 in service class A are chosen first. Since customer 2 is a
bundle of 3 customers, the total available capacity is reduced to 5 units. When we apply
greedy to the rest of the customers, the solution is to choose 7 customers for service
class A and 1 each from service classes B and C, with the greedy picking order
B1-C1-A5-A6-A7. The resulting revenue is 26.5, i.e., higher than greedy. Next we apply
greedy_plus_1 to service class B, again starting from the greedy solution. The revenue
from this is also greater than that of greedy. Finally, we apply greedy_plus_1 to class
C, which also results in higher revenue than greedy. However, the best solution from the
overall application of greedy_plus_1 is obtained when we apply it to class A. Therefore,
at this stage, that solution is designated the best solution so far for successive
applications of greedy_plus_1 or a higher order of n.
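The walkthrough above can be verified with a compact sketch (ours; it reuses the bundled MRs of Table 4 and fixes one additional bundle, rather than one individual customer, per class):

```python
# Sketch of greedy_plus_1 over bundled classes (cf. Illustration 4), our rendering.
def greedy_mva(bundles, weights, capacity, heads=None):
    """Class-wise greedy from given head positions; returns (revenue, heads)."""
    heads = list(heads or [0] * len(bundles))
    revenue = 0.0
    while True:
        best_j, best_ratio = None, 0.0
        for j, bs in enumerate(bundles):
            if heads[j] < len(bs):
                mr, size = bs[heads[j]]
                need = size * weights[j]
                if need <= capacity and mr / need > best_ratio:
                    best_j, best_ratio = j, mr / need
        if best_j is None:
            return revenue, heads
        mr, size = bundles[best_j][heads[best_j]]
        revenue += mr
        capacity -= size * weights[best_j]
        heads[best_j] += 1

def greedy_plus_1(bundles, weights, capacity):
    best, g_heads = greedy_mva(bundles, weights, capacity)
    for j in range(len(bundles)):            # fix greedy's bundles in class j, plus one
        fix = min(g_heads[j] + 1, len(bundles[j]))
        fixed_rev = sum(mr for mr, _ in bundles[j][:fix])
        fixed_cap = sum(size * weights[j] for _, size in bundles[j][:fix])
        if fixed_cap > capacity:
            continue
        heads = [0] * len(bundles)
        heads[j] = fix                       # class j resumes after the fixed bundles
        rev, _ = greedy_mva(bundles, weights, capacity - fixed_cap, heads)
        best = max(best, fixed_rev + rev)
    return best

A = [(9.0, 1), (1.0, 3), (2.5, 1), (2.5, 1), (2.5, 1)]
B = [(6.0, 1), (2.0, 1), (1.0, 1), (0.5, 2), (1.9, 1), (0.6, 2)]
C = [(3.0, 1), (1.0, 1), (0.5, 1), (1.0, 2), (1.1, 1), (1.1, 1), (1.1, 1)]
print(greedy_plus_1([A, B, C], [1, 1, 1], 9))    # 26.5, obtained by fixing class A
```

The per-class totals it scans internally (26.5 for A, 24.4 for B, 23.6 for C) match the figures in the text.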
In our example, the solution obtained by the application of greedy_plus_1 to service
class A happens to be the optimal solution; in general, however, that may not be the
case. Let the current solution obtained after the application of greedy_plus_1 be
denoted by x̄c. Because of the non-monotonic nature of the marginal revenue curves for
each of the service classes, we cannot be sure that including just one extra item in a
service class is sufficient to eliminate the optimality gap. Horowitz and Sahni (1974)
describe a forward move consisting of inserting the largest possible set of new
consecutive items into the current solution. When such a forward move is exhausted, the
current solution is compared with the best solution so far and a choice is made whether
to make further forward moves or to backtrack. Using a similar approach, we further
exploit the special structure of our problem to create procedure greedy_plus_n, which is
essentially the same as greedy_plus_1 except that n additional elements, rather than
one, are fixed in equation (I1).
In general, since optimality of a given solution cannot be verified, several
stopping criteria can be used. Furthermore, several other observations can be used to
minimize computational load. While discussing the implementation details is beyond the
scope of this paper, we present some observations.
First, note that if a fixed customer set at any point is a subset of a previously
obtained solution then fixing those elements and applying greedy for the rest of the
capacity will duplicate the previously obtained solution. Therefore, such steps can be
skipped. For example, if we tried to do another iteration of greedy_plus_1 on the solution
obtained above, we will start with service class B (since all customers of service class A
have already been chosen). Based on the current solution, we will fix the first 2
customers in class B and apply greedy to the rest of the customers with a capacity of 7.
The resulting solution will be exactly the same as that obtained by the application of
greedy to all the customers and the full capacity. This is no surprise, since B1 and B2
were part of the original greedy solution, and hence choosing them and applying greedy
to the rest of the customers and capacity reproduces the result. Thus, keeping a record
of the best solutions
chosen at different stages of computation can reduce the number of steps in
greedy_plus_n substantially.
In terms of stopping criteria, we chose a minimalist approach: if the
application of greedy_plus_(k+1) does not produce a better result than greedy_plus_k,
we stop. In other words, suppose the best solution so far was obtained by applying an
iteration of greedy_plus_1. We then apply another iteration of greedy_plus_1 to the new
solution; if that does not improve the solution, we next apply greedy_plus_2, where 2
additional customers are chosen in each class. If greedy_plus_2 does not produce a
better result we stop; otherwise we continue. We summarize the mechanics of the
solution process in the flowchart presented in figure 3.
Insert Figure 3 here.
Clearly, many other rules can be used, such as fixing the maximum number of
iterations, the maximum step size (the value of n), the minimum required
improvement, or a computational time limit. A detailed report on computational
experiments is left for future work; however, the next section presents some very
promising computational results that use a very simple stopping criterion.
5. Numerical Results
In this section we present results on the performance of greedy and
greedy_plus_n as compared to the optimal solution methodology developed at the
beginning of Section 4. First, we present the performance of these heuristics as a function
of problem size. Table 5 presents the optimality gap and the computation time for the two
heuristics and the optimal solution methodology. These experiments were conducted
with the following set of parameters:
• Three service classes (J = 3).
• The number of initial bids in each class was kept the same, e.g., 20, 40, ..., in each
class (I = 20, 40, ...).
• The capacity-requirement ratio C1 : C2 : C3 is 4:2:1.
• The total capacity was kept at five times the initial number of bids in a class. For
example, if I = 20 then C = 100.
• The reported results are the mean of 5 replications for each setting, with the same
input parameters used for the corresponding runs of the different solution procedures.
Insert Table 5 here.
Figure 4 graphically illustrates the improvement obtained by applying greedy_plus_n as
compared to the greedy solution. In most cases greedy_plus_n improves the greedy
solution substantially. Overall, the average optimality gap using greedy was about
2%, as compared to 0.5% with greedy_plus_n. Therefore, on average, greedy_plus_n
produced a fourfold reduction in the optimality gap with only a 50% increase in
computational time. The worst-case performance of greedy during the 250 experiments was
a 9.6% optimality gap, versus 1.86% for greedy_plus_n. Given the real-time computation
requirements of electronic markets, the amount of computational time required is an
important factor. Figure 5 compares the performance of the three approaches with respect
to computational time as problem size increases. For large problems even the
pseudo-polynomial optimal computation can take many minutes on a typical IBM-PC. For
instance, a 3000-bid problem took almost 11 minutes to solve optimally, as opposed to
1.5 seconds using the greedy_plus_n heuristic.
Insert Figure 4 here.
Insert Figure 5 here.
Figure 6 presents the optimal capacity utilization as a fraction of the total capacity
for the numerous experiments carried out. As stated in Corollary 2 of Proposition 1, it
is not always optimal to use all the available capacity, and Figure 6 provides
experimental verification of this. It is not surprising that in most cases the whole
capacity is used, since we deliberately created demand that far exceeded capacity in
order to place computational load on the solution procedures. In general, more capacity
will remain unused when demand is not excessive and when the capacity requirements of
individual services are large relative to the total capacity.
Insert Figure 6 here.
6. Conclusions and Directions of Future Research
This paper addresses an important problem of optimizing Internet content
provider’s profits in the presence of uncertain demand and multiple service classes. We
design an asset reallocation mechanism modeled after second-price auctions, that jointly
solves the pricing and product mix problems in a capacity constrained environment.
Even in other capacity unconstrained electronic market environments the mechanism
25
determines the optimal number of items that a provider will sell without making any
simplistic assumptions regarding the nature of the demand curve. To solve the problem
we present a portfolio of optimal and heuristic solution techniques that trade off
computation time with the optimality gap, thereby providing a greater flexibility to the
decision-maker. We present some results from a simulation study that verifies the
effectiveness of the heuristic approaches presented in the paper.
The importance of making informed capacity-expansion decisions in the area of
electronic commerce has been well documented. Wang et al. (1997) touch upon the
significance of the capacity-expansion decision in their problem of pricing an
integrated-services network with guaranteed quality of service. In theory, the dual
variable corresponding to the capacity constraint (4) in the general model indicates the
marginal value of capacity. However, because of the nonlinearity imposed by the
second-price auction, and the resultant non-monotonic nature of the marginal revenue
curve, we prefer an approach akin to sensitivity analysis for deciding on capacity
expansion. Consider the case of a provider who obtains a set of bids for a particular
service and currently has a capacity shortage. Let l(c) be a function representing the
amortized cost of leasing a unit of capacity, in the form of bandwidth, to the provider.
Given a priori knowledge of the currently available capacity and the consumers'
valuations, the decision-maker then has to scan the marginal revenue curve to find an
alternative maximum and trade off the corresponding benefit against l(c). In future
research we will address issues related to short-term capacity expansion, which has
tremendous relevance in a future world of dynamic capacity and bandwidth allocation
using ATM and other virtual-circuit approaches.
Alternatively, a direction for future research would be to solve the joint problem
of pricing and capacity expansion by augmenting the objective function with the cost of
acquiring new capacity. Equation (1) would then be:

Maximize z = Σj Σi pj xij - l(Σj Σi Wj xij)    (1c)

The second term in the objective function represents the amortized cost of capacity
expansion, which would be traded off against the revenue maximization achieved by the
first term.
This paper considers the design of a fast and efficient mercantile process for
unique, one-time services and products. The uniqueness of these products allows us to
maintain the desired uniform price structure for consumers, while a design and
implementation modification (bid/consumer bundling) allows us to deal with
non-monotonicity in the marginal revenue curve for providers. A legitimate question,
however, concerns the role of similar mechanisms in the realm of products that are not
unique and whose characteristics do not change over time, such as software or music. In
addition, what happens if a consumer has dependent valuations for more than one unique
product? In future research we will address such issues by studying further design
modifications that take the anticipated consumer bidding strategies into account.
Several other interesting questions arise in such markets that need to be explored.
Should the provider adopt a clearing-house approach and make accept/reject decisions at
regular intervals? Should the reservation price be altered each day in such scenarios?
Alternatively, would traditional yield-management strategies, such as those used by
airlines and hotels, enable a provider to make a probabilistic assessment of future
bid patterns and consequently make instant accept/reject decisions?
Notes
1. For example, events webcast by Intel-owned JamTV, Microsoft's Broadcast.com, or
the recent woodstock.com.
2. The interested reader is referred to http://www.internet2.edu/
3. In reality, depending upon the starting point of the bidding process, a person
having a value V may end up bidding V-β, where β ≤ α. However, for expositional
simplicity we assume that α is small compared to V and hence V-β ≈ V-α. This
assumption does not change the character of any of our results, and lower bounds
on revenue can be generated by subtracting α in the propositions above.
4. We assume rational bidders and hence discount the possibility of a person bidding
higher than the minimum necessary bid to obtain the item.
5. ĩ × j is the matrix created by the preprocessing step; ĩ represents the truncated
set of bids, containing only those bids that can potentially have a positive marginal
impact on total revenue.
6. Note that, while for the greedy approach the worst-case bound is still arbitrarily
bad, for the greedy_plus_n approach our tough cases are the worst cases and hence
represent worst-case bounds.
Acknowledgement
This research has been funded in part by the Treibick Electronic Commerce Initiative,
Department of OPIM, University of Connecticut.
Appendix
Proof of Proposition 1
Let Ri = i*Bi+1 be the revenue if customer i is provided the service, and let
Rmaxi-1 = max(Rk ∀ k < i) be the maximum revenue if only the customers having index
less than i are considered. Then

mri = Ri - Rmaxi-1    (A1)

where mri is the marginal revenue if customer i is provided the service. Note that as
long as mri-1 > 0, Rmaxi-1 = Ri-1.
Now, consider a stream of customers with marginal revenues mri-1, mri, mri+1, ..., mri+k,
mri+(k+1) such that without loss of generality mri, ..., mri+(k-1) < 0 and mri-1, mri+k and
mri+(k+1) > 0.
In this case, the minimum cardinality bundle will be the one that includes customer (i+k).
Let this bundle be called (B1). The marginal revenue for bundle (B1) will be
mrB1 = mri+k = Ri+k - Ri-1 > 0 (A2)
Equation (A2) follows from the fact that, since mri, ..., mri+(k-1) < 0, we have
Rmaxi+(k-1) = Ri-1.
Now, consider bundle (B2) that not only includes customer (i+k) but also customer
(i+k+1). Since mri+(k+1) > 0 and customer (i+k) is already part of the bundle, the marginal
revenue for bundle B2 will be
mrB2 = (Ri+(k+1) - Ri-1) > mrB1 (A3)
The proof of optimality of B1 is straightforward. Even though including customer
(i+k+1) increases the bundle's marginal revenue, it can only hurt the maximum revenue:
the larger capacity requirement may force the whole bundle to be excluded. To see this
we need to consider only 2 cases:
i) When there is enough capacity to include customer (i+k+1)
In this case, after including bundle B1, the customer (i+k+1) will automatically be
included since mri+(k+1) > 0. Therefore bundles (B1) and (B2) both will provide identical
revenue Ri+k+1.
ii) When there is not enough capacity to include customer (i+k+1) but enough to
include (i+k)

In this case, if we create bundle (B2), then due to capacity restrictions the whole
bundle cannot be included and the revenue would be Ri-1. However, bundle (B1) can be
included and the revenue would be Ri+k > Ri-1. Therefore B2 is a sub-optimal bundle,
and the optimal bundle size is the minimal cardinality bundle.

Q.E.D.
Proof of Proposition 2

Let there be unlimited capacity available (C = ∞) and let c be the capacity utilized by
each customer. (For expositional simplicity we assume that the bundle size is 1, i.e.,
there are no bundles; it is straightforward to extend the analysis to cases with
different-sized bundles.) Let Ri = Bi+1 * i be the revenue obtained if i consumers'
bids are accepted. The provider will continue to accept customers' bids while the
condition

Ri = Bi+1 * i < Ri+1 = Bi+2 * (i+1)

is satisfied. Define ∆Bi = Bi+1 - Bi+2.

Then the optimal capacity to utilize is determined by accepting bids as long as the
marginal revenue contribution is strictly positive. Formally, the condition can be
stated as:

Bi+2 - i * ∆Bi > 0    (A4)

The first term in inequality (A4) is the price of the service if the (i+1)th customer's
bid is accepted. The second term is the provider's loss of revenue from the i customers
who would otherwise have received the service at the higher price Bi+1. Therefore, a
provider who maximizes his revenue based on the MVA price-setting mechanism will use
only capacity c * i', where i' is the smallest index satisfying

Bi'+2 - i' * ∆Bi' ≤ 0    (A5)

Q.E.D.
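As a small worked instance of this stopping rule (the bids below are hypothetical and chosen so that the marginal revenue curve is monotone; with non-monotone MRs the bundling of Proposition 1 applies first):

```python
# Accept bids while B_{i+2} - i * (B_{i+1} - B_{i+2}) > 0 (conditions A4/A5).
bids = [12, 10, 9, 8.5, 6, 3]        # hypothetical B_1 >= B_2 >= ... (1-indexed in text)
i = 1                                # serving one customer at price B_2 is the base case
while i + 1 < len(bids):
    delta_b = bids[i] - bids[i + 1]          # Delta B_i = B_{i+1} - B_{i+2}
    if bids[i + 1] - i * delta_b <= 0:       # condition (A5): stop here
        break
    i += 1
print(i, i * bids[i])                # serve i' = 3 customers at price B_4 = 8.5
```

Here accepting a fourth customer would drop the uniform price from 8.5 to 6, losing 3 * 2.5 = 7.5 from the first three customers against a gain of only 6, so the provider stops at i' = 3 with revenue 25.5.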
Proof of Proposition 6

Note that, due to the special structure induced by the MVA, a customer with a lower bid
will never be considered unless all the bidders having a higher bid have already been
considered. Since in preprocessing we sort the customers according to non-increasing
bid values, this means that if ij > i'j then, to consider ij, i'j must be included in
the knapsack. In the worst case all I customers will bid for all J services. It is then
clear that the following pseudo-code, representing an iterative exhaustive search over
the number of accepted bids in each class, takes O(I^J) time and solves the problem
optimally.
procedure optimal_vicknap
begin
  optimal_z_val = 0
  for i1 = 0 to I
    for i2 = 0 to I
      …
        for iJ = 0 to I
        begin
          if (i1*c1 + i2*c2 + … + iJ*cJ <= C)
          begin
            current_z_val = i1*B1,i1+1 + i2*B2,i2+1 + … + iJ*BJ,iJ+1
            optimal_z_val = max(optimal_z_val, current_z_val)
          end
        end
end
Q.E.D.
Proof of Proposition 7

First, note that for a given value of n the worst case for greedy_plus_n, from the
perspective of complexity, occurs when the solution is to include all bidders, yet each
application of greedy_plus_n improves the solution by including exactly one more
element. In other words, each iteration adds at least two more bidders to the best
solution: one by design (as would be the case with greedy_plus_1, i.e., when n = 1) and
one to improve the existing solution. Therefore, a maximum of I/2 iterations of greedy
are required; i.e., for a given n, greedy_plus_n takes O(I^2) time, since greedy takes
O(I) time.

Next, the maximum number of iterations due to incrementing n is of the order of I.
Therefore, in the worst case, the overall algorithm is guaranteed to stop in O(I^3)
time.

Q.E.D.
procedure rev_max
begin
  for all I bids
    compute Marginal_Revenue
    if (Capacity_Available AND Marginal_Revenue > 0)
      revenue_i = next_non-greater_bid * i
      #customers_served = i
  C_optimal = c * #customers_served
end
Illustration 1: Pseudo Code for the Solving Single Service Revenue Maximization
procedure bundle
begin
  for all J classes
    while ctr <= no_bids(service class j)
    begin
      MR_bundle = MR(ctr)
      bundle_size = 1
      while (MR_bundle <= 0)
      begin
        increment bundle_size
        increment ctr
        MR_bundle = MR(ctr)
      end
      record bundle
      increment ctr
    end
end
Illustration 2: Pseudo Code for the Process of Creating Customer Bundles
procedure greedy (capacity, j̃)
begin
  x = ∅
  j = ∅
  do
  begin
    ĵ = class whose current_bundle has max (MR / Weight)
    x = update_solution(current_bundle of ĵ)
    update_j
    update_capacity_consumed
  end
  while capacity_available
end
Illustration 3 - Pseudo-Code for Procedure Greedy
procedure greedy_plus_n(capacity)
begin
  x = x_g
  x_temp = ∅
  do
    for ctr = 1 to j̃
    begin
      x_temp = x + i_n,ctr    (I1)    { fix n additional elements in class ctr }
      capacity = compute_remaining_capacity(x_temp, ctr)
      x_temp = x_temp + greedy(capacity, j̃ - ctr)
      if z(x_temp) > z(x)
      begin
        improvement = 1
        x = x_temp
      end
    end
  while improvement
end
Illustration 4 - Pseudo-code for Procedure Greedy_plus_n
Customer            1     2     3     4     5     6     7     8     9     10
Bids ($)            12    10    4.5   4     3.5   3     2.5   2     1.9   1
Revenue             10    9     12    14    15    15    14    15.2  9
Marginal Revenue**  10    -1    2     2     1     0     -1    0.2   -6.2

Table 1 - Example Bids and Revenues under MVA
** Computed relative to the previous highest revenue; e.g., the revenue for one
customer is 10 = 10 x 1, and the marginal revenue of customer 2 is -1 = 9 - 10.

Customer            1     2     3     4     5     6     7     8     9     10
Bids ($)            12    10    4.5   4     3.5   3     2.5   2     1.9   1
Revenue             10    9     12    14    15    15    14    15.2  9
Marginal Revenue    10    -1    2     2     1     0     -1    0.2   -6.2
Bundle MR           10          2     2     1                 0.2

Table 2 - Effect of Bundling
Class \ Bid     1     2     3        4     5
1               K     K     K/2+ε    K/2   K/2   …
2               δ     δ     δ        δ     δ     …

Table 3a - Tough Case Bids

Class \ MR      1     2     3        4
1               K     2ε    K/2-2ε   K/2   …
2               δ     δ     δ        δ     …

Table 3b - MR for Tough Case
Problem Data: Capacity = 9; No. of services = 3; Capacity consumed = 1 by each element in all services. Sorted Bids A 12.0 9.0 4.5 3.0 2.5 2.5 2.5 2.5 1.0 B 6.0 6.0 4.0 3.0 2.0 1.9 1.9 1.5 1.5 C 4.0 3.0 2.0 1.5 1.1 1.1 1.1 1.1 1.1 Marginal Revenues A 9.0 0 0 1.0 2.5 2.5 2.5 -9.5 B 6.0 2.0 1.0 -1.0 0.5 1.9 -0.9 0.6 C 3.0 1.0 0.5 -0.1 1.0 1.1 1.1 1.1 Bundled Marginal Revenues (bundle sizes are denoted as superscripts) A 9.0 0.3333 2.5 2.5 2.5 B 6.0 2.0 1.0 0.252 1.9 0.32 C 3.0 1.0 0.5 0.52 1.1 1.1 1.1 Greedy Solution (Shaded Area) = 23.5 A 9.0 0.3333 2.5 2.5 2.5 B 6.0 2.0 1.0 0.252 1.9 0.32 C 3.0 1.0 0.5 0.52 1.1 1.1 1.1 Greedy_plus_1 (On 'A' diagonal shading indicates the fixed elements) = 26.5*
A 9.0 0.3333 2.5 2.5 2.5 B 6.0 2.0 1.0 0.252 1.9 0.32 C 3.0 1.0 0.5 0.52 1.1 1.1 1.1 Greedy_plus_1 (On 'B' diagonal shading indicates the fixed elements) = 24.4 A 9.0 0.3333 2.5 2.5 2.5 B 6.0 2.0 1.0 0.252 1.9 0.32 C 3.0 1.0 0.5 0.52 1.1 1.1 1.1 Greedy_plus_1 (On 'C' diagonal shading indicates the fixed elements) = 23.6 A 9.0 0.3333 2.5 2.5 2.5 B 6.0 2.0 1.0 0.252 1.9 0.32 C 3.0 1.0 0.5 0.52 1.1 1.1 1.1 *Optimal Solution
Table 4 - The Solution Process
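The Marginal Revenues block of Table 4 applies the same pricing rule as Tables 1 and 2 class by class. A small helper (illustrative, not the paper's code) confirms the rows:

```python
def marginal_revenues(bids):
    """Per-customer marginal revenue for one service class: serve the k
    highest bidders at a uniform price equal to the highest rejected bid
    (the (k+1)-th sorted bid), and measure each step against the best
    revenue reached so far, as in Tables 1 and 2."""
    out, best = [], 0.0
    for k in range(1, len(bids)):
        r = k * bids[k]          # revenue from serving k customers
        out.append(round(r - best, 1))
        best = max(best, r)
    return out


# Service A from Table 4: reproduces 9.0, 0, 0, 1.0, 2.5, 2.5, 2.5, -9.5
row_a = marginal_revenues([12, 9, 4.5, 3, 2.5, 2.5, 2.5, 2.5, 1.0])
```

The same call on the B and C bid rows reproduces the other two Marginal Revenues rows of Table 4.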
Bids per   Optimality gap   Optimality gap      Time         Time          Time
class      greedy           greedy_plus_n       greedy (s)   optimal (s)   greedy_plus_n (s)
  20       2.568            1.244               0            0             0
  40       1.608            0.744               0            0             0
  60       2.824            0.392               0.2          0             0.2
  80       1.404            0.346               0            0.4           0
 100       2.068            0.242               0            0.8           0
 120       1.99             0.23                0            1.2           0
 140       1.434            0.358               0            2             0
 160       1.08             0.344               0            2.6           0
 180       1.508            0.4                 0.2          3.4           0.2
 200       1.382            0.25                0.2          5             0.2
 220       1.648            0.56                0            7             0
 240       2.71             0.308               0            9             0
 260       2.476            0.866               0            11            0.2
 280       1.928            0.324               0.2          13.8          0.2
 300       1.296            0.348               0            17            0
 320       1.334            0.216               0            20.4          0.2
 340       1.128            0.502               0.2          24.8          0.2
 360       1.65             0.488               0.4          29            0.4
 380       1.722            0.544               0            34.4          0.4
 400       2.998            0.89                0.2          40.2          0.2
 420       2.004            0.358               0.2          46.2          0.2
 440       0.974            0.464               0            53            0.2
 460       0.982            0.696               0.6          60.8          0.6
 480       1.998            0.272               0.2          69.2          0.2
 500       1.804            0.584               0.6          78.4          0.6
 520       2.168            0.706               0.4          88            0.4
 540       3.194            0.818               0.2          104           0.4
 560       2.552            0.736               0.2          116.2         0.4
 580       3.122            0.524               0.2          129.8         0.4
 600       1.78             0.59                0.2          143.4         0.2
 620       2.534            0.394               0.2          159           0.4
 640       2.81             0.656               0.6          173.6         0.6
 660       2.27             0.518               0.2          190.8         0.2
 680       1.392            0.896               0.2          208.4         0.6
 700       2.55             0.996               0.2          227.6         0.6
 720       1.736            0.708               0.2          247.2         0.6
 740       2.336            0.576               0.4          268.2         0.4
 760       2.164            0.548               0.6          290.8         1
 780       1.468            0.374               0.4          314.8         0.6
 800       2.104            0.532               0.8          339.6         0.8
 820       2.176            0.48                0.4          365.8         0.8
 840       0.892            0.408               0.4          368.2         0.6
 860       2.49             0.678               0.4          401.4         1
 880       3.014            0.542               0.4          450.6         1
 900       2.38             0.582               0.6          482.2         0.6
 920       2.662            0.626               0.6          500.6         0.8
 940       2.214            0.536               0.6          522.8         0.8
 960       2.114            0.366               0.2          556.4         1
 980       2.366            0.498               0.4          590           1
1000       1.4              0.358               1            659.8         1
Table 5 - Performance Comparison for Greedy and Greedy_plus_n
[Figure: N = 3 items, α = 1, 4 bidders with true valuations 51, 52, 53, 54.
Worst-case scenario for the provider: bids of 47, 48, 49, 50 yield Revenue = $147.
Best-case scenario for the provider: bids of 50, 51, 52, 53 yield Revenue = $156.
MVA case: Uniform Price = $51, Revenue = $153.]
Figure 1 - Comparison of Multiple-Item Vickrey and English Auctions
[Flowchart: Begin with an i × j matrix of raw bids. Apply procedure rev_max to each class j in turn, which results in an i′ × j matrix. Then apply procedure bundle to each class j in turn, which results in a revised matrix of customer bundles.]
Figure 2 - Capacity Pre-processing and Bundling Procedure
[Flowchart: Create the ranked bundle matrix. Apply procedure greedy and set n = 1. Try to improve the solution by applying procedure greedy_plus_n; whenever an improvement is found, save the new solution, increment n, and repeat. Stop when no improvement is found or the maximum number of iterations is reached.]
Figure 3 - The Heuristic Solution Procedure
Figure 4 - Greedy vs. Greedy_plus_n
[Plot: percentage of the optimal revenue (84-100%) vs. problem size (20-1000 bids per class) for Greedy and Greedy_plus_n_k.]
Figure 5 - Comparison of Computational Times
[Plot: log time in seconds (1-1000) vs. problem size (20-1000 bids per class) for Greedy, Optimal, and Greedy_plus_n_k.]
Figure 6 - Optimal Capacity Utilization
[Plot: capacity utilization (98.8-100%) vs. problem size (0-1000); utilization remains near 100% throughout.]