These are examples of student posters in a previous class. Note: The class was on more general issues of innovation.


Patent Characteristics?

Background

Patent Quality?

Theoretical Impacts of Patents on Innovation and Competition:

Business Method Innovation Post-State Street: What’s the Problem?
Catherine Fisher

Department of Economics, Stanford University, Stanford, California 94305

Conclusions

What are business method patents: The US Patent and Trademark Office (PTO) classifies business method patents as Class 705: data processing: financial, business practice, management or cost/price determination. Because much of the current innovation focuses on transferring known business methods to web- or software-based implementation, discussions in the academic literature and popular press often consider software and internet patents more generally.

Some Examples:
• Amazon’s ‘one-click’ patent
• Priceline.com’s reverse auction patent

Recent Events: Prior to 1998, there was a presumption, although not codified in statute, that methods of business were not a “new and useful process, machine, manufacture, or composition of matter,” and thus not patentable. In 1998, the Federal Circuit Court of Appeals overturned this presumption in State Street v. Signature Financial. The ruling spurred a large jump in the number of ‘business method’ patent applications, and the increase in both filings and patents awarded continues to this day.

Effects of stronger patents on:

Innovation
• Benefits: creates an incentive for research and new product/process development; encourages the disclosure of inventions
• Costs: impedes the combination of new ideas and inventions; raises transaction costs for follow-on innovation

Competition
• Benefits: facilitates the entry of new/small firms with a limited asset base or difficulties obtaining financing
• Costs: creates short-term monopolies; in industries where cross-licensing is common, firms without patent portfolios may be shut out

The effect of this new class of patents on innovation and societal welfare is still unclear; however, a number of potential solutions have already emerged. While applications for business method patents have increased dramatically, it is unclear whether R&D efforts have increased to match. Between 1998 and 2003, R&D spending in the financial services field as a percent of domestic sales remained fairly constant and minuscule, while in the computer programming field it fell from roughly 15 to 7 percent (based on data from the Survey of Industrial Research and Development conducted by the National Science Foundation).

Measures to increase patent quality seem to be the most rational short-term solution, given the uncertainty surrounding the effect of these patents; however, we should be mindful of the argument that, given the small fraction of patents that are ever litigated, much of any increase in PTO effort may be inconsequential.

Literature Cited:

Conventional economic theory holds that strong patent rights help spur innovation by increasing the potential returns for the innovator. However, such patents also have an anti-competitive effect, restricting new entrants, particularly in industries where cross-licensing is frequently adopted. Both effects have been observed anecdotally in the high-tech and financial industries that were most affected by the rise of business method patents. Start-ups have been able to use patents or pending patents to demonstrate their potential profits to investors, particularly venture capitalists, facilitating their efforts to bring such products to market. However, the presence of such patents has also created legal minefields, particularly in industries where new innovations substantially build off of previous technology, discouraging new research in areas that are perceived to be heavily patented. Thus, the effect of increased patent protections on societal welfare is unclear.

Separate from the issue of patent quality, however, is the question of whether stronger patent protection is welfare-enhancing.

Studies of earlier expansions of patentable subject matter, particularly for biotechnology and semiconductors, suggest that patents tend to hinder, rather than promote, innovation when there is:
• Sequential innovation (Bessen and Maskin) – When subsequent innovations primarily build off of previous innovations, firms must get licenses from the primary patent holder, increasing the cost of innovation or potentially halting it when the patent holder refuses to license.
• A high rate of innovation (Hunt) – In industries that innovate rapidly, the monopoly granted by a patent is less valuable, since competitors can quickly innovate around the patent.

Possible Solutions
• Tighten the nonobviousness or utility standard
• Limit the monopoly granted by business method/software patents (e.g., allow reverse engineering or a prior-use exception)
• Codify the business method exception explicitly

Much of the most vocal criticism of business method patents centers on the claim that patents granted in this area are of lower quality, that is, invalid or containing claims that are overly broad.

Evidence of Lower Patent Quality:
• Financial patents cite few leading academic journals – 19 citations in 445 patents (Lerner 2003)
• Financial patents are litigated at a rate 27 times greater than patents as a whole (Lerner 2007)
• Compared to the general patent pool, business method patents are more likely to be rejected (Crouch)

Possible Solutions
• Increase PTO resources and access to non-patent prior art
• Implement a post-grant opposition system that allows third parties to appeal patent grants after issuance, challenging the nonobviousness or utility of the claims

Bessen, James, and Eric Maskin (1999). Sequential Innovation, Patents, and Imitation. MIT Economics Department Working Paper No. 00-01.

Crouch, Dennis (2006, Oct 6). “Evidence Based Prosecution V: Business Method Rejections.” Patently-O. Retrieved on May 21, 2007 from http://www.patentlyo.com/patent/2006/10/evidence_based__2.html.

Lerner, Josh (2003). “The Two Edged Sword: The Competitive Implications of Financial Patents.” Presented at the Financial Markets Conference, April 2-5, 2003. Sea Island, Georgia: Federal Reserve Bank of Atlanta.

Lerner, Josh (2007). Tolls on State Street: The Litigation of Financial Patents, 1976-2005. Unpublished working paper, available at http://www.people.hbs.edu/jlerner/Trolls.pdf.

Hall, Bronwyn H. (2003). “Business Method Patents, Innovation, and Policy.” Economics Department, University of California, Berkeley, Working Paper E03-331.

Hunt, Robert M. (1999). Nonobviousness and the Incentive to Innovate: An Economic Analysis of Intellectual Property Reform. Federal Reserve Bank of Philadelphia Working Paper No. 99-3.

Source: Hall

Beyond the question of the effects of a theoretically perfect patent system, economists have considered the economic effect of a poorly administered patent system, in which the PTO grants patents that are likely to be overturned in an infringement suit. Improperly granted patents impose further costs on the economic system, and particularly on innovative firms: the legal costs of defending against even an invalid patent may discourage firms from entering heavily patented fields, and the increased probability of patents being overturned reduces the value of patents held by any one firm.

Evidence on whether this boost in patenting was followed by an increase in R&D efforts, however, is ambiguous:

R&D Spending as a Percent of Domestic Sales

                                               1998     2000     2002     2003
Financial Services (NAICS codes 52, 53)        0.04%    0.01%    0.06%    N/A
Computer Programming (NAICS codes 5112, 5415)  14.81%   17.21%   18.77%   6.41%

Data from the Survey of Industrial Research and Development conducted by the National Science Foundation.



Mandating Human Papillomavirus Vaccination: Comparison and Reliability of Vaccine Models

Megha Agrawal

Department of Economics, Stanford University, Stanford, California 94305

Models

In March 2007, the Advisory Committee on Immunization Practices (ACIP) recommended Gardasil for vaccination in females aged 11-12 years. It cited four studies on the potential cost-effectiveness of the vaccine in the context of cervical cancer screening practices in the U.S.

Cohort (Markov health-state transition) Models:

1. “Cost-effectiveness of a potential vaccine for human papillomavirus.” GD Sanders & AV Taira
2. “Projected clinical benefits and cost-effectiveness of a human papillomavirus 16/18 vaccine.” SJ Goldie, M Kohli, et al.

Population Dynamic Models:

3. “Evaluating human papillomavirus vaccination programs.” AV Taira, et al.*
4. “Model for assessing human papillomavirus vaccination.” E Elbasha, et al.

Background

• Human Papillomavirus (HPV)

Infecting 6.2 million Americans each year, HPV results in genital warts, cervical cancer, and other types of cancer. These diseases cost the U.S. over $4 billion annually in direct costs.

Table 1. Percent of cancers due to HPV infection in the U.S. in 2003.

• Cervical Cancer

In 2007, cervical cancer will kill 3,670 U.S. women. In developing countries, it is the 2nd most common cancer among women, resulting in 300,000 annual deaths.

• Gardasil

At $360 for a series of three doses, plus administering costs, the new vaccine is 100% effective against HPV strains 6 and 11, which cause genital warts, and strains 16 and 18, which are responsible for 70% of all cervical cancer cases.

• Debates Over Mandatory Vaccination for 12-Year-Old Females

Moral issues: encouraging increased sexual behavior
Economic issues: cost-effectiveness given current cervical cancer screening procedures in the U.S.

*Hybrid model: follows one cohort but captures herd immunity effects

Abstract

Conclusions

1) Although all of the models find an HPV vaccine to be cost-effective and efficacious for 12-year-old females in the U.S., a comparison of the models shows that model 4 is clearly the most accurate, due to its base-case assumptions and inclusion of herd immunity.

2) Unknowns about Gardasil, HPV disease dynamics, and population characteristics significantly affect model results, decreasing vaccine impact on disease incidence and increasing ICERs.

3) Increasing vaccine coverage increases the ICER.

4) Allow unregulated diffusion of the vaccine until scientists obtain further information on unknowns and assumptions through learning by using.

5) Mandatory vaccination is likely to be more cost-effective and efficacious in developing countries with poorer screening procedures.

Figure 1. Cohort models incorporate death & HPV disease progression.

Figure 2. Dynamic models incorporate changes in population and HPV infection rates.
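The cohort (Markov health-state transition) structure sketched in Figure 1 can be illustrated with a few lines of code. The states and annual transition probabilities below are hypothetical placeholders, not values from any of the four cited models; the point is only how a cohort's distribution over health states evolves one cycle at a time.

```python
import numpy as np

# Minimal sketch of a Markov health-state transition (cohort) model.
# States and transition probabilities are hypothetical, for illustration only.
states = ["healthy", "hpv_infected", "cervical_cancer", "dead"]

# Row i, column j: probability of moving from state i to state j in one
# annual cycle. Each row sums to 1.
T = np.array([
    [0.93, 0.05, 0.00, 0.02],  # healthy
    [0.40, 0.55, 0.03, 0.02],  # infected (most infections clear)
    [0.00, 0.00, 0.90, 0.10],  # cancer
    [0.00, 0.00, 0.00, 1.00],  # dead (absorbing)
])

cohort = np.array([1.0, 0.0, 0.0, 0.0])  # the whole cohort starts healthy
for _ in range(50):                      # follow the cohort for 50 cycles
    cohort = cohort @ T

print(dict(zip(states, cohort.round(3))))
```

A dynamic model differs in that the transition probabilities themselves depend on infection prevalence in the population, which is how herd immunity enters.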

In June 2006, Merck introduced Gardasil, the world’s first vaccine to prevent cervical cancer. Since the vaccine is 100% efficacious against the human papillomavirus (HPV) strains responsible for 70% of all cervical cancer cases, debates over mandating vaccination in the U.S. have arisen. This study evaluates the four mathematical models that led to the Advisory Committee on Immunization Practices’ (ACIP) recommendation of vaccinating females aged 11-12 years. The models utilize cohort and population dynamic methods to project the impact and cost-effectiveness of vaccination strategies under different base-case assumptions. The models predict cost-effective reductions in cervical cancer risk ranging from 20% to 78%. However, only one model closely replicates the characteristics of Gardasil and incorporates herd immunity, which significantly increases vaccine impact and decreases incremental cost-effectiveness ratios (ICERs). Both types of models also rely on many unknowns regarding Gardasil and its target population, such as duration of efficacy and coverage. Differences in these values cause a 16.4% to 117.3% increase in ICERs and a 17.8% to 69.2% decrease in impact on disease incidence. Thus, researchers should gather more data on HPV and Gardasil, through natural diffusion and learning by using, before mandating vaccination.

Goals

Evaluate mathematical models that project the long-term epidemiologic and economic consequences of HPV vaccination strategies within the U.S.

Specifically:

1. Compare the impact and cost-effectiveness results of the four specific models in the context of vaccinating 12 year old females

2. Evaluate the accuracy and reliability of general cohort and population dynamic modeling of vaccines

Figure 4. Reductions in lifetime cervical cancer risk for vaccinated cohorts (12 year old females) under current U.S. screening practices and base-case assumptions, by model.

Impact on Disease Incidence

For More Information

Please contact Megha Agrawal ([email protected]) for more information.

Cost-Effectiveness

Figure 3. Incremental cost-effectiveness ratio (ICER) of changing from strategy B to strategy A.

Figure 5. ICERs of vaccinating 12-year-old girls in each model, compared to screening only, under base-case assumptions and with current U.S. screening practices. In the U.S., the upper limit of cost-effectiveness is $50,000-$100,000/QALY.
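The ICER comparison in Figures 3 and 5 can be sketched directly from its definition: the extra cost of strategy A over strategy B divided by the extra quality-adjusted life years gained. The dollar and QALY figures below are hypothetical, not values from any of the four models.

```python
def icer(cost_a, cost_b, qaly_a, qaly_b):
    """Incremental cost-effectiveness ratio of strategy A over strategy B:
    extra dollars spent per extra quality-adjusted life year gained."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

# Hypothetical per-person lifetime values, for illustration only:
# A = vaccination plus screening, B = screening alone.
ratio = icer(cost_a=1450.0, cost_b=1000.0, qaly_a=24.02, qaly_b=24.00)
print(f"ICER = ${ratio:,.0f}/QALY")
print("within threshold" if ratio < 50_000 else "above threshold")
```

Note the division by a small QALY difference: modest changes in assumed vaccine impact can swing the ICER dramatically, which is why the base-case assumptions in Table 2 matter so much.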

Comparison of Models

Reliability of Models

Cohort Model Dynamic Model

Table 2. Base-case assumptions of models

• All models find vaccination of 12-year-old females efficacious and cost-effective
• Model impact increases and the ICER decreases as more herd immunity effects are incorporated

• The model with the greatest reduction in cervical cancer incidence also has the lowest ICER (model 4)
• In terms of cost, efficacy, and target HPV strains, model 4 most closely replicates the attributes of Gardasil

• Model 4, the only model fully incorporating herd immunity, finds a 134% greater impact and a 79.8% lower ICER than model 2, which has similar base-case assumptions but does not include herd immunity

ICER

Disease Incidence

Table 3. Percent increase in ICERs of 12-year-old female vaccination due to changes in base-case assumptions.

Table 4. Percent increase in ICERs of 12-year-old female vaccination due to changes in base-case assumptions. *Includes catch-up vaccination for females 12-24 years old. **CIN2/3, CIS = 0.87; CIN1, GW = 0.91.

Figure 6. Reduction in disease incidence with 10-year vaccine efficacy as a percentage of the reduction in incidence with lifelong efficacy.


Background & Motivation

Acknowledgments

Types of Risk

Conclusions

Carbon Capture and Storage: The Effects of Risk Type on the Diffusion of an Innovation
Dimitri Dadiomov

Department of Economics, Stanford University, Stanford, California 94305

Literature Cited

For further information

In response to the concern that global greenhouse gas emissions lead to climate change, power companies are looking into viable and scalable carbon-free power. Carbon capture and sequestration (CCS) is one such technology: it allows coal-fired baseload electricity to become carbon neutral and compete in a carbon-restricted environment.

The advantage of CCS vis-à-vis other renewable sources of power is scalability. The BP Peterhead Project, a 475 MW power plant in northern Scotland that employs CCS technology, will produce as much clean electricity as the entire UK wind industry, or one half of the entire worldwide shipments of solar power last year.

However, some perceive CCS as a risky proposition because the carbon dioxide injected into geological formations can leak. In the famous incident at Lake Nyos, Cameroon, large amounts of naturally occurring CO2 were released and asphyxiated 1,700 residents and thousands of cattle. More realistically, given the concentrations in CCS projects, the risk is of seepage into local lakes and aquifers and the resultant acidification.

Risk in many instances defines the limits to the diffusion of an innovation. But what different types of risk are there, and how does diffusion vary among them?

Risk can be defined along the following two questions:

1. Who is the risk to? The individual adopter? The immediate vicinity? The entire world?

2. How great is the risk? Is it a “statistical” indirect risk or an immediate direct risk?

Carbon capture and sequestration falls right around the center of this matrix: it is an indirect risk to the local environment. For individually risky innovations, we have agencies such as the FDA regulate private businesses. For globally risky innovations, we typically end up with a highly regulated business environment that is almost a public-private partnership, as with nuclear power. CCS is similar to natural gas storage in the US today.

How does diffusion vary with risk? The benefits of CO2 capture and sequestration are global, but the risk and the assumed costs are local. Therefore, policy should be crafted such that the locales have a chance to be rewarded for assuming the risk. Without any reward, they will never agree to take on the risk.

Risky innovations should not simply be vetoed. In improving our world, we must maximize value, not just minimize risk. If a technology such as CCS allows us to gather great benefit from the reduced risk of global climate change at the expense of relatively slight local risk, that may be a price we should be willing to pay.

The challenge is in providing enough benefits for the local community such that they become actively interested in allowing carbon storage beneath their land. It can be done in a manner similar to the way we regulate mineral rights today.

I would like to thank Dr. Jerry Harris for advising me on my Honors Thesis on CCS and Dr. Ward Hanson for allowing me to present some of the conclusions at the SIEPR Policy Forum.

Please contact Dimitri Dadiomov at [email protected] if you have any questions.

Herzog, Howard J. “Clean Coal Technology for a Greenhouse Gas Constrained World.” Presentation given at Council of State Governments Eastern Regional Conference, MIT, May 11, 2006.

Gardiner Hill. “Pre Combustion capture from gas: The Peterhead Hydrogen Power Project.” Presentation given at CSLF Workshop, Paris, France, March 2007.

BP Peterhead Project in Scotland

(Figure: risk matrix with axes Direct risk / Indirect risk and Individual adopter risk / Global risk)

(Figure: Announced Carbon Capture and Sequestration Projects, in MW, 2007-2017)

(Figure: Cumulative Worldwide Solar Power Production in MW, 1993-2006, at a 30% Capacity Factor)

Four categories of risk in innovations:

• Individual-indirect: automobiles

• Individual-direct: pharmaceuticals

• Community-indirect: pesticides

• Community-direct: nuclear power plants

All innovations carry a benefit as well as a cost. In economics, actions are defined as having an expected utility: the expected benefits minus the expected costs.
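The benefit-minus-expected-cost framing can be made concrete with a toy calculation. All numbers below are hypothetical and chosen only to illustrate the poster's argument that a large global benefit can outweigh a small probability of local harm.

```python
# Toy expected-utility comparison for adopting a risky innovation.
# All numbers are hypothetical placeholders, for illustration only.
def expected_utility(benefit, p_harm, harm):
    return benefit - p_harm * harm

# An innovation with a large benefit and a small chance of local harm...
adopt = expected_utility(benefit=100.0, p_harm=0.01, harm=500.0)
# ...versus doing nothing and keeping a larger background risk.
status_quo = expected_utility(benefit=0.0, p_harm=0.20, harm=300.0)

print(adopt, status_quo, adopt > status_quo)
```

Even though the innovation carries a nonzero chance of harm, its expected utility here exceeds the status quo's, which is the sense in which "maximize value, not just minimize risk" is meant.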

(Figure: diffusion over time, contrasting the community-direct and community-indirect effects, from early adopters to wide adoption)


The Feasibility of Soybean Biodiesel
Amanda Guy

Department of Economics, Stanford University, Stanford, California 94305

Abstract

Background

Conclusions

Environmental Effects

The soybean biodiesel industry is experiencing tremendous growth, from fewer than 10 biodiesel plants in 2000 to 105 plants currently operating and 77 more under construction. Biodiesel production capacity is projected to reach 1.7 billion gallons by 2008. US policy will determine the direction and future growth of this industry through tax incentives, subsidies, and regulation.

• Without favorable public policy, soybean biodiesel is not cost competitive with petrodiesel on a price per gallon basis

• Soybean biodiesel is a cost effective way to reduce emissions in niche markets and as a fuel additive

• Currently US regulation, subsidies, and tax incentives are making biodiesel more competitive with petrofuels

• However, biodiesel will not be able to supply the entire US market

• The US should ultimately seek renewable energy sources that do not compete with the food supply

• Nonfood feedstocks that can grow on marginal land with few inputs are a better long term solution

• The future of biofuels lies in synfuel hydrocarbons or cellulosic ethanol, which can provide long-term sustainable energy with greater environmental benefits than soybean biodiesel

Main Points

Biodiesel plant located in Ralston, Iowa

• Biodiesel greatly reduces emissions of CO and CO2

• Contains less harmful hydrocarbons than petrofuels

• Can reduce up to 20% of tailpipe particulate emissions

• Overall greenhouse gas emissions are 59% of those from an equivalent amount of diesel fuel

• A high cetane rating improves engine performance and reduces emissions

• It is biodegradable and non-toxic

• Soybean production spreads agrichemicals, such as pesticides, N and P

• Agrichemicals can reduce biodiversity and elevate nitrate and nitrite levels in the water supply

• Land clearing for soybean farming releases significant amounts of CO2

• Land clearing for large-scale biofuel production threatens habitats

• Soybean biodiesel produces more nitrogen oxides than petrodiesel

Figure 2: Soybean biodiesel emissions compared to Petrodiesel

Soybean biodiesel, a fuel comprised of part soybean oil and part diesel fuel, is gaining popularity as an alternative fuel in the United States. Although biofuels benefit the environment and the US agricultural industry, they may lack economic competitiveness and an adequate potential supply. The soybean biodiesel industry currently relies on government subsidies and tax incentives to provide competitive prices relative to petrofuels. Whether or not these current subsidies are worthwhile depends upon the long-term viability of the soybean biodiesel market and its relative benefits compared to the existing petrofuel system. My research examines the net energy gain, environmental benefits, economic competitiveness, and potential supply in order to determine the impact of soybean biodiesel on the economy and the environment.

As the largest global producer of greenhouse gas emissions, the US faces the challenge of reducing its consumption of petrofuels. Beyond global warming, the finite supply and price volatility of petrofuels provide another rationale for adopting new energy sources. One strategy is to substitute petrofuels with sustainable, environmentally beneficial biofuels, such as soybean biodiesel.

• Soybean biodiesel is part soy oil, part diesel fuel, and has environmental and engine lubricity advantages over pure petrodiesel
• It can be used in most diesel engines without modifications
• It is currently popular within the agricultural and transit industries
• The cost of biodiesel can be lowered through process improvements and economies of scale, but targeted policy is necessary to make biodiesel cost competitive with petrodiesel

Net Energy Balance (NEB): Energy output of biodiesel - Energy inputs

Figure 1: Net Energy Balance of soybean biodiesel vs. other Fuels

• The most accurate NEB values come from life cycle studies that measure all costs of soybean production from the extraction of all raw materials to the final end-use of the fuel

• Soybean biodiesel has a positive NEB since it does not require more energy to make than it yields

• Soybean biodiesel produces 3.2 units of fuel energy for every energy unit of fossil fuel consumed in its production

• Petrodiesel produces only 0.84 units of fuel energy per unit of fossil fuel energy consumed
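The NEB definition and the two fossil-energy ratios quoted above can be tied together in a short sketch. Only the ratios 3.2 and 0.84 come from the poster; the per-unit framing is an illustration.

```python
# NEB as defined above: energy output minus energy inputs.
def net_energy_balance(energy_out, energy_in):
    return energy_out - energy_in

# Ratios quoted on the poster: units of fuel energy delivered per unit
# of fossil energy consumed in production.
biodiesel_ratio = 3.2
petrodiesel_ratio = 0.84

# Per unit of delivered fuel energy, fossil input = 1 / ratio.
neb_biodiesel = net_energy_balance(1.0, 1.0 / biodiesel_ratio)     # positive
neb_petrodiesel = net_energy_balance(1.0, 1.0 / petrodiesel_ratio) # negative

print(round(neb_biodiesel, 4), round(neb_petrodiesel, 4))
```

A ratio above 1 is exactly the condition for a positive NEB, which is why biodiesel's 3.2 versus petrodiesel's 0.84 is the headline comparison.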

Benefits: Costs:

Economic Competitiveness

Industry Structure:
• In 2006, 53 biodiesel plants had a production capacity of 354 million tons
• The biodiesel industry has the processing capacity to increase production rapidly as demand increases (See Figure 4)
• The industry has moved from small batch plants to larger-scale continuous producers

Demand for Biodiesel:
• The amount of biodiesel demanded has been historically low outside of niche markets due to its high prices compared to petrodiesel (See Figure 3)
• However, recent subsidies, tax incentives, and environmental regulation, in addition to rising petrofuel prices, have raised demand
• There is also consistent demand for soybean biodiesel as a fuel additive

Biodiesel Infrastructure:
• Can be used with many current vehicles – “Change your fuel, not your car”
• There are about 600 biodiesel gas stations in the US, compared to over 165,000 petrofuel gas stations
• Most soybean biodiesel sold today is delivered directly to the consumer by small, fragmented producers

Uncertainties:
• Biodiesel’s effect on food supplies
• The effect fluctuating soybean prices will have on biodiesel’s profitability

Figure 4: US biodiesel production and installed capacity, 2000-2006

Potential Supply

Current Situation:
• Soy oil is a low-priced byproduct of soy meal available in relatively large volumes
• However, US diesel consumption is 23 billion gallons and rising
• The US will produce 250 million gallons of biodiesel in 2007 (only 0.4% of US diesel consumption)

Genetic/Agronomic Improvement:
• Feedstock supply could increase through additional acreage, improved soybean varieties, and the use of idle crop land
• However, soybeans need to be produced on valuable farm land, which limits the amount of idle crop land that can be devoted to their use
• Genetic modifications can increase disease resistance and oil content while reducing the need for soil tillage

Production Limitations:
• Total soybean feedstock supply in the United States is limited to 10% of current diesel consumption
• A portion of soybean production must be diverted toward domestic food production
• The increasing demand for biodiesel will tend to increase feedstock prices and the production costs for biodiesel manufacturers
• The soybean biodiesel industry’s biggest challenge may be the ability of the feedstock supply to keep up with growing demand

Figure 3: Relative prices of Soybean Biodiesel, Wholesale diesel, and Retail Diesel

For More Information:

Net Energy Balance

"The use of vegetable oils for engine fuels may seem insignificant today. But such oils may become in course of time as important as petroleum and the coal tar products of the present time."

- Rudolf Diesel, 1912

Diesel designed his engine to run on peanut oil.

Please contact Amanda Guy at [email protected] for any further questions that you may have

(Figure 3 plot: cents per gallon, 0-350, monthly from 1/02 to 5/04, for wholesale diesel, retail diesel, and soybean biodiesel)

On the Internet:
National Biodiesel Board: http://www.biodiesel.org/
United Soybean Board: http://unitedsoybean.org/
CRS report for Congress: http://www.nationalaglawcenter.org/assets/crs/RL32712.pdf


Abstract Advantages

Conclusions

The Plasma Trash Converter: Advantages and Disadvantages
Francisco Garcia

Department of Economics, Stanford University, Stanford, California 94305

For Further Information

In this paper, I perform a cost-benefit analysis of the plasma trash converter, a device that converts municipal solid waste into energy. The importance of the plasma trash converter is that it allows for the generation of renewable electrical energy and the production of a byproduct with environmentally acceptable properties. In my analysis I look at the potential advantages and disadvantages of implementing plasma trash converters as a means to eliminate municipal solid waste, as opposed to current methods such as incinerators and landfills. From these results I determine whether plasma trash converters are an efficient means of recovering clean energy from waste, without high emissions, at affordable costs.

Economic Business Plan

Gas Component          Percentage of Synthetic Gas
Hydrogen               41 - 53%
Carbon Monoxide        26 - 31%
Carbon Dioxide         8 - 10%
Methane                0.5 - 2.5%
Nitrogen               9 - 16%
Complex Hydrocarbons   0.5 - 1%

Table 1: Constituents of the Synthetic Gas

Figure 2: California waste disposal amounts and potential electrical energy

• Plasma pyrolysis can be used to dispose of the municipal solid waste in landfills in a way that will not be extremely harmful to the environment.

• Cities can gain long term profits from pyrolysis by selling off the excess energy produced from the municipal solid waste.

• The hydrogen gas produced from pyrolysis can be used to supply hydrogen for hydrogen fuel-celled vehicles.

• If the plasma trash converter is implemented, larger cities would no longer have to pay large annual fees to ship their trash to landfills (New York City spends half a billion dollars annually to get rid of municipal solid waste).

• The plasma trash converter is able to handle a wide variety of different wastes at the same time so cities do not have to invest in waste sorting facilities.

• The plasma trash converter reduces overall waste volume by around 95% and overall mass by 80%. The waste is encapsulated in a glassy slag that prevents it from leaching out into the environment.

Above: Illustration of Plasma Trash Converter

• The plasma trash converter is a legitimate technology for the disposal of municipal solid waste and it provides a possible supply of hydrogen for use in fuel-cell cars or other new fuel technologies.

• Of the available technologies for municipal solid waste disposal the plasma trash converter has the least negative effect on the environment but the plasma trash converter is also the most expensive technology to implement.

• Test facilities should be built so that the ecological-economic efficiency (EEE) of the plasma trash converter can be calculated:
EEE = (NEB of Technology A - NEB of Technology B) / (NPV of Technology A - NPV of Technology B)

• The plasma trash converter disposes of waste more efficiently than other existing technologies once the initial capital requirements have been satisfied.

• A business plan can be made for the plasma trash converter, although business plans should be site-specific.

• Future investments must be made in the further development of the plasma trash converter in order to reduce costs and make the technology more financially feasible.
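The ecological-economic efficiency ratio proposed in the conclusions can be sketched numerically. The NEB and NPV inputs below are hypothetical placeholders, not measured values for any facility.

```python
# EEE as defined in the conclusions: extra net energy balance gained per
# extra dollar of net present value given up when choosing technology A
# over technology B. All input values are hypothetical placeholders.
def eee(neb_a, neb_b, npv_a, npv_b):
    return (neb_a - neb_b) / (npv_a - npv_b)

# Hypothetical comparison: plasma conversion (A) recovers more energy per
# ton than incineration (B) but has a more negative net present value.
value = eee(neb_a=600.0, neb_b=450.0, npv_a=-20e6, npv_b=-5e6)
print(value)  # negative: energy is gained at a net financial cost
```

The sign and magnitude of this ratio are exactly what the proposed test facilities would pin down: whether the extra energy recovered justifies the extra cost.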

Alm, Eric, et al. (2004). Alternative Technologies for New York City’s Waste Disposal. Presented to the City of New York City Council. Columbia University, New York, NY. 20 August 2004.

Tellini, M., R. Del Rosso, P. Centola, and P. Gronchi (2003). “Hydrogen from Waste.” Chemical Engineering Transactions, 4.

Tendler, Michael, et al. (2005). “Plasma Based Waste Treatment and Energy Production.” Plasma Physics and Controlled Fusion, 19 April 2005.

Ludwig, Christian, et al. (2003). Municipal Solid Waste Management: Strategies and Technologies for Sustainable Solutions. Springer, New York.

Figure 1: Typical particulate emissions from plasma trash converters and average incinerators.

Method of Treatment        Cost of Treatment (Euros/t)
                           Min     Max
Landfill                   105     160
Traditional Incineration   100     140
Plasma - No WTE            100     120
Plasma - WTE               70      80

Table 2: Economic Indexes of waste treatment methods

Three cases, all with a total capital cost of $79.2 million for a plasma trash conversion facility operating at a capacity of 500 tons/day. Capital financed for the project would be at 5.75 percent interest for 20 years, with two payments a year. 400 lbs of slag are produced per ton, and the slag can be sold for $15/ton.

Case 1: Plasma facility producing 600 kWh per ton of waste, sold at 2.5¢/kWh with government incentive payments of 1.8¢/kWh. Private industry and local government finance 42.5% of the total cost, and a government grant covers the remaining 57.5%.

Case 2: Plasma facility producing 600 kWh per ton of waste, sold at 4.5¢/kWh with government incentive payments of 1.8¢/kWh. Private industry and local government finance the total cost.

Case 3: Plasma facility producing 600 kWh per ton of waste, sold by a utility company at 6.72¢/kWh with a government tax credit of 1.8¢/kWh. Private industry and local government finance the total cost.

• Plasma trash converters can emit dioxins.

• Plasma trash converters require a large amount of electricity to keep the plasma torches running efficiently. Furthermore, this energy use may have an adverse effect on the environment.

• Because plasma pyrolysis is a complex and new technology, some designs may be inefficient and subject to frequent equipment failure.

• Plasma trash converters have a high initial capital cost as well as significant installation requirements.

• Operating costs including electricity and consumables (plasma torches have a limited life span) may be significant.

• Although the slag is believed to be safe for re-use, there may not be a private market for the material because of its content.

Disadvantages

          Capital Investment                             Tipping Fee for MSW
          Government Grant    Industry and Local         at break-even (per ton)*
                              Government (Financed)
Case 1    $45,562,000         $33,608,000                $35
Case 2    $0                  $79,170,000                $44
Case 3    $0                  $79,170,000                $35

* $35 per ton is the current tipping fee for a landfill
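The capital-recovery arithmetic behind these cases can be sketched from the financing terms stated above (5.75% interest, 20 years, two payments a year). Operating and maintenance costs are not given on the poster, so the break-even tipping fees in the table cannot be reproduced exactly; the sketch below only shows debt service and revenue per ton for Case 2, with everything else taken from the case descriptions:

```python
# Sketch of the Case 2 financing arithmetic: 500 tons/day facility,
# $79.17M fully financed at 5.75%/yr, semiannual payments over 20 years.
# Operating costs are not stated on the poster, so this covers only
# debt service and per-ton revenue, not the full break-even fee.

def semiannual_payment(principal, annual_rate, years):
    """Level payment on a loan with two payments per year (standard annuity)."""
    r = annual_rate / 2          # periodic rate
    n = years * 2                # number of payments
    return principal * r / (1 - (1 + r) ** -n)

principal = 79_170_000
annual_debt_service = 2 * semiannual_payment(principal, 0.0575, 20)

tons_per_year = 500 * 365                      # 182,500 tons
debt_service_per_ton = annual_debt_service / tons_per_year

# Case 2 revenue per ton: 600 kWh sold at 4.5 cents plus a 1.8 cent
# incentive, plus 400 lbs (0.2 tons) of slag sold at $15/ton.
electricity_revenue = 600 * (0.045 + 0.018)    # $37.80
slag_revenue = 0.2 * 15                        # $3.00
revenue_per_ton = electricity_revenue + slag_revenue

print(round(debt_service_per_ton, 2), round(revenue_per_ton, 2))
```

Debt service works out to roughly $37 per ton against about $41 of per-ton revenue, which shows why the break-even tipping fee in the table is driven largely by operating costs rather than by capital recovery alone.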

Background

• Nearly half of the world's growing population lives in urban areas, placing enormous pressure on the local environment.

• Affluent industrialized economies are facing an ever-increasing load of waste and declining landfill space.

• Sustainable management of waste with the overall goal of minimizing its impact on the environment in an economically and socially acceptable way is a challenge for the coming decades.

• The plasma trash converter is important to this goal because it allows for the destruction and conversion of waste to energy, while having relatively small impacts on the environment.

Page 7: These are examples of student posters in a previous class. Note: The class was on more general issues of innovation.

Recommendations

1) India should institute a comprehensive plan, of which microcredit is one part, to address poverty and women's issues, including programs tackling food assistance, health insurance, and legal education.

2) Given India's extremely low ratio of microcredit staffers to borrowers, India's primary focus should be providing resources to educate borrowers and help manage accounts rather than carelessly expanding the number of loans.

Methodology

This paper will test the following hypothesis: the quality of microcredit in India decreases as its diffusion increases.

"Quality" is defined as the ability of microcredit to achieve three criteria: 1) start or fund the continuation of an entrepreneurial business, 2) empower women, and 3) lift borrowers out of poverty. As there is not a lot of data, much of the analysis is qualitative.

Presentation of Data

The exponential growth of microcredit in India can be attributed to a large un-served market with an estimated need of US$100 billion in microcredit loans, government encouragement, and a rapid increase in the number of MFIs in India.

Empowerment of Women?

• While microcredit has certainly helped women, there are many areas for improvement in targeting women's welfare, and clear tradeoffs that women have to make.

• Were women who took loans already empowered, or were they empowered as a result of the loans?

Examining the Quality of Microcredit Diffusion: A Case-study on India

Kabir Chadha

Department of Economics, Stanford University

Solving the Poverty Problem?

While giving a monetary sum to a person living in poverty is bound to help in some way, the following are concerns regarding the effectiveness of the program:

• Only 5% of program participants are raised out of poverty; as the Indian population in poor areas increases by 1.8% per year, microcredit serves to barely hold back an increase in poverty.

• Little durability of poverty reduction.

• Loans are used to pay off existing loans.

• Not suited to all poor people equally: those with poor oral math skills opt for public wage-subsidization programs.

• Unable to reach the poorest of the poor and thus not a holistic solution to poverty.

Since many have neither the skills nor the inclination to be entrepreneurs, why is microcredit spreading?

Background

In 2005, the United Nations announced the Year of Microcredit, urging its members to support microcredit programs because of microcredit's potential for "the eradication of poverty, its contribution to social development and its positive impact on the lives of people living in poverty."

India, host to one quarter of the world’s poor, has experienced speedy growth in the microcredit industry.

This growth has primarily come through the proliferation and management of Self Help Groups (SHGs) that collectively manage funds and encourage repayment of loans.

Conclusion

• We find our hypothesis to be qualitatively true: the social performance of microcredit loans decreases as the number of loans in India increases.

• This paper finds that microcredit cannot singlehandedly solve India’s poverty issues and neither can it fully empower women to the degree that is needed.

• After over 10 years of rapid growth, microcredit is still struggling to hold back poverty.

Additional Information

• http://www.microcreditsummit.org

• MicroBanking Bulletin

• Center for Microfinance at Institute for Financial Management and Research

Figure 1: Exponential increase in Indian microcredit

Quality of Microcredit in India = 1 / Diffusion

Figure 2: Rapid growth of active borrowers in Andhra Pradesh serves as a model for the entire nation

Figure 3: Inverse relationship between financial and social performance

Year    Borrowers       Loan Amount (US$)
1996    200,000         4 million
2006    17,000,000      1.3 billion
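The growth behind Figure 1 can be checked directly from these figures: going from 200,000 borrowers in 1996 to 17 million in 2006 implies a compound annual growth rate of roughly 56%, which is what justifies calling the diffusion exponential. A minimal sketch of that arithmetic:

```python
# Compound annual growth rate (CAGR) implied by the borrower and loan
# figures in the table above (1996 vs. 2006).

def cagr(start, end, years):
    """Constant annual growth rate turning `start` into `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

borrower_growth = cagr(200_000, 17_000_000, 2006 - 1996)
loan_growth = cagr(4_000_000, 1_300_000_000, 2006 - 1996)

print(f"borrowers: {borrower_growth:.1%}, loans: {loan_growth:.1%}")
# -> borrowers: 55.9%, loans: 78.3%
```

Loan volume grew even faster than the borrower count, meaning the average loan size was also rising over the decade.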

Figure 2: A typical SHG in Andhra Pradesh

Objective

This paper aims to assess the validity of the claim that as microcredit in India spreads, the quality of transactions decreases.

POSITIVE IMPACT                                    NEGATIVE IMPACT
Handling money                                     Only serving as a liaison between the MFIs and the husband
Operating independent businesses                   Using money to repay past loans instead of starting a business
Earning for the family and making                  Time away from family and female children
the spending decisions

Figure 4: Positive and negative impacts of microfinance on female empowerment

Number of borrowers per staffer
Afghanistan      54
Bangladesh      131
India           439

Figure 5: India must improve its staffer-to-borrower ratio

Page 8:

Abstract

The biopharmaceuticals industry is characterized by high demand for therapeutic drugs but manufacturing bottlenecks in the production of these drugs. Biopharming is an innovative technology in the biotechnology industry that involves engineering the genes of plants to allow them to produce proteins used in producing these drugs. The main advantages that biopharming has over other methods of drug production are 1) it is much less expensive than cell culture or animal production methods, and 2) it is relatively easy and inexpensive to scale up, allowing producers to realize economies of scale quickly. The disadvantage of biopharming is that there is a large risk of environmental contamination, and thus more costs associated with complying with environmental regulations. As biopharming develops over the next decade, we expect to see a potential solution to the drug manufacturing bottleneck, lower producer costs, potentially lower consumer costs, and increased investment and innovation in the biopharmaceuticals industry.

What is Biopharming?

"Biopharming" is the use of plants as bioreactors. That is, plants are genetically modified to allow them to produce biopharmaceutical proteins. The major breakthrough for biopharming came in 1989, when functional antibodies were first produced in tobacco leaves. Since then, researchers have been developing new techniques and experimenting with a variety of plant species to maximize the effectiveness, duration, and yield of proteins produced through biopharming.

Currently, biopharming is used to produce 4 types of proteins:

1. Parenteral Therapeutics & Pharmaceutical Intermediates – hemoglobin, human growth hormone – ultimately used to produce drugs to treat human diseases

2. Industrial proteins – enzymes used in pharmaceutical manufacturing

3. Monoclonal Antibodies – a special type of protein that can precisely target specific antigens, used to treat diseases such as cancer, arthritis

4. Vaccines – can potentially be taken orally: herpes, cholera, human serum albumin vaccine

Why use Biopharming?

Biopharming has been touted as the newest solution to the production bottleneck that characterizes the biopharmaceutical industry. Biopharming allows relatively low-cost mass production of drugs without sacrificing quality. The following table compares biopharming to other biopharmaceutical production agents:

Economic Impact of Biopharming on the Biopharmaceutical Industry

Breaking the Manufacturing Bottleneck. There is currently a huge gap between demand for pharmaceutical drugs and supply. This is because current methods of production do not allow ease of scale-up to meet the large demand. Biopharming allows cost-effective and relatively easy scale-up of drug manufacturing, and is one solution to the manufacturing bottleneck.

Lower Producer Costs. Adoption of biopharming lowers producer costs in two ways. 1) Per-unit costs of protein production decrease dramatically. 2) Biopharming inherently encourages large-scale production since costs of scale-up are low. As producers increase production, they will see even lower costs. Although there are costs associated with abiding by non-contamination regulations, the total costs of production are still much lower compared to other methods of drug production.

Potentially Lower Consumer Costs. Studies have been mixed on the net effect of biopharming on consumer surplus. On the one hand, adoption of biopharming will certainly lower producer costs. However, drug patents allow pharmaceutical companies to leverage monopoly power in pricing drugs. This suggests that even as biopharming is adopted, producers can continue to keep drug prices above actual cost levels. Some studies have predicted that consumer surplus will not change very much. Other studies suggest that as costs of production fall, more producers will enter the market and produce better substitutes for drugs that are already on the market. In the end, consumers will see a decline in prices as drug producers compete with each other.

Increased Investment. One of the major roadblocks to investment in the pharmaceutical industry is the inherent risk of investing in the development of a drug. There are high fixed costs which can only be recouped if a drug is clinically successful and can go to market. The drug development pipeline takes roughly 10 years to go from development to final approval, and many factors can disrupt approval of a drug, ranging from patent issues to clinical failures. Biopharming has much lower fixed costs and shorter approval timeframes, meaning lower risk for investors and thus potentially increased investment in the industry.

Increased Innovation. It has only been a few years since biopharming technology has reached a level where it has become feasible to implement in production. With the increased regulatory scrutiny surrounding biopharming and the initial successes of biopharm-produced drugs, researchers are putting much more effort into exploiting new plants, developing new techniques to comply with regulations, and identifying new medical indications to target.

Biopharming: Advantages, Disadvantages and Economic Impact on the Biopharmaceutical Industry

Lisa Huang

B.A. Candidate, Department of Economics, Stanford University

Advantages of Biopharming Disadvantages of Biopharming

Overview of the Biopharmaceutical Industry

I. A rapidly growing industry:

• 84 biopharmaceuticals on the market serving 60 million patients worldwide, for a cumulative market value of $20 billion

• 15% growth rate for biopharmaceuticals vs. 7-8% for small molecules over next decade

• 500 biopharmaceuticals are estimated to be in clinical trials globally, 378 of which are in earlier stages (Phase I and II), while 122 are in Phase III or awaiting FDA approval

• 6 or 7 new large-molecule drugs to reach the market each year over the next several years

Figure: Sales of Therapeutic Antibodies. Source: Journal of Biotechnology

II. Large Demand, Shortage of Supply:

• Some pharmaceutical drugs must be given in high doses, e.g. monoclonal antibody-based drugs

• Estimated 20-50% of potential therapeutics industry-wide could be delayed over the next decade due to the lack of manufacturing capacity

• To meet the expected demand for new drug production, more than three times the current production capacity is required

Case Study: Enbrel

1998  Immunex introduces Enbrel, a biotech drug used to treat rheumatoid arthritis. Enbrel is produced in 10,000-liter bioreactors of cultured cells.

2001  Success of Enbrel generates high demand for the drug; by March, there is a shortage of supply.

2002  By March, there is a waiting list of 13,000 patients. Immunex starts rationing Enbrel sold to pharmacies. Immunex launches a new production facility in Germany, but this will take 5 years and $450 million to complete.

Now   Continued shortage of supply.

1. Lower Costs

Fixed Capital Costs
A mammalian cell culture fermentation plant costs $450 million and takes 4-7 years to build and approve. A corn purification facility with the same protein production capacity costs $80 million and takes 3-5 years.

Marginal Production Costs
In large-scale production, costs for plant bioreactors are 4-5 times lower than for animal cell bioreactors.

Figure: Production costs of the protein Immunoglobin A using mammalian cell culture, transgenic goats, and transgenic plants. Source: TRENDS in Plant Science

2. Scalability
The large-scale farms needed to grow transgenic plants are readily available. To meet demand, producers only need to plant more seeds and cultivate more acres of land. Although initial set-up costs are high, maintenance costs are low, so there are economies of scale.

3. Safer
In animal production systems, viruses are often used to introduce the target gene to animal cells. There is thus the risk of virus mutation or creation of animal prions, both of which pose risks to humans. Since the only viruses used for plant production are plant viruses, there is little infection risk to humans.

                           Transgenic   Yeast     Bacteria   Mammalian       Transgenic
                           Plants                            Cell Cultures   Animals
Production Costs
 (per gram of protein)     $10-20       $50-100   $50-100    $500-5000       $20-50
Scale-up Costs             Low          High      High       High            High
Storage Costs              Low          Low       Low        High            High
Time Effort                High         Medium    Low        High            High
Distribution               Easy         Medium    Medium     Difficult       Difficult
Production Yields          High         High      Medium     Med-High        High
Product Homogeneity        High         Medium    Low        Medium          Low
Can be used as delivery
 vehicle for drug?         Possible     No        No         No              Yes
Viral Contamination Risks  Low          Unknown   Yes        Yes             Yes
Ethical Concerns           Medium       Medium    Medium     Medium          High
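The per-gram cost ranges in the comparison translate into very different totals at production scale. As a rough illustration using the table's figures (the 200 kg batch size is borrowed, as an assumed example quantity, from the waste-disposal discussion below):

```python
# Total production cost for a 200 kg protein batch, using the per-gram
# cost ranges from the comparison table above.
GRAMS = 200 * 1000  # 200 kg

cost_per_gram = {                 # (min, max) in $/g, from the table
    "transgenic plants": (10, 20),
    "yeast": (50, 100),
    "bacteria": (50, 100),
    "mammalian cell cultures": (500, 5000),
    "transgenic animals": (20, 50),
}

for platform, (lo, hi) in cost_per_gram.items():
    print(f"{platform}: ${lo * GRAMS / 1e6:.0f}M - ${hi * GRAMS / 1e6:.0f}M")
```

At these ranges, a mammalian-cell run of this size would cost $100M-$1,000M versus $2M-$4M for transgenic plants, which is the cost gap the "Lower Costs" advantage describes.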

1. Risk of Environmental Contamination

There is a high risk that transgenic plants will cross-pollinate with wild plants if proper precautions are not taken. Once an engineered gene is introduced into the natural environment, there can be enormous impacts on the ecosystem. Wild plants that can produce very powerful proteins pose risks to humans and wildlife if they are accidentally consumed.

Another risk comes from possible cross-pollination between transgenic plants and food crops. Current regulations require a 100% guarantee of zero contamination for any farm producing transgenic food crops. To address this, technologies have been developed to prevent contamination, including:

• Isolation of transgenic plants in greenhouses
• Producing non-fertile plants using a "terminator" gene
• Manually stripping away plants' flowers so that no cross-pollination can occur
• Use of non-food crops for biopharming

2. Increased Costs of Compliance with Regulation

Because of the high risk of environmental contamination, transgenic plants are under strict regulations. The cost structure of biopharming companies is highly affected by measures to mitigate risks of contamination. Complying with regulations increases the costs of production and has the following economic effects:

• R&D money spent on developing technologies to prevent contamination, rather than on developing more efficient transgenic plants
• Research in transgenic techniques shifts from food (corn) to non-food (tobacco) crops
• Increased self-inspection costs to ensure non-contamination
• Possible outsourcing of transgenic plant farming to other countries

3. Waste Disposal

Producing 200 kg of antibody protein from corn can generate 400,000 kg of waste byproduct, which may contain another 100 kg of protein byproducts. In general, these byproducts have to be incinerated for safety and contamination issues. Waste disposal can thus be difficult.

For Further Information

Ko and Koprowski, 2005. Plant Biopharming of Monoclonal Antibodies. Virus Research, 111(1), pp. 93-100.

Elbehri, A., 2005. Biopharming and the Food System: Examining the Potential Benefits and Risks. AgBioForum, 8(1), pp. 18-25.

Daniell, Streatfield, and Wycoff, 2001. Medical Molecular Farming: Production of Antibodies, Biopharmaceuticals, and Edible Vaccines in Plants. TRENDS in Plant Science, 6(5), pp. 219-226.

Page 9:

Abstract

Conclusions

LASIK Eye Surgery: A Patient’s Cost-Benefit Analysis David Muramoto

Department of Economics, Stanford University

References

Laser in situ keratomileusis (LASIK) is currently one of the most popular and effective methods of surgical vision correction. This project investigates the upfront and long-term costs and benefits of LASIK eye surgery, using an expected net present value (ENPV) framework to model the patient implementation choice. In approximating the direct procedural and expected complication costs and long-term vision correction savings, this analysis concludes that LASIK eye surgery can potentially yield significant positive NPV to prospective patients. Exact valuations vary by individual patient characteristics, including income level, time preference, and the personal value attached to avoiding alternative methods of vision correction.

Benefits

Upfront

- Personal Value: Convenience and cosmetic values from avoiding alternative vision correction; involves strong heterogeneity

Long-term

- Savings from avoiding alternative methods of vision correction

Excluded

- Potential value from improved vision not offered through alternative correction methods

Costs

Upfront

- Costs of surgery: $3,984 (2007 average)

- Foregone wages: Average of 2 workdays lost; total effect varies by income level

Long-term

- Expected costs of complications

Excluded

- Disutility from primary surgery and potential re-treatment due to complications

- Transportation costs: to and from the eye center for surgery and for pre- and post-operation evaluations; considered negligible in overall decision analysis

Analysis

Cost-effectiveness is formulated using an ENPV model

Assumptions

- Time horizon is 20 years to account for onset of presbyopia, age-related near-vision deterioration

- Savings from alternative vision correction and discount rate are constant

- Patients are risk-neutral

Model Specification

- Expected costs of complications are included using the costliest method of re-treatment (following Lamparter, Dick, and Krummenauer [2])

- Complications are independently distributed

- Foregone wages, personal value, and savings are patient-specific and incorporated using a continuum of values to find their break-even (ENPV = 0) levels
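The ENPV model described above can be sketched in a few lines. The surgery cost ($3,984), expected complication cost ($700), 5% discount rate, and 20-year horizon come from the poster; the foregone-wage and personal-value inputs below are illustrative placeholders, since the poster sweeps those over a continuum, and treating the expected complication cost as a single upfront charge is a simplifying assumption:

```python
# Sketch of the patient's expected-NPV (ENPV) calculation for LASIK:
# upfront costs and personal value at t=0, constant annual savings
# discounted over a 20-year horizon at 5%.

def annuity_factor(rate, years):
    """Present value of $1 per year for `years` years at discount `rate`."""
    return (1 - (1 + rate) ** -years) / rate

def enpv(annual_savings, personal_value, foregone_wages,
         surgery_cost=3984, complication_cost=700,
         rate=0.05, years=20):
    upfront = personal_value - surgery_cost - foregone_wages - complication_cost
    return upfront + annual_savings * annuity_factor(rate, years)

def break_even_savings(personal_value, foregone_wages):
    """Annual savings at which ENPV = 0, at the poster's 5%/20-year terms."""
    return -enpv(0, personal_value, foregone_wages) / annuity_factor(0.05, 20)

# Illustrative inputs (not the poster's): $600 of foregone wages and a
# $1,500 personal value attached to avoiding glasses or contacts.
print(round(break_even_savings(personal_value=1500, foregone_wages=600), 2))
```

Higher foregone wages (i.e. higher incomes) raise the break-even savings level, which is the mechanism behind the poster's result that break-even savings vary by income.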

Fig. 1: LASIK procedure

Results

Time preference: 5% annual discount rate / Time horizon: 20 years / Expected cost of complications: $700

Break-even savings levels: $242, $282, $323
Break-even personal values: $1,360, $1,776, $2,193

Introduction

Pioneered in 1991, LASIK eye surgery is a two-step procedure in which a thin, outer layer of corneal tissue is folded back, enabling an excimer laser to reshape the cornea. Ideally, LASIK can correct even severe refractive errors and allow patients to forego alternative lenses completely. Though the risk of complication has diminished since the early 1990s, patients still face the possibility of double vision, poor night vision, and even permanent blindness. Despite these risks, the number of LASIK procedures performed in the United States has grown significantly, with approximately 1.3 million patients in 2005.

Fig. 2: Maximum incidence of LASIK complications [1]

1) Prospective patients face break-even personal values of between approximately one-third and one-half of the upfront surgical cost

2) Annual savings from alternative forms of vision correction of between approximately $200-$400 are required for ENPV > 0, depending on income level

3) LASIK is therefore a cost-effective treatment option for patients with high current vision correction costs and high personal values, provided their incomes (and thus foregone wages) are not prohibitively large

4) ENPV results will also vary significantly with alternative patient income levels, time horizons, and time preferences

With American healthcare expenditures continually rising, the cost-effectiveness of elective treatment solutions is a crucial patient concern. To investigate this issue, this project posed the following research question:

Fig. 4: ENPV for $300 annual savings and varying personal values by patient income level

Fig. 3: ENPV for $100,000 income level and varying annual savings by patient personal value

Is LASIK eye surgery a cost-effective treatment solution for a prospective patient?

For Further Information
Please contact David at [email protected]

1) Schallhorn, Steven C., Amesbury, Eric C. and David J. Tanzer. "Avoidance, Recognition, and Management of LASIK Complications." Am J Ophthalmol 2006; 141:733-739.

2) Lamparter, J., Dick, H.B. and F. Krummenauer. “Clinical Benefit, Complication Patterns and Cost Effectiveness of LASIK in Moderate Myopia.” Eur J Med Res 2005; 10:402-409.

Expected NPV Equation

Page 10: These are examples of student posters in a previous class. Note: The class was on more general issues of innovation.

AbstractThe biopharmaceuticals industry is characterized by high demand for therapeutic drugs but manufacturing bottlenecks in the production of these drugs. Biopharming is a innovative technology in the biotechnology industry that involves engineering the genes of plants to allow them to produce proteins used in producing these drugs. The main advantages that biopharming has over other methods of drug production are 1) it is much less expensive than cell culture or animal production methods, 2) it is relatively easy and inexpensive to scale-up, allowing producers to realize economies of scale quickly. The disadvantage of using biopharming is that there is large risk of environmental contamination, and thus there are more costs associated with complying with environmental regulations. As biopharming develops over the next decade, we expect to see a potential solution to the drug manufacturing bottleneck, lower producer costs, potentially lower consumer costs, and increased investment and innovation in the biopharmaceuticals industry.

What is Biopharming?“Biopharming” is the use of plants as bioreactors. That is, plants are genetically modified to allow them to produce biopharmaceutical proteins. The major breakthrough for biopharming came in 1989 when functional antibodies were first produced in tobacco leaves. Since then, researchers have been developing new techniques and experimenting with a variety of plant species to maximize the effectiveness, duration, and yield of proteins produced through biopharming.

Currently, biopharming is used to produce 4 types of proteins:

1. Parental Therapeutics & Pharmaceutical Intermediates – hemoglobin, human growth hormone – ultimately used to produce drugs

to treat human diseases

2. Industrial proteins – enzymes used in pharmaceutical manufacturing

3. Monoclonal Antibodies – a special type of protein that can precisely target specific antigens, used to treat diseases such as cancer, arthritis

4. Vaccines – can potentially be taken orally: herpes, cholera, human serum albumin vaccine

Why use Biopharming?Biopharming has been touted as the newest solution to the production bottleneck that characterizes the biopharmaceutical industry. Biopharming allows relatively low-cost mass production of drugs without sacrificing quality. The following table compares biopharming to other biopharmaceutical production agents:

Economic Impact of Biopharming on the Biopharmaceutical Industry

Breaking the Manufacturing Bottleneck. There is currently a huge gap between demand for pharmaceutical drugs and supply. This is because current methods of production do not allow ease of scale-up to meet the large demand. Biopharming allows cost-effective and relatively easy scale-up of drug manufacturing, and is one solution to the manufacturing bottleneck.

Lower Producer Costs. Adoption of biopharming lowers producer costs in two ways. 1) Per-unit costs of protein production decrease dramatically. 2) Biopharming inherently encourages large-scale production since costs of scale-up are low. As producers increase production, they will see even lower costs. Although there are costs associated with abiding by non-contamination regulations, the total costs of production are still much lower compared to other methods of drug production.

Potentially Lower Consumer Costs. Studies have been mixed on the net effect of biopharming on consumer surplus. On the one hand, adoption of biopharming will for certain lower producer costs. However, drug patents allow pharmaceutical companies to leverage monopoly power in pricing drugs. This suggests that even as biopharming is adopted, producers can continue to raise drug prices above actual cost levels. Some studies have predicted that consumer surplus will not change very much. Other studies suggest that as costs of production become cheaper, more producers will enter the market and produce better substitutes for drugs that are already on the market. In the end, consumers will see a decline in prices as drug producers compete with each other.

Increased Investment. One of the major roadblocks to investment in the pharmaceutical industry is the inherent risk of investing in development of a drug. There are high fixed costs which can only be recuperated if a drug is clinically successful and can go to market. The drug development pipeline takes roughly 10 years to go from development to final approval, and many factors can disrupt approval of a drug, ranging from patent issues to clinical failures. Biopharming has much lower fixed costs and approval timeframes, meaning a lower risk for investors and thus potentially increased investment in the industry.

Increased Innovation. It has only been a few years since biopharming technology has reached a level where it has become feasible to implement in production. With the increased regulatory scrutiny surrounding biopharming and the initial successes of biopharm-produced drugs, researchers are putting much more effort into exploiting new plants, developing new techniques to comply with regulations, and identifying new medical indications to target.

Biopharming: Advantages, Disadvantages and Economic Impact on the Biopharmaceutical Industry

Lisa Huang

B.A. Candidate, Department of Economics, Stanford University

Advantages of Biopharming Disadvantages of Biopharming

Overview of the Biopharmaceutical Industry

I. A rapidly growing industry: • 84 biopharmaceuticals on the market serving 60 million patients worldwide for a cumulative market value of $20 billion

• 15% growth rate for biopharmaceuticals vs. 7-8% for small molecules over next decade

• 500 biopharmaceuticals are estimated to be in clinical trials globally, 378 of which are in earlier stages (Phase I and II), while 122 are in Phase III or awaiting FDA approval

• 6 or 7 new large-molecule drugs to reach the market each year over the next several years

Figure: Sales of Therapeutical Antibodies. Source: Journal of Biotechnology

II. Large Demand, Shortage of Supply: • Some pharmaceutical drugs must be given in high doses, e.g. monoclonal antibody-based drugs

• Estimated 20-50% of potential therapeutics industry-wide could be delayed over the next decade due to the lack of manufacturing capacity

• To meet the expected demand for new drug production, more than three times the current production capacity is required

Case Study: Enbrel (1998)1998 Immunex introduces Enbrel, a biotech drug used to treat rheumatoid arthritis

Enbrel is produced in 10,000-liter bioreactors of cultured cells

2001 Success of Enbrel generates high demand for the drug By March, there is a shortage of supply

2002 By March, there is a waiting list of 13,000 patients Immunex starts rationing Enbrel sold to pharmacies Immunex launches a new production facility in Germany, but this will take 5 years and $450 million to complete

Now Continued shortage of supply

1. Lower Costs

Fixed Capital CostsA mammalian cell culture fermentation plant costs $450 million and 4-7 years to build and approve. A corn purification facility with the same

protein production capacity costs $80 million and 3-5 years to build and approve.

Marginal Production CostsIn large-scale productions, costs for using plant bioreactors is 4-5 times lower than for animal cell bioreactors

Figure: Production costs of the protein Immunoglobin A using mammalian cell culture, transgenic goats, and transgenic plants. Source: TRENDS in Plant Science

2. ScalabilityThe large-scale farms needed to grow transgenic plants are readily available. To meet demand, producers only need to plant more seeds and cultivate more acres of land. Although initial set-up costs are high, maintenance costs are low, so there are economies of scale.

3. SaferIn animal production systems, viruses are often used to introduce the target gene to animal cells. There is thus the risk of virus mutation or creation of animal prions, both of which pose risks to humans. Since the only viruses that are used for plant production are plant viruses, there is little infection risk to humans.

Transgenic

Plants

Yeast Bacteria Mammalian Cell Cultures

Transgenic Animals

Production Costs

(per gram of protein)

$10-20 $50-100 $50-100 $500-5000 $20-50

Scale-up Costs Low High High High High

Storage Costs Low Low Low High High

Time Effort High Medium Low High High

Distribution Easy Medium Medium Difficult Difficult

Production Yields High High Medium Med-High High

Product Homogeneity High Medium Low Medium Low

Can be used as delivery vehicle for drug?

Possible No No No Yes

Viral Contamination Risks

Low Unknown Yes Yes Yes

Ethical Concerns Medium Medium Medium Medium High

1. Risk of Environmental Contamination

There is a high risk that transgenic plants will cross-pollinate with wild plants if proper precautions are not taken. Once an engineered gene is introduced into the natural environment, there can be enormous impacts on the ecosystem. Wild plants that can produce very powerful proteins pose risks to humans and wildlife if they are accidentally consumed.

Another risk comes from possible cross-pollination between transgenic plants and food crops. Current regulations require a 100% guarantee of zero contamination for any farm producing transgenic food crops. To address this, technologies have been developed to prevent contamination, including:

• Isolation of transgenic plants in greenhouses
• Producing non-fertile plants using a "terminator" gene
• Manually stripping away plants' flowers so that no cross-pollination can occur
• Use of non-food crops for biopharming

2. Increased Costs of Compliance with Regulation

Because of the high risk of environmental contamination, transgenic plants are under strict regulations. The cost structure of biopharming companies is highly affected by measures to mitigate risks of contamination. Complying with regulations increases the costs of production and has the following economic effects:

• R&D money is spent on developing technologies to prevent contamination, rather than on developing more efficient transgenic plants
• Research in transgenic techniques shifts from food (corn) to non-food (tobacco) crops
• Increased self-inspection costs to ensure non-contamination
• Possible outsourcing of transgenic plant farming to other countries

3. Waste Disposal

Producing 200 kg of antibody protein from corn can generate 400,000 kg of waste byproduct, which may contain another 100 kg of protein byproducts. In general, these byproducts have to be incinerated for safety and contamination issues. Waste disposal can thus be difficult.

For Further Information

Ko and Koprowski, 2005. Plant Biopharming of Monoclonal Antibodies, Virus Research. 111(1), pp. 93-100.

Elbehri, A, 2005. Biopharming and the food system: Examining the potential benefits and risks, AgBioForum. 8(1), pp.18-25.

Daniell, Streatfield, and Wycoff, 2001. Medical Molecular Farming: Production of Antibodies, Biopharmaceuticals, and Edible Vaccines in Plants, TRENDS in Plant Science. 6(5), pp. 219-226.

Page 11

Abstract

Conclusions

LASIK Eye Surgery: A Patient's Cost-Benefit Analysis
David Muramoto

Department of Economics, Stanford University

References

Laser in situ keratomileusis (LASIK) is currently one of the most popular and effective methods of surgical vision correction. This project investigates the upfront and long-term costs and benefits of LASIK eye surgery, using an expected net present value (ENPV) framework to model the patient implementation choice. In approximating the direct procedural and expected complication costs and long-term vision correction savings, this analysis concludes that LASIK eye surgery can potentially yield significant positive NPV to prospective patients. Exact valuations vary by individual patient characteristics, including income level, time preference, and the personal value attached to avoiding alternative methods of vision correction.

Benefits

Upfront

- Personal Value: Convenience and cosmetic values from avoiding alternative vision correction; involves strong heterogeneity

Long-term

- Savings from avoiding alternative methods of vision correction

Excluded

- Potential value from improved vision not offered through alternative correction methods

Costs

Upfront

- Costs of surgery: $3,984 (2007 average)

- Foregone wages: Average of 2 workdays lost; total effect varies by income level

Long-term

- Expected costs of complications

Excluded

- Disutility from primary surgery and potential re-treatment due to complications

- Transportation costs: To and from eye center for surgery and pre- and post-operation evaluations; considered negligible in overall decision analysis

Analysis

Cost-effectiveness is formulated using an ENPV model

Assumptions

- Time horizon is 20 years to account for onset of presbyopia, age-related near-vision deterioration

- Savings from alternative vision correction and discount rate are constant

- Patients are risk-neutral

Model Specification

- Expected costs of complications are included using the costliest method of re-treatment (following Lamparter, Dick, and Krummenauer [2])

- Complications are independently distributed

- Foregone wages, personal value, and savings are patient-specific and incorporated using a continuum of values to find their break-even (ENPV = 0) levels

Fig. 1: LASIK procedure

Results
Time Preference: 5% annual discount rate / Time Horizon: 20 years / Expected Cost of Complications: $700
Break-even savings levels: $242, $282, $323
Break-even personal values: $1,360, $1,776, $2,193
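The ENPV calculation can be sketched directly from the cost and benefit components listed above. This is a minimal illustration: the $300/day wage and $1,500 personal value are assumptions of mine, not the poster's inputs, so it is not calibrated to reproduce the poster's exact break-even figures.

```python
def enpv(upfront_cost, foregone_wages, personal_value,
         expected_complication_cost, annual_savings,
         discount_rate=0.05, horizon=20):
    """Expected net present value of LASIK to a risk-neutral patient.

    Upfront terms are paid at t = 0; annual savings from avoided glasses
    or contacts accrue at the end of years 1..horizon.
    """
    annuity = sum(1 / (1 + discount_rate) ** t for t in range(1, horizon + 1))
    return (personal_value - upfront_cost - foregone_wages
            - expected_complication_cost + annual_savings * annuity)

def break_even_savings(upfront_cost, foregone_wages, personal_value,
                       expected_complication_cost,
                       discount_rate=0.05, horizon=20):
    """Annual savings level at which ENPV = 0."""
    annuity = sum(1 / (1 + discount_rate) ** t for t in range(1, horizon + 1))
    return (upfront_cost + foregone_wages + expected_complication_cost
            - personal_value) / annuity

# Hypothetical patient: $3,984 surgery, $700 expected complication cost,
# two lost workdays at an assumed $300/day wage, $1,500 personal value.
print(round(break_even_savings(3984, 600, 1500, 700), 2))
```

Varying the assumed wage and personal value over a continuum, as the poster does, traces out the patient-specific break-even levels.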

Introduction

Pioneered in 1991, LASIK eye surgery is a two-step procedure in which a thin, outer layer of corneal tissue is folded back, enabling an excimer laser to reshape the cornea. Ideally, LASIK can correct even severe refractive errors and allow patients to forgo alternative lenses completely. Though the risk of complication has diminished since the early 1990s, patients still face the possibility of double vision, poor night vision, and even permanent blindness. Despite these risks, the number of LASIK procedures performed in the United States has grown significantly, reaching approximately 1.3 million patients in 2005.

Fig. 2: Maximum incidence of LASIK complications [1]

1) Prospective patients face break-even personal values of roughly one-third to one-half of the upfront surgical cost

2) Annual savings from alternative forms of vision correction of approximately $200-$400 are required for ENPV > 0, depending on income level

3) LASIK is therefore a cost-effective treatment option for patients with high current vision-correction costs and personal values, provided income (and hence foregone wages) is not prohibitively large

4) ENPV results will also vary significantly with alternative patient income levels, time horizons, and time preferences

With American healthcare expenditures continually rising, the cost-effectiveness of elective treatment solutions is a crucial patient concern. To investigate this issue, this project posed the following research question:

Fig. 4: ENPV for $300 annual savings and varying personal values by patient income level

Fig. 3: ENPV for $100,000 income level and varying annual savings by patient personal value

Is LASIK eye surgery a cost-effective treatment solution for a prospective patient?

For Further Information
Please contact David at [email protected]

1) Schallhorn, Steven C., Amesbury, Eric C. and David J. Tanzer. “Avoidance, Recognition, and Management of LASIK Complications.” Am J Ophthalmol 2006; 141:733- 739.

2) Lamparter, J., Dick, H.B. and F. Krummenauer. “Clinical Benefit, Complication Patterns and Cost Effectiveness of LASIK in Moderate Myopia.” Eur J Med Res 2005; 10:402-409.

Expected NPV Equation

Page 12

Motivation

Acknowledgments

Fitted projections

Conclusion

Innovation and Women's Rights
The use of parliamentary quotas to achieve equitable representation in elected government

Peter Banwarth, SIEPR Policy Forum, Stanford University, California 94305

Data Sources

Throughout history, women have never enjoyed equal representation in government. Although in most countries women have long had the right to vote, the average representation of women in national parliaments was just 11.7 percent in 2000. To combat this problem, an innovative policy approach was needed: Parliamentary quotas. A small number of countries had constitutional quota systems before 1990, but instituting quotas only began in earnest in the mid-1990s. Gender equality in government is extremely important to the principles of democratic governance. I ask the following questions: 1) has this innovation been widely adopted, and 2) have quotas been effective in increasing the share of women in representative government?

I use the Bass Model of Diffusion and OLS regression to determine the answers to these two questions, then present ideas for future research.

Effectiveness
Limitations of the Bass Model

Diffusion of Parliamentary Quotas
Moving Average vs. Bass Model Predictions

1990 – 2030

Cumulative Total of Parliaments 85

Coefficient of Innovation 0.0106

Coefficient of Imitation 0.2097

Contact Peter Banwarth at [email protected]

Effect of Quotas on Women’s Representation in National Parliaments

– World Average

Diffusion of Parliamentary Quotas

Year of Adoption

1990 Colombia * Nepal

1991 Argentina

1992

1993 Italy ^

1994 Eritrea

1995 Philippines Uganda

1996 Belgium Costa Rica Paraguay

1997 Brazil Dom. Republic Ecuador Kenya Panama Peru

1998 North Korea Venezuela *

1999

2000 France Honduras Tanzania

2001 Bolivia

2002 Djibouti Mexico Morocco Niger Pakistan Macedonia

2003 Indonesia Jordan Rwanda

2004 Afghanistan Burundi Iraq South Korea Somalia Uzbekistan

2005 Liberia Sudan

2006 Guyana Mauritania Portugal Serbia

2007 Spain

^ Declared unconstitutional in 1995

* Declared unconstitutional in 1999

Timeline of quota adoptions by national parliaments

1990 – 2007

Unfortunately, given the limited data points and the highly discrete values, the Bass Model, applied straightforwardly, is not sufficient to describe the diffusion of parliamentary quotas. Additionally, the cyclic nature of representative elections means that the yearly adoption of parliamentary quotas may be correlated across time.

However, by building a block bootstrap estimate using 3-year sample means, I have constructed a time series which I believe to be sufficiently accurate to accommodate the Bass Model.

Below are the fitted projection and parameters as determined by the Bass Model. They are not intended to be a precise description of the diffusion of this innovation, but rather a useful approximation to judge the timeline and eventual saturation of parliamentary quotas.

Bass Model Parameters
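Given the reported parameters, the fitted cumulative-adoption curve follows from the standard Bass formula F(t) = (1 − e^(−(p+q)t)) / (1 + (q/p)e^(−(p+q)t)). A minimal sketch:

```python
import math

# Reported Bass parameters for the diffusion of parliamentary quotas.
P = 0.0106   # coefficient of innovation
Q = 0.2097   # coefficient of imitation
M = 85       # eventual cumulative total of parliaments

def bass_cumulative(t, p=P, q=Q, m=M):
    """Cumulative adopters at time t under the standard Bass model."""
    e = math.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

# Projected cumulative adoptions, with t measured in years since 1990.
for year in (1990, 2000, 2010, 2030):
    print(year, round(bass_cumulative(year - 1990), 1))
```

As the text notes, this is a rough approximation for judging the timeline and eventual saturation, not a precise description of the diffusion.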

This chart clearly shows the positive effect that quotas have on women's representation in national parliaments. Because the years in which countries adopted their quotas varied, the data were standardized to time zero, the year the quota was instituted. For time < 0, the average representation of women among parliaments was 13.4 percent and grew very slowly over time. Upon adoption of quotas, the proportion of women in representative government grew much more strongly, reaching an average of 25 percent only 10 years after the quotas were instituted.

Results of OLS
OLS analysis corroborates the graphical results. Below are the OLS estimators and their standard errors.

                          (1)             (2)             (3)
Quota proportion          0.127 (0.015*)
log Quota proportion                      8.33 (3.095*)
Quota binary                                              3.5 (0.460*)
log GDP                   1.099 (0.667)   12.66 (5.02*)   1.11 (0.680)

(1) OLS of women’s proportionate representation on mandated quota proportion and log(GDP).

(2) OLS of women’s proportionate representation on log(mandated quota proportion) and log(GDP).

(3) OLS of women’s proportionate representation on existence of quota and log(GDP).

Standard errors in parentheses. (*) significant at p < 0.001
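Specification (3) can be illustrated with a short simulation. The data below are synthetic, generated with coefficients loosely echoing the reported estimates; they are not the poster's country-level dataset, and only the regression setup (a quota dummy plus log GDP) matches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: women's share of parliamentary seats as a function of a
# quota dummy and log GDP, plus noise. All values here are illustrative.
n = 200
quota = rng.integers(0, 2, n)            # 1 if a quota is in force
log_gdp = rng.normal(9.0, 1.0, n)        # log GDP per country-year
share = 13.4 + 3.5 * quota + 1.1 * log_gdp + rng.normal(0, 2.0, n)

# OLS via least squares on [constant, quota, log GDP].
X = np.column_stack([np.ones(n), quota, log_gdp])
beta, *_ = np.linalg.lstsq(X, share, rcond=None)
print({"const": round(beta[0], 2),
       "quota": round(beta[1], 2),
       "log_gdp": round(beta[2], 2)})
```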

Parliamentary quotas have had a beneficial effect on women's proportionate representation in elected government. The worldwide outlook for women's rights is promising given the strong diffusion of this innovative solution to gender inequality. Furthermore, parliamentary quotas are clearly more than just a statement of intent. Although many countries still have trouble meeting their quotas, every country that has instituted the quota system has made progress in increasing women's representation.

Next Steps

While increasing gender equality in government is a valuable goal in its own right, perhaps more important are the potential improvements to society that could come from representative gender equality. Women tend to spend a larger proportion of their private income on health and child welfare, which raises the question: do health and child welfare outcomes improve as a result of increased gender equality in government? Thanks to parliamentary quotas, this has become an answerable question, and a worthy pursuit.

Proportional Representation Data
Inter-Parliamentary Union. "Women in National Parliaments." http://www.ipu.org/wmn-e/classif-arc.htm.

Quota Data
International Institute for Democracy and Electoral Assistance. "Global Database of Quotas for Women." http://www.quotaproject.org/index.cfm.

To Professor Seema Jayachandran for help with Stata and advice on scaling variables.

To Professor Grant Miller for advice on choosing a topic for his class, “The Economics of Health Improvement in Developing Countries.”

To Professor Ward Hanson for help on the Bass Model and valuable comments on the poster.

Page 13


The Village Phone: Mobile Phones as a Solution to the Digital Telecommunications Divide

Ren Fung Yu

Department of Economics, Stanford University, Stanford, CA

1. First, a villager in Bangladesh takes out a microcredit loan from a microfinance institution such as Grameen Bank.

2. She uses the loan to purchase a “starter kit”: a phone, coverage and marketing collateral, for between US$250 and US$300 (Keogh, Wood).

3. The now “wireless woman” sells usage of the phone to villagers on a per-call basis. The loan is repaid gradually with business profits.

Future Challenges
• Telecommunications monopolies and protectionism impede the Village Phone's diffusion in many developing countries. In Bangladesh, the program did not spread until after market liberalization in 1996.

•Technological barriers, such as high costs of expanding rural wireless coverage and unreliable electricity sources might double future expansion costs (RDH).

•Village Phone networks in Bangladesh rely on inefficient GSM technology. Switching to wireless local loops (WLL) or other systems will require significant infrastructural investments or partnerships with other rural providers.

•Grameen Telecom’s business model relies on internal subsidies from urban cellular users and financing from Grameen Bank. Such support is unavailable in countries in which microfinance and supporting wireless networks are not already widespread.

(650) 799 6977, [email protected]

Background
The "global digital divide" is the technological disparity between the world's rich and poor countries stemming from the scarcity of top-down technological transfers and "trickle-down" effects. While developed nations have benefited from communications technology, poor nations are trailing behind. The unavailability of technology cripples their ability to gather information, coordinate economic activity, and increase their productivity. Mobile phones are a potential solution; the problem, however, lies in distribution rather than in the technology itself. The Village Phone Program is a shared-access program pioneered in Bangladesh to address this problem. This poster discusses the economic impact of the Village Phone and its expansion potential after 10 years of operation in Bangladesh.

The Village Phone Program

References
The Economist, Economic Focus, "Calling Across the Divide". March 10, 2005.

Grameen Foundation, “Village Phone: Connecting Technology and Innovation”, Grameen Foundation Website. http://www.grameenfoundation.org/what_we_do/technology_programs/village_phone

Keogh, D. and Wood, T., Village Phone Replication Manual, Grameen Technology Center, Grameen Foundation USA, USA, 2005.

Bayes, A., von Braun, J., Akhter, R., “Village Pay Phones and Poverty Reduction: Insights from a Grameen Bank Initiative in Bangladesh”. ZEF – Discussion Papers on Development Policy, Bonn University, June 1999.

Quadir, Iqbal. “The Power of the Mobile Phone to End Poverty (Speech)”, TED Talks, July 2005.

Richardson, D., Ricardo, R., Haq, M. "Grameen Telecom's Village Phone Programme: A Multi-Media Case Study". Telecommons Development Group, March 2000.

Conclusions
• Under favorable market conditions, the shared-access approach has the greatest potential for increasing rural telecommunications access.

• The high revenues generated by the shared-access model suggest the effectiveness and expansion possibilities of market-driven approaches to development.

• Potential benefits of shared-access programs include enhanced productivity, social welfare, and new sources of rural income.

• The Village Phone, a shared-access program, is still in its exponential growth stage. However, its complete diffusion potential is contingent on the long-term viability of microfinance institutions, future deregulation, and partnerships with WLL-based network companies.

Figure 4: “Phones have helped elevate the status of the female [VPOs]… VPOs become socially empowered as they earn an income… in rural Bangladeshi society, women usually have no say.” (Keogh, Wood)

                                    Economic Status
                             Extremely poor  Moderately poor  Non-poor
Would not try                      -               -             2.3
Telephone from other phone        26.5            26.5          43.0
Post office                        5.9             7.1           6.8
Transportation/hire someone       67.6            56.4          47.3
Other                              -               -             0.6
Total                            100.0           100.0         100.0

Economic Status    Hours through          Transport  Opportunity  Total    Total cost of   Consumer
                   alternative methods    costs      costs        costs    Village Phone   surplus
All poor                 3.67             60.89      34.32         95.21       17.35        77.86
Extremely poor           3.08             54.97      26.41         81.38       20.08        61.30
Moderately poor          4.15             65.82      40.89        106.71       15.07        91.64
Non-poor                 2.54             45.80      21.71         67.51       16.73        50.78
Entire sample            2.70             48.02      23.57         71.58       16.82        54.77

Decisions                      Decision Makers           Total
                           Self    Husband    Both
Family affairs             16.0     12.0      72.0       100.0
Utilization of GB credit   30.1     10.0      60.0       100.0
Income from phone          36.0      6.0      58.0       100.0

Diffusion and Growth
Microeconomic Effects
Consumer Surplus

Effects on Gender Equality

Bangladesh Teledensity

Village Phone Diffusion and Effects on Growth

Table 2: Consumer surplus from a phone call ranges from 2.64% to 9.8% of mean monthly household income. Trips to the city cost between 2 and 8 times the cost of a phone call, making real savings per call between 132 and 490 Taka ($2.70 to $10) (BBA).

Table 3: Studies show that women have greater control over microcredit loans and Village Phone revenue than other domestic issues (BBA).

Table 1: Alternatives to phone calls in villages without access to a Village Phone, based on a 1998 survey (BBA).
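The figures in Table 2 are internally consistent: the total cost of an alternative trip equals transport plus opportunity cost, and consumer surplus equals that total minus the cost of a Village Phone call. A quick check (the 60.89 transport-cost figure for "All poor" is inferred from the row sums):

```python
# Rows from Table 2 (in Taka): transport, opportunity, total alternative
# cost, Village Phone cost, consumer surplus.
rows = {
    "All poor":        (60.89, 34.32,  95.21, 17.35, 77.86),
    "Extremely poor":  (54.97, 26.41,  81.38, 20.08, 61.30),
    "Moderately poor": (65.82, 40.89, 106.71, 15.07, 91.64),
    "Non-poor":        (45.80, 21.71,  67.51, 16.73, 50.78),
}
for name, (transport, opportunity, total, phone, surplus) in rows.items():
    # Total cost of the alternative = transport + opportunity cost.
    assert abs(transport + opportunity - total) < 0.02
    # Consumer surplus = alternative total cost - Village Phone cost.
    assert abs(total - phone - surplus) < 0.02
print("Table 2 rows are internally consistent")
```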

Figure 1: In 1997, Bangladesh had one of the lowest numbers of telephones per 100 people in the world. Mobile phones account for 94% of teledensity growth over the past 10 years (WB).

Figure 2: The number of VPOs in Bangladesh grew from 950 in 1999 to 278,570 in 2007. Grameen Bank claims a repayment rate of 98% on its telecommunications loans (GF).

Figure 3: The productivity gains from a single mobile phone contribute $6,000 on average to Bangladesh's GDP. Maintenance costs amount to around $2,000 (Quadir).

Abstract
This study assesses the Village Phone's efficacy on macro and microeconomic levels. The findings lead to the following main conclusions:
1) Shared-access telecommunications can serve as a marketable commodity in rural settings with the support of microfinance institutions.
2) Policies that support village-level entrepreneurship can be a substitute for aid in alleviating poverty in the developing world.
The exponential diffusion of Village Phones in Bangladesh suggests that there is still significant growth potential in the rural telecommunications market. Furthermore, mobile phone diffusion has been shown to have a significant effect on the GDP of Third World countries and has lower expansion costs than landlines. Under favorable policy environments, rural networks, combined with shared-access strategies that concentrate demand and generate efficient usage, may enable profitable, market-driven approaches to providing connectivity and infrastructure in rural areas.

Page 14

Abstract
Domain owners seeking to sell their assets can do so in several ways. Which of these is profit maximizing? How does this outcome hold in the market? I attempt to answer these questions using the following four common methods of transaction:

(1) Reactive Sale

(2) Active Sale

(3) Public Auction

(4) Public Posting: Fixed Price

First, I use economic theory to predict which method is optimal. Second, I compare this prediction to empirical observations of what sellers do. Surprisingly, although method (3) maximizes the seller's profits in most cases, method (1) is currently the most used.

(1) Reactive Sale
Extension of the Coase Conjecture

Inderst (2003) shows that for a seller with a single indivisible good, the bargained price settles at the owner's lowest possible valuation if:

- an infinite number of buyers with private valuations approach the seller sequentially.
- the seller can only bargain with one buyer at a time (no auctions).
- the time z between two consecutive offers approaches 0 (z → 0).

Stipulation: if the discount factor r = 0, the seller's expected payoff is the highest possible valuation.

Application to the domain name market:

- the domain owner, like the seller above, is unable to credibly commit to not lowering the sale price in future bargaining rounds.
- the owner is reactive, i.e., a stream of interested buyers approaches the seller.
- assumption: seller and buyer both prefer to reach an agreement (z → 0).
- r = 0 is possible if the expected appreciation of the domain equals the cost of waiting.

The Optimal Sales Channel for Domain Names

Riaz Rahim

Department of Economics, Stanford University, Stanford, California 94305

Literature Cited

Goeree, Jacob K., and Theo Offerman. "Efficiency in Auctions with Private and Common Values: An Experimental Study." The American Economic Review 92(3), June 2002, pp. 625-643.

Inderst, R. (2003). The Coase Conjecture in a Bargaining Model with Infinite Buyers. Working Paper, LSE.

Milgrom, Paul R., and Robert J. Weber. "A Theory of Auctions and Competitive Bidding." Econometrica 50(5), September 1982, pp. 1089-1122.

Findings
The optimal method for sellers depends on r:

-if r>0, Public Auction is best.

-if r=0, Reactive Sale is best.

Sedo.com, which has ~40% of the resale market share, only launched its auction service in 2006. The bulk of sales are currently made using (1). This implies that:

-domain sellers have had low discount rates historically, presumably due to anticipated asset appreciation.

-the recent boom years for domain sales have increased opportunity cost of not selling, creating a higher incidence of r>0 and thus a greater demand for public auction services.
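The intuition behind these findings can be illustrated with a simple simulation of my own construction (not drawn from Inderst or Milgrom and Weber): with i.i.d. uniform private values, an ascending auction captures roughly the second-highest valuation, well above the lowest-valuation price that Coase-style sequential bargaining converges to when r > 0.

```python
import random

random.seed(7)

def ascending_auction_price(n_bidders, trials=20000):
    """Average sale price in a stylized English auction with i.i.d. U(0,1)
    private values: the winner pays roughly the second-highest valuation,
    the point at which the runner-up drops out."""
    total = 0.0
    for _ in range(trials):
        values = sorted(random.random() for _ in range(n_bidders))
        total += values[-2]          # second-highest valuation
    return total / trials

# With 5 bidders, the expected second-highest of U(0,1) draws is
# (n-1)/(n+1) = 2/3, far above the seller's lowest valuation (here 0)
# that sequential bargaining drives the price toward when r > 0.
print(round(ascending_auction_price(5), 3))
```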

For further information

Please contact Riaz Rahim at [email protected] with further questions.

(2) Active Sale
Extension of Inderst Model for Reactive Sales

Inderst (2003) shows that his model extends to the case where the “seller has to leave the old buyer in order to search for a new buyer.”

Same results as (1), except this method may require the seller to incur labor costs. Thus, (2) has a weakly worse outcome than (1).

Figure 1: An active seller markets the availability of a domain.

(4) Public Posting: Fixed Price

The Dutch Auction

Milgrom and Weber (1982) show that for a general auction following the principles of (3), the Dutch auction leads to a lower sale price than the ascending-price auction.

Method (4) mimics a Dutch Auction. The seller’s optimal strategy is to initially post a high sale price and then incrementally reduce the value until the domain is sold. Since method (3) is an ascending price auction with a time limit, (4) leads to a worse outcome than (3).

(3) Public AuctionGeneral Model: Private Values and Common Values

Goeree and Offerman (2002) show that in an auction with bidders who have both private and common values, the result is inefficient, where:

- total bidder valuation is equal to the private valuation plus the mean of the perceived common values.
- realized efficiency = (t_win − t_min)/(t_max − t_min) × 100%, where t is the private value.
- there are a finite number of bidding rounds.

Application to domain name market:

-bidders have private values based on planned purpose and common values based on the future prospects of the Internet.

-instead of a finite number of rounds, domain auctions usually have a time limit.

Though precise efficiency rates can't be inferred from the literature, when r > 0, (3) is weakly better than (1), and when r = 0, (3) is weakly worse than (1).

Charts & Tables

Figure 3: Zetetic provides some market statistics. Sellers may set r = 0 anticipating high mean returns.

Figure 2: A sample of domains up for auction on Sedo.com

Page 15

Cable Television: A Look to the Future

How Cable Television might become a thing of the past
Shari Summers

Department of Economics, Stanford University, Stanford, California 94309

Cable Television Historical Data

Abstract

Problem

Bass Model

Reasons for slowed growth: New substitutes

penetrating the Cable Monopoly

Analysis:

Cable television was the cultural force that redefined television by altering entertainment, sports, news, and more. Developed in 1948 by John Walson, it grew rapidly starting in 1981, largely due to popular networks like CNN, C-SPAN, ESPN, and MTV. By 2001, it reached an all-time high of 65.7 million subscribers. However, cable television now moves into a more competitive environment in which substitutes have begun to penetrate its monopoly. The main competitors are direct broadcast satellite, which includes DirecTV and Dish Network, and FiOS. With these new alternatives offering higher quality, consumers now face decisions that they haven't in the past. Close substitutes have slowed growth in the number of cable subscribers and will continue to erode cable's base unless cable can match both quality and pricing by offering appealing and comparable bundles.

Cable television was a powerful monopoly but more recently faces a more competitive environment. Close substitutes, satellite and FiOS, threaten the future of cable and have already slowed cable growth. The problem is whether cable can remain a dominant force, and if so, what changes need to be made.

• The Bass Model predicts that cable television will peak around the year 2005.
• The data show rapid growth, with a >4% increase each year until 1992.
• Between 2003 and 2005, growth slowed to <1%.
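A "peak around 2005" follows from the Bass model's peak-timing property: noncumulative adoption peaks t* = ln(q/p)/(p+q) years after launch. The parameters below are hypothetical (the poster does not report its fitted p and q), chosen only to show how a mid-2000s peak could arise.

```python
import math

def bass_peak_time(p, q):
    """Years from launch to peak noncumulative adoption in the Bass model:
    t* = ln(q/p) / (p + q)."""
    return math.log(q / p) / (p + q)

# Hypothetical parameters: with p = 0.01, q = 0.3, the peak falls about
# 11 years after launch, so dating the modern cable era from the
# mid-1990s would put the peak in the mid-2000s.
print(round(bass_peak_time(0.01, 0.3), 1))
```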

Until the entry of DBS in 1994, most cable firms were local monopolies; the entry of satellite brought a higher-quality substitute for cable, with more channels and a clearer picture.

Broadcast Satellite: DirecTV/Dish on Demand

• DBS: two main firms, DirecTV and Dish Network
• DirecTV: 12.3M subscribers by 2002
• Dish Network: 9.4M subscribers by 2002
• Satellite offers higher quality than most cable tiers:
  • More channels
  • Additional sports subscriptions and other programs unavailable to cable consumers
  • Higher quality reflected in higher prices

FiOS

Satellite subscriptions have been penetrating the cable monopoly since 1995; increasing satellite adoption rates are matched by declining cable growth rates.

• Offers broadband internet access, digital cable, and VoIP telephone services over fiber optic lines.
• First major U.S. carrier to offer these services.
• 10% market penetration by 2006.
• Target: 20-25% market penetration by 2010.
• Of the customers who have subscribed to FiOS TV, 2/3 have discontinued their cable TV service.
• Expected to be a serious competitor in the next 10 years.

Adoption Prediction for Cable Alternatives

• Adoption results for multimedia applications show that broadband and satellite networks have a window of opportunity for capturing a sizeable portion of the emerging multimedia market, starting mainly in 2002. Cable growth rates began to decline in 2002, as seen in Figure 1.

Conclusion
Although cable television met with early success and continues to dominate, the future points to a decline in the number of cable subscribers as emerging competitors penetrate the monopoly of the past. As a result of competition, cable networks are forced to match the quality of satellite alternatives, which implies increasing prices, or eventually lose their subscribers to more appealing substitutes. Coexistence is possible, but satellite and FiOS penetration is increasing yearly, and cable must restructure in order to keep up.

Abaye, Ali R., Babbitt, Best, Hu, Maveddat. "Forecasting Methodology and Traffic Estimation for Satellite Multimedia Services"
Strover, Sharon. "United States: Cable Television"
NCTA Cable Industry Statistics
Lilien (1999), http://www.ebusiness.xerox.com/isbm/dscgi/ds.py/Get/File-89/7-1999.pdf

Works Cited

Acknowledgements
Thank you to Dr. Ward Hanson and Andrea Pozzi for all their assistance throughout.

Alternatives Market Development Deployment

Satellite 2000 2002

BWA 1999 2000

ADSL 2000 2001

Third Gen PCS 2000 2002

Cable Modems 1999 2000

ISDN 1992 1997

Figure 2 Source: Lilien (1999), http://www.ebusiness.xerox.com/isbm/dscgi/ds.py/Get/File-89/7-1999.pdf

Figure 3

Figure 1

Cable Television Development: 4 Phases

1948-1965: slow growth

1965-1975: FCC attempt to restrict cable television to non-urban markets (forms local media service)

1975-1992: regulatory acts and expansion across country…promotion of new satellite-delivered programming services

1992-present: moves into more competitive environment

Cable Response:

60.3% of cable markets were induced by satellite to increase quality, at the expense of higher prices, in order to remain competitive.

Cable chooses a combination of price and quality to appeal to consumer types that might switch to satellite; however, satellite penetration is increasing and is predicted to reach around 82 million homes by 2010.

Figure 4

Page 16

Abstract
In January 2002, President Bush signed the innovative No Child Left Behind Act (NCLB) into federal legislation with the goal of improving the performance of students in U.S. public schools. NCLB tests every child in grades 3-8 and uses the scores to create report cards for every school so that parents can see the successes or failures of local public education. If a school is not making adequate yearly progress, NCLB requires that school districts notify parents and offer alternatives for better education. I will:

Motivation
One of the nation's most pressing problems is the inequality of public education. The performance of primary and secondary students differs among states and also among races and ethnicities, creating an achievement gap. The performance of Black and Hispanic students improved during the 1970s and 1980s, but in the 1990s the achievement gap remained stagnant or grew. An innovative solution to the achievement gap became an imperative policy goal for the new administration.

In order to address the achievement gap, the NCLB act creates an incentive for public schools to implement national standards for every student. Standardized tests provide available data for school districts, teachers, and parents in order to evaluate each school. After five years of implementation, politicians and policymakers are evaluating the success of the act in improving student performance. If appropriate progress has not been achieved under NCLB, policymakers will have to reevaluate the system.

Goals

No Child Left Behind: Closing the Achievement Gap in California Public Schools

Stuti Goswamy

B.A. Candidate, Department of Economics, Stanford University

Strengths of NCLB: Mathematics
The average scale scores in Mathematics in California have generally increased every year since 1978. After the implementation of the NCLB act in 2002, the average score jumped significantly to an all-time high in 2004.

Data

The National Assessment of Educational Progress (NAEP) measures students' achievement in many subjects, including reading and mathematics.

Under No Child Left Behind, as a condition of receiving federal funding, states are required to participate in the NAEP math and reading assessments for fourth- and eighth-grade students every two years.

I am using eighth grade data in reading and mathematics for California public schools and for the national average of public schools.

Conclusions

1). The NCLB act has had mixed results on subject score reports. While mathematics scores have increased since implementation, reading scores have decreased. A subject-specific analysis is necessary since the current standardized test may only improve the scores in certain subjects.

2). California public schools do not perform as well as the national average. In order to improve performance, the federal and state governments must provide more funding to schools. Without better resources and teachers, California will continue to lag behind other states.

3). The achievement gap is still a serious problem. While math scores have improved among all races and ethnicities, the difference between the groups has not decreased. Whites and Asian Americans continue to perform better than Blacks and Hispanics in Math and in reading. As California’s population rises, extra effort to teach reading may be necessary – including after-school initiatives and more programs in ESL (English as Second Language).

4). Since NCLB has achieved mixed results, it is necessary for policymakers to reevaluate the system. Because it has achieved significant improvements in mathematics, the system does not need to be abandoned, but it should be better tailored to help under-performing states and to close the achievement gap among ethnicities.

California State Profile:
Number of schools: 9,690
Number of students enrolled: 6,441,557

Racial/Ethnic Background:
White: 31.9%
Black: 8.1%
Hispanic: 47.7%
Asian/Pacific Islander: 11.5%

The following graph more clearly shows the rise in scores after 2002. However, the gap between California school scores and the national average school scores is increasing. California is not improving at the same rate as the rest of the country.

The following graph shows how both California school scores and the national average school scores have decreased since 2002. The gap between the two has remained fairly stagnant.

Weaknesses of NCLB: Reading
The average scale scores in Reading in California have been erratic, both rising and falling, and appear almost stagnant over time. After the implementation of the NCLB act, the average score decreased.

The following graph shows the rise in scores over time regardless of race and ethnicity. Each ethnicity improves in score in roughly the same proportion. However, the difference in average scale scores between ethnicities is significant, with Blacks and Hispanics underperforming.

The following graph highlights the achievement gap between race and ethnicities. Blacks and Hispanics perform at a much lower level than Whites and Asian American/Pacific Islanders. Furthermore, the scores for Whites and Blacks have decreased since the NCLB act.

1. chart students' performance in mathematics and reading in grade 8 in California public schools

2. compare pre-NCLB score results with post-NCLB scores in order to see if performance has increased

3. chart the national average scores vs. California’s average scores and

4. discuss the strengths and weaknesses of the act, using the National Assessment of Educational Progress (NAEP) data

5. compare score performances between different races/ethnicities to see whether the achievement gap is closing.

Analyze data in graphs to investigate the impact of NCLB on public school performance and use the results to determine strengths and weaknesses of the act.

Specifically, we want to determine the impact of the NCLB act on:

• Reading and mathematics scores in California, grade 8, before and after implementation in 2002

• California public school scores vs. the national average of public school scores

• Closing the achievement gap between race and ethnicities in California.

For More Information
For data: http://nces.ed.gov/nationsreportcard
For NCLB: http://www.ed.gov/nclb/overview/
Please contact Stuti Goswamy ([email protected]) with questions.

Page 17

Background

Literature Cited
Goetz, Andrew R., and Timothy M. Vowles. Progress in Intermodal Passenger Transportation: Private Sector Initiatives. University of Denver. Denver.

Janic, Milan, and Yvonne Bontekoning. "Intermodal Freight Transport in Europe: An Overview and Prospective Research Agenda." OTB Research Institute for Housing, Urban and Mobility Studies; Delft University of Technology.

"Technology and Traffic Management: London Case Study." Making the Modern World. 23 May 2007

<http://www.makingthemodernworld.org.uk/learning_modules/geography/>.

Congestion Pricing

Congestion pricing can be considered a Pigovian tax, designed to correct negative market externalities. In this scenario, roads are supplied by the city for drivers, but it is the city’s pedestrians, workers, and residents who must deal with the negative effects of congested streets.
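The Pigovian logic can be made concrete with a minimal sketch. All parameters below are hypothetical (linear demand and a linear marginal external cost), not London's actual figures: the optimal charge equals the marginal external cost evaluated at the socially efficient traffic level.

```python
# Pigovian-toll sketch with hypothetical linear curves (not London's data).
# Inverse demand (willingness to pay per trip): P(q) = a - b*q
# Private cost per trip: c (constant)
# External cost each trip imposes on others: e*q (rises with congestion)

a, b = 10.0, 0.01  # hypothetical demand intercept and slope
c = 2.0            # private cost per trip
e = 0.005          # slope of the marginal external cost

q_free = (a - c) / b       # unpriced equilibrium: drivers ignore e
q_opt = (a - c) / (b + e)  # social optimum: P(q) = c + e*q
toll = e * q_opt           # Pigovian toll = marginal external cost at q_opt

print(q_free, q_opt, toll)  # traffic falls once the externality is priced
```

The point of the sketch is only the comparative statics: pricing the externality shrinks traffic from the unpriced equilibrium toward the social optimum.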

London Case Study

In 2003, London mayor Ken Livingstone introduced congestion pricing, a tax paid by drivers who clog the city's most congested areas, such as finance districts, government office areas, and major tourist destinations.

- Prior to congestion charging 40,000 vehicles an hour drove every morning into central London.

- This traffic resulted in drivers in central London spending 50 percent of their time in jams, costing the city £2 to £4 million a week.

- 136,000 residents live within charging zones.

£8 congestion charge addresses transport priorities for London:

1. Reducing congestion
2. Improving bus services
3. Improving journey time reliability for car users
4. Making distribution of goods and services more reliable, sustainable and efficient

Results

- Congestion in the zone has dropped by around 30 percent and is now lower than at any point since the mid-1980s.

- There are 50,000 fewer motor vehicles entering the charge zone per day, a 16% drop.

- Journey times in the center have decreased by approximately 14 percent and reliability has improved by an average of 30%.

- Payment schemes are working well: over £68 million was raised in 2003/04, revenue has been steadily increasing, and it is expected to plateau at £130 million per year.

- Revenue is spent on bus network improvements, increasing late-night transportation, and expanding Underground and rail capacity with new services across central London.

A Perfect Hybrid
Congestion pricing alone cannot change the way a city's population moves around; it is of paramount importance that alternative modes of transportation be readily available for commuters to use.

- London had well-developed alternative modes of transportation, including the Underground, an efficient bus system, and a well-developed rail network providing commuters easy access to the center. Congestion pricing has proven to be a profitable mechanism for internalizing the externalities brought forth by congestion.

A congestion pricing plan along with a well developed intermodal system will offer cities an opportunity to capitalize on transportation resources while allowing them to continuously seek to improve them.

Efficient Decongesting Mechanisms: Congestion Pricing and Intermodal Systems

Luis Raul Cerna

Department of Economics, Stanford University, Stanford, California 94305

For further information

The past few decades have presented dramatic changes in world demographics; the global population has risen to over 6.7 billion, and the concentration of population has shifted from rural to urban centers. This shift has presented myriad difficulties for metropolises across the globe, as many suffer from problems relating to overpopulation and inadequate city planning. Among the greatest problems plaguing these cities is over-congestion, as saturated transportation systems have been rendered obsolete by a dramatic increase in motor vehicles. There are approximately 600 million motor vehicles in circulation today; if the current trend persists, this number is estimated to double to 1.2 billion by 2030. Given the length of time it takes to plan and prepare, it is of paramount importance for the public and private sectors to develop mechanisms encouraging the use of public transportation so that congestion issues can be resolved.

Intermodal Passenger Systems

Basic definition: Being or involving transportation by more than one form of carrier during a single journey.

Motivation: Establish efficient movement of passengers between modes of transit thus reducing travel time and dependence on passenger vehicles.

Amsterdam

- Rail networks are the city’s high-speed backbone while people get around locally by bike.

- City has linked both systems by planning bicycle lanes as feeder systems for rail stations and by building extensive bicycle parking.

Hong Kong

- Intermodal hubs link regional and international networks.

- High-speed rail systems link the international air terminals with downtown station for subways, ferries and double-decker street cars.

Please contact Luis Raul Cerna at [email protected] if you have any questions.

Figure 1. Heavy congestion in downtown London during rush hour

Figure 3. Clear signs at Hong Kong International Airport show different types of transport to center.

Figure 4. Airport Express is the most direct route, connecting the airport with the city center in 24 minutes.

Figure 2. The white-on-red C marks all entrances to the congestion charge zone. The congestion charge was originally introduced at a price of £5 per day and has recently risen to £8 a day, approximately $16.

Figure 5. Gare du Nord train station in Paris links international and domestic rail systems with the city's extensive subway system. The station also provides easy access to Paris's international and domestic air terminals.

Page 18

Why Care?

Growth of Innovation Firms
The Evolution of Innovation

Facilitating Innovation

Future of Innovation

A little help, R&D?: The influence and expansion of Innovation Firms
Ray Jones

Department of Economics, Stanford University, Stanford, California 94305

For further information

Following the inventive days of Tesla and Edison, corporations grew to house their own Research and Development (R&D) departments. Over the past few decades, there has been a growing presence of innovation firms, often contracted by those same R&D departments. Innovative design firms are assuming more and more of the duties that R&D once performed in-house.

Figure 1. Nikola Tesla holding balls of flame in his hands

Factors contributing to the growth of non-corporate innovation firms:

The type of product, its diffusion, and the development of the science being innovated

Innovation firms are dispersed around the world, with a strong concentration located in Silicon Valley. Notable companies include IDEO, Frog Design, and Lunar Design.

The increase in the strength of the U.S. Patent System and the number of patents granted correlates with the increased market demand for innovative firms to help design these patented ideas and products from large firms.

Figure 2. Lunar Design contributed to the design of the Oral-B toothbrush

Figure 3. Scientists work together in a corporate R&D lab

The conception of IDEAS versus addressing design challenges distinguishes Corporate R&D from the services of today’s innovation firms.

R&D departments most commonly INVENT and innovate according to the products provided by the company while Innovation Firms DESIGN the most successful product based on a client’s original idea.

The function of such firms is not limited to product-design consultation; once a client's original product has been conceived, their work comes to share the same innovative process as R&D labs.

Figure 6. Increase in patents granted

The design consultation nature of these modern firms fills a vacancy in the market for firms to aid larger corporations in innovative product development. As the market grows, corporations may find it most efficient to contract out such innovative services that modern innovation firms provide and put less emphasis on R&D.

Please contact Ray Jones at [email protected] if you have any questions.

Intellectual Property protection and the changing U.S. patent system

Figure 5. Psions and Sharp Wizards of the 1980s were considered to be the first PDAs. They were followed by the Atari Portfolio and the Apple Newton. Later the Palm Pilot was introduced and hit the mainstream in 1996, followed by the IDEO-designed Palm V in 1999.

Figure 7. The flexibility of a hybrid’s design may have a significant impact on the future of innovation firms as car companies contract out

New growth creates an increase in outsourcing and its externalities
R&D changing process and innovation changing form
The presence of innovation firms will strengthen in the future based on current trends and innovations.

Conclusions

The increase in the number of patents in the U.S. has contributed to the growth of innovation firms. The diffusion of a product can help predict its movement into the scope of innovation firms. The function of R&D sectors is changing, and the average percentage of revenue spent on R&D is decreasing in certain industries. Among other trends over the past few decades, the number of full-time R&D scientists and engineers has decreased.

SOURCE: National Science Foundation / Division of Science Resources Statistics, Survey of Industrial Research and Development

FROM R&D TO INNOVATION FIRMS

Page 19

What is a Real Estate Derivative?

Derivative Asset: Securities providing payoffs that depend on, or are contingent on, the values of other assets.

Types of Contracts
Futures: Obligates traders to purchase or sell an asset at an agreed-upon price on a specified future date.
Options: A contract that gives its owner the right, but not the obligation, to purchase or sell an asset at a fixed price at some future date.

Real Estate Derivatives Payoff Schedules

Figure 1: Different Derivative Payoffs with 10% Strike Price
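The payoff schedules in Figure 1 can be sketched as simple functions. This is a minimal illustration: the 10% strike mirrors the figure, while the unit notional and the convention of settling on an index return are assumptions.

```python
# Illustrative payoffs at settlement for real-estate derivatives
# written on an index return (a sketch; 10% strike as in Figure 1).

def futures_payoff(index_return, strike=0.10, notional=1.0):
    # Long futures: the holder is obligated to settle, so gains
    # and losses around the strike are symmetric.
    return notional * (index_return - strike)

def call_option_payoff(index_return, strike=0.10, notional=1.0):
    # Call option: the right but not the obligation, so the payoff
    # is floored at zero (ignoring the premium paid up front).
    return notional * max(index_return - strike, 0.0)

print(futures_payoff(0.15))      # gain when the index beats the strike
print(futures_payoff(0.05))      # symmetric loss when it falls short
print(call_option_payoff(0.05))  # option simply expires worthless
```

The asymmetry between the two functions is exactly why hedgers (who want symmetric exposure) favor futures while buyers seeking downside protection favor options.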

Why Purchase Real Estate Derivatives?

Hedging
Future Home Owners
• Buy derivatives in order to lessen the effect of increasing real estate prices on future purchases.

Builders/Real Estate Oriented Companies
• Sell derivatives in order to lessen the negative impact of decreasing real estate prices.

Diversification
Mutual Funds
• Buy derivatives in order to take advantage of uncorrelated or minimally correlated real estate indices.

Real Estate Investors (Commercial/Residential)

•Sell derivatives in order to lessen their exposure to real estate price declines.

UK Efficient Frontiers

Figure 2: Theoretical Efficient Frontiers with and without the use of real estate derivatives. At any level of risk investors can achieve higher returns.
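The diversification logic behind Figure 2 can be sketched with a two-asset example. All numbers below are hypothetical: the lower the correlation between the real estate position and the rest of the portfolio, the lower the combined risk at a given return, which is what shifts the efficient frontier outward.

```python
import math

# Two-asset diversification sketch (all numbers hypothetical):
# an equity portfolio combined with a real-estate derivative position.
w = 0.3                          # weight on the real-estate position
sigma_eq, sigma_re = 0.15, 0.10  # assumed annual volatilities

def portfolio_vol(rho):
    # Standard two-asset portfolio volatility with correlation rho.
    var = ((1 - w) * sigma_eq) ** 2 + (w * sigma_re) ** 2 \
          + 2 * w * (1 - w) * rho * sigma_eq * sigma_re
    return math.sqrt(var)

print(portfolio_vol(0.9))  # high correlation: little risk reduction
print(portfolio_vol(0.1))  # low correlation: noticeably lower volatility
```

This is why mutual funds in the poster's framing would buy real estate derivatives: the benefit comes entirely from the low correlation, not from the asset's standalone return.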

Conclusion
• Although real estate rivals stocks as the largest asset class in the U.S., demand for real estate derivatives has not proven significant.

• Hedging and diversification advantages appear to exist for both buyers and sellers.

• A key obstacle to adoption is the accuracy of the underlying real estate indices.

• Substantially varying real estate indices cause investors to lose confidence.

• In order to lessen the fear of inaccuracy, a nationally accepted index must emerge as either a single superior index or from the consolidation of several indices.

• With greater consistency investors will be more willing to accept this nationally-accepted index’s returns and will not feel slighted by variation among indices, thus increasing confidence and investment.

Are Real Estate Derivatives Feasible?
Timothy Horan

Department of Economics, Stanford University, Stanford, California 94305

Literature Cited
1) Iacoviello, M. and Ortalo-Magne, F. (2002). Hedging Housing Risk in London. Working Papers in Economics. Boston College.
2) National Association of Realtors. Metropolitan Area Prices. http://www.realtor.org/Research.nsf/Pages/MetroPrice. 5/21/2007.
3) Shiller, R. (2004). Comment: Betting the House. Daily Times, 12/22/2004. 5/21/2007.
4) S&P/Case-Shiller U.S. National Home Price Values. http://www2.standardandpoors.com/portal/site/sp/en/us/page.topic/indices_csmahp/0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0.html. 5/21/2007.

What is the Key Obstacle Real Estate Derivatives Face?

Accuracy

• Both commercial and residential real estate derivatives are settled against underlying indices.

•Both commercial and residential indices exist, but they often differ in ex post returns.

•Without a single market, investors may question the accuracy of the underlying index upon which their derivatives are based.

Different Returns Hurt Investor Confidence

Figure 3: The National Association of Realtors (Blue) and CME Case/Shiller (Yellow) indexes calculate residential real estate returns using different methodology. However, it is difficult to know which is more accurate.

Where are Real Estate Derivatives Traded?

United Kingdom
• Largest commercial real estate derivatives market, but with limited availability to the public.

United States
• The Chicago Mercantile Exchange offers residential real estate derivatives to the public.

(Figure 2 legend: with housing derivatives vs. without housing derivatives)

For Further InformationPlease contact Tim Horan at [email protected] if you have any questions.

Abstract
• At an estimated $62 trillion worldwide, real estate is the largest asset class without a developed derivative market.

•Indices exist which claim to accurately track real estate prices upon which these advantageous derivatives markets could be based.

•However, I contend Real Estate Derivative markets will stagnate until a nationally accepted real estate index replaces the indices available today which often vary substantially in returns.

(Figure 3 series: San Francisco, CA; New York, NY; United States)

Page 20

Key References

Abstract

The Need for Urban Sustainability

Project Dongtan
Is Dongtan Cost-Effective? (II)

The World's First Eco-City—Dongtan, China
Kenneth King, Dept. of Economics, Stanford University

Conclusion

Financing & Incentives

Due to large-scale industrialization and rapid urbanization, China is in desperate need of sustainable development. China is currently engaged in one of the world's most substantial sustainable development projects by building the Dongtan Eco-City, the world's first true eco-city, designed to be carbon-neutral and self-sustaining, with a minimal ecological footprint. In answering the overarching question of whether the ambitious Dongtan project will succeed, the paper examines its funding sources and the incentive mechanisms between players. The paper then performs a cost-effectiveness analysis to see if Dongtan's goals are reached at least cost. Lastly, the paper factors in the project's potential to be a diffusible innovation, and concludes on Dongtan's prospects.

Goals: (1) To provide sustainable urban living with a minimal ecological footprint for half a million people, and (2) to be a template for urban development elsewhere in China and the world.

Who: Commissioned by the partly state-owned Shanghai Industrial Investment Corporation (SIIC) to the British design and engineering firm Arup.

Where: Dongtan will be built on Chongming Island, China's third largest island, in the Yangtze River Delta, 15 km north of Shanghai, on an area of land nearly the size of Manhattan.

When: By 2010 it will be a city of 25,000, and by 2050 a city of 500,000.

Cost: The initial phase of Dongtan will cost around $2.5 billion, but the entire cost of the project is expected to be in the double-digit billions.

Project Description:

• The city will consist of compact villages intersected by canals and lakes. The layout will minimize the need for petroleum-based transportation.

• The entire transport system (cars, trams, buses, boats) will be powered by electric motors or hydrogen fuel cells.

• Highly energy efficient buildings (1/3 of typical energy consumption) with their own photovoltaic solar panels.

• City largely powered by renewable energy—besides solar energy, wind turbines will generate 20% of the city's energy needs, and biomass energy production from rice husk waste will generate most of the city's electricity.

• Waste would be either recycled or composted—the bulk of its organic wastes will be returned to local farmlands, and no more than 10% of the city’s trash would end up as landfill.

• Sewage will be cleaned mainly using decentralized biological treatment systems that will capture the nutrients in the waste water. Green rooftops will collect, filter, and store water as part of the city’s water system.

• 60% of Dongtan will remain agricultural, where organic-farming techniques linked to the waste and sewage recycling system are designed to create a sustainable cycle of local food production.

• A large number of research institutes from eco-industries will form a major component of Dongtan's economy.

Is Dongtan Cost-Effective? (I)

China is the world's most populous country, with a fast growing economy and rapid urbanization. China's urban population is expected to reach 1.1 billion by 2050. By then nearly half of China's current population will have moved from rural to urban areas. This process is already having detrimental effects on the urban living environment. The benefits of China's record economic growth could effectively be undone by the ever-growing costs of resource depletion, pollution and damage to human health. For example, sulfur-dioxide emissions alone are causing an annual loss of 12% of China's GNP.

Heavy Industrialization:
• One third of China's water courses are severely polluted
• Has 16 of the world's 20 most polluted cities
• Builds a coal power station a week
• World No. 1 emitter of SO2
• Will be the world No. 1 emitter of CO2 in 2008
• Air pollution causes 400,000 premature deaths a year

Rapid Urbanization:
• By 2050, 600 million people will move from rural to urban areas in China
• By then, China will have 50 mega cities (> 2 million people), 150 big cities, 500 medium-sized cities, and 1,500 small cities as the urbanization rate rises from 36% to 70%

As China’s per capita resource consumption is catching up with Western countries, sustainable development is key to prevent a pending environmental catastrophe.

Strong Financing:
• Quasi-public funding: 100% financed by the Shanghai Industrial Investment Corporation (SIIC), a conglomerate fully funded by the Shanghai Municipality.
• Strong government backing: The land of Dongtan was given to SIIC by the Shanghai government in order to replenish the company's assets during the 1997 economic downturn. Keeping Dongtan green is a key feature of the deal.

Aligned Incentives across players:
• Arup: Successful carry-out of the project is (1) a means to further its establishment in Asia, and (2) a major commercial opportunity to recruit international clients interested in sustainable urban development by leveraging the firm's first-mover advantage from engineering the eco-city.
• SIIC: (1) Has an agreement with the Shanghai government to keep Dongtan ecologically sustainable; (2) will showcase the progress of Dongtan at the 2010 Shanghai World Expo.

Cost effectiveness means reaching a goal with least cost…

Examine goal 1 (to provide more sustainable living):

The improvements Dongtan brings in sustainable living per year (i.e. the projected difference in resource use if Dongtan's residents were instead living in a traditional city) are:

• 24,560 million stere of waste gas
• 5 million tons of solid waste
• 5.92 million tons of untreated waste water

This is estimated by applying a 20% discount to the pollution statistics of a traditional Chinese city, Qinhuangdao, which has the same population as Dongtan. This assumes that if Dongtan's residents were to live in a traditional city instead, their living patterns would be similar to those of Qinhuangdao's residents. Data is provided by China's State Environmental Protection Administration (SEPA).

The costs of building facilities to treat pollution for an average city are:
• 50.4 stere of waste gas / yuan
• 0.27 tons of waste water / yuan

This is estimated by averaging the (amount treated / cost) of 11 Chinese cities, based on numbers provided by SEPA, e.g.:

Hence, to achieve the improvements of Dongtan by implementing sustainability development in currently-built cities would cost:

• 487.3 million yuan to treat waste gas (= $63.66 million)
• 21.92 million yuan to treat waste water (= $2.86 million)

So, it would cost about $66.5 million, plus a certain amount for solid waste, which together comes to hundreds of times less than the double-digit billions it would cost to build Dongtan. NO!
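The back-of-envelope arithmetic above can be reproduced directly. One caveat: the yuan-to-dollar rate of about 7.65 used below is not stated in the poster; it is inferred from the poster's own paired yuan and dollar figures.

```python
# Reproduce the poster's cost-effectiveness arithmetic for goal 1.
waste_gas = 24_560.0   # million stere of waste gas avoided per year
waste_water = 5.92     # million tons of untreated waste water avoided

gas_rate = 50.4        # stere of waste gas treated per yuan (11-city average)
water_rate = 0.27      # tons of waste water treated per yuan

gas_cost_myuan = waste_gas / gas_rate        # million yuan, approx. 487.3
water_cost_myuan = waste_water / water_rate  # million yuan, approx. 21.9

yuan_per_usd = 7.65    # assumption, implied by 487.3 M yuan = $63.66 M
total_musd = (gas_cost_myuan + water_cost_myuan) / yuan_per_usd
print(gas_cost_myuan, water_cost_myuan, total_musd)
```

Note that the treatment-cost total is roughly $66.5 million per year, which is the sum of the two dollar figures quoted above.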

Dongtan: Diffusible Innovation?

Examine goal 2 (to serve as a model for sustainable living in order to be a catalyst for changes in other cities): Assuming that the collective costs of living of citizens are the same regardless of which city they live in—can more change be catalyzed with 500,000 citizens living in one model city or with 500,000 model citizens living in different cities throughout China?

500,000 people living in a model city will serve as a much stronger catalyst for change, since a collective program will receive much stronger publicity and its template of sustainability could be passed on to other cities new or old, domestic or international. Whereas 500,000 model citizens living throughout China would only produce scattered influences. YES!

City       Purification (million stere)   Total cost (10,000 yuan)   stere/yuan
Beijing    131,500                        274,433.5                  47.92
Shanghai   226,400                        228,617.6                  99.03
…          …                              …                          …
Average                                                              50.40

• Eco-cities do not fit into the natural diffusion model of innovative consumer products.
• The condition to diffuse is strong government support or funding in countries where (1) urbanization causes significant environmental and health problems, where (2) the pursuit of large-scale sustainability projects can offset enough of these harms to justify their costs, and where (3) bureaucracy is not cumbersome.
• Private initiation usually ends in failure due to funding and incentive issues, e.g. Arcosanti, Epcot, Huangbaiyu.
• China satisfies the condition and has 4 eco-cities in the pipeline; Dongtan will serve as a blueprint for them.
• Global impacts so far:

• The Mayor of London visited Dongtan to gather ideas for a huge zero-emission development about to break ground in east London.
• Arup would also apply lessons from Dongtan to new developments in San Francisco and Napa County.

• With strong financing and aligned incentives, Dongtan has the right elements to succeed.
• However, Dongtan is far from cost-effective when measured against the environmental benefits it aims to achieve.
• This is compensated by its potential to catalyze further sustainability developments globally by serving as a pioneering example.
• As its innovations could be applied to both new cities and already-built cities, the Dongtan model could be a real solution to rapid urbanization if it works out.
• The Shanghai World Expo in 2010 will be the time to critique Dongtan's progress.

• Yan, Zhao, and Herbert Girardet. Dongtan: An Eco-City. SIIC and Arup, 2006.
• Girardet, Herbert, and Peter Head. "City of the Future: Is Dongtan a New Urban Development Paradigm?" Living for the City: A New Agenda for Green Cities. Ed. Jesse Norman. Policy Exchange, 2006. 139-151.
• Arup. Dongtan Eco-City. <http://www.arup.com/eastasia/project.cfm?pageid=7047>

Page 21

Purpose
To analyze the different models of microfinance institutions and their relative advantages in light of the trend toward for-profit orientations and the growing interest of the mainstream financial sector in providing services to the huge poor market. Specifically, I will investigate whether commercialization seems to increase the viability and growth rate of microfinance institutions, as well as compare the various services offered. I will also consider the incomes and service penetration of the populations served by the different models, and whether a certain form should be emphasized to combat poverty most effectively.

Background
What is Microfinance?

Microfinance, narrowly defined as microcredit, consists of small loans (typically under $300) provided to poor entrepreneurs. These borrowers have traditionally lacked access to conventional sources of credit because of their low incomes. Therefore, microfinance aims to help alleviate poverty by providing critical services to the poorest members of society.

Historical Context
Early 1970s: Beginning of small-scale microcredit programs
1980s: Implementation of sustainable lending practices
1990s: Transition from microcredit to microfinance: other services, such as savings and money transfers, become available

Types of MFIs
Non-Profit (NGO): Part of the traditional aid sector. Generally include larger social goals, such as women's empowerment.
Subsidized: Utilize many sustainable practices but are partially funded through donations.
Commercial: For-profit. Often offer a broader range of financial services, though few additional social programs.

Potential Market

CGAP estimates that there are at most 500 million of the 3 billion poorest people in the world with their own savings or loan accounts, many of which are poor quality. Thus there is a huge potential market for microfinance services.

Benefits
• Vast financial sector funding could be utilized to combat poverty on a much larger scale than is possible with more traditional, aid-based poverty alleviation strategies
  - Creates a market for poverty alleviation
• Use of funds often more efficient
• More targeted, specialized service in the financial sector
• Larger infrastructure, potential for economies of scale

The Commercialization of Microfinance: Opportunities and Dangers

Stephanie A. Hausladen

Department of Economics, Stanford University

Case Studies
Grameen Bank:

Conclusions
1) Fully commercialized MFIs are more profitable and provide the highest diffusion potential through traditional banking systems.

2) Subsidized and nonprofit institutions service poorer clients through, on average, less-profitable, uncollateralized group lending schemes.

3) The looser monetary constraints on subsidized MFIs can provide a fertile environment to foster innovations to reach poorer and more remote clients, which could then be adopted by the larger financial sector.

Current Issues

Which types of lending best serve the "poorest of the poor"?

• Individual-based lending: the borrower posts some form of collateral and receives an individual loan to improve a business.

• Group-based lending: either by solidarity group, composed of 3-5 people together responsible for loans to individual group members, or as village banking, with larger groups holding joint liability.

• Relative Advantages of Individual Lending:
  - Good borrowers not penalized by non-performers
  - More flexibility
  - Higher profitability
• Relative Disadvantages:
  - Moral hazard problems (higher default rates)
  - Adverse selection (group members generally have more local knowledge and so can screen potential borrowers at lower cost)
  - Poorest cannot meet collateral requirements; "social capital" collateralizes group loans
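The repayment trade-off between the two lending schemes can be made concrete with a toy calculation. The following is a minimal sketch, not from the poster: it assumes each borrower's project succeeds independently with probability p, and that a solidarity group covers a member's failed repayment only when all her peers succeed. All names and parameters are illustrative.

```python
def individual_repay_prob(p: float) -> float:
    """Individual lending: the loan is repaid only if the borrower succeeds."""
    return p

def group_repay_prob(p: float, n: int = 5) -> float:
    """Solidarity-group lending with a deliberately simple bail-out rule:
    a member's loan is repaid if she succeeds, or if all n-1 peers succeed
    and jointly cover her repayment (the "social capital" collateral)."""
    peers_all_succeed = p ** (n - 1)
    return p + (1 - p) * peers_all_succeed

p = 0.9
print(individual_repay_prob(p))       # 0.9
print(round(group_repay_prob(p), 3))  # higher: peers absorb some failures
```

Even this crude rule shows joint liability raising the lender's repayment probability, at the cost of penalizing good borrowers for non-performing peers.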

Should women be targeted?
• Yes: More money spent on education and healthcare; greatest impact on increasing welfare
• No: Lending to men stimulates more economic growth, since they are generally involved in more commercial activity

• Transformed into a formal bank in 1983
• Most funding from: grants, loans, savings, and shareholders
• Type of lending: mostly solidarity group
• Services: loans, savings, insurance
• Active borrowers (2005): ~7 million
• Average loan (2006): $69
• Profit margin (2005): 2.19%

FINCA Tanzania:
• Founded in 1998
• Non-profit, part of FINCA International with branches in 20 countries
• Most funding from: grants, loans
• Type of lending: village banking
• Services: loans, savings, insurance
• Active borrowers (2006): 42,785
• Average loan (2006): $119
• Profit margin (2005): 2.72%

BancoSol:
• Transformed from an NGO to a commercial bank in 1992
• Most funding from: loans, savings, shareholders
• Type of lending: mixture of solidarity group and individual
• Services: savings, loans, insurance, transfers, training, health
• Active borrowers (2006): 103,786
• Average loan (2006): $1,571
• Loans below $300: 75.29%
• Clients below poverty line: 71%
• Profit margin (2006): 16.81%

Costs
• Focus on the marginally poor rather than the poorest
  - May not be possible to provide very small financial services profitably
  - Fixed cost of administering a loan vs. revenue increasing with loan size
  - Often individual-based lending, with larger, more lucrative loans
• High interest rates on the smallest loans potentially mitigate the positive impact of providing funding to the poor
• Absence of accompanying social programs could diminish effectiveness in addressing major issues
  - Lack of targeting women leads to less absolute poverty alleviation
  - Loans not restricted to socially constructive uses
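The "fixed cost vs. loan size" point above can be illustrated with back-of-the-envelope arithmetic. This is a hedged sketch with assumed numbers: the $25 administration cost and 20% interest rate are hypothetical, not from the poster; the loan sizes echo the case-study averages.

```python
FIXED_ADMIN_COST = 25.0  # assumed per-loan screening/administration cost, in $
INTEREST_RATE = 0.20     # assumed interest charged over the loan term

def loan_profit(principal: float) -> float:
    """Interest revenue scales with loan size; the admin cost does not."""
    return INTEREST_RATE * principal - FIXED_ADMIN_COST

# A $69 loan (Grameen's 2006 average) loses money under these assumptions,
# while a $1,571 loan (BancoSol's average) is comfortably profitable.
for principal in (69, 119, 1571):
    print(f"${principal:>5} loan -> profit ${loan_profit(principal):8.2f}")
```

Whatever the true cost figures, the structure is the same: a size-independent cost per loan pushes profit-seeking MFIs toward larger loans and the marginally poor.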

*Data only available from 1996

*MFI: Microfinance Institution

Page 22

Abstract
Domain owners seeking to sell their assets can do so in several ways. Which of these is profit-maximizing? How does this outcome hold in the market? I attempt to answer these questions using the following four common methods of transaction:

(1) Reactive Sale

(2) Active Sale

(3) Public Auction

(4) Public Posting: Fixed Price

First, I use economic theory to predict which method is optimal. Second, I compare this prediction to empirical observations of what sellers do. Surprisingly, although method (3) maximizes the seller's profits in most cases, method (1) is currently the most used.

(1) Reactive Sale
Bilateral Bargaining with Repeated Offers

Inderst (2003) shows that for a seller with a single good, price settles at owner’s lowest possible valuation if:

-an infinite number of buyers approach the seller sequentially.

-seller makes offers to one buyer at a time.

-while buyer’s valuation is private, its probability distribution is common knowledge.

-seller’s value is common knowledge.

-time between two consecutive offers z → 0.

Stipulation: if the discount factor r=0, seller’s expected payoff is the highest possible valuation.

Application to domain name market:

-owner is reactive, i.e., a stream of interested buyers approaches the seller.

-assumption: seller and buyer both prefer to reach an agreement (z → 0).

-r=0 is possible if expected appreciation of domain equals the cost of waiting.
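The r=0 condition above reduces to simple arithmetic. A minimal sketch with illustrative numbers in percentage points (the function name and values are mine, not Inderst's):

```python
def effective_discount(cost_of_waiting: float, expected_appreciation: float) -> float:
    """Net per-period cost of delaying a sale, in percentage points.
    When expected appreciation fully offsets the cost of waiting, the
    effective discount rate r is 0 and holding out costs the seller nothing."""
    return cost_of_waiting - expected_appreciation

print(effective_discount(5, 5))  # 0 -> r=0: reactive sale can be optimal
print(effective_discount(5, 1))  # 4 -> r>0: waiting is costly
```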

The Optimal Sales Channel for Domain Names

Riaz Rahim

Department of Economics, Stanford University, Stanford, California 94305

Literature Cited
Goeree, Jacob K., and Theo Offerman. "Efficiency in Auctions with Private and Common Values: An Experimental Study." The American Economic Review 92(3), June 2002, pp. 625-643.

Inderst, R. (2003). "The Coase Conjecture in a Bargaining Model with Infinite Buyers." Working Paper, LSE.

Milgrom, Paul R., and Robert J. Weber. "A Theory of Auctions and Competitive Bidding." Econometrica 50(5), September 1982, pp. 1089-1122.

Findings
The optimal method for sellers depends on r:

-if r>0, Public Auction is best.

-if r=0, Reactive Sale is best.

Sedo.com, which has ~40% of the resale market share, only launched its auction service in 2006. The bulk of sales are currently made using (1). This implies that:

-domain sellers have had low discount rates historically, presumably due to anticipated asset appreciation.

-the recent boom years for domain sales have increased opportunity cost of not selling, creating a higher incidence of r>0 and thus a greater demand for public auction services.

For further information

Please contact Riaz Rahim at [email protected] with further questions.

(2) Active Sale
Extension of the Inderst Model for Reactive Sales

Inderst (2003) shows that his model extends to the case where the “seller has to leave the old buyer in order to search for a new buyer.”

Same results as (1), except this method may require the seller to incur labor costs. Thus, (2) has a weakly worse outcome than (1).

Figure 1: An active seller markets the availability of a domain.

(4) Public Posting: Fixed Price

The Dutch Auction

Milgrom and Weber (1982) show that for a general auction following the principles of (3), the Dutch auction leads to a lower sale price than the ascending-price auction.

Method (4) mimics a Dutch Auction. The seller’s optimal strategy is to initially post a high sale price and then incrementally reduce the value until the domain is sold. Since method (3) is an ascending price auction with a time limit, (4) leads to a worse outcome than (3).

(3) Public Auction
General Model: Private Values and Common Values

Goeree and Offerman (2002) show that in an auction with bidders who have both private and common values, the result is inefficient, where:

-total bidder valuation is equal to the private valuation plus the mean of the perceived common values.

-realized efficiency = (t_win − t_min) / (t_max − t_min) × 100%, where t is a bidder's private value.

-there are a finite number of bidding rounds.

Application to domain name market:

-bidders have private values based on planned purpose and common values based on the future prospects of the Internet.

-instead of a finite number of rounds, domain auctions usually have a time limit.

Though precise efficiency rates can’t be inferred from the literature, when r>0 (3) is weakly better than (1), and when r=0 (3) is weakly worse than (1).
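The realized-efficiency measure quoted above can be computed directly. A minimal sketch (variable names are mine; the formula follows the poster's (t_win − t_min)/(t_max − t_min) × 100%):

```python
def realized_efficiency(private_values: list[float], winner_index: int) -> float:
    """100% when the highest-private-value bidder wins, 0% when the lowest does."""
    t_win = private_values[winner_index]
    t_min, t_max = min(private_values), max(private_values)
    return (t_win - t_min) / (t_max - t_min) * 100

# If a misleading common-value signal lets the middle bidder win:
print(realized_efficiency([10, 40, 50], winner_index=1))  # 75.0
```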

Charts & Tables

Figure 3: Zetetic provides some market statistics. Sellers may set r=0 anticipating high mean returns.

Figure 2: A sample of domains up for auction on Sedo.com

Page 23

Background Conclusions

Lessons Learned From DIVX: Diffusion and DRM
Sandra Tyan

Department of Economics, Stanford University

The Digital Versatile Disc (DVD) format was introduced in April 1997. DVD is an open format, meaning all players carrying a DVD logo can play a DVD. It can hold 10 times more information than a CD, has twice the visual clarity of a videocassette, and has multichannel surround sound.

Digital Video Express (DIVX) was announced in September 1997 and introduced a year later as a pay-per-view alternative to the DVD. DIVX discs required special hardware to play. The discs were priced at $4-5 and were encrypted so that they unlocked for 48 hours when the user first played the disc. Further play time or full unlocking required an additional fee. In June 1999, DIVX was phased out. The DVD installed base then was 1.9 million while the DIVX installed base was 165,000.

Top 5 Reasons Why DIVX Failed to Diffuse

BitTorrent recently partnered with studios to provide digital-only movies for rent, much like DIVX discs except online. Movies are secured by Microsoft's DRM, which only allows playback on one computer. They are priced at $3-4 and expire 30 days from the initial purchase or 24 hours after the consumer first plays the movie. For this endeavor to succeed, BitTorrent should learn from the failure of DIVX.

Although DIVX failed to diffuse, its flexible DRM was a novel idea. However, the success of DRM for pay-per-view movies is uncertain, since DIVX was never fully adopted by consumers. DIVX discs would have been successful had they contained widescreen and special features, so that an unlocked DIVX disc was exactly the same as its DVD counterpart. Giving consumers freedom of choice in playtime would also have helped: whether choosing the number of days for which they can view the disc, or a set number of hours of video play, so they can pause and restart the movie without worrying about a fixed 48 hours from the start. BitTorrent and other content providers need a flexible DRM policy that reduces the costs DRM imposes on consumers.

Copyright Protection

As with all digital media, piracy is a concern, since every copy is identical in quality to the original. The internet also facilitates fast and widespread distribution of illegal copies. Protection is included in all DVDs, such as regional encoding and Macrovision, which prevents direct copying to a videotape or recordable DVD player.

Figure 1: DVD player sales

DRM

For More Information

Dranove, D. and Gandal, N. (2003) "The DVD vs. DIVX Standard War: Empirical Evidence of Network Effects and Preannouncement Effects." Journal of Economics and Management Strategy 12(3), 363-386.

Fisher, K. (2006) "The Problem With MPAA's Shocking Piracy Numbers," Ars Technica, 5 May.

Stone, B. (2007) "Software Exploited by Pirates Goes to Work for Hollywood," The New York Times, 25 February.

Digital Rights Management (DRM) is an attempt by copyright owners to control access to digital media and prevent its copying.

1. Perceived standards war

The DVD format was a single format developed by hardware manufacturers and movie studios to replace VHS. Consumers thought DIVX was a competing standard and early adopters of DVD did not support DIVX.

2. Price point

DIVX players were priced $100 above DVD players. When DIVX players were released, DVD player manufacturers slashed their prices in order to increase sales.

3. Partial compatibility

DIVX players could play both DIVX and DVD discs, whereas DVD players could not play DIVX discs. Studios had more incentives to release movies in DVD format in order to reach both owners of DVD and DIVX players. DIVX lacked studio support.

4. Limited titles

By the time Circuit City pulled the plug on DIVX, there were 3,317 titles available on DVD and 471 titles available on DIVX, but only 100 titles available exclusively on DIVX. The high overlap in titles and lack of DIVX-only titles hindered the growth of DIVX.

5. Lack of special features

Not only did consumers get director commentary and other special features on DVD, DVD titles were also available in widescreen format while the aspect ratio on DIVX remained at 4:3.

Costs to Consumers

- High prices

The argument for DRM is that with more protection, studios have more incentive to produce movies, eventually leading to cheaper products. Yet there is no indication that movies are being released to DVD faster or cheaper than they were on VHS (which had no DRM). Also, there has never been a title protected by DRM that has not been shared online.

- Pay more for what you already own

DRM forces consumers to purchase more copies of what they already own for playback on different mediums (DVD player, iPod video, etc).

Benefits for Studios

- Maintain profits

The DVD market is currently $32 billion. An MPAA study claimed U.S. movie studios are losing $6.1 billion due to piracy.

- Price discrimination

DRM allows content providers to price discriminate and maintain high profits. Content providers can segment the market and price accordingly for download-to-own consumers and pay-per-view consumers.
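The segmentation argument can be illustrated with assumed numbers; the buyer counts and willingness-to-pay figures below are hypothetical, not from the poster.

```python
# Assumed demand: 60 buyers would pay $20 to own a movie; 140 more would pay
# only $4 for a pay-per-view rental.
OWN_BUYERS, OWN_WTP = 60, 20.0
RENT_BUYERS, RENT_WTP = 140, 4.0

# Without segmentation, one posted price serves everyone: price high and lose
# the renters, or price low and sell to all at the rental price.
single_price_revenue = max(OWN_BUYERS * OWN_WTP,
                           (OWN_BUYERS + RENT_BUYERS) * RENT_WTP)

# DRM-enforced segmentation lets the studio charge each group its own price.
segmented_revenue = OWN_BUYERS * OWN_WTP + RENT_BUYERS * RENT_WTP

print(single_price_revenue, segmented_revenue)  # 1200.0 1760.0
```

Under these assumptions, segmentation raises revenue because DRM prevents pay-per-view copies from substituting for download-to-own copies.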