
Interfaces
Vol. 36, No. 1, January–February 2006, pp. 6–25
issn 0092-2102 | eissn 1526-551X
doi 10.1287/inte.1050.0181
© 2006 INFORMS

General Motors Increases Its Production Throughput

Jeffrey M. Alden
General Motors Corporation, Mail Code 480-106-359, 30500 Mound Road, Warren, Michigan 48090, [email protected]

Lawrence D. Burns
General Motors Corporation, Mail Code 480-106-EX2, 30500 Mound Road, Warren, Michigan 48090, [email protected]

Theodore Costy
General Motors Corporation, Mail Code 480-206-325, 30009 Van Dyke Avenue, Warren, Michigan 48090, [email protected]

Richard D. Hutton
General Motors Europe, International Technical Development Center, IPC 30-06, D-65423 Rüsselsheim, Germany, [email protected]

Craig A. Jackson
General Motors Corporation, Mail Code 480-106-359, 30500 Mound Road, Warren, Michigan 48090, [email protected]

David S. Kim
Department of Industrial and Manufacturing Engineering, Oregon State University, 121 Covell Hall, Corvallis, Oregon 97331, [email protected]

Kevin A. Kohls
Soar Technology, Inc., 3600 Green Court, Suite 600, Ann Arbor, Michigan 48105, [email protected]

Jonathan H. Owen
General Motors Corporation, Mail Code 480-106-359, 30500 Mound Road, Warren, Michigan 48090, [email protected]

Mark A. Turnquist
School of Civil and Environmental Engineering, Cornell University, 309 Hollister Hall, Ithaca, New York 14853, [email protected]

David J. Vander Veen
General Motors Corporation, Mail Code 480-734-214, 30003 Van Dyke Road, Warren, Michigan 48093, [email protected]

In the late 1980s, General Motors Corporation (GM) initiated a long-term project to predict and improve the throughput performance of its production lines to increase productivity throughout its manufacturing operations and provide GM with a strategic competitive advantage. GM quantified throughput performance and focused improvement efforts in the design and operations of its manufacturing systems through coordinated activities in three areas: (1) it developed algorithms for estimating throughput performance, identifying bottlenecks, and optimizing buffer allocation, (2) it installed real-time plant-floor data-collection systems to support the algorithms, and (3) it established common processes for identifying opportunities and implementing performance improvements. Through these activities, GM has increased revenue and saved over $2.1 billion in over 30 vehicle plants and 10 countries.

Key words: manufacturing: performance/productivity; production/scheduling: applications.

Founded in 1908, General Motors Corporation (GM) is the world's largest automotive manufacturer. It has manufacturing operations in 32 countries, and its vehicles are sold in nearly 200 nations (General Motors Corporation 2005b). GM's automotive brands include Buick, Cadillac, Chevrolet, GMC, Holden, HUMMER, Opel, Pontiac, Saab, Saturn, and Vauxhall. In 2004, the company employed about 324,000 people worldwide and sold over 8.2 million cars and trucks—about 15 percent of the global vehicle market—earning a reported net income of $2.8 billion on over $193 billion in revenues (General Motors Corporation 2005a, b).

GM operates several hundred production lines throughout the world. In North America alone, GM operates about 100 lines in 30 vehicle-assembly plants (composed of body shops, paint shops, and general assembly lines), about 100 major press lines in 17 metal fabrication plants, and over 120 lines in 17 engine and transmission plants. Over 400 first-tier suppliers provide about 170 major categories of parts to these plants, representing hundreds of additional production lines.

The performance of these production lines is critical to GM's profitability and success. Even though the overall production capacity of the industry is above demand (by about 25 percent globally), demand for certain popular vehicles often exceeds planned plant capacities. In such cases, increasing the throughput of production lines increases profits, either by adding sales revenue or by reducing the labor costs associated with unscheduled overtime.

The Need to Increase Throughput

In the late 1980s, competition intensified in the automotive industry. In North America, this competition was fueled by an increasing number of imports from foreign manufacturers, customers' growing expectations for quality, and slow growth of the overall market, which led to intense pricing pressures that limited GM's opportunities to increase its prices and revenues. In comparison with its foreign competitors, GM was seen as wasteful and unproductive, seemingly unable to improve, and ineffectively copying Japanese methods without understanding the real production problems. Indeed, in competitive comparisons, The Harbour Report, the leading industry scorecard of automotive manufacturing productivity, ranked GM near the bottom in terms of production performance (Harbour and Associates, Inc. 1992). Many plants were missing production targets, working unscheduled overtime, experiencing high scrap costs, and executing throughput-improvement initiatives with disappointing results, while their managers argued about how best to meet production targets and become more productive and cost effective. GM was losing money, even with products in high demand, and was either cutting costs to the penny or opening the checkbook to solve throughput problems. Managers were extremely frustrated and had no logical plan to guide them in improving plant throughput and controlling production costs.

Vehicle launches (that is, the plant modifications required to produce a new vehicle) also suffered greatly: it took plants years, not months, to achieve throughput targets for new model lines, typically only after major, costly changes in equipment and layout. Long launches meant lost sales and expensive unplanned capital investment in machines, buffers, and space to increase throughput rates. Several factors contributed to inadequate line designs:

—Production data was unavailable or practically useless for throughput analysis, crippling engineers' ability to verify the expected performance of new lines prior to the start of production.

—Inadequate tools for analyzing manufacturing throughput, mostly discrete-event simulation (DES), took days or weeks to produce results, during which engineers may have changed the line design.

—Intense corporate pressure to reduce the costs for line investment led to overly optimistic estimates of equipment reliability, scrap rates, rework rates, space requirements, material-flow capability, operator performance, and maintenance performance. These optimistic estimates were promoted by vendors who rated equipment performance based on ideal conditions.

Under these conditions, many plants had great difficulty achieving the optimistic throughput targets during launch and into regular production. GM had two basic responses: (1) increase its pressure on managers and plants to improve the throughput rates with the existing investment, or (2) invest additional capital in equipment and labor to increase throughput rates—often to levels even higher than the original target rates to pay for these unplanned investments.

The confluence of these factors severely hindered GM's ability to compete; as a result, GM closed plants and posted a 1991 operating loss of $4.5 billion (General Motors Corporation 1992)—a record at the time for a single-year loss by any US company.

Responding to the Need

Recognizing the need and the opportunity to improve plant productivity, GM's research and development (R&D) organization began a long-term project to improve the throughput performance of existing and new manufacturing systems through coordinated efforts in three areas:

—Modeling and algorithms,
—Data collection, and
—Throughput-improvement processes.

The resulting technical advances, implementation activities, and organizational changes spanned a period of almost 20 years. We list some of the milestones in this long-term effort:

1986–1987: GM R&D developed its first analytic models and software (C-MORE) for estimating throughput and work-in-process (WIP) inventory for basic serial production lines and began collecting data and defining a standard throughput-improvement process (TIP).

1988: GM's Linden body fabrication plant and its Detroit-Hamtramck general assembly plant implemented pilots of C-MORE and TIP.

1989–1990: GM R&D developed a new analytic decomposition model for serial lines, extending C-MORE to accurately analyze systems with unequal workstation speeds and to provide approximate performance estimates for parallel lanes.

1991: A GM R&D implementation team deployed C-MORE at 29 North American GM plants and car programs, yielding documented savings of $90 million in a single year.

1994–1995: GM North America Vehicle Operations and GM Powertrain formed central staffs to coordinate data collection and consolidate throughput-improvement activities previously performed by divisional groups.

1995: GM R&D developed an activity-based network-flow simulation, extending C-MORE to analyze closed carrier systems and lines with synchronous transfer mechanisms.

1997: GM expanded the scope of implementation beyond improving operations to include designing production systems.

1999: GM Global Purchasing and Supply Chain began using C-MORE tools for supplier-development activities.

2000: GM R&D developed a new hybrid simulation system, extending C-MORE to analyze highly complex manufacturing systems within GM.

2003: GM Manufacturing deployed C-MORE tools and throughput-improvement processes globally.

2004: GM R&D developed an extensible callable library architecture for modeling and analyzing manufacturing systems to support embedded throughput analysis within various decision-support systems.

Modeling and Algorithms

In its simplest form, a production line is a series of stations separated by buffers (Figure 1), through which parts move sequentially until they exit the system as completed jobs; such serial lines are commonly the backbones of many main production lines in automotive manufacturing. Many extensions to this simple serial-line model capture more complex production-system features within GM. Example features include job routing (multiple passes, rework, bypass, and scrap), different types of processing (assembly, disassembly, and multiple parts per cycle), multiple part types, parallel lines, special buffering (such as resequencing areas), and conveyance systems (chains, belts, transfer bars, return loops, and automated guided vehicles (AGVs)). The usual measure of the throughput of a line is the average number of parts (jobs) completed by the line per hour (JPH).

Figure 1: A simple serial (or tandem) production line is a series of stations separated by buffers. Each unprocessed job enters Station 1 for processing during its work cycle. When the work is complete, each job goes into Buffer 1 to wait for Station 2, and it continues moving downstream until it exits Station M. The many descriptors of such lines include station speed, station reliability, buffer capacity, and scrap rates.

Analyzing even simple serial production lines is complicated because stations experience random failures and have unequal speeds. When a single station fails, its (finite) input buffer may fill and block upstream stations from producing output, and its output buffer may empty and starve downstream stations of input. Speed differences between stations can also cause blocking and starving. This blocking and starving creates interdependencies among stations, which make predicting throughput and identifying bottlenecks very difficult. Much research has been done on the throughput analysis of production lines (Altiok 1997, Buzacott and Shanthikumar 1993, Chen and Chen 1990, Dallery and Gershwin 1992, Gershwin 1994, Govil and Fu 1999, Li et al. 2003, Papadopoulos and Harvey 1996, Viswanadham and Narahari 1992).

Although analysts naturally wish to construct very detailed discrete-event simulation models to ensure realistic depiction of all operations in a production system, such emulations are difficult to create, validate, and transport between locations. The number of analyses and the time they take make such simulation models impractical. In analyzing production systems, a major challenge is to create tools that are fast, accurate, and easy to use. The trade-offs between these basic requirements motivated us in developing decomposition-based analytic methods and custom discrete-event-simulation solvers that we integrated with advanced interfaces into a suite of throughput-analysis tools called C-MORE.
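To make the serial-line terminology concrete before turning to the modeling assumptions, the following sketch shows one minimal way to represent a line like the one in Figure 1 in code. The class and field names (Station, SerialLine, cycle_time, mcbf, mttr) and the sample numbers are our illustrative choices, not C-MORE's actual data model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Station:
    """One workstation on a serial line (descriptors from Figure 1)."""
    name: str
    cycle_time: float   # seconds per job at ideal speed
    mcbf: float         # mean cycles between failures (reliability)
    mttr: float         # mean time to repair, in seconds
    scrap_rate: float   # fraction of jobs scrapped at this station

@dataclass
class SerialLine:
    """Stations separated by finite buffers; buffer_capacities[i] follows stations[i]."""
    stations: List[Station]
    buffer_capacities: List[int]

    def ideal_jph(self) -> float:
        # Jobs per hour if no station ever failed, was blocked, or was starved:
        # the slowest station's ideal speed limits the whole line.
        return min(3600.0 / s.cycle_time for s in self.stations)

# Illustrative three-station line with two interstation buffers.
line = SerialLine(
    stations=[
        Station("S1", cycle_time=55.0, mcbf=100.0, mttr=300.0, scrap_rate=0.01),
        Station("S2", cycle_time=60.0, mcbf=80.0, mttr=240.0, scrap_rate=0.00),
        Station("S3", cycle_time=58.0, mcbf=120.0, mttr=360.0, scrap_rate=0.02),
    ],
    buffer_capacities=[10, 10],
)
print(f"Ideal (failure-free) throughput: {line.ideal_jph():.1f} JPH")
```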

While the specific assumptions we made in modeling and analysis depended on the underlying algorithm used for estimating performance, we generally assumed that workstations are unreliable, with deterministic operating speeds and exponentially distributed times between successive failures and times to repair. We also assumed that all buffers have finite capacity and that the overall system is never starved (that is, jobs are always available to enter) or blocked (that is, completed jobs can always be unloaded). Among the performance measures C-MORE provides are the following outputs:

—The throughput rate averaged over scheduled production hours, which we use to predict performance, to plan overtime, and to validate the model based on observed throughput rates;

—System-time and work-in-process averages that provide lead times and material levels useful for scheduling and planning;

—The average state of the system, which includes blocking, starving, processing, and downtimes for each station and average contents for each buffer;

—Bottleneck identification and analysis, which helps us to focus improvement efforts in areas that will indeed increase throughput and to assess opportunities to improve throughput;

—Sensitivity analysis to estimate the impact of varying selected input parameters over specified ranges of values, to account for uncertainty in input data, to identify key performance drivers, and to evaluate suggestions for system improvements; and

—Throughput distribution (per hour, shift, or week) to help managers to plan overtime and to assess system changes from the perspective of throughput variation (less variation is desirable).

The basic trade-off in the C-MORE tools and analysis capabilities is between the complexity of the system that can be modeled and the execution speed of the analysis (Figure 2). Because we always need the fastest possible, accurate analysis, what performance-estimation method we choose depends on the complexity of the manufacturing system of interest. C-MORE's basic analysis methods are analytic decomposition and discrete simulation.

Analytic-Decomposition Methods

The analytic solvers we developed represent the underlying system as a continuous flow model and rely on a convergent iterative solution of a series of nonlinear equations to generate performance estimates. These methods are very fast, which enables automatic or user-guided exploration of many alternatives for system design and improvement. Furthermore, tractable analytic models require minimal input data (for example, workstation speed, buffer size, and reliability information).

Except in the special case of identical workstations, no exact analytic expressions are known for throughput analysis of serial production lines with more than two workstations. However, heuristics exist; they are typically based on decomposition approaches that use the analysis of the station-buffer-station case iteratively as the building block to analyze longer lines (Gershwin 1987).

Figure 2: The basic trade-off among the C-MORE tools and general-purpose discrete-event simulation (DES) is between the complexity of the system that can be modeled and the execution speed of analysis. For typical systems, the C-MORE analytic solver produces results almost instantaneously, while C-MORE simulations usually execute between 50–200 times faster than comparable DESs. Because execution speed is important, throughput engineers typically use the fastest analysis tool that is capable of modeling the system of interest. For example, in the context of GM's vehicle-assembly operations, the analytic solver and network simulation tools are most suitable for the production lines found in body shops and general assembly, while the complexity of paint shop operations requires use of the hybrid network/event simulation or general-purpose DES. (The figure arranges the tools from fastest to slowest as system complexity grows: the C-MORE analytic solver for serial lines with approximate simple routing; the C-MORE network simulation for closed systems and transfer stations; the C-MORE hybrid network/event simulation for complex routing, multiple products with style-based processing, carrier subsystems, and external downtimes such as light curtains and schedules; and general-purpose discrete-event simulation.)

The fundamental challenge is the analysis of the two-station problem. Our analysis approach to the two-station problem for our situation gives a closed-form solution (Alden 2002). We used this approach in the analytic throughput solver of C-MORE.

We based our analysis on a work-flow model approximation of the two-station system in which stations act like gates on the flow of work and buffers act like reservoirs holding work. This paradigm allows us to use differential equations instead of the more difficult difference equations. We assumed that station speeds are constant when processing under ideal conditions (no blocking, starving, or failures). We also assumed that each station's operating time between failures and its repair time are both distributed as negative exponential random variables. We used operating time instead of elapsed time, because most stations rarely fail while idle. We assumed that all variables are independent and generally unequal. We made one special simplifying assumption to handle mutual station downtime: when one station fails and is not working, the remaining station does not fail but processes at a reduced speed equal to its stand-alone throughput rate (its normal operating speed multiplied by its stand-alone availability). This assumption simplifies the analytic analysis by allowing convenient renewal epochs (for example, at station repairs) while still producing accurate results. We believe that we can relax this last assumption and still produce a closed-form solution—a possible topic of future work.

Our analysis approach focuses on the steady-state probability distribution of buffer contents at a randomly selected time of station repair. Under the (correct) assumption that this distribution is the weighted sum of two exponential terms plus two impulses at zero and full buffer contents, we can derive and solve a recursive expression by considering all possible buffer-content histories between two consecutive times of station repair (Appendix). Given this distribution, we can derive closed-form expressions for throughput rate, work in process, and system time. These expressions reduce to published results for special cases, for example, equal station speeds (Li et al. 2003). Extensive empirical observation and simulation validations have shown this approach to be highly accurate, with errors typically within two percent of observed throughput rates for actual systems at GM that have many stations. Large prediction errors in practice are usually caused by poor-quality data inputs; in these cases, we often use C-MORE to help us to identify the stations likely to have data-collection problems by using sensitivity analysis and comparisons with historical results. We obtain performance estimates for typical GM systems almost instantaneously and bottleneck and sensitivity analyses in a few seconds on a desktop PC.
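The closed-form two-station solution itself is given in Alden (2002) and the Appendix and is not reproduced here. As a rough illustration of the quantities it works with, the sketch below computes stand-alone availability and stand-alone throughput for each station under the exponential failure-and-repair assumptions above; the minimum over stations is only an upper bound on line throughput, because it ignores the blocking and starving that the decomposition analysis is designed to capture. Function names and parameter values are illustrative, not C-MORE's.

```python
def stand_alone_availability(mcbf_cycles: float, cycle_time_s: float, mttr_s: float) -> float:
    """Fraction of time a station is up, given exponential operating time between
    failures (mean = mcbf_cycles * cycle_time_s) and exponential repairs."""
    mean_uptime = mcbf_cycles * cycle_time_s
    return mean_uptime / (mean_uptime + mttr_s)

def stand_alone_jph(cycle_time_s: float, mcbf_cycles: float, mttr_s: float) -> float:
    """Ideal speed (3600 / cycle time) degraded by the station's own downtime."""
    return (3600.0 / cycle_time_s) * stand_alone_availability(mcbf_cycles, cycle_time_s, mttr_s)

# Illustrative two-station data: (name, cycle time in seconds, MCBF in cycles, MTTR in seconds).
stations = [("S1", 55.0, 100.0, 300.0), ("S2", 60.0, 80.0, 240.0)]
rates = {name: stand_alone_jph(ct, mcbf, mttr) for name, ct, mcbf, mttr in stations}
for name, jph in rates.items():
    print(f"{name}: stand-alone throughput = {jph:.1f} JPH")

# Ignoring interaction through the finite buffer, line throughput cannot exceed the
# slowest station's stand-alone rate; the actual rate is lower because of blocking
# and starving, which the decomposition analysis estimates.
print(f"Upper bound on line throughput = {min(rates.values()):.1f} JPH")
```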

Discrete Simulation Methods

The analytic method we described and other similar methods provide very accurate performance estimates for simple serial lines. Unfortunately, they are not applicable to GM's most complex production systems (for example, its paint shops). These production systems employ splits and merges with job-level routing policies (for example, parallel lanes and rework systems), use closed-loop conveyances (such as pallets or AGVs), or are subject to external sources of downtime that affect several disjoint workstations simultaneously (for example, light-curtain safety devices for nonadjacent operations). To analyze such features with the available analytic-decomposition methods, one must use various modeling tricks that yield approximate and sometimes misleading results.

General-purpose discrete-event-simulation (DES) software can analyze complex production systems with a high degree of accuracy. It takes too long, however, to develop and analyze these models for extensive what-if scenario analysis. GM needs such extensive scenario analysis to evaluate large sets of design alternatives and to explore continuous-improvement opportunities. To overcome the limitations of the available DES software and the modeling deficiencies of existing analytic methods, we developed and implemented new simulation algorithms for analyzing GM's production systems.

In these simulation algorithms, we used activity-network representations of job flow and efficient data structures (Appendix). We used these representations to accurately simulate the interaction of machines and the movement of jobs through the system, without requiring all events to be coordinated through a global time-sorted queue of pending actions. We implemented two simulation algorithms as part of the C-MORE tool set (Figure 2); one is based entirely on an activity-network representation, while the other augments such a representation with the limited use of an event queue to support analysis of the most complex manufacturing systems. Our internal benchmarking of these methods demonstrated that their analysis times are typically 50 to 200 times faster than those of comparable DES methods and their results are identical. Chen and Chen (1990) reported dramatic savings in computational effort for similar simulation approaches.

With the C-MORE simulation tools, we provide modeling constructs designed specifically for representing GM's various types of production systems. Examples include "synchronous line" objects to represent a series of adjacent workstations that share a single transfer mechanism to move jobs between the workstations simultaneously, convenient mechanisms to maintain the sequence order of jobs within parallel lanes, and routing policies to enforce restrictions at splits and merges. With such modeling constructs, the C-MORE simulation tools are much easier to use than general-purpose DES software for modeling GM's production systems. Although C-MORE's usability comes at the expense of flexibility, it provides two important benefits: First, end users spend far less time on model development than they would with commercial DES software (minutes or hours versus days or weeks); GM Powertrain Manufacturing, for example, reported that simulation-analyst time per line-design project has dropped by 50 percent since it began using these tools in 2002. Second, the use of common modeling constructs facilitates communication and transportability of models among different user communities within the company, a benefit that is especially important in overcoming organizational and geographic boundaries between global regions.
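C-MORE's activity-network simulation algorithms are not reproduced here. As a point of reference only, the sketch below is a deliberately simple time-stepped simulation of a serial line with finite buffers, random failures, and repairs (all names and parameter values are our own); it estimates JPH for the same kind of system, but without the specialized representations and data structures that make the C-MORE simulators 50 to 200 times faster than conventional approaches.

```python
import random

def simulate_serial_line(stations, buffer_caps, hours=200.0, seed=42):
    """Time-stepped (1-second) simulation of a serial line with finite buffers.

    stations: list of (cycle_time_s, mcbf_cycles, mttr_s); buffer_caps: capacities of the
    buffers after each station except the last. Returns average jobs per hour (JPH)."""
    rng = random.Random(seed)
    n = len(stations)
    buffers = [0] * (n - 1)              # jobs waiting after station i
    work_left = [0.0] * n                # seconds left on the job in each station
    has_job = [False] * n
    repair_left = [0.0] * n              # seconds of repair remaining
    completed = 0

    for _ in range(int(hours * 3600)):
        # Sweep downstream to upstream so space freed this second can be reused upstream.
        for i in range(n - 1, -1, -1):
            cycle, mcbf, mttr = stations[i]
            if repair_left[i] > 0:                      # station is down
                repair_left[i] -= 1
                continue
            if has_job[i] and work_left[i] <= 0:        # finished: try to unload
                if i == n - 1:
                    completed += 1
                    has_job[i] = False
                elif buffers[i] < buffer_caps[i]:
                    buffers[i] += 1
                    has_job[i] = False                  # else blocked: hold the job
            if not has_job[i]:                          # try to load a new job
                if i == 0 or buffers[i - 1] > 0:        # first station is never starved
                    if i > 0:
                        buffers[i - 1] -= 1
                    has_job[i] = True
                    work_left[i] = cycle
            if has_job[i] and work_left[i] > 0:         # work for one second
                work_left[i] -= 1
                # Exponential operating time between failures: constant hazard per
                # operating second of 1 / (MCBF * cycle time).
                if rng.random() < 1.0 / (mcbf * cycle):
                    repair_left[i] = rng.expovariate(1.0 / mttr)

    return completed / hours

# Illustrative three-station line (cycle time s, MCBF cycles, MTTR s) with buffers of 10.
line = [(55.0, 100.0, 300.0), (60.0, 80.0, 240.0), (58.0, 120.0, 360.0)]
print(f"Estimated throughput = {simulate_serial_line(line, [10, 10]):.1f} JPH")
```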


Delivery Mechanism

To facilitate use of the C-MORE analysis tools (both analytic and simulation approaches), we designed and developed an extensible, multilayer software architecture, which serves as a framework for deploying the C-MORE tools (Figure 3). We implemented it as a set of callable libraries to provide embedded access to identical modeling and analysis capabilities for any number of domain-specific end-user software applications. This approach promotes consistency across the organization and allows disparate user groups with divergent interests to access identical analysis capabilities while sharing and reusing system models. As a concrete example, the throughput predictions obtained by engineers designing a future manufacturing line are consistent with the results obtained later by plant personnel using the line's actual performance data to conduct continuous-improvement activities, even though the two user groups access the C-MORE tools through separate user interfaces.

Figure 3: The C-MORE callable library decouples system modeling from analysis and provides embedded throughput-analysis capabilities for multiple, domain-specific user interfaces. The software architecture design supports future extendibility as new analysis techniques are developed and modeling scope is broadened. (The diagram shows user interfaces connected through a modeling isolation layer to the C-MORE callable library, which contains solvers (C-MORE simulation and C-MORE analytic) and optimization modules (bottleneck analysis and buffer optimization).)

The C-MORE software architecture provides a modeling isolation layer, which decouples system modeling from analysis capabilities (much like AMPL decouples mathematical modeling from such solvers as CPLEX or OSL). The C-MORE software architecture provides analysis capabilities through "solvers" and "optimization modules." We use the term solver to refer to a software implementation of an algorithm that can analyze a model of a manufacturing system to produce estimates of system performance (for example, throughput, WIP levels, and station-level blocking and starving). Solvers may employ different methods to generate results, including analytic and simulation methods. We use the term optimization module to refer to a higher-level analysis that addresses a specific problem of interest to GM, typically by executing a solver as a subroutine within an iterative algorithmic framework. For example, C-MORE includes a bottleneck-analysis module that identifies the system components that have the greatest impact on system throughput. Other examples of optimization modules include carrier analysis, which determines the ideal number of pallets or carriers to use in a closed-loop system, and a buffer-optimization module, which heuristically determines the distribution of buffer capacity throughout the system according to a given objective, such as maximum throughput. Each of these optimization modules comprises an abstract modeling interface and one or more algorithm implementations. Generally, these optimization modules can use any available solver to estimate throughput performance.

Because its modeling and analysis capabilities are decoupled, the C-MORE software architecture allows users to reuse system models with different analysis approaches. In addition, the extensible architecture design facilitates rapid deployment of new analysis capabilities and provides a convenient mechanism for developing and testing new approaches.
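The sketch below illustrates the kind of decoupling this architecture provides: an optimization module is written against an abstract solver interface and can run with any solver implementation. The class names (ThroughputSolver, NaiveAnalyticSolver, BottleneckAnalysis), the crude solver, and the one-at-a-time sensitivity heuristic are our illustrations under these assumptions, not C-MORE's actual API or algorithms.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, replace
from typing import Dict, List, Tuple

@dataclass(frozen=True)
class StationModel:
    name: str
    jph_ideal: float       # ideal speed in jobs per hour
    availability: float    # stand-alone availability, 0..1

class ThroughputSolver(ABC):
    """Abstract solver: any analytic or simulation implementation fits behind this."""
    @abstractmethod
    def throughput(self, stations: List[StationModel]) -> float: ...

class NaiveAnalyticSolver(ThroughputSolver):
    """Crude stand-in solver: line rate = slowest stand-alone station rate.
    (A real solver would also account for finite buffers, blocking, and starving.)"""
    def throughput(self, stations: List[StationModel]) -> float:
        return min(s.jph_ideal * s.availability for s in stations)

class BottleneckAnalysis:
    """Optimization module: rank stations by how much a small availability
    improvement at each one would raise system throughput, using any solver."""
    def __init__(self, solver: ThroughputSolver, delta: float = 0.01):
        self.solver, self.delta = solver, delta

    def rank(self, stations: List[StationModel]) -> List[Tuple[str, float]]:
        base = self.solver.throughput(stations)
        gains: Dict[str, float] = {}
        for i, s in enumerate(stations):
            bumped = replace(s, availability=min(1.0, s.availability + self.delta))
            improved = stations[:i] + [bumped] + stations[i + 1:]
            gains[s.name] = self.solver.throughput(improved) - base
        return sorted(gains.items(), key=lambda kv: kv[1], reverse=True)

line = [StationModel("S1", 65.0, 0.95), StationModel("S2", 60.0, 0.92), StationModel("S3", 62.0, 0.97)]
module = BottleneckAnalysis(NaiveAnalyticSolver())
print(module.rank(line))   # the station whose improvement helps most ranks first
```

Because the module depends only on the abstract interface, swapping the naive solver for a simulation-based one would not change the bottleneck-analysis code at all, which is the point of the isolation layer.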

Data Collection

Collecting production-line data is a difficult and massive task. A line may have hundreds of workstations, each with its own processing times and operating parameters. Furthermore, models with many data inputs tend to be complex and time consuming to analyze. Hence, for practical use, it is important to develop models and algorithms with modest data requirements that still produce meaningful results. To determine the appropriate data inputs, we conducted repeated trials and validation efforts that eventually defined the minimal data requirements as a function of production-line characteristics (routing, scrapping, failure modes, automation, conveyance systems, and so on).

In our first pilot deployments, we had almost no useful data to support throughput analyses. The data available was often overwhelming in quantity, inaccurate, stale, or lacking in internal consistency. GM had not established unambiguous standards that defined common measures and units. This lack of useful data presented a tremendous obstacle. Good data collection required ongoing tracking of changes in workstation speed, scrap counts, and causes of workstation stoppages (equipment failures, quality stops, safety stops, part jams, feeder-line starving, and so forth). We gradually instituted standardized data-collection practices to address these issues. Early on, we collected and analyzed well-defined data manually at several plants to prove its value in supporting throughput analysis. This success helped us to justify manual data-collection efforts in other plants for the purpose of increasing throughput rates. It also helped us to understand what data we needed and obtain corporate support for implementing automatic data collection in all GM plants.

Automatic data collection presented further challenges. GM's heritage as a federation of disparate, independently managed businesses meant that its plants employed diverse equipment and technologies. In particular, plant-floor control technologies varied widely from plant to plant, and standard interfaces did not exist. We could not deploy data-collection tools and techniques we developed at one plant to other plants without modifying them extensively. Likewise, we could not readily apply the lessons we learned at one plant to other plants. We needed common equipment, control hardware and software, and standard interfaces. GM formed a special implementation team to initiate data collection and begin implementation of throughput analysis in GM's North American plants. This activity continues today, with established corporate groups that develop and support global Web-accessible automatic data collection. These groups also support throughput analysis in all plants by training employees, analyzing data, and providing on-site expertise.

Automatic Data Collection, Validation, and Filtering

The automated machines and processes in automotive manufacturing facilities include robotic welders, stamping presses, numerically controlled (NC) machining centers, and part-conveyance equipment. Industrial computers called programmable logic controllers (PLCs) provide operating instructions and monitor the status of production equipment. To automate data collection, we implemented PLC code (software) that counts production events and records their duration; these events include machine faults and blocking and starving events. Software summarizes event data locally and transfers them from the PLC into a centralized relational database. We use separate data-manipulation routines to aggregate the production-event data into workstation-performance characteristics, such as failure and repair rates, and operating speeds. We then use the aggregated data as input to C-MORE analyses to validate models, to identify bottlenecks, and to improve throughput.

We validate and filter data at several stages, which helps us to detect missing and out-of-specification data elements. In some cases, we can correct these data anomalies manually before we analyze the plant-floor data with C-MORE. We compare the C-MORE throughput estimates to actual observed throughput to detect errors in the model or data. We pass all data through the various automatic filters before using them to improve throughput. GM collects plant-floor data continually, creating huge data sets (typically from 15,000 to 25,000 data points per shift, depending on plant size and the scope of deployment). Therefore, we must regularly archive or purge historical performance data.

In general, PLCs can detect a multitude of event types; we had to identify which data are meaningful in estimating throughput. The C-MORE analysis tools helped GM to determine which data to collect and standard ways to aggregate the data. GM's standardization of PLC hardware and data-collection techniques has helped it to implement C-MORE and the throughput-improvement process (TIP) throughout North America, and it makes its ongoing globalization efforts possible.
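The aggregation step can be pictured with the sketch below, which turns a log of fault events of the kind a PLC might record into mean cycles between failures (MCBF) and mean time to repair (MTTR) for each workstation. The record layout, field names, and sample values are hypothetical, not GM's actual database schema.

```python
from collections import defaultdict
from statistics import mean
from typing import Dict, List, Tuple

# Hypothetical per-shift event records: (station, fault duration in seconds).
fault_events: List[Tuple[str, float]] = [
    ("S1", 120.0), ("S1", 45.0), ("S2", 300.0), ("S1", 80.0), ("S2", 60.0),
]
# Hypothetical cycle counters read from the PLC at the end of the shift.
cycles_completed: Dict[str, int] = {"S1": 410, "S2": 395}

def aggregate(faults, cycles):
    """Summarize raw fault events into MCBF (cycles) and MTTR (seconds) per station."""
    durations = defaultdict(list)
    for station, seconds in faults:
        durations[station].append(seconds)
    summary = {}
    for station, n_cycles in cycles.items():
        repairs = durations.get(station, [])
        summary[station] = {
            "mcbf": n_cycles / len(repairs) if repairs else float("inf"),
            "mttr": mean(repairs) if repairs else 0.0,
            "downtime_s": sum(repairs),
        }
    return summary

for station, stats in aggregate(fault_events, cycles_completed).items():
    print(station, stats)
```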


Historical Data for Line Design

As we collected data, we developed a database of actual historical station-level performance data. This database, which is organized by equipment type (make and model) and purpose, has become an integral part of GM's processes for designing new production lines. Previously, GM's best sources of performance estimates for new equipment were the machine specifications equipment vendors provided. Because vendors typically base their ratings on assumptions of ideal operating conditions, the ratings are inherently biased and often unsuitable as inputs to throughput-analysis models. GM engineers now have a source of performance estimates that are based on observations of similar equipment in use within actual production environments. Using this historical data, they can better predict future performance characteristics and analyze proposed production-line designs realistically, setting better performance targets and improving GM's ability to meet launch timetables and throughput targets.
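One way to picture how such a database feeds line design is sketched below: proposed equipment is looked up by make, model, and purpose, and observed reliability from existing installations is summarized to seed a new line model, falling back to the vendor rating only when no field data exist. The schema, the sample records, and the use of median values are illustrative assumptions, not the actual GM database.

```python
from statistics import median

# Hypothetical historical observations keyed by (make, model, purpose):
# each entry is a list of (mcbf_cycles, mttr_seconds) pairs observed in running plants.
history = {
    ("AcmeWeld", "RW-200", "spot weld"): [(95.0, 310.0), (110.0, 280.0), (102.0, 295.0)],
    ("AcmeWeld", "RW-200", "stud weld"): [(140.0, 200.0)],
}

def design_estimate(make: str, model: str, purpose: str, vendor_mcbf: float, vendor_mttr: float):
    """Prefer observed field performance over vendor ratings when history exists."""
    observations = history.get((make, model, purpose))
    if not observations:
        # No field data yet: fall back to the (typically optimistic) vendor rating.
        return {"mcbf": vendor_mcbf, "mttr": vendor_mttr, "source": "vendor rating"}
    return {
        "mcbf": median(m for m, _ in observations),
        "mttr": median(r for _, r in observations),
        "source": f"{len(observations)} field observations",
    }

print(design_estimate("AcmeWeld", "RW-200", "spot weld", vendor_mcbf=250.0, vendor_mttr=120.0))
```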

Throughput-Improvement Processes

In our pilot implementations of throughput analysis in GM during the late 1980s, we faced several cultural challenges. First, we had difficulty assembling all the right people and getting them to focus on a single throughput issue unless it was an obvious crisis. Because those who solved large crises became heroes, people sought out and tackled the big problems quickly, rather than working on the little persistent problems. These little problems often proved to be the real throughput constraints, and identifying and quantifying them required careful analysis. Second, our earliest implementation efforts appeared to be yet another set of not-invented-here corporate program-of-the-month projects, to be tolerated until they fizzled out. As a result, we had difficulty getting the support and commitment we needed for successful implementation. Finally, plant personnel lacked confidence that a black-box computer program could really understand the complexities of a production line.

We responded to these problems in several ways. We prepared solid case studies and excellent training materials to prove the value of throughput analysis, and we developed a well-defined process the plant would own to support and reward ongoing throughput improvements. To this last point, we developed and taught a standard TIP to promote the active and effective use of labor, management, data collection, and throughput analysis to find and eliminate throughput bottlenecks. The TIP consisted of the following steps:

—Identify the problem: Typically, it is the need to improve throughput rates.
—Collect data and analyze the problem: Use throughput analysis to find and analyze bottlenecks.
—Create plans for action: The planners are usually multidisciplinary teams of engineers, line operators, maintenance personnel, suppliers, managers, and finance representatives.
—Implement the solution: Assign responsibilities and obtain commitment to provide implementation resources.
—Evaluate the solution: Ensure that the solution is correctly implemented and actually works.
—Repeat the process.

To further cultural transformation, we distributed free copies of The Goal (Goldratt and Cox 1986) to personnel at all plants. We conducted classes in the basic theory of constraints (TOC) to help plant personnel understand, from a very fundamental perspective, the definition and importance of production bottlenecks, why they can be difficult to identify, and why we needed C-MORE to find them.

With the TIP in place and the support of plant managers and personnel, plants soon improved throughput and discovered new heroes—the teams that quickly solved persistent throughput problems and helped the plants meet their production targets. To further reward and promote the use of throughput analysis and TIP, GM gave a popular "Bottleneck Buster" award to plants that made remarkable improvements in throughput.

After plants improved throughput of existing production lines, we applied the process in a similar fashion to designing new production lines. We established formal workshops, especially in powertrain (engine and transmission) systems, to iteratively find and resolve bottlenecks using the C-MORE tools and a cross-functional team of experts.


An Early Case Study—The Detroit-Hamtramck Assembly Plant

In 1988, GM conducted a pilot implementation of C-MORE and TIP at the Detroit-Hamtramck assembly plant, quickly achieving substantial throughput gains and cost savings. This experience serves as a good example of improvements that can be realized through the use of these tools.

In the late 1980s, GM considered the Detroit-Hamtramck plant a model for its future, with hundreds of training hours per employee and vast amounts of new (and untested) advanced manufacturing technology throughout the facility. It was designed to produce 63 JPH with no overtime; yet the plant could barely achieve 56 JPH even with overtime often exceeding seven person-hours per vehicle. The associated costs of lost sales and overtime exceeded $100,000 per shift. Rather than increase efficiency, the plant's advanced manufacturing technology actually hindered improvement efforts because the added operational complexity made it difficult for plant workers and managers to determine where to focus improvement efforts. Under intense pressure to achieve its production targets, the plant worked unscheduled overtime and conducted many projects to increase throughput rates. Unfortunately, most of these initiatives produced meager results because they focused on local improvements that did not affect overall system throughput. The lack of improvement caused tremendous frustration and further strained the relationships among the labor union, plant management, and corporate management.

In 1988, GM researchers presented their new throughput-analysis tools (an early version of C-MORE) to the Detroit-Hamtramck plant manager, who in turn asked a plant-floor systems engineer to visit the GM Research Labs to learn more. Through interactions with GM researchers, the engineer learned about bottlenecks, throughput improvement, and C-MORE's analysis capabilities. He also read the book they recommended, The Goal (Goldratt and Cox 1986), to learn more about continuous improvement.

Back at the plant, he started collecting workstation performance data. After several days, he had gathered enough data to model and analyze the system using C-MORE, and he found the top bottleneck operation: the station installing the bulky headliner in the ceiling of each car. The data showed this station failing every five cycles for about one minute. Although every vehicle had to pass through this single operation, managers had never considered this station a significant problem area because its downtime was small and no equipment actually failed. The engineer investigated the operation and found that the operator worked for five cycles, stopped the line, went to get five headliners, and then restarted the line. The downtime was measured at slightly less than one minute, and it occurred every five cycles. Further investigation revealed that the underlying problem was lack of floor space. The location of a supervisor's office prevented the delivery and placement of the headliners next to the line. Ironically, the plant had previously considered relocating the office but had delayed the move because managers did not consider it a high priority. Based on this first C-MORE analysis, the engineer convinced the planner to move the office, despite its low priority. On the Monday after the move, the headliners were delivered close to the line and the line stops ended. Not surprisingly, the throughput of the whole plant increased.

Up to this point, the prevailing opinion had been that plant operations were so complex that one could not predict cause-and-effect relationships, and the best one could do was to fix as many things as possible and hope for the best. Now GM could hope to systematically improve plant throughput.

We repeatedly collected data and addressed bottlenecks until the plant met its throughput targets consistently and cut overtime in half (Figure 4). The plant's achievement was recognized and documented in The Harbour Report, which placed the Detroit-Hamtramck plant in the top 10 most-improved vehicle-assembly plants with a 20 percent improvement in productivity (Harbour and Associates, Inc. 1992).

Because of this success, plant operations changed. First, the language changed; such terms as bottlenecks, stand-alone availability (SAA), mean cycles between failures (MCBF), and mean time to repair (MTTR) became part of the vernacular.

Figure 4: Within six months of using C-MORE in the Detroit-Hamtramck assembly plant in November 1988, we found and removed bottlenecks, increased throughput by over 12 percent, attained the 63 jobs-per-hour (JPH) production target, and cut overtime hours per vehicle in half. (The chart plots monthly throughput rate (JPH) and overtime (hours per vehicle) from November 1988 through May 1989; the plotted series show throughput rising from 56.1 to 62.8 JPH while overtime fell from 7.1 to 3.4 hours per vehicle.)

Armed with the C-MORE analysis, the plant could prioritize repairs and changes more effectively than ever before. Accurate data collection exposed the true performance characteristics of workstations, and the finger pointing that caused distrust in the plant gave way to teamwork, with one area of the plant volunteering resources to another to help it to improve the bottleneck station.

Building on this successful pilot, the Detroit-Hamtramck plant continued its productivity-improvement efforts in the ensuing years and achieved results that were described as "phenomenal" in The Harbour Report North America 2001 (Harbour and Associates, Inc. 2001).

Implementation and the Evolution of the Organization

Even after early pilot successes, it took a special effort and top management support to implement throughput analysis widely within GM. In 1991, with a vice president as eager champion, we formed a special implementation-project team. This team consisted of 10 implementation members and 14 supporting members to help with management, consultation, training, programming, and administrative work. Our stated goal was to "enable GM's North American plants to save $100 million in 1991 through the implementation of C-MORE." We trained team members and stationed them in 15 of the most critical plants. They collected data, modeled and analyzed systems using C-MORE to find the top throughput bottlenecks, ran TIP meetings, tracked performance, and provided overall training. The results were remarkable: plants achieved dramatic throughput increases (over 20 percent was common), reduced their overtime hours, and often met their production targets for the first time. Our most conservative accounting of dollar impact for this effort was over $90 million in a single year (1991) through C-MORE and TIP implementations for 29 GM plants and car programs. This work proved the value and feasibility of throughput analysis.

To build on the success of the implementation-project team, central management established a dedicated staff to spread the use of C-MORE and TIP to other plants. Thus, in 1994, it formed a central organization within GM North America Vehicle Operations to continually help plants with all aspects of throughput analysis, education, and implementation. Over time, this group, which grew to almost 60 members, developed common C-MORE data-collection standards as part of all new equipment going into plants and produced Web-enabled bottleneck reports for use across the corporation. It developed training courses and custom games to teach employees and motivate them to use throughput-improvement tools and concepts. Improvement teams identified and attacked launch problems before new equipment started running production parts. Integral to this activity, C-MORE is now a standard corporate tool for improving plant throughput and designing production lines. To support its widespread use, GM undertook several activities:

—GM integrated C-MORE into multiple desktop applications to provide various users easy access to throughput-analysis capability.
—GM provided formal training and software to over 400 global users.
—GM formed a C-MORE users group to share developments, lessons, and case studies across the organization and summarize results in a widely distributed newsletter.
—GM established common methods and standards for data collection to support throughput analysis and its rapid implementation worldwide.
—GM established a "Bottleneck Buster" award to recognize plants that make remarkable improvements in throughput.

We are continuing to expand the use of the C-MORE tools and TIP in GM's global operations.


The C-MORE tools and data provide a common platform and language that engineers, managers, and analysts across GM's worldwide operations use to develop and promote common best-manufacturing practices. Standard data definitions and performance measures facilitate communication among GM's geographically dispersed operations and enable the fast transfer of lessons learned among manufacturing facilities, no matter where they are located. C-MORE underpins GM's new global manufacturing system (GMS), which it is currently deploying throughout its worldwide operations.

Because of the success GM realized internally using C-MORE and TIP, it formed a separate corporate organization to conduct supplier-development initiatives. Through its activities, GM can improve the manufacturing capabilities of external part suppliers by helping them to design and improve their production systems. Previously, GM plants had to react to unexpected constraints on part supplies (often during critical new vehicle launches); now GM engineers collaborate with suppliers to proactively develop and analyze models of suppliers' manufacturing systems. With GM's assistance, key suppliers are better able to design systems that will meet contractual throughput requirements. As a result, GM's suppliers become more competitive, and GM avoids the costs and negative effects of part shortages. In one recent example, 12 (out of 23) key suppliers for a new vehicle program had initially designed systems that were incapable of meeting throughput-performance requirements; GM discovered this situation and helped the suppliers to resolve it prior to the vehicle launch period, avoiding costly delays. Although their impact is difficult to quantify, these supplier-development initiatives become increasingly important as GM relies more heavily on partnerships with external suppliers.

Impact

The use of the C-MORE tools, data, and the throughput-improvement process pervades GM, with applications to improve manufacturing-system operations, production-system designs, supplier development, and production-systems research. The impact of work in these areas is multifaceted. It has yielded measured productivity gains that reduced costs and increased revenue, and it has changed GM's processes, organization, and culture.

Profits

Use of the C-MORE tools and the TIP for improving production throughput alone yielded over $2.1 billion in documented savings and increased revenue, reflecting reduced overtime and increased sales for high-demand vehicles. Although it is impossible to measure cost avoidance, internal users estimate the additional value of using the C-MORE tools in designing manufacturing systems for future vehicles to be several times this amount; they have improved decisions about equipment purchases and GM's ability to meet production targets and reduce costly unplanned investments. We have not carefully tracked the impact on profits of using the C-MORE tools in developing suppliers.

Competitive Productivity

The use of C-MORE and the TIP has improved GM's ability to compete with other automotive manufacturers. The remarkable improvements GM has achieved have been recognized by The Harbour Report, the leading industry scorecard of automotive manufacturing productivity. When GM first began widespread implementation of C-MORE in the early 1990s, not one GM plant appeared on the Harbour top-10 list of most productive plants. In fact, of the 15 least productive plants measured by Harbour from 1989 through 1992, 14 were GM plants. During this same time period, The Harbour Report estimate for GM's North American manufacturing capacity utilization was 60 percent, and its estimate of the average number of workers GM needed to assemble a vehicle was more than 50 percent higher than that of its largest domestic competitor, Ford Motor Company (Harbour and Associates, Inc. 1992). This unbiased external assessment of GM's core manufacturing productivity clearly attested to its need to improve throughput.

In the ensuing years, GM focused continually on increasing throughput, improving vehicle launches, and reducing unscheduled overtime. C-MORE enabled these efforts, helping GM to regain a competitive position within the North American automotive industry.

Figure 5: Productivity improvements in General Motors' North American manufacturing operations have outpaced those of its two major domestic competitors, DaimlerChrysler and Ford Motor Company. From 1997 through 2004, GM's total labor hours per vehicle improved by over 26 percent (Harbour and Associates, Inc. 1999–2003, Harbour Consulting 2004). (The chart plots total labor hours per vehicle (HPV), covering assembly, stamping, and powertrain, for General Motors, DaimlerChrysler, and Ford Motor from 1997 through 2004.)

GM has made steady, year-after-year improvements in manufacturing productivity since 1997 (Figure 5). Its earlier use of C-MORE also led to measurable improvements, but direct comparison with prior years is problematic because Harbour Consulting changed its measurement method in 1996.

According to the 2004 Harbour Report, GM's total labor hours per vehicle dropped by over 26 percent since 1997. GM now has four of the five most productive assembly plants in North America and sets the benchmark in eight of 14 vehicle segments (Harbour Consulting 2004). The estimate of overall vehicle-assembly capacity utilization is now 90 percent. In addition, GM's stamping operations lead the industry in pieces per hour (PPH), and GM's engine operations lead the US-based automakers. C-MORE and TIP, combined with GM's new global manufacturing system (GMS) and other initiatives, have enabled GM to make these outstanding improvements.

Standard Data Collection

Common standards for data definitions, plant-floor data-collection hardware and software, and data-management tools and practices have made the otherwise massive effort of implementing and supporting throughput analysis by GM engineers and in GM plants a well-defined, manageable activity. Plant-floor data collection and management is now a global effort, with particular emphasis in Europe and the Asia-Pacific region.

Line Design

GM engineers often design new production lines in cross-functional workshops whose participants try to minimize investment costs while achieving quality and throughput goals. These workshops use C-MORE routinely because they can quickly and easily develop, analyze, and comprehend models. Using C-MORE, they can evaluate hundreds of line designs for each area of a plant, whereas in the past they considered fewer than 10 designs because of limited data and analysis capability. Also, GM now has good historical data on which to base models. Although the impact of line designs on profit is more difficult to measure than the impact of plant improvements, GM users estimate that line designs that meet launch performance goals have two to three (sometimes up to 10) times the profit impact of throughput improvements made after launch. This impact is largely because of better market timing for product launches and less unplanned investment during launch to achieve throughput targets. By using C-MORE, GM has obtained such savings in several recent product launches, achieving record ramp-up times for the two newest engine programs in GM's Powertrain Manufacturing organization.

Plant Operations and Culture

GM's establishment of throughput analysis has fostered important changes in its manufacturing plants. First, they use throughput sensitivity analysis and optimization to explore, assess, and promote continuous-improvement initiatives. They can use it to resolve questions (and avoid speculation) about buffer sizes and carrier counts.

Throughput analysis forces plants to consider all stations of a line as integral components of a single system. This perspective, together with the cross-functional participation in TIP meetings, reinforces a focus on system-level performance. With many local improvement opportunities competing for limited resources (for example, engineers, maintenance, and financial support), C-MORE and TIP help the plants concentrate their efforts on those initiatives that improve bottleneck performance and have the greatest impact on overall system throughput. This change greatly reduces the frustration of working to get project resources, only to find the improvements did little to improve throughput; with throughput analysis, such efforts find quick support and provide satisfying results.

Tailored throughput reports improve scheduling of reactive and preventive maintenance by ranking areas according to their importance to throughput. These reports also enhance plant suggestion programs; managers can better evaluate and reward suggestions for improving throughput, acting on the better-targeted suggestions and obtaining higher overall improvements. They thus motivate employees to seek and prevent problems rather than reacting only to the problem of the moment.

The plants have integrated throughput analysis in their organizations by designating throughput engineers to oversee data collection, to conduct throughput analyses, to help with problem solving, and to support TIP meetings. They have made this commitment even though they are under pressure to reduce indirect labor costs.

The C-MORE users group (active for several years) and the central staff that support it collect and communicate best practices for throughput analysis and data collection among the plants. They thus further support the continuous improvement of plant operations and capture lessons learned for use by many plants.

Organizational Learning
The organization has learned many lessons during its effort to institute throughput analysis. While some of these lessons may seem straightforward from a purely academic perspective—or to an operations research (OR) professional experienced in manufacturing-systems analysis—they were not apparent to the organization's managers and plant workers. Developing an understanding of these concepts at an organizational level was challenging and required paradigm shifts in many parts of the company; in some cases, the lessons contradicted active initiatives. We conducted training sessions at all levels of the organization, from management to the plant floor. This training took many forms, from introductory games that illustrate basic concepts of variability and bottlenecks while demonstrating the need for data-driven analysis, to formal training courses in the tools and processes. Here are some lesson topics:
—Understanding what data is needed and (more important) what data is not needed.
—Understanding our data-collection capabilities and identifying the gaps.
—The importance of accurate and validated plant-floor data for analyzing operations improvements and for designing new systems.
—The importance of basing decisions on objective data and analysis, not on beliefs based on isolated experiences.
—The impact of variability on system performance and the importance of managing buffers to protect system performance.
—Reducing repair times to improve throughput and decrease variability.
—Understanding that extreme applications of the lean mantra (zero buffers) are harmful.
—Understanding that bottleneck operations aren't simply those with the longest downtimes or lowest stand-alone availability.
—The need for robust techniques to find true bottlenecks.
—Understanding that misuse of simulation analysis can foster poor decisions.
—The importance of consistency in modeling and in the transportability of models and data.
—The use of sensitivity analysis to understand key inputs and modeling assumptions that work well in practice.
—The impact of early part-quality assessment on cost and throughput.

The organizational changes made during the course of establishing throughput analysis included the formation of central groups to serve as "homerooms" for throughput knowledge and to transfer best practices and lessons learned among plants. Ultimately, GM's inclusion of throughput analysis in its standard work and validation requirements testifies to its success in learning these lessons.

Stimulation of Manufacturing-Related Operations Research Within GM
The success of throughput analysis in GM has exposed plant and corporate staffs to OR methods and analysis. General Motors University, GM's internal training organization, has had over 4,500 enrollments in its eight related courses on TOC, the C-MORE analysis tools, data collection, and TIP. Employees' growing knowledge has greatly facilitated the support and acceptance of further manufacturing-related OR work in GM. Since this project began, GM has produced over 160 internal research publications concerning improving plant performance. GM researchers have developed and implemented analysis tools in related areas, including maintenance operations, stamping operations, plant-capacity planning, vehicle-to-plant allocation, component make-buy decisions, and plant layout decisions. These tools generally piggyback on C-MORE's reputation, data collection, and implementations to gain entry into plants and line-design workshops. Many of these tools actually use embedded throughput calculations from C-MORE through its callable library interface. The cumulative impact on profit of the additional OR work enabled by our throughput-analysis work is an indirect benefit of the C-MORE effort.

Concluding Remarks
Improving GM's production throughput has required a sustained effort for over 15 years and has yielded tangible results that affect its core business—manufacturing cars and trucks. Today, we continue this effort on a global scale in both plant operations and line design with strong organizational support. The benefits include reduced costs and increased revenues. Our work has established standardized, automatic collection of data, fast and easy-to-use throughput-analysis tools, and a proven and supported throughput-improvement process. Success in this work results from a focused early implementation effort by a special task team followed by establishment of a central support staff, plant ownership of throughput-analysis capability with designated throughput engineers, strong management support and recognition of successes, and an ongoing collaboration between Research and Development, central support staffs, and plant personnel.

GM's need for fast and accurate throughput-analysis tools gave us exciting opportunities to apply and extend state-of-the-art results in analytic and simulation approaches to throughput analysis and optimization and to conduct research on many topics related to the performance of production systems. GM recognizes that much of this work provides it with an important competitive advantage in the automotive industry. In addition, our work has fostered an increased use of OR methods to support decision making in many other business activities within GM.

From a broader perspective, our project represents a success story of the application of OR and highlights the multifaceted approach often required to achieve such success in a large, complex organization. The impact of such projects goes beyond profit. At GM, this effort has advanced organizational processes, employee education, plant culture, production-systems research and related initiatives, and it has greatly increased the company's exposure to OR.

Appendix

An Analytic Solution for the Two-Station Problem
The solution of the two-station problem can be used as a building block to analyze longer lines using a decomposition approach that GM developed in the late 1980s and recently disclosed (Alden 2002). We first describe the solution of the two-station problem with unequal speeds and then discuss the solution of longer lines. We begin with a job-flow approximation of the two-station problem under the following assumptions and notation:

Assumption 1. Jobs flow through the system so that for each fraction of a work cycle completed in processing a job, the same fraction of the job moves through the station and enters its downstream buffer.

Assumption 2. The intermediate buffer, on its own, does not delay job flow, that is, it does not fail and has zero transit time. It has a finite and fixed maximum job capacity of size B. (We increase buffer size by one to account for the additional buffering provided by in-station job storage in the representation of the job flow.)

Assumption 3. Station i's (i = 1, 2) processing speed Si is constant under ideal processing conditions, that is, when not failed, blocked, or starved.

Assumption 4. While one station is down, the other station does not fail, but its speed drops to its normal speed times its stand-alone availability to roughly account for possible simultaneous downtime (see the note following Assumption 9). We thus simplify the analysis with little loss of accuracy.

Assumption 5. Station i's processing time between failures (not the elapsed time between failures) is an exponentially distributed random variable with a fixed failure rate λi. This means a station does not fail while idle, that is, when it is off, failed, under repair, blocked, or starved.

Assumption 6. Station i's repair time is an exponentially distributed random variable with a fixed repair rate μi.

Assumption 7. The first station is never starved and the second station is never blocked.

Assumption 8. All parameters (station speed, buffer size, failure rates, and repair rates) are positive, independent, and generally unequal.

Assumption 9. The system is in steady state, that is, the probability distribution of the system state at a randomly selected time is independent of the initial conditions and of the time.
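Note on Assumption 4: the stand-alone availability used there is not written out explicitly in the text, but with the exponential failure and repair times of Assumptions 5 and 6 it takes the familiar form, along with the corresponding reduced speed of the surviving station:

$$
A_i = \frac{1/\lambda_i}{1/\lambda_i + 1/\mu_i} = \frac{\mu_i}{\lambda_i + \mu_i},
\qquad
S_i\,A_i = \frac{S_i\,\mu_i}{\lambda_i + \mu_i}.
$$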

For now, we also assume that the processing speed of Station 1 is faster than that of Station 2, that is, S1 > S2. While both stations are operational, the finite-capacity buffer will eventually fill because S1 > S2. While the buffer is full, Station 1 will be idle at the end of each cycle until Station 2 finishes its job and loads its next job from the buffer. This phenomenon is called speed blocking of Station 1. (A similar speed starving of Station 2 occurs when S1 < S2.) Speed blocking briefly idles Station 1; as stations typically do not fail during idle time, we reduce the failure rate λ1 by the ratio S2/S1 in the flow-model representation during periods of speed blocking.

We focus on the state of the system at times of a station repair, that is, when both stations become operational. At such times, the state of the system is completely described by the buffer contents (ranging from 0 to B), and the associated state probability distribution is given by

P0 = probability the buffer content is zero (empty) when a station is repaired,
PB = probability the buffer content is B (full) when a station is repaired, and
p(x) = probability density that the buffer content is x for 0 ≤ x ≤ B when a station is repaired, excluding the finite probabilities (impulses) at x = 0 and x = B.

Let p̂(x) denote the entire distribution given by (P0, p(x), PB), so that p̂(x) is the mixed probability distribution composed of p(x) plus the impulses of areas P0 and PB located at x = 0 and x = B, respectively. In a similar manner, let q̂(x), or (q(x), QB), be the full probability state distribution at a station failure (note that for S1 > S2, Q0 = 0).
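Because p̂(x) accounts for all of the probability at a repair epoch, the impulse masses and the density must sum to one:

$$
P_0 + \int_0^B p(x)\,dx + P_B = 1,
$$

and, analogously, the components of q̂(x) sum to one at a failure epoch (with Q0 = 0 when S1 > S2). This normalization is implicit in the definitions above and provides a useful check on the closed-form expressions that follow.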

Under our assumptions, the system has convenient renewal times at station repairs, that is, when the probability distributions of the system's state are identical. We break the evolution of the distribution of buffer contents between subsequent times of station repairs into parts: (a) evolution from station repair to a station failure, and (b) evolution from station failure to its repair (Figure 6).

For S1 > S2, there are five general cases of buffer-content evolution from station repair to subsequent failure (Figure 7) and seven general cases of buffer-content evolution from station failure to subsequent repair (Figure 8). Assumption 4 greatly simplifies this case analysis by reducing the infinite number of possible station failure-and-repair sequences before both stations are operational to a manageable finite number.

By considering all five cases of buffer-content evolution from repair to failure, we can express q̂(x) in terms of p̂(x). By considering all seven cases of buffer-content evolution from failure to repair and the renewal property of station repairs, we can derive p̂(x) in terms of q̂(x). Alden (2002) showed that q̂(x) can be removed by substitution to develop a recursive equation for p(x) that suggests it may be the weighted sum of three exponential terms (actually only two are required—the first term drops out). Assuming this is the case, we can solve the recursive equations to obtain

$$
p(x) = a_2 P_0 e^{\omega_2 x} + a_3 P_0 e^{\omega_3 x}, \quad\text{where}
$$

$$
a_2 = \frac{(\omega_2 + r_2)(\omega_2 + r)}{\omega_2 - \omega_3}, \qquad
a_3 = \frac{(\omega_3 + r_2)(\omega_3 + r)}{\omega_3 - \omega_2},
$$

$$
r = \frac{\lambda_1 + \lambda_2}{S_1 - S_2}, \qquad
r_1 = \frac{\mu_1(\lambda_2 + \mu_2)}{S_2\,\mu_2}, \qquad
r_2 = \frac{\mu_2(\lambda_1 + \mu_1)}{S_1\,\mu_1},
$$

$$
\omega_2 = -\frac{r - r_1 + r_2}{2}
  + \sqrt{\left(\frac{r - r_1 + r_2}{2}\right)^{2}
  + \left(\frac{\lambda_2}{\lambda_1 + \lambda_2}\right) r\,(r_1 + r_2)}, \quad\text{and}
$$

$$
\omega_3 = -\frac{r - r_1 + r_2}{2}
  - \sqrt{\left(\frac{r - r_1 + r_2}{2}\right)^{2}
  + \left(\frac{\lambda_2}{\lambda_1 + \lambda_2}\right) r\,(r_1 + r_2)}.
$$
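To make the algebra concrete, the following minimal Python sketch evaluates these intermediate quantities as reconstructed above; the function and variable names are illustrative rather than C-MORE's, and the formulas assume S1 > S2.

import math

def two_station_constants(S1, S2, lam1, lam2, mu1, mu2):
    """Evaluate r, r1, r2, omega2, omega3, a2, a3 for the case S1 > S2,
    following the reconstructed expressions above."""
    r  = (lam1 + lam2) / (S1 - S2)
    r1 = mu1 * (lam2 + mu2) / (S2 * mu2)
    r2 = mu2 * (lam1 + mu1) / (S1 * mu1)

    half = (r - r1 + r2) / 2.0
    root = math.sqrt(half**2 + (lam2 / (lam1 + lam2)) * r * (r1 + r2))
    omega2 = -half + root
    omega3 = -half - root

    a2 = (omega2 + r2) * (omega2 + r) / (omega2 - omega3)
    a3 = (omega3 + r2) * (omega3 + r) / (omega3 - omega2)
    return r, r1, r2, omega2, omega3, a2, a3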

[Figure 6 omitted: plot of buffer content versus time from one station repair to the next, showing the distributions p̂(x), q̂(x), and p̂(x) with impulses P0, PB, and QB.]

Figure 6: The probability distribution of buffer contents at (any) station repair p̂(x) evolves to become the probability distribution of buffer contents at the subsequent failure of any station q̂(x) and continues to evolve into the same distribution of buffer contents p̂(x) when the station is repaired—a renewal. In this particular example, Station 1 was repaired, and then it failed, and subsequently it was repaired again.

Given p(x), we can derive the equations

$$
P_B = \frac{\bigl[\hat{\alpha}_2(\omega_2 + r_2) + \alpha_2 r\bigr]e^{\omega_2 B}
     - \bigl[\hat{\alpha}_2(\omega_3 + r_2) + \alpha_2 r\bigr]e^{\omega_3 B}}
     {\hat{\alpha}_1(\omega_2 - \omega_3)}\,P_0
$$

and

$$
P_0 = \hat{\alpha}_1\,\omega_2\,\omega_3\,(\omega_2 - \omega_3)\cdot
      \Bigl(\hat{\alpha}_1 r r_2(\omega_2 - \omega_3)
      + \omega_3\bigl[(\omega_2 + r_2)(\omega_2 + \hat{\alpha}_1 r) + \alpha_2\omega_2 r\bigr]e^{\omega_2 B}
      - \omega_2\bigl[(\omega_3 + r_2)(\omega_3 + \hat{\alpha}_1 r) + \alpha_2\omega_3 r\bigr]e^{\omega_3 B}\Bigr)^{-1},
      \quad\text{where}
$$

$$
\alpha_2 = \frac{\lambda_2}{\lambda_1 + \lambda_2}, \qquad
\hat{\alpha}_1 = \frac{S_2\lambda_1}{S_2\lambda_1 + S_1\lambda_2}, \qquad\text{and}\qquad
\hat{\alpha}_2 = \frac{S_1\lambda_2}{S_2\lambda_1 + S_1\lambda_2}.
$$

[Figure 7 omitted: plot of buffer content versus time illustrating the repair-to-failure cases.]

Figure 7: For S1 > S2, there are five cases of buffer-content evolution from a station repair to a subsequent failure of any station.
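Continuing the sketch above (it relies on two_station_constants defined earlier), P0, PB, and the density p(x) can then be evaluated directly from these reconstructed expressions; again, all names are illustrative and S1 > S2 is assumed.

import math

def two_station_boundary_probs(S1, S2, lam1, lam2, mu1, mu2, B):
    """Evaluate P0, PB, and the density p(x) on [0, B] for the case S1 > S2,
    using the reconstructed closed-form expressions above."""
    r, r1, r2, w2, w3, a2, a3 = two_station_constants(S1, S2, lam1, lam2, mu1, mu2)

    alpha2  = lam2 / (lam1 + lam2)
    alpha1h = S2 * lam1 / (S2 * lam1 + S1 * lam2)   # alpha-hat-1
    alpha2h = S1 * lam2 / (S2 * lam1 + S1 * lam2)   # alpha-hat-2

    e2, e3 = math.exp(w2 * B), math.exp(w3 * B)

    # P0 from its closed form (the bracketed term is inverted).
    bracket = (alpha1h * r * r2 * (w2 - w3)
               + w3 * ((w2 + r2) * (w2 + alpha1h * r) + alpha2 * w2 * r) * e2
               - w2 * ((w3 + r2) * (w3 + alpha1h * r) + alpha2 * w3 * r) * e3)
    P0 = alpha1h * w2 * w3 * (w2 - w3) / bracket

    # PB expressed in terms of P0.
    PB = (((alpha2h * (w2 + r2) + alpha2 * r) * e2
           - (alpha2h * (w3 + r2) + alpha2 * r) * e3)
          / (alpha1h * (w2 - w3))) * P0

    def p(x):
        # Density of buffer content x at a repair epoch, 0 <= x <= B.
        return a2 * P0 * math.exp(w2 * x) + a3 * P0 * math.exp(w3 * x)

    return P0, PB, p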

Given p̂(x), we can derive closed-form expressions for the various system states, throughput rate, work-in-buffer (WIB) and work-in-process (WIP), and system time (Alden 2002).

[Figure 8 omitted: pair of plots of buffer content versus time illustrating the failure-to-repair cases, labeled by which station fails and is repaired.]

Figure 8: For S1 > S2, there are seven cases of buffer-content evolution from a station failure to its subsequent repair.

We can easily derive results for the case S1 < S2 from the above results by analyzing the flow of spaces for jobs that move in the opposite direction of job flow (a duality concept). With this understanding, we can quickly derive the following procedure to analyze the case S1 < S2:
(1) Swap S1, λ1, and μ1 for S2, λ2, and μ2, respectively.
(2) Calculate the results using the equations for the case S1 > S2.
(3) Swap buffer state probabilities as follows: blocking with starving, full with empty, and filling with emptying.
(4) Swap the expected buffer contents of the flow model, E[WIB], with B − E[WIB].

We can use this analysis approach (or L'Hospital's rule) to derive results for the case of equal speeds, S1 = S2. Also, a search for possible zero divides reveals one case, when ω2 = 0, for which we can use L'Hospital's rule to derive useful results (Alden 2002).

We also developed a method of using the two-station analysis as a building block to analyze longer lines: we analyzed each buffer in an iterative fashion while summarizing all stations upstream and all stations downstream of the buffer as two separate aggregate stations; this allowed us to use the results of the two-station analysis. We derived aggregate station parameters that satisfied conservation of flow, expected aggregate station-repair rate, and expected aggregate station-failure rate. These expected rates considered the actual station within an aggregate station closest to a buffer and the smaller "nested" aggregate station two stations away from the same buffer. We calculated the resulting equations (in aggregate parameters) while iteratively passing through all buffers until we satisfied a convergence criterion. Convergence is not guaranteed and occasionally is not achieved, even with a small step size. However, it is very rare that convergence is too poor to provide useful results. Gershwin (1987) describes this general approach, called decomposition, to derive expressions for aggregate station parameters (repair rate, failure rate, and speed), which we extended to handle stations with unequal speeds. Due to the similarity of approaches, we omit the details of our particular solution.
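Because the aggregate-parameter update equations are omitted above, the following skeleton shows only the shape of a generic fixed-point decomposition loop in the spirit of Gershwin (1987), not GM's particular solution; the update_params callable is a placeholder for whatever update equations one adopts, and all names are illustrative.

def decompose_line(stations, buffers, update_params, tol=1e-6, max_iter=500):
    """Generic decomposition skeleton for a serial line: each buffer is
    analyzed as a two-station problem whose aggregate upstream/downstream
    parameters are revised until they stop changing.  stations holds
    (S, lam, mu) tuples (one per real station) and buffers holds sizes."""
    # Start each aggregate station at the parameters of its adjacent real station.
    up = [stations[i]     for i in range(len(buffers))]   # aggregate upstream of buffer i
    dn = [stations[i + 1] for i in range(len(buffers))]   # aggregate downstream of buffer i

    for _ in range(max_iter):
        max_change = 0.0
        for i, B in enumerate(buffers):
            (S1, lam1, mu1), (S2, lam2, mu2) = up[i], dn[i]
            # Two-station evaluation for buffer i with the current aggregates
            # (the sketch above assumes S1 > S2; a full version would branch).
            result = two_station_boundary_probs(S1, S2, lam1, lam2, mu1, mu2, B)
            # Revise neighboring aggregate parameters to keep flow and expected
            # failure/repair rates consistent (placeholder for the omitted equations).
            new_up, new_dn, change = update_params(i, result, up, dn, stations, buffers)
            up[i], dn[i] = new_up, new_dn
            max_change = max(max_change, change)
        if max_change < tol:   # convergence is not guaranteed (see text)
            break
    return up, dn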

Activity-Network Representation of Job Flow
An activity network is a graphical representation of events or activities (represented as nodes) and their precedence relationships (represented with directed arcs). Muth (1979) and Dallery and Towsley (1991) used a network representation of job flow through a production line to prove the reversibility and symmetry properties of production lines. These networks are sometimes referred to as sample path diagrams.

Figure 9: In the activity-network representation (top) of a closed-loop serial production line (bottom), each node represents a job being processed by a workstation. Each row of nodes corresponds to the sequence of jobs processed by a specific station, and each column of nodes represents a single job (moved by a specific carrier) as it progresses through all five stations. The arcs in the network indicate precedence relationships between adjacent nodes (for clarity, we have omitted some arcs from the diagram). Solid arcs represent the availability of jobs (vertical arcs) and the availability of workstations (horizontal arcs). Dashed arcs represent time dependency due to potential blocking between neighboring workstations, where the number of columns spanned by an arc indicates the number of intermediate buffer spaces between the workstations. We can calculate the time when a job enters a workstation as the maximum time among nodes with arcs leading into the specific node corresponding to that workstation. The update order for information in the activity network depends on the initial position of job carriers (dashed circles).

To analyze serial production lines, the C-MORE network simulation uses an activity network with a simple repeatable structure, where each node represents a station processing a job (Figure 9). Conceptually, the nodes in the network are arranged in a grid, so that each row of nodes corresponds to the sequence of jobs processed by a specific workstation and each column of nodes corresponds to a specific job as it progresses through the series of workstations. The arcs in the activity network represent the time-dependent precedence of events. Within a given row, an arc between nodes in adjacent columns represents the time dependency of a specific workstation's availability to process a new job upon that workstation's earlier completion of the previous job. Within a given column, an arc between nodes in adjacent rows represents the time dependency of a specific job's availability to be processed by a new workstation upon that job's completion at the upstream workstation. A diagonal arc between nodes in adjacent rows and different columns represents the dependency of an upstream station's ability to release a job upon downstream blocking, where the number of columns spanned by the arc indicates the number of intermediate buffer spaces. We use the time dependencies represented by these arcs to analyze the dynamic flow of jobs through the production line.
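As an illustration of the max-type recursion that such an activity network encodes, the following sketch (a textbook sample-path recursion, not C-MORE's hybrid algorithm) computes job completion times for an open serial line with deterministic processing times and finite buffers; failures and the closed-loop carriers of Figure 9 are ignored for brevity, and all names are illustrative.

def serial_line_times(proc_times, buffers):
    """proc_times[s][j] is the processing time of job j at station s;
    buffers[s] is the number of buffer spaces between stations s and s+1.
    Returns d[s][j], the time job j departs station s."""
    n_stations = len(proc_times)
    n_jobs = len(proc_times[0])
    d = [[0.0] * n_jobs for _ in range(n_stations)]

    for j in range(n_jobs):                      # columns: jobs
        for s in range(n_stations):              # rows: stations
            job_ready     = d[s - 1][j] if s > 0 else 0.0   # vertical arc: job available
            station_ready = d[s][j - 1] if j > 0 else 0.0   # horizontal arc: station free
            finish = max(job_ready, station_ready) + proc_times[s][j]
            if s + 1 < n_stations:               # diagonal arc: downstream blocking
                k = j - buffers[s] - 1           # job whose downstream departure frees a space
                if k >= 0:
                    finish = max(finish, d[s + 1][k])
            d[s][j] = finish
    return d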

provides the conceptual foundation for a very effi-cient simulation that avoids using global time-sortedevent queues and other overhead functions foundin traditional discrete-event simulation. Although we

Page 20: A1

Alden et al.: General Motors Increases Its Production ThroughputInterfaces 36(1), pp. 6–25, © 2006 INFORMS 25

can view the hybrid simulation algorithm in C-MOREas a generalization of the network-simulation method,in our implementation, we use very different rep-resentations of the internal data structures for theactivity network, which improve computational per-formance when combined with a global event queuefor managing the time-synchronized events.

References
Alden, J. M. 2002. Estimating performance of two workstations in series with downtime and unequal speeds. GM technical report R&D-9434, Warren, MI.
Altiok, T. 1997. Performance Analysis of Manufacturing Systems. Springer, New York.
Buzacott, J. A., J. G. Shanthikumar. 1993. Stochastic Models of Manufacturing Systems. Prentice Hall, Englewood Cliffs, NJ.
Chen, J. C., L. Chen. 1990. A fast simulation approach for tandem queueing systems. O. Balci, R. P. Sadowski, R. E. Nance, eds. Proc. 22nd Conf. Winter Simulation. IEEE Press, Piscataway, NJ, 539–546.
Dallery, Y., S. B. Gershwin. 1992. Manufacturing flow line systems: A review of models and analytical results. Queueing Systems 12(1–2) 3–94.
Dallery, Y., D. Towsley. 1991. Symmetry property of the throughput in closed tandem queueing networks with finite buffers. Oper. Res. Lett. 10(9) 541–547.
General Motors Corporation. 1992. 1991 Annual Report. Detroit, MI.
General Motors Corporation. 2005a. 2004 Annual Report. Detroit, MI.
General Motors Corporation. 2005b. Company profile. Retrieved February 25, 2005 from http://www.gm.com/company/corp_info/profiles/.
Gershwin, S. B. 1987. An efficient decomposition method for the approximate evaluation of tandem queues with finite storage space and blocking. Oper. Res. 35(2) 291–305.
Gershwin, S. B. 1994. Manufacturing Systems Engineering. Prentice Hall, Englewood Cliffs, NJ.
Goldratt, E. M., J. Cox. 1986. The Goal: A Process of Ongoing Improvement. North River Press, Croton-on-Hudson, NY.
Govil, M. C., M. C. Fu. 1999. Queueing theory in manufacturing: A survey. J. Manufacturing Systems 18(3) 214–240.
Harbour and Associates, Inc. 1992. The Harbour Report: Competitive Assessment of the North American Automotive Industry 1989–1992. Troy, MI.
Harbour and Associates, Inc. 1999. The Harbour Report 1999—North America. Troy, MI.
Harbour and Associates, Inc. 2000. The Harbour Report North America 2000. Troy, MI.
Harbour and Associates, Inc. 2001. The Harbour Report North America 2001. Troy, MI.
Harbour and Associates, Inc. 2002. The Harbour Report North America 2002. Troy, MI.
Harbour and Associates, Inc. 2003. The Harbour Report North America 2003. Troy, MI.
Harbour Consulting. 2004. The Harbour Report North America 2004. Troy, MI.
Li, J., D. E. Blumenfeld, J. M. Alden. 2003. Throughput analysis of serial production lines with two machines: Summary and comparisons. GM technical report R&D-9532, Warren, MI.
Muth, E. J. 1979. The reversibility property of production lines. Management Sci. 25(2) 152–158.
Papadopoulos, H. T., C. Heavey. 1996. Queueing theory in manufacturing systems analysis and design: A classification of models for production and transfer lines. Eur. J. Oper. Res. 92(1) 1–27.
Viswanadham, N., Y. Narahari. 1992. Performance Modeling of Automated Manufacturing Systems. Prentice Hall, Englewood Cliffs, NJ.