Amdahl's Law


[Figure: Amdahl's law diagram] The speedup of a program using multiple processors in parallel computing is limited by the sequential fraction of the program. For example, if 95% of the program can be parallelized, the theoretical maximum speedup using parallel computing would be 20, as shown in the diagram, no matter how many processors are used.

Amdahl's law, also known as Amdahl's argument,[1] is named after computer architect Gene Amdahl, and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors. It was presented at the AFIPS Spring Joint Computer Conference in 1967.

The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours using a single processor core, and a particular portion of 1 hour cannot be parallelized, while the remaining portion of 19 hours (95%) can be parallelized, then regardless of how many processors we devote to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the speedup is limited to 20, as the diagram illustrates.
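To make the limit concrete, here is a minimal Python sketch of this 20-hour example; the function name and the choice of processor counts are illustrative, not from the article:

    # A minimal sketch of the 20-hour example above. The function and the
    # processor counts are illustrative choices, not from the article.
    def execution_time(n_processors, serial_hours=1.0, parallel_hours=19.0):
        """Best-case wall-clock time with n processors under Amdahl's model."""
        return serial_hours + parallel_hours / n_processors

    baseline = execution_time(1)  # 20.0 hours on a single core
    for n in (1, 2, 10, 100, 1_000_000):
        t = execution_time(n)
        print(f"{n:>9} processors: {t:8.3f} h, speedup {baseline / t:6.3f}")
    # The speedup approaches but never reaches 20, because the 1-hour
    # serial portion is irreducible.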

Description

Amdahl's law is a model for the relationship between the expected speedup of parallelized implementations of an algorithm relative to the serial algorithm, under the assumption that the problem size remains the same when parallelized. For example, if for a given problem size a parallelized implementation of an algorithm can run 12% of the algorithm's operations arbitrarily quickly (while the remaining 88% of the operations are not parallelizable), Amdahl's law states that the maximum speedup of the parallelized version is 1/(1 − 0.12) ≈ 1.136 times as fast as the non-parallelized implementation.

More technically, the law is concerned with the speedup achievable from an improvement to a computation that affects a proportion P of that computation, where the improvement has a speedup of S. (For example, if 30% of the computation may be the subject of a speedup, P will be 0.3; if the improvement makes the portion affected twice as fast, S will be 2.) Amdahl's law states that the overall speedup of applying the improvement will be:

    speedup_overall = 1 / ((1 − P) + P/S)
To see how this formula was derived, assume that the running time of the old computation was 1, for some unit of time. The running time of the new computation will be the length of time the unimproved fraction takes (which is 1 − P), plus the length of time the improved fraction takes. The length of time for the improved part of the computation is the length of the improved part's former running time divided by the speedup, making the length of time of the improved part P/S. The final speedup is computed by dividing the old running time by the new running time, which is what the above formula does.

Here's another example. We are given a sequential task which is split into four consecutive parts: P1, P2, P3 and P4, with the percentages of runtime being 11%, 18%, 23% and 48% respectively. Then we are told that P1 is not sped up, so S1 = 1, while P2 is sped up 5 times, P3 is sped up 20 times, and P4 is sped up 1.6 times. By using the formula P1/S1 + P2/S2 + P3/S3 + P4/S4, we find the new sequential running time is:

    0.11/1 + 0.18/5 + 0.23/20 + 0.48/1.6 = 0.4575

or a little less than 1/2 the original running time. Using the formula (P1/S1 + P2/S2 + P3/S3 + P4/S4)⁻¹, the overall speed boost is 1/0.4575 ≈ 2.186, or a little more than double the original speed. Notice how the 20× and 5× speedups don't have much effect on the overall speed when P1 (11%) is not sped up and P4 (48%) is sped up only 1.6 times.
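A small Python sketch of this four-part calculation (the variable names are illustrative, not from the article):

    # The four-part example above, computed directly.
    fractions = [0.11, 0.18, 0.23, 0.48]   # P1..P4 as fractions of runtime
    speedups  = [1.0, 5.0, 20.0, 1.6]      # S1..S4

    new_time = sum(p / s for p, s in zip(fractions, speedups))
    print(new_time)      # 0.4575
    print(1 / new_time)  # ~2.186, the overall speed boost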

Parallelization

In the case of parallelization, Amdahl's law states that if P is the proportion of a program that can be made parallel (i.e., benefit from parallelization), and (1 − P) is the proportion that cannot be parallelized (remains serial), then the maximum speedup that can be achieved by using N processors is

    S(N) = 1 / ((1 − P) + P/N)

In the limit, as N tends to infinity, the maximum speedup tends to 1/(1 − P). In practice, the performance-to-price ratio falls rapidly as N is increased once there is even a small component of (1 − P).

As an example, if P is 90%, then (1 − P) is 10%, and the problem can be sped up by a maximum of a factor of 10, no matter how large the value of N used. For this reason, parallel computing is only useful for either small numbers of processors, or problems with very high values of P: so-called embarrassingly parallel problems. A great part of the craft of parallel programming consists of attempting to reduce the component (1 − P) to the smallest possible value.

P can be estimated by using the measured speedup (SU) on a specific number of processors (NP) using

    P_estimated = (1/SU − 1) / (1/NP − 1)

    P estimated in this way can then be used in Amdahl's law to predict speedup for a different number of processors.
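A minimal Python sketch of this estimation step (function names and the measured values are illustrative assumptions, not from the article):

    def estimate_parallel_fraction(measured_speedup, n_processors):
        """Invert Amdahl's law to estimate P from one measurement."""
        return (1 / measured_speedup - 1) / (1 / n_processors - 1)

    def predict_speedup(p, n_processors):
        """Amdahl's law: predicted speedup on n_processors."""
        return 1 / ((1 - p) + p / n_processors)

    # Hypothetical measurement: a 3.0x speedup observed on 4 processors.
    p = estimate_parallel_fraction(3.0, 4)  # ~0.889
    print(predict_speedup(p, 16))           # 6.0, predicted for 16 processors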

Relation to law of diminishing returns

Amdahl's law is often conflated with the law of diminishing returns, whereas only a special case of applying Amdahl's law demonstrates the law of diminishing returns. If one picks optimally (in terms of the achieved speedup) what to improve, then one will see monotonically decreasing improvements as one improves. If, however, one picks non-optimally, after improving a sub-optimal component and moving on to improve a more optimal component, one can see an increase in return. Note that it is often rational to improve a system in an order that is "non-optimal" in this sense, given that some improvements are more difficult or more consuming of development time than others.

Amdahl's law does represent the law of diminishing returns if you are considering what sort of return you get by adding more processors to a machine, while running a fixed-size computation that will use all available processors to their capacity. Each new processor you add to the system will add less usable power than the previous one. Each time you double the number of processors the speedup ratio will diminish, as the total throughput heads toward the limit of 1/(1 − P).

This analysis neglects other potential bottlenecks, such as memory bandwidth and I/O bandwidth, if they do not scale with the number of processors; however, taking into account such bottlenecks would tend to further demonstrate the diminishing returns of only adding processors.
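The diminishing return per doubling is easy to see numerically; a brief Python sketch (P = 0.95 is an assumed example value, not from the article):

    p = 0.95  # assumed parallel fraction, for illustration only
    prev = 1.0
    for n in (2, 4, 8, 16, 32, 64, 128):
        s = 1 / ((1 - p) + p / n)
        print(f"N={n:>3}: speedup {s:6.2f}, gain over previous {s / prev:4.2f}x")
        prev = s
    # Each doubling buys a smaller multiplicative gain as the speedup
    # approaches 1 / (1 - p) = 20.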


    Speedup in a sequential program

[Figure: optimizing different parts] Assume that a task has two independent parts, A and B. B takes roughly 25% of the time of the whole computation. By working very hard, one may be able to make this part 5 times faster, but this only reduces the time for the whole computation by a little. In contrast, one may need to perform less work to make part A twice as fast. This will make the computation much faster than by optimizing part B, even though B's speed-up is greater by ratio (5 versus 2).

The maximum speedup in an improved sequential program, where some part was sped up p times, is limited by the inequality

    maximum speedup ≤ p / (1 + f·(p − 1))

where f (0 < f < 1) is the fraction of time (before the improvement) spent in the part that was not improved. For example (see the picture on the right):

If part B is made five times faster (p = 5), tA = 3, tB = 1, and f = tA/(tA + tB) = 0.75, then

    maximum speedup ≤ 5 / (1 + 0.75 · (5 − 1)) = 1.25

If part A is made to run twice as fast (p = 2), tA = 3, tB = 1, and f = tB/(tA + tB) = 0.25, then

    maximum speedup ≤ 2 / (1 + 0.25 · (2 − 1)) = 1.6

Therefore, making A twice as fast is better than making B five times faster. The percentage improvement in speed can be calculated as

    percentage improvement = (1 − 1/speedup) · 100

Improving part A by a factor of two will increase overall program speed by a factor of 1.6, which makes it 37.5% faster than the original computation.

However, improving part B by a factor of five, which presumably requires more effort, will only achieve an overall speedup factor of 1.25, which makes it 20% faster.
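A short Python sketch of this A-versus-B comparison (the function name is illustrative):

    def bounded_speedup(p, f):
        """Upper bound on overall speedup: p / (1 + f*(p - 1))."""
        return p / (1 + f * (p - 1))

    speedup_a = bounded_speedup(2, 0.25)  # part A (75% of time) made 2x faster
    speedup_b = bounded_speedup(5, 0.75)  # part B (25% of time) made 5x faster
    for name, s in (("A", speedup_a), ("B", speedup_b)):
        print(f"improve {name}: speedup {s:.2f}, {100 * (1 - 1 / s):.1f}% faster")
    # improve A: speedup 1.60, 37.5% faster
    # improve B: speedup 1.25, 20.0% faster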

    Relation to Gustafson's LawJohn L. Gustafson pointed out in 1988 what is now known as Gustafson's Law: people typically are not interested insolving a fixed problem in the shortest possible period of time, as Amdahl's Law describes, but rather in solving thelargest possible problem (e.g., the most accurate possible approximation) in a fixed "reasonable" amount of time. Ifthe non-parallelizable portion of the problem is fixed, or grows very slowly with problem size (e.g., O(log n)), thenadditional processors can increase the possible problem size without limit.
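For comparison, Gustafson's scaled speedup is commonly written as follows (this formula is supplied here for context; it does not appear in the original article):

    scaled speedup = N − (N − 1)·s

where N is the number of processors and s is the serial fraction of time measured on the parallel system; under this model the achievable speedup grows linearly with N rather than saturating.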


Notes

[1] Rodgers 1985, p. 226.

References

Amdahl, Gene (1967). "Validity of the Single Processor Approach to Achieving Large-Scale Computing Capabilities" (PDF). AFIPS Conference Proceedings (30): 483–485. http://www-inst.eecs.berkeley.edu/~n252/paper/Amdahl.pdf

Rodgers, David P. (June 1985). "Improvements in multiprocessor system design". ACM SIGARCH Computer Architecture News (New York, NY, USA: ACM) 13 (3): 225–231. doi:10.1145/327070.327215. ISSN 0163-5964. http://portal.acm.org/citation.cfm?id=327215

External links

Cases where Amdahl's law is inapplicable (http://www.futurechips.org/thoughts-for-researchers/parallel-programming-gene-amdahl-said.html)

Oral history interview with Gene M. Amdahl (http://purl.umn.edu/104341), Charles Babbage Institute, University of Minnesota. Amdahl discusses his graduate work at the University of Wisconsin and his design of WISC. Discusses his role in the design of several computers for IBM including the STRETCH, IBM 701, and IBM 704. He discusses his work with Nathaniel Rochester and IBM's management of the design process. Mentions work with Ramo-Wooldridge, Aeronutronic, and Computer Sciences Corporation.

Reevaluating Amdahl's Law (http://www.scl.ameslab.gov/Publications/Gus/AmdahlsLaw/Amdahls.html)

Reevaluating Amdahl's Law and Gustafson's Law (http://spartan.cis.temple.edu/shi/public_html/docs/amdahl/amdahl.html)

A simple interactive Amdahl's Law calculator (http://www.julianbrowne.com/article/viewer/amdahls-law)

"Amdahl's Law" (http://demonstrations.wolfram.com/AmdahlsLaw/) by Joel F. Klein, Wolfram Demonstrations Project, 2007.

Amdahl's Law in the Multicore Era (http://www.cs.wisc.edu/multifacet/amdahl/)

Amdahl's Law explanation (http://www.gordon-taft.net/Amdahl_Law.html)

Blog Post: "What the $#@! is Parallelism, Anyhow?" (http://www.cilk.com/multicore-blog/bid/5365/What-the-is-Parallelism-Anyhow)

Amdahl's Law applied to OS system calls on multicore CPU (http://www.multicorepacketprocessing.com/how-should-amdahl-law-drive-the-redesigns-of-socket-system-calls-for-an-os-on-a-multicore-cpu)
