CSC 7600 Lecture 5 : Capacity Computing, Spring 2011
HIGH PERFORMANCE COMPUTING: MODELS, METHODS, & MEANS
CAPACITY COMPUTING
Prof. Thomas Sterling, Department of Computer Science, Louisiana State University. February 1, 2011
Topics
• Key terms and concepts
• Basic definitions
• Models of parallelism
• Speedup and Overhead
• Capacity Computing & Unix utilities
• Condor: Overview
• Condor: Useful commands
• Performance Issues in Capacity Computing
• Material for Test
2
Key Terms and Concepts
4
[Figure: a problem represented as a single stream of instructions executed by one CPU, contrasted with a problem partitioned into tasks, each with its own instruction stream, executed by several CPUs]
• Conventional serial execution (also called sequential execution): the problem is represented as a series of instructions that are executed one after another by a single CPU.
• Parallel execution: the problem is partitioned into multiple executable parts that are mutually exclusive and collectively exhaustive, represented as a partially ordered set exhibiting concurrency.
• Parallel computing takes advantage of concurrency to:
– Solve larger problems within bounded time
– Save on wall-clock time
– Overcome memory constraints
– Utilize non-local resources
Key Terms and Concepts
• Scalable Speedup: relative reduction of execution time of a fixed-size workload through parallel execution (see the formulas below)
• Scalable Efficiency: ratio of the actual performance to the best possible performance
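In symbols (a standard formulation, reconstructed here since the slide's equations are not in the transcript): for a workload run on p execution sites,

$$ S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p} = \frac{T(1)}{p\,T(p)} $$

where T(1) is the single-processor execution time and T(p) the time on p processors; ideal speedup gives S(p) = p and E(p) = 1.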
5
Defining the 3 C's …
• Main classes of computing:
– High capacity parallel computing: a strategy for employing distributed computing resources to achieve high-throughput processing among decoupled tasks. Aggregate performance of the total system is high if sufficient tasks are available to be carried out concurrently on all separate processing elements. No single task is accelerated. Uses an increased workload size of multiple tasks with increased system scale.
– High capability parallel computing: a strategy for employing tightly coupled structures of computing resources to achieve reduced execution time of a given application through partitioning into concurrently executable tasks. Uses a fixed workload size with increased system scale.
– Cooperative computing: a strategy for employing a moderately coupled ensemble of computing resources to increase the data set size of a user application while limiting its execution time. Uses a workload of a single task of increased data set size with increased system scale.
7
Strong Scaling Vs. Weak Scaling
8
[Figure: work per task plotted against machine scale (1, 2, 4, 8 nodes), with one curve labeled Strong Scaling and one labeled Weak Scaling]
Strong Scaling, Weak Scaling
9
[Figure: two plots against machine scale (# of nodes), one of total problem size and one of granularity (size per node), each with curves labeled Strong Scaling and Weak Scaling]
Defining the 3 C’s …
• High capacity computing systems emphasize the overall work performed over a fixed time period. Work is defined as the aggregate amount of computation performed across all functional units, all threads, all cores, all chips, all coprocessors and network interface cards in the system.
• High capability computing systems emphasize improvement (reduction) in execution time of a single user application program of fixed data set size.
• Cooperative computing systems emphasize single-application weak scaling
– Performance increase through increase in problem size (usually data set size and # of task partitions) with increase in system scale
10
Adapted from: S. Chaudhry, P. Caprioli, S. Yip, and M. Tremblay, "High-Performance Throughput Computing," IEEE Micro, 2005. doi.ieeecomputersociety.org
Strong Scaling, Weak Scaling
11
[Diagram: the three classes arranged by scaling regime. Capacity computing scales workload size across many jobs (weak scaling); Cooperative and Capability computing scale a single job, Cooperative by growing the problem (weak scaling) and Capability by holding it fixed (strong scaling).]
• Capability
– Primary scaling is decrease in response time proportional to increase in resources applied
– Single job, constant size; goal: response-time scaling proportional to machine size
– Tightly coupled concurrent tasks making up a single job
• Cooperative
– Single job (different nodes working on different partitions of the same job)
– Job size scales proportionally to the machine
– Granularity per node is fixed over the range of system scale
– Loosely coupled concurrent tasks making up a single job
• Capacity
– Primary scaling is increase in throughput proportional to increase in resources applied
– Decoupled concurrent tasks, each a separate job, increasing in number of instances; scaling proportional to machine (the two scaling regimes are summarized in symbols below)
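In symbols (standard definitions, not taken from the slide): strong scaling holds the total problem size W fixed as the machine scale P grows, so the work per node shrinks; weak scaling holds the work per node w fixed, so the total problem size grows with the machine.

$$ \text{strong scaling: } W = \text{const}, \quad \frac{W}{P} \propto \frac{1}{P} \qquad\qquad \text{weak scaling: } \frac{W(P)}{P} = w = \text{const}, \quad W(P) = P\,w $$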
Models of Parallel Processing
• Conventional models of parallel processing
– Decoupled work queue (covered in segment 1 of the course)
– Communicating Sequential Processes (CSP message passing) (covered in segment 2)
– Shared-memory multiple thread (covered in segment 3)
• Some alternative models of parallel processing
– SIMD: single instruction stream, multiple data stream processor array
– Vector machines: hardware execution of value sequences to exploit pipelining
– Systolic: an interconnection of basic arithmetic units matched to the algorithm
– Dataflow: data-precedence-constrained, self-synchronizing fine-grain execution units supporting functional (single-assignment) execution
13
Shared-Memory Multiple Thread
• Static or dynamic
• Fine grained
• OpenMP
• Distributed shared memory systems
• Covered in Segment 3
14
[Figure: CPUs 1-3 and memories connected by a network, shown for a Symmetric Multi-Processor (SMP, usually cache coherent) and for Distributed Shared Memory (DSM, usually cache coherent)]
Communicating Sequential Processes
• One process is assigned to each processor
• Work done by the processor is performed on the local data
• Data values are exchanged by messages
• Synchronization constructs for inter process coordination
• Distributed memory
• Coarse grained
• MPI application programming interface
• Commodity clusters and MPPs
– MPP is an acronym for "Massively Parallel Processor"
• Covered in Segment 2
15
[Figure: CPUs 1-3, each with its own memory, connected by a network; Distributed Memory (DM, often not cache coherent)]
Decoupled Work Queue Model
• Concurrent disjoint tasks
– Job-stream parallelism
– Parametric studies
• SPMD (single program, multiple data)
• Very coarse grained
• Example software package: Condor
• Processor farms and commodity clusters
• This lecture covers this model of parallelism
16
Ideal Speedup Issues
18
• W is the total workload, measured in elemental pieces of work (e.g. operations, instructions, subtasks, tasks, etc.)
• T(p) is the total execution time, measured in elemental time steps (e.g. clock cycles), where p is the number of execution sites (e.g. processors, threads)
• wi is the work for a given task i, measured in operations
• Example: divide a workload W of a million (really Mega, 2^20) operations into a thousand (really 1024) tasks, w1 to w1024, each of 1 K (1024) operations
• Assume 256 processors performing the workload in parallel
• T(256) = 4096 steps, speedup = 256, efficiency = 1 (worked out below)
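Working the numbers (a reconstruction of the arithmetic behind this example; the slide's own equations were images and are not in the transcript):

$$ W = \sum_{i=1}^{1024} w_i = 2^{10} \cdot 2^{10} = 2^{20}, \qquad T(1) = 2^{20} \text{ steps} $$

$$ T(2^8) = \frac{W}{2^8} = 2^{12} = 4096 \text{ steps}, \qquad S = \frac{T(1)}{T(2^8)} = \frac{2^{20}}{2^{12}} = 2^8 = 256, \qquad E = \frac{S}{P} = \frac{256}{256} = 1 $$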
Ideal Speedup Example
19
[Figure: workload W = 2^20 operations divided into tasks w1 … w(2^10) of 2^10 operations each, distributed over processors P1 … P(2^8); T(1) = 2^20 steps, T(2^8) = 2^12 steps. Units: steps]
Granularities in Parallelism
Overhead
• The additional work that must be performed in order to manage the parallel resources and the concurrent abstract tasks, and that lies in the critical time path.
Coarse grained
• Decompose the problem into large independent tasks. Usually there is no communication between the tasks. Also defined as a class of parallelism where "relatively large amounts of computational work are done between communication events."
Fine grained
• Decompose the problem into smaller interdependent tasks. Usually these tasks are communication intensive. Also defined as a class of parallelism where "relatively small amounts of computational work are done between communication events." – www.llnl.gov/computing/tutorials/parallel_comp
20
Images adapted from : http://www.mhpcc.edu/training/workshop/parallel_intro/
[Figure: execution timelines showing alternating overhead and computation segments for coarse-grained and finely grained decompositions]
Overhead
21
• Overhead: Additional critical path (in time) work required to manage parallel resources and concurrent tasks that would not be necessary for purely sequential execution
• V is total overhead of workload execution
• vi is overhead for individual task wi
• Each task takes vi +wi time steps to complete
• Overhead imposes an upper bound on scalability
Overhead
22
[Figure: four tasks, each consisting of overhead v followed by work w; total time V + W = 4v + 4w]
v = overhead; V = total overhead; w = work unit; W = total work; Ti = execution time with i processors; P = # processors
Assumption: the workload is infinitely divisible
Scalability and Overhead for fixed sized work tasks
23
• W is divided into J tasks of size wg
• Each task requires v overhead work to manage
• For P processors there are approximately J/P tasks to be performed in sequence, so
• TP = J (wg + v) / P
• Note that S = T1 / TP
• So, S = P / (1 + v / wg) (the full chain is written out below)
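Chaining these expressions together (a reconstruction consistent with the formulas quoted on this slide; it assumes the purely sequential run carries no parallel-management overhead, i.e. T1 = J wg):

$$ T_P = \frac{J\,(w_g + v)}{P}, \qquad S = \frac{T_1}{T_P} = \frac{J\,w_g}{J\,(w_g + v)/P} = \frac{P}{1 + v/w_g}, \qquad E = \frac{S}{P} = \frac{1}{1 + v/w_g} $$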
Scalability & Overhead
24
• In the limit where the work dominates the overhead (W >> v, i.e. wg >> v for each task), S approaches P.
v = overhead; wg = work unit; W = total work; Ti = execution time with i processors; P = # processors; J = # tasks
Capacity Computing with basic Unix tools
• A combination of common Unix utilities such as ssh, scp, rsh, and rcp can be used to create jobs remotely (to get more information about these commands try man ssh, man scp, man rsh, man rcp in any Unix shell)
• For small workloads it can be convenient to translate the execution of the program into a simple shell script (a minimal sketch follows below)
• Relying on simple Unix utilities poses several application-management constraints for cases such as:
– Aborting started jobs
– Querying for free machines
– Querying for job status
– Retrieving job results
– etc.
26
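A minimal sketch of this approach, assuming a file hosts.txt listing one machine per line, passwordless ssh/scp, and an executable ./task already present on every host (all of these names are hypothetical, not from the lecture):

#!/bin/sh
# Launch one decoupled task per host, then collect the results.
i=0
while read host; do
    ssh -n "$host" "./task input.$i > output.$i" &   # start the remote task in the background
    i=$((i+1))
done < hosts.txt
wait                                                 # block until every remote task has finished
i=0
while read host; do
    scp "$host:output.$i" . < /dev/null              # pull each result file back
    i=$((i+1))
done < hosts.txt

Even this simple scheme shows why a workload manager helps: there is no job queue, no restart on failure, and no view of which machines are free, which are exactly the gaps the Condor material below addresses.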
BOINC, SETI@Home
• BOINC (Berkeley Open Infrastructure for Network Computing)
• Open-source software that enables distributed coarse-grained computations over the Internet
• Follows the master-worker model; in BOINC no communication takes place among the worker nodes
• SETI@Home
• Einstein@Home
• Climate prediction
• And many more…
27
Management Middleware : Condor
• Designed, developed, and maintained at the University of Wisconsin-Madison by a team led by Miron Livny
• Condor is a versatile workload management system for managing a pool of distributed computing resources to provide high capacity computing.
• Assists distributed job management by providing mechanisms for job queuing, scheduling, and priority management, along with tools that facilitate utilization of resources across Condor pools
• Condor also enables resource management by providing monitoring utilities, authentication & authorization mechanisms, Condor pool management utilities, and support for Grid computing middleware such as Globus.
• Condor components:
– ClassAds
– Matchmaker
– Problem solvers
29
Management Middleware : Condor
Condor Components : ClassAds
• The ClassAds (Classified Advertisements) concept is very similar to newspaper classifieds, where buyers and sellers advertise their products using abstract yet uniquely defining named expressions (example: used-car sales).
• The ClassAds language in Condor provides a well-defined means of describing the user job and the end resources (storage / computational) so that the Condor MatchMaker can match the job with the appropriate pool of resources.
Source: Douglas Thain, Todd Tannenbaum, and Miron Livny, "Distributed Computing in Practice: The Condor Experience," Concurrency and Computation: Practice and Experience, Vol. 17, No. 2-4, pp. 323-356, February-April 2005. http://www.cs.wisc.edu/condor/doc/condor-practice.pdf
30
Job ClassAd & Machine ClassAd
31
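The job and machine ClassAd examples on this slide were images and are not in the transcript. As an illustration of the flavor of the two ad types (the attribute names are typical Condor ClassAd attributes, but the values here are made up):

Job ClassAd:
    MyType       = "Job"
    TargetType   = "Machine"
    Owner        = "cdekate"
    Cmd          = "fib"
    Requirements = (Arch == "X86_64") && (OpSys == "LINUX") && (Memory >= 1024)

Machine ClassAd:
    MyType       = "Machine"
    TargetType   = "Job"
    Name         = "vm1@compute-0"
    Arch         = "X86_64"
    OpSys        = "LINUX"
    Memory       = 1964
    Requirements = (LoadAvg <= 0.3)

The matchmaker pairs a job with a machine when each ad satisfies the other's Requirements expression.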
Management Middleware : Condor
Condor MatchMaker
• The MatchMaker, a crucial part of the Condor architecture, uses the job-description ClassAd provided by the user and matches the job to the best resource based on the machine-description ClassAd
• Matchmaking in Condor is performed in 4 steps:
1. Job agents (A) and resources (R) advertise themselves.
2. The matchmaker (M) processes the known ClassAds and generates pairs that best match resources and jobs.
3. The matchmaker informs each party of the job-resource pair of their prospective match.
4. The job agent and resource establish a connection for further processing. (The matchmaker plays no role in this step, thus ensuring separation between selection of resources and subsequent activities.)
Source: Douglas Thain, Todd Tannenbaum, and Miron Livny, "Distributed Computing in Practice: The Condor Experience," Concurrency and Computation: Practice and Experience, Vol. 17, No. 2-4, pp. 323-356, February-April 2005. http://www.cs.wisc.edu/condor/doc/condor-practice.pdf
32
Management Middleware : Condor
Condor Problem Solvers
• Master-Worker (MW) is a problem-solving system that is useful for solving a coarse-grained problem of indeterminate size, such as a parameter sweep.
• The MW solver in Condor consists of 3 main components: a work list, a tracking module, and a steering module. The work list keeps track of all pending work that the master needs done. The tracking module monitors the progress of work currently in progress on the worker nodes. The steering module directs the computation based on the results gathered and the pending work list, and communicates with the matchmaker to obtain additional worker processes.
• DAGMan is used to execute multiple jobs that have dependencies represented as a Directed Acyclic Graph, where the nodes correspond to the jobs and the edges correspond to the dependencies between the jobs. DAGMan provides various functionalities for job monitoring and fault tolerance via creation of rescue DAGs. (A minimal DAG description is sketched below.)
34
[Figure: a master process coordinating worker processes w1 … wN]
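To illustrate the DAGMan input format, a hypothetical four-job "diamond" dependency (not an example from the lecture) could be described in a .dag file and submitted with condor_submit_dag:

    # B and C may start only after A completes; D starts after both B and C
    JOB  A  a.submit
    JOB  B  b.submit
    JOB  C  c.submit
    JOB  D  d.submit
    PARENT A CHILD B C
    PARENT B C CHILD D

Each JOB line names an ordinary Condor submit file like the one shown later in this lecture.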
Management Middleware : Condor
In-depth coverage: http://www.cs.wisc.edu/condor/publications.html
Recommended reading:
• Douglas Thain, Todd Tannenbaum, and Miron Livny, "Distributed Computing in Practice: The Condor Experience," Concurrency and Computation: Practice and Experience, Vol. 17, No. 2-4, pp. 323-356, February-April 2005.
• Todd Tannenbaum, Derek Wright, Karen Miller, and Miron Livny, "Condor - A Distributed Job Scheduler," in Thomas Sterling, editor, Beowulf Cluster Computing with Linux, The MIT Press, 2002. ISBN: 0-262-69274-0.
35
Core components of Condor
• condor_master: This program runs constantly and ensures that all other parts of Condor are running. If they hang or crash, it restarts them.
• condor_collector: This program is part of the Condor central manager. It collects information about all computers in the pool as well as which users want to run jobs. It is what normally responds to the condor_status command. It does not run on your computer, but on the main Condor pool host (the Arete head node).
• condor_negotiator: This program is part of the Condor central manager. It decides which jobs should be run where. It does not run on your computer, but on the main Condor pool host (the Arete head node).
• condor_startd: If this program is running, it allows jobs to be started up on this computer; that is, Arete is an "execute machine". It advertises the machine to the central manager so that the manager knows about it, and it starts up the jobs that run.
• condor_schedd: If this program is running, it allows jobs to be submitted from this computer; that is, the machine is a "submit machine". It advertises jobs to the central manager so that the manager knows about them, and it contacts a condor_startd on other execute machines for each job that needs to be started.
• condor_shadow: For each job that has been submitted from this computer there is one condor_shadow running. It watches over the job as it runs remotely and in some cases provides assistance. You may or may not see any condor_shadow processes running, depending on what is happening on the computer when you try it out.
36
Source : http://www.cs.wisc.edu/condor/tutorials/cw2005-condor/intro.html
Condor : A Walkthrough of Condor commands
condor_status : provides the current pool status
condor_q : provides the current job queue
condor_submit : submits a job to the Condor pool
condor_rm : deletes a job from the job queue
37
What machines are available ? (condor_status)
condor_status queries resource information sources and provides the current status of the condor pool of resources
38
Some common condor_status command-line options:
-help : displays usage information
-avail : queries condor_startd ads and prints information about available resources
-claimed : queries condor_startd ads and prints information about claimed resources
-ckptsrvr : queries condor_ckpt_server ads and displays checkpoint server attributes
-pool hostname : queries the specified central manager (by default queries $COLLECTOR_HOST)
-verbose : displays entire ClassAds
For more options and what they do, run "condor_status -help"
condor_status : Resource States
• Owner : The machine is currently being used by its owner and is unavailable for Condor-submitted jobs until the current user's work completes.
• Claimed : Condor has selected the machine for use by other users.
• Unclaimed : The machine is unused and is available for selection by Condor.
• Matched : The machine is in a transition state between unclaimed and claimed.
• Preempting : The machine is currently vacating a job to make the resource available to Condor.
39
Example : condor_status
40
[cdekate@celeritas ~]$ condor_status
Name OpSys Arch State Activity LoadAv Mem ActvtyTime
vm1@compute-0  LINUX  X86_64  Unclaimed  Idle  0.000  1964  3+13:42:23
vm2@compute-0  LINUX  X86_64  Unclaimed  Idle  0.000  1964  3+13:42:24
vm3@compute-0  LINUX  X86_64  Unclaimed  Idle  0.010  1964  0+00:45:06
vm4@compute-0  LINUX  X86_64  Owner      Idle  1.000  1964  0+00:00:07
vm1@compute-0  LINUX  X86_64  Unclaimed  Idle  0.000  1964  3+13:42:25
vm2@compute-0  LINUX  X86_64  Unclaimed  Idle  0.000  1964  1+09:05:58
vm3@compute-0  LINUX  X86_64  Unclaimed  Idle  0.000  1964  3+13:37:27
vm4@compute-0  LINUX  X86_64  Unclaimed  Idle  0.000  1964  0+00:05:07
…
vm3@compute-0  LINUX  X86_64  Unclaimed  Idle  0.000  1964  3+13:42:33
vm4@compute-0  LINUX  X86_64  Unclaimed  Idle  0.000  1964  3+13:42:34
Total Owner Claimed Unclaimed Matched Preempting Backfill
X86_64/LINUX 32 3 0 29 0 0 0
Total 32 3 0 29 0 0 0
What jobs are currently in the queue? condor_q
• condor_q provides a list of jobs that have been submitted to the Condor pool
• Provides details about jobs, including which cluster the job is running on, the owner of the job, memory consumption, the name of the executable being processed, the current state of the job, when the job was submitted, and how long the job has been running.
41
Some common condor_q command-line options:
-global : queries all job queues in the pool
-name : queries based on the schedd name; provides a queue listing of the named schedd
-claimed : queries condor_startd ads and prints information about claimed resources
-goodput : displays job goodput statistics ("goodput is the allocation time when an application uses a remote workstation to make forward progress." – Condor Manual)
-cputime : displays the remote CPU time accumulated by the job to date
...
For more options run "condor_q -help"
Example : condor_q

[cdekate@celeritas ~]$ condor_q
-- Submitter: celeritas.cct.lsu.edu : <130.39.128.68:40472> : celeritas.cct.lsu.edu
 ID     OWNER    SUBMITTED    RUN_TIME   ST PRI SIZE CMD
 30.0   cdekate  1/23 07:52   0+00:01:13 R  0   9.8  fib 100
 30.1   cdekate  1/23 07:52   0+00:01:09 R  0   9.8  fib 100
 30.2   cdekate  1/23 07:52   0+00:01:07 R  0   9.8  fib 100
 30.3   cdekate  1/23 07:52   0+00:01:11 R  0   9.8  fib 100
 30.4   cdekate  1/23 07:52   0+00:01:05 R  0   9.8  fib 100

5 jobs; 0 idle, 5 running, 0 held
[cdekate@celeritas ~]$
42
How to submit your job? condor_submit
• Create a job ClassAd (Condor submit file) that contains Condor keywords and user-configured values for those keywords.
• Submit the job ClassAd using "condor_submit"
• Example: condor_submit matrix.submit
• condor_submit -h lists additional flags
43
[cdekate@celeritas NPB3.2-MPI]$ condor_submit -h
Usage: condor_submit [options] [cmdfile]
Valid options:
  -verbose              verbose output
  -name <name>          submit to the specified schedd
  -remote <name>        submit to the specified remote schedd (implies -spool)
  -append <line>        add line to submit file before processing
                        (overrides submit file; multiple -a lines ok)
  -disable              disable file permission checks
  -spool                spool all files to the schedd
  -password <password>  specify password to MyProxy server
  -pool <host>          Use host as the central manager to query
If [cmdfile] is omitted, input is read from stdin
condor_submit : Example
44
[cdekate@celeritas ~]$ condor_submit fib.submit
Submitting job(s).....
Logging submit event(s).....
5 job(s) submitted to cluster 35.
[cdekate@celeritas ~]$ condor_q

-- Submitter: celeritas.cct.lsu.edu : <130.39.128.68:51675> : celeritas.cct.lsu.edu
 ID     OWNER    SUBMITTED    RUN_TIME   ST PRI SIZE CMD
 35.0   cdekate  1/24 15:06   0+00:00:00 I  0   9.8  fib 10
 35.1   cdekate  1/24 15:06   0+00:00:00 I  0   9.8  fib 15
 35.2   cdekate  1/24 15:06   0+00:00:00 I  0   9.8  fib 20
 35.3   cdekate  1/24 15:06   0+00:00:00 I  0   9.8  fib 25
 35.4   cdekate  1/24 15:06   0+00:00:00 I  0   9.8  fib 30

5 jobs; 5 idle, 0 running, 0 held
[cdekate@celeritas ~]$
How to delete a submitted job ? condor_rm
• condor_rm : Deletes one or more jobs from the Condor job pool. If a particular Condor pool is specified as one of the arguments, then the condor_schedd matching the specification is contacted for job deletion; otherwise the local condor_schedd is contacted.
45
[cdekate@celeritas ~]$ condor_rm -h
Usage: condor_rm [options] [constraints]
 where [options] is zero or more of:
  -help               Display this message and exit
  -version            Display version information and exit
  -name schedd_name   Connect to the given schedd
  -pool hostname      Use the given central manager to find daemons
  -addr <ip:port>     Connect directly to the given "sinful string"
  -reason reason      Use the given RemoveReason
  -forcex             Force the immediate local removal of jobs in the X state
                      (only affects jobs already being removed)
 and where [constraints] is one or more of:
  cluster.proc        Remove the given job
  cluster             Remove the given cluster of jobs
  user                Remove all jobs owned by user
  -constraint expr    Remove all jobs matching the boolean expression
  -all                Remove all jobs (cannot be used with other constraints)
[cdekate@celeritas ~]$
condor_rm : Example

[cdekate@celeritas ~]$ condor_q
-- Submitter: celeritas.cct.lsu.edu : <130.39.128.68:51675> : celeritas.cct.lsu.edu
 ID     OWNER    SUBMITTED    RUN_TIME   ST PRI SIZE CMD
 41.0   cdekate  1/24 15:43   0+00:00:03 R  0   9.8  fib 100
 41.1   cdekate  1/24 15:43   0+00:00:01 R  0   9.8  fib 150
 41.2   cdekate  1/24 15:43   0+00:00:00 R  0   9.8  fib 200
 41.3   cdekate  1/24 15:43   0+00:00:00 R  0   9.8  fib 250
 41.4   cdekate  1/24 15:43   0+00:00:00 R  0   9.8  fib 300

5 jobs; 0 idle, 5 running, 0 held
[cdekate@celeritas ~]$ condor_rm 41.4
Job 41.4 marked for removal
[cdekate@celeritas ~]$ condor_rm 41
Cluster 41 has been marked for removal.
[cdekate@celeritas ~]$
46
Creating a Condor Submit File (a Job ClassAd)
• A Condor submit file contains key-value pairs that help describe the application to Condor.
• Condor submit files are job ClassAds.
• Some of the common descriptions found in job ClassAds are listed below, followed by a filled-in example:
47
executable = (path to the executable to run on Condor)
input      = (standard input provided as a file)
output     = (standard output stored in a file)
log        = (output to log file)
arguments  = (arguments to be supplied to the executable)
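Filling the template in for the fib program used in the condor_q examples earlier (a plausible sketch, not the exact file from the demo; queue and the $(Process) macro are standard condor_submit features):

universe   = vanilla
executable = fib
arguments  = 100
output     = fib.$(Process).out
error      = fib.$(Process).err
log        = fib.log
queue 5

Submitting this with condor_submit fib.submit would create one cluster of five jobs (process IDs 0-4), matching the five "fib 100" entries (30.0-30.4) seen in the condor_q listing earlier.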
DEMO : Steps involved in running a job on Condor.
1. Creating a Condor submit file
2. Submitting the Condor submit file to a Condor pool
3. Checking the current state of a submitted job
4. Job status notification (see the note below)
48
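For step 4, notification is usually requested in the submit file itself; notification and notify_user are standard condor_submit keywords (the address below is a placeholder):

notification = Complete
notify_user  = someone@example.edu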
Condor Usage Statistics
49
Montage workload implemented and executed using Condor ( Source : Dr. Dan Katz )
• Mosaicking astronomical images:
– Powerful telescopes taking high-resolution (and highest-zoom) pictures of the sky can cover only a small region over time
– The problem being solved in this project is "stitching" these images together to make a high-resolution, zoomed-in snapshot of the sky
– Aggregate requirements of 140,000 CPU-hours (~16 years on a single machine), with output on the order of 6 terabytes
50
[Figure: example DAG for 10 input files, with data stage-in nodes, Montage compute nodes (mProject, mDiff, mFitPlane, mConcatFit, mBgModel, mBackground, mAdd), data stage-out nodes, and registration nodes. Pegasus maps the abstract workflow to an executable form using Grid information systems (information about available resources and data location) and MyProxy (the user's grid credentials); Condor DAGMan executes the workflow on the Grid. http://pegasus.isi.edu/]
Montage Use By IPHAS: The INT/WFC Photometric H-alpha Survey of the Northern Galactic Plane
(Source : Dr. Dan Katz)
[Images: supernova remnant S147; nebulosity in the vicinity of HII region IC 1396B in Cepheus; Crescent Nebula NGC 6888]
• Goal: study extreme phases of stellar evolution that involve very large mass loss
51
Capacity Computing Performance Issues
• Throughput computing
– Performance measured as total workload performed over the time to complete it
• Overhead factors
– Start-up time
– Input data distribution
– Output result data collection
– Termination time
– Inter-task coordination overhead (no task coupling)
• Starvation
– Insufficient work to keep all processors busy
– Inadequate parallelism of coarse-grained task parallelism
– Poor or uneven load distribution
(A rough cost model combining these factors is sketched below.)
53
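One way to put these factors into a rough model (a sketch for intuition, not a formula from the lecture): for N decoupled tasks of work w and per-task overhead v on P processors,

$$ T_{\text{total}} \approx t_{\text{startup}} + t_{\text{distribute}} + \left\lceil \tfrac{N}{P} \right\rceil (w + v) + t_{\text{collect}} + t_{\text{terminate}}, \qquad \text{throughput} \approx \frac{N}{T_{\text{total}}} $$

Starvation appears when N < P (idle processors) or when uneven task sizes leave some processors waiting on the longest task.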
Summary : Material for the Test
• Key terms & concepts (slides 4, 5, 7, 8, 9, 10, 11)
• Decoupled work-queue model (slide 16)
• Ideal speedup (slides 18, 19)
• Overhead and scalability (slides 20, 21, 22, 23, 24)
• Understand the Condor concepts detailed in slides 30, 31, 32, 34, 35, 36, 37
• Capacity computing performance issues (slide 53)
• Required reading material:
– http://www.cct.lsu.edu/~cdekate/7600/beowulf-chapter-rev1.pdf
– Specific pages to focus on: 3-16
55