High Performance Computing Update 0908 - INCOSE
8/11/2019 High Performance Computing Update 0908 -InCOSE
High Performance Computing at UCF
Brian Goldiez, Ph.D.
[email protected], 2008
Background
  Directed Federal Program, Funded for 2 Years
  Maximize Campus Participation
  Competitive Procurement (7 Bids); IBM Selected
  Machine Named STOKES, After Sir George Gabriel Stokes (Mathematician & Physicist)
  Participate in the HPC Community: SURAgrid, Supercomputer Conference
UCF Objectives
  Support Scientific Exploration & Interaction
    Science-Based M&S
    Human-Centered M&S
    Synergies Between the Above
  Build a Diverse Community of Users
  Increase System Capabilities
  Increase Research Scope & Funding
  Attract External Faculty & Users
  Become Self-Sufficient in 2010
Current Management Approach
  Research Computing Is a Specialized Field
  Research Computing Needs to Be Professionally Managed (e.g., GSU, Purdue)
  We Have Some Unique Opportunities
    Interaction in HPC
    Real-Time Storm Effects on Coastal Areas
    Crowd Modeling
    Games (Serious & Entertainment)
  UCF Can Become a Major Player and Be Viable for Funding
Recommendations:
  UCF Centrally Facilitate/Manage Research Computing for Improved Efficiency & Use of Resources
  UCF Designate a Person to Become Active in the SURA HPC Group & Work With Campus Entities on Research Computing
  Use Existing Grant Resources to Fund the Initial Effort
  Plan for University & External Support and Growth Over the Next 3 Years
Stokes Current Capabilities

Current System (90% Utilized)
  Processor: Xeon 3 GHz, 64-bit
  ~2.2 TFLOPS, 240 Cores
  4 Visualization Nodes
  528 GB Memory
  22+ TB Storage
  O/S: RHEL 5.0
  Interconnect: IB 20 Gbps, GigE
  NFS ~220 MB/s

Expanded System
  Processor: Xeon 3 GHz, 64-bit
  ~6.4 TFLOPS, 648 Cores
  4 Visualization Nodes
  1.424 TB Memory
  42+ TB Storage
  O/S: RHEL 5.1
  Interconnect: IB 20 Gbps, GigE
  GPFS w/RDMA ~500 MB/s
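The TFLOPS figures above can be sanity-checked from core count and clock rate. A rough sketch, assuming 4 double-precision FLOPs per cycle per core (a typical figure for SSE-era Xeons; that assumption is mine, not from the slides). The quoted ~2.2 and ~6.4 TFLOPS fall below these theoretical peaks, as sustained figures normally do:

```python
def peak_tflops(cores, clock_ghz, flops_per_cycle=4):
    """Theoretical peak in TFLOPS: cores x clock x FLOPs per cycle.

    flops_per_cycle=4 is an assumption, not a number from the slides.
    """
    return cores * clock_ghz * 1e9 * flops_per_cycle / 1e12

current = peak_tflops(240, 3.0)   # 2.88 TFLOPS theoretical peak
expanded = peak_tflops(648, 3.0)  # 7.776 TFLOPS theoretical peak
```

The gap between this theoretical peak and the quoted figure is the usual difference between peak (Rpeak-style) and sustained (Rmax-style) performance.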
Usage Groupings

Science-Based M&S Usage
  Nanotechnology
  Civil Engineering
  Physics
  Batch Processing
  Existing Programs (e.g., MATLAB)
  New Data
  Large Runs
  Segue to Larger Systems

Human-Centered M&S Usage
  IST
  Army
  Partnering Industry
  Interactive
  Human in the Loop
  Modeling Human Activity
  Multi-Modal I/O
  Multi-User
  No Existing HPC Programs or Data
Interactive Simulation Needs
  Real-Time Capability Using Fast Processors and High-Speed Interconnects
  High Fidelity
  Low-Latency/High-Bandwidth Interconnects
  Real-Time I/O
  Connection to Real-World Assets
  Fixed Frame Rates (Some Apps)

Strategies
  Message Passing Interface (MPI) or Scalable Link Interface (SLI)
  Limited Shared-Memory Processing (SMP) or Distributed Processing
  Interfaces with Sensory Processors (e.g., Interactive Visualization, Haptics)
  Scalability in Terms of HPC Architecture and Simulation Entities
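The fixed-frame-rate need above amounts to a per-frame latency budget: simulation, communication, and I/O must all finish inside the frame period, or the deadline is missed. A minimal sketch of such a loop (the step/render callables and the 60 Hz rate are illustrative, not from the slides):

```python
import time

FRAME_HZ = 60
FRAME_PERIOD = 1.0 / FRAME_HZ  # ~16.7 ms budget per frame

def run_fixed_rate(step, render, frames=120):
    """Run step+render at a fixed frame rate, counting missed deadlines."""
    overruns = 0
    next_deadline = time.perf_counter() + FRAME_PERIOD
    for _ in range(frames):
        step()    # advance simulation state
        render()  # push results to displays/haptics
        now = time.perf_counter()
        if now > next_deadline:
            overruns += 1  # work exceeded the frame budget
        else:
            time.sleep(next_deadline - now)  # idle out remaining budget
        next_deadline += FRAME_PERIOD
    return overruns
```

A real interactive HPC application would additionally have to keep interconnect latency inside the same budget, which is why the slides pair fast processors with low-latency interconnects.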
Other Considerations
Let's remember the human factor:
  How will a user interact with an HPC?
  How will multiple users interact with an HPC & maintain coherence of I/O?
  How will interim results be gathered?
  How can timely and relevant HF experiments be developed to influence the design?
  Get developers involved
Current Users
  IST
  Physics
  Mathematics
  Chemistry
  Nanoscience
  Civil Engr
  Mech. Engr
  Industrial Engr
  Electrical & Computer Engr
  CREOL
  SAIC
  Forterra
Current Human-Centered M&S Research
  Apparently Parallelizable Systems (SAF/Games)
  Approaches to Parallelization: Spatial & Temporal Coherency
  Performance Assessment & Optimization
  Interaction & Visualization
    Review Literature in Scientific Visualization & Computational Steering
    Leverage Existing Software (e.g., OLIVE, DCV)
    Consider & Baseline Different Approaches
  LVC Modeling
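Spatial coherency, listed above as an approach to parallelization, is commonly exploited by partitioning entities (e.g., SAF or crowd agents) into regions so that most interactions stay local to one processor. A hypothetical sketch (the grid cell size and entity format are illustrative, not from the slides):

```python
from collections import defaultdict

def partition_by_cell(entities, cell_size):
    """Group entities into spatial grid cells.

    Each cell can then be assigned to one processor, so nearby
    entities, which interact most often, live on the same node.
    """
    cells = defaultdict(list)
    for eid, (x, y) in entities.items():
        cells[(int(x // cell_size), int(y // cell_size))].append(eid)
    return dict(cells)

entities = {"a": (1.0, 1.0), "b": (1.5, 0.5), "c": (9.0, 9.0)}
cells = partition_by_cell(entities, cell_size=5.0)
# "a" and "b" share cell (0, 0); "c" sits alone in cell (1, 1)
```

Temporal coherency plays the complementary role: because entities move little between frames, the partition changes slowly and only boundary-crossing entities need to migrate between processors.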
Possible Areas of Future Research
  Multi-Core Programming for M&S Applications
    Tight Timing Constraints
    Low Latency
    I/O Bound
  Use of the Cell Processor for M&S
  Multi-World Systems
  LVC Implementations/Experimentation
  Terrain Correlation
  Granular Propagation Mitigation Methods
  Multi-Scale Simulations
  Benchmarks
  De-Coupling SAF Models
  ????
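Multi-core programming for M&S, listed above, typically begins by de-coupling independent model updates so they can run on separate cores. A minimal sketch using Python's standard library (the per-entity update rule is hypothetical, not from the slides):

```python
from concurrent.futures import ProcessPoolExecutor

def update_entity(state):
    """Hypothetical per-entity update: advance position by velocity."""
    x, v = state
    return (x + v, v)

def parallel_step(states, workers=2):
    """Once entity updates are de-coupled, one simulation step can
    fan out across cores and gather results in order."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(update_entity, states))
```

The hard part the slides point at is exactly what this sketch omits: SAF models with tight timing constraints and cross-entity dependencies cannot simply be mapped over a pool, which is why de-coupling them is itself a research topic.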
Issues
  Facilities: Power & Cooling Infrastructure
  Obsolescence
  Parallel Programming
  Long-Term Support: State Funding? Other Sources?
Backup Charts
High Performance Computing for Simulation Training Systems

Purpose
1. Enhance the University's facilities in the area of HPCC systems
2. Support faculty research for parallel simulation of complex scientific data in the areas of Physics, Chemistry, Civil, and Nanotechnology
3. Study large-scale interactive simulations that require real-time processing of hundreds of entities on complex terrain databases
4. Support RDECOM research on gaming and training system development such as OneSAF

Benefits to the Army
1. Establish a capability to address M&S-relevant issues in multi-scale simulation, interactivity, and visualization
2. Offer a unique opportunity to synthesize the research efforts of the various departments at the University by facilitating a shared high performance computing infrastructure

Federal and Private Endorsements
1. Project funded and supported by RDECOM and PEO-STRI
2. Association with national supercomputing grids such as the Southeastern Universities Research Association (SURA)
3. Collaboration with private companies like Forterra Systems

Deliverables
1. HPCC computing platform with quad-core processors, 4 GB memory, 10 TB storage, high-speed interconnect, and graphics capabilities
2. Scientific studies on using HPC in interactive M&S
The Top 10 Machines, November 2007
Rmax is in TeraFLOPS (one TeraFLOPS = one trillion, 10^12, floating point operations per second)
Projected Top 500 computing power
Areas for Investigation
  Extents of Single-Image Environments
    Terrain/Environment
    Interacting Entities
  Live, Virtual, Constructive Experimentation
  Scalable Simulations
    Multi-Scale Simulations
    Control of Propagating Granularity
  HPC Architectures for Interaction
    Map HPC Types to Applications
  Techniques for Porting Interactive Applications to HPC Platforms
  Tools for Interaction