
Transcript of supercomputer

SUPER COMPUTER

Submitted To: Prof. Satvir Madam

Submitted By: Rashpal Singh
Class: BCA 3rd

Roll No: 4654

Overview

• Definition
• History of Supercomputer
• Uses of Supercomputer
• Supercomputer challenges
• Operating system of Supercomputer
• Processing speed
• Top 10 Supercomputers
• Supercomputer in India

Definition

• A supercomputer is the fastest type of computer. Supercomputers are very expensive and are employed for specialized applications that require large amounts of mathematical calculations.

History

• 1946: John Mauchly and J. Presper Eckert construct ENIAC (Electronic Numerical Integrator And Computer) at the University of Pennsylvania.

• 1956: IBM develops the Stretch supercomputer for Los Alamos National Laboratory. It remains the world's fastest computer until 1964.

• 1957: Seymour Cray co-founds Control Data Corporation (CDC) and pioneers fast transistorized computers.

• 1976: First Cray-1 supercomputer is installed at Los Alamos National Laboratory. It manages a speed of about 160 MFLOPS.

• 1989: Seymour Cray starts a new company, Cray Computer, where he develops the Cray-3 and Cray-4.

• 1993: Fujitsu Numerical Wind Tunnel becomes the world's fastest computer using 166 vector processors.

• 1997: ASCI Red, a supercomputer made from Pentium processors by Intel and Sandia National Laboratories, becomes the world's first teraflop (TFLOP) supercomputer.

• 2008: The Jaguar supercomputer built by Cray Research and Oak Ridge National Laboratory becomes the world's first petaflop (PFLOP) scientific supercomputer. Briefly the world's fastest computer, it is soon superseded by machines from Japan and China.

Uses of Supercomputers

• Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research, molecular modeling, and physical simulations. Major universities, military agencies, and scientific research laboratories depend on and make use of supercomputers very heavily. Some common uses of supercomputers in industry include:

1) Predicting climate change
2) Testing nuclear weapons
3) Recreating the Big Bang
4) Forecasting hurricanes

• Predicting climate change: The challenge of predicting global climate is immense. There are hundreds of variables, from the reflectivity of the earth's surface (high for icy spots, low for dark forests) to the vagaries of ocean currents. Dealing with these variables requires supercomputing capabilities.

One model, created in 2008 at Brookhaven National Laboratory in New York, mapped the aerosol particles and turbulence of clouds to a resolution of 30 square feet.

• Testing nuclear weapons: The Stockpile Stewardship program uses non-nuclear lab tests and, yes, computer simulations to ensure that the country's cache of nuclear weapons is functional and safe. In 2012, IBM unveiled a new supercomputer, Sequoia, at Lawrence Livermore National Laboratory in California.

According to IBM, Sequoia is a 20-petaflop machine, meaning it is capable of performing twenty thousand trillion calculations each second.
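To see where "twenty thousand trillion" comes from, the conversion from petaflops to calculations per second can be checked in a few lines of Python (an illustrative sketch, not part of the original slides; the variable names are made up for the example):

# 1 petaflop = 10**15 floating-point operations per second (FLOPS)
# 1 trillion = 10**12
PETAFLOP = 10**15
TRILLION = 10**12

sequoia_petaflops = 20                          # IBM's quoted figure for Sequoia
ops_per_second = sequoia_petaflops * PETAFLOP   # 2.0e16 operations per second
print(ops_per_second / TRILLION)                # 20000.0, i.e. twenty thousand trillion per second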

• Recreating the Big Bang: Researchers at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin have used supercomputers to simulate the formation of the first galaxy, while scientists at NASA's Ames Research Center in Mountain View, Calif., have simulated the creation of stars from cosmic dust and gas.

• The "Big Bang," or the initial expansion of all energy and matter in the universe, happened more than 13 billion years ago in trillion-degree Celsius temperatures, but supercomputer simulations make it possible to observe what went on during the universe's birth.

• Forecasting hurricanes: Ranger, a supercomputer with a cowboy moniker and 579 trillion calculations per second of processing power, resides at TACC in Austin, Texas. Using data directly from National Oceanic and Atmospheric Administration (NOAA) airplanes, Ranger calculated likely paths for the storm. According to a TACC report, Ranger improved the five-day hurricane forecast by 15 percent.

Supercomputer challenges

• A supercomputer generates large amounts of heat and therefore must be cooled with complex cooling systems.

• Another issue is the speed at which information can be transferred or written to a storage device, as the speed of data transfer will limit the supercomputer's performance.

• Supercomputers consume and produce massive amounts of data in a very short period of time. Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.

Operating system of Supercomputer

• Most supercomputers run on a Linux or Unix operating system, as these operating systems are extremely flexible, stable, and efficient. Supercomputers typically have multiple processors and a variety of other technological tricks to ensure that they run smoothly.

Processing Speeds

• Supercomputer computational power is rated in FLOPS (Floating Point Operations Per Second).

• The first commercially available supercomputers reached speeds of 10 to 100 million FLOPS. Today's leading supercomputers have broken the petaflop level.

• A petaflop machine represents computing power more than 1,000 times that of a teraflop machine.

• A relatively old supercomputer such as the Cray C90 (1990s) has a processing speed of only 8 gigaflops. It can solve in 0.002 seconds a problem that takes a personal computer a few hours.
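The prefixes in this section (mega, giga, tera, peta) each differ by a factor of 1,000, so the comparisons above are simple ratios. A small Python sketch (illustrative only, not from the original slides) makes the arithmetic explicit:

# FLOPS prefixes used in this section: each step is 1,000x the previous one
MEGA, GIGA, TERA, PETA = 10**6, 10**9, 10**12, 10**15

# "1,000 times faster than a teraflop machine": the petaflop/teraflop ratio
print(PETA / TERA)          # 1000.0

# The Cray C90's 8 gigaflops expressed in raw operations per second
cray_c90_flops = 8 * GIGA
print(cray_c90_flops)       # 8000000000, i.e. 8 billion floating-point operations per second

# The "first commercially available supercomputers" at 10 to 100 million FLOPS
print(10 * MEGA, 100 * MEGA)   # 10000000 100000000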

Top 10 Supercomputers

1. Tianhe-2, National Supercomputing Center in Guangzhou, China, 2013

• Manufacturer: NUDT
• Cores: 3,120,000
• Power: 17.6 megawatts
• Interconnect: Custom
• Operating System: Kylin Linux

Tianhe-2 Specification: There are a total of 3,120,000 Intel cores and 1.404 petabytes of RAM, making Tianhe-2 by far the largest installation of Intel CPUs in the world. Each compute node has a total of 88 GB of RAM.

In total there are 125 cabinets housing 16,000 compute nodes, each of which contains two Intel Xeon (Ivy Bridge) CPUs and three Intel Xeon Phi coprocessors.

Cooling system

2. Titan, Oak Ridge National Laboratory, United States, 2012

• Manufacturer: Cray
• Cores: 299,008 CPU cores
• Interconnect: Gemini
• Operating System: Cray Linux Environment

• Titan Specification: There are a total of 200 cabinets. Inside each cabinet are Cray XK7 boards, each of which has four AMD G34 sockets and four PCI slots.

• It has a peak performance of more than 20 petaflops, i.e. more than 20,000 trillion calculations per second.

• It has a total of 299,008 CPU cores.
• Total CPU memory is 710 terabytes.
• It has 18,688 compute nodes.

3. Sequoia, Lawrence Livermore National Laboratory, United States, 2013

• Manufacturer: IBM
• Cores: 1,572,864 processor cores
• Power: 7.9 MW
• Interconnect: 5-dimensional torus topology
• Operating System: Red Hat Enterprise Linux

Sequoia Specification:

• 96 racks containing 98,304 compute nodes. The compute nodes are 16-core PowerPC A2 processor chips with 16 GB of DDR3 memory each.

• Sequoia is a Blue Gene/Q design, building off previous Blue Gene designs.

4. K computer, RIKEN, Japan, 2011

• Manufacturer: Fujitsu
• Cores: 640,000 cores
• Power: 12.6 MW
• Interconnect: six-dimensional torus interconnect
• Operating System: Linux

K Computer Specification: The K computer comprises over 80,000 2.0 GHz 8-core SPARC64 VIIIfx processors contained in 864 cabinets, for a total of over 640,000 cores. Each cabinet contains 96 computing nodes, in addition to 6 I/O nodes. Each computing node contains a single processor and 16 GB of memory.

The computer's water cooling system, the most complex in the world, is designed to minimize failure rate and power consumption. K set a record with a performance of 8.162 petaflops, making it the fastest supercomputer at the time. The target performance was 10 petaflops, but it was recorded at 8.162 petaflops.

Applications of the K computer

5. Mira (Blue Gene/Q), Argonne National Laboratory, United States, 2013

• Manufacturer: IBM
• Cores: 786,432 cores
• Power: 3.9 MW
• Interconnect: five-dimensional torus interconnect
• Operating System: CNK (Compute Node Kernel), the node-level operating system for the IBM Blue Gene supercomputer.

Mira (Blue Gene/Q) Specification:

• The Blue Gene/Q Compute chip is an 18-core chip. The 64-bit PowerPC A2 processor cores run at 1.6 GHz.

• 16 processor cores are used for computing, and a 17th core for operating system assist functions such as interrupts, asynchronous I/O, MPI pacing, and RAS. The 18th core is used as a redundant spare to increase manufacturing yield.

• Each Mira rack has a total of 1,024 compute nodes, 16,384 user cores, and 16 TB of RAM.

6. Piz Daint (Cray XC30), Swiss National Supercomputing Centre, Switzerland, 2013

• Manufacturer: Cray
• Cores: 115,984 cores
• Power: 90 kW
• Interconnect: Aries interconnect
• Operating System: Cray Linux Environment

Piz Daint (Cray XC30) Specification: 64-bit Intel Xeon E5-family processors; up to 384 per cabinet. Peak performance: initially up to 99 TFLOPS per system cabinet. For added performance, Piz Daint features NVIDIA graphics processing units (GPUs); as a result, although it has about 116,000 cores, it is capable of 6.3 petaflops of performance.

Memory: 16 TB RAM. Storage: 768 TiB.

Cooling system

7. Stampede, Texas Advanced Computing Center, United States, 2013

• Manufacturer: Dell
• Cores: 102,400 CPU cores
• Power: 4.5 megawatts
• Operating System: Linux (CentOS)

Stampede Specification: Stampede has 6,400 Dell C8220 compute nodes that are housed in 160 racks; each node has two Intel E5 8-core (Sandy Bridge) processors and an Intel Xeon Phi 61-core (Knights Corner) coprocessor.

Stampede is a multi-use cyberinfrastructure resource offering large memory, large data transfer, and graphics processing unit (GPU) capabilities for data-intensive, accelerated, or visualization computing.

Stampede can complete 9.6 quadrillion floating point operations per second.

8. JUQUEEN, Forschungszentrum Jülich, Germany, 2013

• Manufacturer: IBM
• Cores: 458,752
• Power: 2,301.00 kW
• Interconnect: 5-dimensional torus interconnect
• Operating System: SUSE Linux Enterprise Server

JUQUEEN Specification: 294,912 processor cores, 144 terabytes of memory, and 6 petabytes of storage in 72 racks, with a peak performance of about one petaflop. Two links (4 GB/s in, 4 GB/s out) feed an I/O PCIe port.

9. Vulcan (Blue Gene/Q), Lawrence Livermore National Laboratory, United States, 2013

• Manufacturer: IBM
• Cores: 393,216
• Power: 1,972.00 kW
• Interconnect: 5-dimensional torus interconnect
• Operating System: CNK (Compute Node Kernel), the node-level operating system for the IBM Blue Gene supercomputer.

Vulcan Specification: Vulcan is equipped with Power BQC 16-core 1.6 GHz processors. The Vulcan supercomputer has roughly 400,000 cores that perform at 4.3 petaflops. This supercomputer is used by the US Department of Energy's National Nuclear Security Administration at Lawrence Livermore National Laboratory.

Vulcan uses a massively parallel architecture and PowerPC processors, with more than 393,000 cores in all.

10. Cray CS-Storm, United States, 2014

• Manufacturer: Cray Inc.
• Cores: 72,800
• Power: 1,498.90 kW
• Interconnect: InfiniBand FDR
• Operating System: Linux

Cray CS-Storm Specification: The system can support both single- and double-precision floating-point applications, with up to eight NVIDIA Tesla K40 GPU accelerators per node. Each of these servers integrates eight accelerators and two Intel Xeon processors, delivering 246 GPU teraflops of compute performance in one 48U rack.

Optional passive or active chilled cooling rear-door heat exchangers are available.

Supercomputer in India

Aaditya: Indian Institute of Tropical Meteorology, Pune, has a machine with a theoretical peak of 790.7 teraflop/s, called Aaditya, which is used for climate research and operational forecasting.

• It ranked 36th in the June 2013 list of the world's top 500 supercomputers.

PARAM Yuva II: This supercomputer was made by the Centre for Development of Advanced Computing (C-DAC) in a period of three months, at a cost of ₹160 million (US$2 million). It performs at a peak of 524 teraflop/s, about 10 times faster than the previous facility. PARAM Yuva II will be used for research in space, bioinformatics, weather forecasting, seismic data analysis, aeronautical engineering, scientific data processing, and pharmaceutical development.