
1. The CrossGrid Project

Marcel Kunze, FZK, representing the X#-Collaboration

2. Main Objectives

  • New category of Grid enabled applications
    • Computing and data intensive
    • Distributed
    • Interactive, near real-time response (a person in the loop)
    • Layered
  • New programming tools
  • Make the Grid more user-friendly, secure and efficient
  • Interoperability with other Grids
  • Implementation of standards

3. CrossGrid Collaboration

  • Poland: Cyfronet & INP Cracow, PSNC Poznan, ICM & IPJ Warsaw
  • Portugal: LIP Lisbon
  • Spain: CSIC Santander, Valencia & RedIris, UAB Barcelona, USC Santiago & CESGA
  • Ireland: TCD Dublin
  • Italy: DATAMAT
  • Netherlands: UvA Amsterdam
  • Germany: FZK Karlsruhe, TUM Munich, USTU Stuttgart
  • Slovakia: II SAS Bratislava
  • Greece: Algosystems, Demo Athens, AuTh Thessaloniki
  • Cyprus: UCY Nikosia
  • Austria: U. Linz

21 institutes, 11 countries

4. IST Grid Project Space

  • Projects spanning applications, middleware & tools, and underlying infrastructures: GRIDLAB, GRIA, EGSO, DATATAG, CROSSGRID, DATAGRID, GRIP, EUROGRID, DAMIEN (science and industry/business)
  • Links with European national efforts
  • Links with US projects (GriPhyN, PPDG, iVDGL)

5. Collaboration with other # Projects

  • Objective: exchange of
    • Information
    • Software components
  • Partners
    • DATAGRID
    • DATATAG
    • GRIDLAB
    • EUROGRID and GRIP
    • GRIDSTART
  • Participation in GGF

6. Project Phases

  • M1-3: requirements definition and merging
  • M4-12: first development phase: design, 1st prototypes, refinement of requirements
  • M13-24: second development phase: integration of components, 2nd prototypes
  • M25-32: third development phase: complete integration, final code versions
  • M33-36: final phase: demonstration and documentation

7. Structure Overview

  • APPLICATIONS: interactive simulation and visualisation of a biomedical system; flooding crisis team support; distributed data analysis in high energy physics; weather forecast and air pollution modeling
  • TOOLS: Grid Visualization Kernel, benchmarks
  • Grid SERVICES: remote data access, optimization, monitoring, schedulers, roaming access, portals (GLOBUS TOOLKIT, Condor-G, ..., DATAGRID set of tools)
  • FABRIC: local domain services; protocols, authentication, authorization, access policy, resource management, etc.
  • INFRASTRUCTURE: network infrastructure, archivers, HPC/HPV systems, laboratory instruments, etc.

8. CrossGrid Architecture

[Architecture diagram: applications (biomedical application portal, HEP high-level trigger, flood application, HEP interactive distributed data access, HEP data mining on Grid, weather forecast) and supporting tools (performance analysis, MPI verification, metrics and benchmarks) sit on Grid common services (Grid Visualisation Kernel, data mining on Grid, interactive distributed data access, roaming access, Grid resource management, Grid monitoring, MPICH-G, distributed data collection, user interaction service, optimization of data access), DataGrid services (replica manager, job manager), Globus services (GRAM, GSI, replica catalog, GASS, MDS, GridFTP, Globus-IO) and local resource managers for CPUs, secondary and tertiary storage, scientific instruments (medical scanners, satellites, radars), detector local high-level triggers, VR systems (caves, immersive desks) and visualization tools.]

9. Layered Structure

Interactive and Data Intensive Applications (WP1)

  • Interactive simulation and visualization of a biomedical system
  • Flooding crisis team support
  • Distributed data analysis in HEP
  • Weather forecast and air pollution modeling

Grid Application Programming Environment (WP2)

  • MPI code debugging and verification
  • Metrics and benchmarks
  • Interactive and semiautomatic performance evaluation tools

New CrossGrid Services (WP3), built alongside the Grid Visualization Kernel, data mining and HLA services, on top of Globus middleware, DataGrid and GriPhyN services, and the Fabric Infrastructure (Testbed, WP4)

  • Portals and roaming access
  • Grid resource management
  • Grid monitoring
  • Optimization of data access

10. Scope of Applications

  • Applications in health and environment
    • Data federation, processing and interpretation in geographically distributed locations
    • Fast, interactive decision making
  • Interactive access to distributed
    • Databases
    • Super computers and High Performance Clusters
    • Visualisation engines
    • Medical scanners
    • Environmental data input devices

11. Application Requirements

  • High quality presentation
  • High frame rate
  • Intuitive interaction
  • Real-time response
  • Interactive algorithms
  • High performance computing and networking
  • Distributed resources and data
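
The frame-rate and real-time requirements above translate into a hard per-frame time budget that network round trips and rendering must fit inside. As a back-of-envelope illustration (the numbers and function are assumptions for this sketch, not CrossGrid measurements):

```python
def compute_budget_ms(frame_rate_hz, network_rtt_ms, render_ms):
    """Time left for simulation per frame, in milliseconds.

    A display updating at frame_rate_hz allows 1000/frame_rate_hz ms
    per frame; the network round trip and rendering must fit inside
    that window, and whatever remains is the simulation's budget.
    Illustrative sketch only.
    """
    frame_budget_ms = 1000.0 / frame_rate_hz
    return frame_budget_ms - network_rtt_ms - render_ms

# e.g. 25 fps with a 10 ms round trip and 15 ms rendering
# leaves 15 ms per frame for the distributed simulation
```

Budgets like this are why the next slide stresses that communication and rendering delay must stay small relative to the frame interval.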

12. Role of Network Latency

  • Communication delay and rendering delay are negligible

13. CrossGrid Application Development (WP1)

  • Interactive simulation and visualisation of a biomedical system
    • Grid-based system for pre-treatment planning in vascular interventional and surgical procedures through real-time interactive simulation of vascular structure and flow.
  • Flooding crisis team support
  • Distributed interactive data analysis in HEP
    • Focus on LHC experiments (ALICE, ATLAS, CMS and LHCb)
  • Weather forecast and air pollution modelling
    • Porting distributed/parallel codes on Grid
    • Coupled Ocean/Atmosphere Mesoscale Prediction System
    • STEM-II Air Pollution Code

14. Interactive Simulation and Visualisation of a Biomedical System

  • Grid-based prototype system for treatment planning in vascular interventional and surgical procedures through near real-time interactive simulation of vascular structure and flow.
  • The system will consist of a distributed near real-time simulation environment, in which a user interacts in Virtual Reality (VR) and other interactive display environments.
  • A 3D model of the arteries, derived using medical imaging techniques, will serve as input to a simulation environment for blood flow calculations.
  • The user will be allowed to change the structure of the arteries, thus mimicking an interventional or surgical procedure.

15. Current Situation Observation Diagnosis & Planning Treatment 16. Experimental Setup 17. Simulation Based Planning and Treatment Angio w/Fem-Fem & Fem-Pop AFB w/E-S Prox. Anast. Angio w/ Fem-Fem AFB w/E-E Prox. Anast. Preop Alternatives 18. VR-Interaction 19. Flood Crisis Prevention

  • Support system for the establishment and operation of a Virtual Organization for Flood Forecasting, associating a set of individuals and institutions involved in flood prevention and protection.
  • The system will employ Grid technology to seamlessly connect the experts, data and computing resources needed for quick and correct flood management decisions.
  • The main component of the system will be a highly automated early warning system based on hydro-meteorological (snowmelt) rainfall-runoff simulations.
  • The system will integrate advanced communication techniques, allowing the crisis management teams to consult various experts on their decisions. The experts will be able to run the simulations with changed parameters and analyze the impact.
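
The early-warning core rests on rainfall-runoff simulation. As a minimal illustration of the idea (a toy single-reservoir model, not the hydrological codes CrossGrid actually used):

```python
def linear_reservoir_runoff(rainfall, k=0.2, storage0=0.0):
    """Toy rainfall-runoff model: one linear reservoir.

    Each time step, the catchment storage gains the rainfall input and
    releases a fixed fraction k of the stored water as runoff.
    Illustrative only; operational flood forecasting couples
    meteorological, hydrological and hydraulic models.
    """
    storage = storage0
    runoff = []
    for rain in rainfall:
        storage += rain          # rainfall adds to catchment storage
        discharge = k * storage  # outflow proportional to storage
        storage -= discharge
        runoff.append(discharge)
    return runoff
```

A single rain pulse produces the characteristic rising-then-receding discharge curve that the warning system would compare against flood thresholds.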

20. Virtual Organization for Flood Forecasting

Data sources: storage systems, databases, surface automatic meteorological and hydrological stations, systems for acquisition and processing of satellite information, meteorological radars

  • External sources of information
  • Global and regional centers GTS
  • EUMETSAT and NOAA
  • Hydrological services of other countries

The data sources feed meteorological, hydrological and hydraulic models running on the HPC/HTC Grid infrastructure.

Flood crisis teams

  • meteorologists
  • hydrologists
  • hydraulic engineers

Users

  • river authorities
  • energy
  • insurance companies
  • navigation
  • media
  • public

21. Flood Crisis Prevention: Váh River Pilot Site

  • Water stages/discharges in the real-time operating hydrological stations
  • Mapping of the flooded areas

Váh River catchment area: 19,700 km², 1/3 of Slovakia; inflow point Nosice, outflow point Strečno. Pilot site catchment area: 2,500 km² (above Strečno: 5,500 km²).

22. Flood Simulation Results

Flow + water depths

23. Distributed Analysis in High Energy Physics

  • Challenging points
    • Access to large distributed databases in the Grid
    • Development of distributed data-mining techniques
    • Definition of a layered application structure
    • Integration of user-friendly interactive access (based on PROOF)
  • Focus on LHC experiments (ALICE, ATLAS, CMS and LHCb)
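
PROOF-style analysis ships the same selection procedure to workers that each hold part of the dataset, then merges the partial results. The pattern (not PROOF's actual C++/ROOT API; the function names here are invented for the sketch) looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def select_events(chunk):
    """Worker step: run the selection over one partition of the data.
    Here each 'event' is an integer and the cut keeps even ones; a real
    analysis would apply physics cuts and fill histograms."""
    return [event for event in chunk if event % 2 == 0]

def parallel_analysis(dataset, n_workers=4):
    """Master step: partition the dataset, run the same selection on
    every partition concurrently, then merge the partial results.
    This is the PROOF-style master/worker pattern in miniature."""
    size = max(1, len(dataset) // n_workers)
    chunks = [dataset[i:i + size] for i in range(0, len(dataset), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(select_events, chunks))
    return [event for part in partials for event in part]
```

Because each partition is processed independently, the data can stay close to the CPUs holding it, which is the point of distributed interactive analysis on the Grid.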

24. PROOF

[Slide figure: a local PROOF session ships the selection procedure (Proc.C) and its parameters to remote worker CPUs, each processing its share of the distributed databases (TagDB, RDB, DB1-DB6) before the results are merged.]

25. Weather Forecast and Air Pollution Modeling

  • Integration of distributed databases into Grid
  • Migration of data mining algorithms to Grid
  • Porting distributed atmospheric & wave models to Grid
  • Porting parallel codes for air quality models to Grid
  • Integration, testing and demonstration of the application in the testbed environment

26. COAMPS Coupled Ocean/Atmosphere Mesoscale Prediction System: Atmospheric Components

  • Complex Data Quality Control
  • Analysis:
    • Multivariate Optimum Interpolation Analysis of Winds and Heights
    • Univariate Analyses of Temperature and Moisture
    • Optimum Interpolation Analysis of Sea Surface Temperature
  • Initialization:
    • Variational Hydrostatic Constraint on Analysis Increments
    • Digital Filter
  • Atmospheric Model:
    • Numerics: Nonhydrostatic, Scheme C, Nested Grids, Sigma-z
    • Physics: Convection, Explicit Moist Physics, Radiation, Surface Layer
  • Features:
    • Globally Relocatable (5 Map Projections)
    • User-Defined Grid Resolutions, Dimensions, and Number of Nested Grids
    • 6 or 12 Hour Incremental Data Assimilation Cycle
    • Can be Used for Idealized or Real-Time Applications
    • Single Configuration Managed System for All Applications
    • Operational at:
      • 7 Areas, Twice Daily, using 81/27/9 km or 81/27 km grids
      • Forecasts to 72 hours
    • Operational at all Navy Regional Centers (w/GUI Interface)

27. Status Quo: Quo Vadis?

  • Current state (briefly)
    • Simulation done on a single system or local clusters
    • Visualisation on a single system, locally
  • What we are going to achieve
    • HPC, HTC, HPV in a geographically distributed environment
    • Improved interaction with the end user
    • Near real-time simulations
    • Different visualisation equipment (adaptive to end-user needs), like
      • PDA
      • Workstations
      • VR studio (e.g. CAVE)

28. Grid Application Programming Environment (WP2)

  • MPI code debugging and verification
  • Metrics and benchmarks
  • Interactive and semiautomatic performance evaluation tools
  • Specify, develop, integrate, test tools for HPC and HTC applications on the Grid
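
A metrics-and-benchmarks tool ultimately reduces to timing a workload repeatably. A minimal wall-clock harness (a generic sketch, not the WP2 benchmark suite):

```python
import time

def benchmark(fn, repeats=5):
    """Run fn several times and return the best wall-clock time
    in seconds.  Taking the minimum over repeats filters out
    scheduling noise; generic illustration, not a CrossGrid tool."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best
```

On a Grid, the same harness would be deployed across sites so that the resulting metrics are comparable between heterogeneous machines.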

29. New Grid Services and Tools (WP3)

  • Portals and roaming access
  • Grid resource management
  • Grid monitoring
  • Optimisation of data access
  • Objectives
    • To develop interactive compute- and data-intensive applications
    • To develop user-friendly Grid environments
    • To offer easy access to the applications and Grid
    • To have reasonable trade-off between resource usage efficiency and application speedup
    • To support management issues while accessing resources
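
The trade-off between resource-usage efficiency and application speedup can be made concrete with Amdahl's law (a standard model used here for illustration, not a CrossGrid scheduler formula):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: upper bound on speedup when parallel_fraction of
    the work scales across n_processors and the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

def efficiency(parallel_fraction, n_processors):
    """Speedup per processor: adding processors raises speedup but
    lowers efficiency, which is the trade-off a resource manager
    has to balance."""
    return amdahl_speedup(parallel_fraction, n_processors) / n_processors
```

For a 90%-parallel job, going from 2 to 10 processors roughly triples the speedup but nearly halves the efficiency, so a scheduler may prefer fewer processors per job when resources are contended.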

30. International Testbed Organisation (WP4): 15 sites

  • Testbed setup & incremental evolution
  • Integration with DataGrid
  • Infrastructure support
  • Verification & quality control

Sites: AuTh Thessaloniki, UvA Amsterdam, FZK Karlsruhe, TCD Dublin, UAB Barcelona, LIP Lisbon, CSIC Valencia, CSIC Madrid, USC Santiago, CSIC Santander, DEMO Athens, UCY Nikosia, CYFRONET Cracow, II SAS Bratislava, PSNC Poznan, ICM & IPJ Warsaw

31. Summary

  • Layered structure of all X# applications
  • Reuse of SW from DataGrid and other # projects
  • Globus as the bottom layer of the middleware
  • Heterogeneous computer and storage systems
  • Distributed development and testing of SW
    • 12 partners in applications
    • 14 partners in middleware
    • 15 partners in testbeds

32. 1980s: Internet, 1990s: Web, 2000s: Grid

  • Where do we need to get to ?
    • Applications to support an e-society (Cyber-Infrastructure)
    • An international Grid infrastructure which hides the complexities from the users (Invisible Computing)
    • A powerful and flexible network infrastructure
  • Where do we need to invest ?
    • Applications targeted at realistic problems in e-science
    • Prototypes of Grid infrastructures
    • Maintain and improve the GEANT network
  • Expression of Interest for EU FP6 program:
    • Enabling Grids and e-Science in Europe (EGEE)

Grid-enabled applications; prototype Grid infrastructures; GEANT: world-class networking

33. EGEE Project Space

[Slide figure: the same project space as slide 4, with applications, middleware & tools, and underlying infrastructures spanning GRIDLAB, GRIA, EGSO, DATATAG, CROSSGRID, DATAGRID, GRIP, EUROGRID and DAMIEN, for science and industry/business: Enabling Grids and E-Science in Europe (EGEE).]

34. First Results of EGEE Brainstorming (C. Jones, D. O. Williams, et al.)