Article title

Porting the OPATM-BFM Application to a Grid e-Infrastructure – Optimization of Communication and I/O Patterns

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
OPATM-BFM is an off-line, three-dimensional, coupled eco-hydrodynamic simulation model used for biogeochemical and ecosystem-level predictions. This paper presents the first results of research activities devoted to adapting the parallel OPATM-BFM application for efficient use in modern Grid-based e-Infrastructures. These results are particularly relevant for application performance on standard Grid architectures that provide generic clusters of workstations. We propose a message-passing analysis technique for communication-intensive parallel applications that is based on a preliminary analysis run of the application. This technique was successfully applied to OPATM-BFM and allowed us to identify several optimization proposals for the current realization of its communication pattern. As the suggested improvements are quite generic, they can potentially be useful for other parallel scientific applications.
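
Illustrative note (not part of the original record): the abstract refers to a message-passing analysis technique driven by a preliminary application run. The minimal sketch below shows one common way such a run can be instrumented, assuming the standard PMPI profiling interface: a small interposition library counts the messages and bytes each rank sends to each peer over MPI_COMM_WORLD and prints the totals at finalization. All names and the choice of wrapped calls are illustrative; the paper's own analysis tooling (e.g. Vampir [18]) is not reproduced here.

/* sketch.c - PMPI-based counter for point-to-point sends (illustrative only).
 * Build as a shared library and link it ahead of the application for a
 * preliminary run; the per-peer totals approximate the communication pattern.
 * Assumes messages travel over MPI_COMM_WORLD and uses the MPI-3
 * const-qualified prototype of MPI_Send. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

static long *msg_count  = NULL;  /* messages sent to each peer */
static long *byte_count = NULL;  /* bytes sent to each peer    */
static int   world_size = 0;

int MPI_Init(int *argc, char ***argv)
{
    int rc = PMPI_Init(argc, argv);
    PMPI_Comm_size(MPI_COMM_WORLD, &world_size);
    msg_count  = calloc(world_size, sizeof(long));
    byte_count = calloc(world_size, sizeof(long));
    return rc;
}

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    int type_size;
    PMPI_Type_size(datatype, &type_size);
    msg_count[dest]  += 1;                       /* dest is a MPI_COMM_WORLD rank here */
    byte_count[dest] += (long)count * type_size;
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}

int MPI_Finalize(void)
{
    int rank, peer;
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (peer = 0; peer < world_size; ++peer)
        if (msg_count[peer] > 0)
            printf("rank %d -> %d: %ld messages, %ld bytes\n",
                   rank, peer, msg_count[peer], byte_count[peer]);
    free(msg_count);
    free(byte_count);
    return PMPI_Finalize();
}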
Authors
author
author
author
author
author
  • High Performance Computing Center, University of Stuttgart (HLRS), Nobelstrasse 19, 70569 Stuttgart, Germany, cheptsov@hlrs.de
Bibliography
  • [1] See the web page of the GMES project http://www.gmes.info
  • [2] See the web page of the Marine Environment and Security for the European Area (MERSEA) European Integrated project, http://www.mersea.eu.org
  • [3] A. Crise, P. Lazzari, S. Salon and A. Teruzzi, MERSEA deliverable D11.2.1.3 – Final report on the BFM OGSOPA Transport module. 21 pp., 2008.
  • [4] See the description of the IBM SP5 machine on the CINECA web page, https://hpc.cineca.it/docs/user-guide-zwiki/SP5UserGuide
  • [5] See the web page of the DORII project, http://www.dorii.org
  • [6] I. Foster, Service-Oriented Science. Science 308 (5723), 814-817, 6 May 2005.
  • [7] A. Chervenak, I. Foster, C. Kesselman, C. Salisbury and S. Tuecke, The Data Grid: Towards an Architecture for the Distributed Management and Analysis of Large Scientific Datasets. Journal of Network and Computer Applications, 23, 187-200 (2001) (based on conference publication from Proceedings of NetStore Conference 1999).
  • [8] B. Simo, O. Habala, E. Gatial and L. Hluchy, Leveraging interactivity and MPI for environmental applications. Computing and Informatics 27, 271-284 (2008).
  • [9] See the web page of the Italian Group of Operational Oceanography, http://gnoo.bo.ingv.it
  • [10] M. Vichi, N. Pinardi and S. Masina, A generalized model of pelagic biogeochemistry for the global ocean. Part I: Theory. Jou. Mar. Sys. 64, 89-109, 2007.
  • [11] See the web page of the OGS short term forecasting system of the Mediterranean Marine Ecosystem, http://poseidon.ogs.trieste.it/cgi-bin/opaopech/mersea
  • [12] J. J. Dongarra, S. W. Otto, M. Snir, D. Walker, A message passing standard for MPP and workstations. Communications of the ACM. 39 (7), 84-90 (July 1996).
  • [13] See the web page of the Mediterranean Ocean Observing Network, http://www.moon-oceanforecasting.eu
  • [14] A. Teruzzi, P. Lazzari, S. Salon, A. Crise, C. Solidoro, V. Mosetti, R. Santoleri, S. Colella and G. Volpe, Assessment of predictive skill of an operational forecast for the Mediterranean marine ecosystem: comparison with satellite chlorophyll observations. MERSEA Final Meeting, Paris, 28-30 April 2008.
  • [15] R. L. Graham, G. M. Shipman, B. W. Barrett, R. H. Castain, G. Bosilca and A. Lumsdaine, Open MPI: A High-Performance, Heterogeneous MPI. Proceedings of HeteroPar '06, September 2006, Barcelona, Spain, http://www.open-mpi.org/papers/heteropar-2006/heteropar-2006-paper.pdf
  • [16] MPI: A Message-Passing Interface Standard, Version 2.1. Message Passing Interface Forum, June 23, 2008. http://www.mpi-forum.org/docs/mpi21-report.pdf
  • [17] See the description of the cluster “Cacau” on the web page of HLRS, http://www.hlrs.de/hw-access/platforms/cacau/
  • [18] A. Knüpfer, H. Brunst, J. Doleschal, M. Jurenz, M. Lieber, H. Mickler, M. S. Müller and W. E. Nagel, The Vampir Performance Analysis Tool-Set. Tools for High Performance Computing, Springer 139-156 (2008).
  • [19] R. Riesen, Communication patterns. IEEE, 2006, http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01639567
  • [20] J. Weidendorfer, Sequential Performance Analysis with Callgrind and KCachegrind. Tools for High Performance Computing, Springer 93-114 (2008).
  • [21] J. Seward, N. Nethercote, J. Weidendorfer and the Valgrind Development Team, Valgrind 3.3 – Advanced Debugging and Profiling for GNU/Linux applications. http://www.network-theory.co.uk/valgrind/manual/
  • [22] O. Hartmann, M. Kühnemann, T. Rauber and G. Rünger, Adaptive Selection of Communication Methods to Optimize Collective MPI Operations. Parallel Computing: Current & Future Issues of High-End Computing, Proceedings of the International Conference ParCo 2005, G. R. Joubert, W. E. Nagel, F. J. Peters, O. Plata, P. Tirado and E. Zapata (Editors), John von Neumann Institute for Computing, Jülich, NIC Series 33, 457-464 (2006).
  • [23] J. Pjesivac-Grbovic, T. Angskun, G. Bosilca, G. E. Fagg, E. Gabriel and J. J. Dongarra, Performance Analysis of MPI Collective Operations. Cluster Computing 10 (2), 127-143 (2007).
  • [24] E. Hartnett and R. K. Rew, Experience with an enhanced NetCDF data model and interface for scientific data access. http://www.unidata.ucar.edu/software/netcdf/papers/AMS_2008.pdf
  • [25] R. Rew, E. Hartnett and J. Caron, NetCDF-4: software implementing an enhanced data model for the geosciences. AMS, 2006. http://www.unidata.ucar.edu/software/netcdf/papers/2006-ams.pdf
  • [26] H. Taki and G. Utard, MPI-IO on a Parallel File System for Cluster of Workstations. 1st IEEE Computer Society International Workshop on Cluster Computing (IWCC), p. 150, 1999.
  • [27] F. Hoffmann, Parallel NetCDF. Linux Magazin, July 2004, http://cucis.ece.northwestern.edu/projects/PNETCDF/pnetCDF_linux.html
  • [28] J. Li, W. Liao, A. Choudhary, R. Ross, R. Thakur, W. Gropp, R. Latham, A. Siegel, B. Gallagher and M. Zingale, Parallel netCDF: A High-Performance Scientific I/O Interface. SC2003, Phoenix, Arizona, ACM, 2003.
  • [29] R. Latham, R. Ross and R. Thakur, The Impact of File Systems on MPI-IO Scalability. Preprint ANL/MCS-P1182-0604, June 2004.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-article-BUJ7-0007-0036