Article title
Content
Full texts:
Identifiers
Title variants
Publication languages
Abstracts
The paper presents a hybrid MPI+OpenMP (Message Passing Interface / Open Multi-Processing) algorithm used for parallel programs based on the high-order compact method. The main tools used to implement parallelism in the computations are OpenMP and MPI, which differ in the memory model on which they are based: OpenMP works on shared memory and MPI on distributed memory, whereas the hybrid model combines the two. The tests performed and described in this paper show the significant advantages provided by the combined MPI/OpenMP approach. The test computations needed to verify the capabilities of MPI, OpenMP and the hybrid of both tools were carried out using the academic high-order SAILOR solver. The obtained results seem very promising for accelerating simulations of fluid flows as well as for applications using high-order methods.
Keywords
Publisher
Year
Volume
Pages
179--193
Physical description
Bibliography: 24 items, figures, tables.
Creators
author
- Czestochowa University of Technology, Faculty of Mechanical Engineering and Computer Science, Armii Krajowej 21, 42-201 Czestochowa, Poland
author
- Czestochowa University of Technology, Faculty of Mechanical Engineering and Computer Science, Armii Krajowej 21, 42-201 Czestochowa, Poland
author
- Czestochowa University of Technology, Faculty of Mechanical Engineering and Computer Science, Armii Krajowej 21, 42-201 Czestochowa, Poland
author
- Czestochowa University of Technology, Faculty of Mechanical Engineering and Computer Science, Armii Krajowej 21, 42-201 Czestochowa, Poland
Bibliography
- [1] Wang Z, Fidkowski K, Abgrall R, Bassi F, Caraeni D, Cary A, Deconinck H, Hartmann R, Hillewaert K, Huynh H, Kroll N, May G, Persson P, Leer B and Visbal M 2013 Int. J. Numer. Methods Fluids 72 811
- [2] Lele S 1992 J. Comput. Phys. 103 16
- [3] Hockney R 1965 J. ACM 12 16
- [4] Wang H 1981 ACM Trans. Math. Softw. 7 170
- [5] Mattor N, Williams T and Hewett D 1995 Parallel Comput. 21 1769
- [6] Sun H, Zhang H and Ni L 1992 IEEE Trans. Comput. 41 286
- [7] Belov P, Nugumanov E and Yakovlev S 2015 Preprint, arXiv:1505.06864
- [8] Stone H 1975 ACM Trans. Math. Softw. 1 289
- [9] Rao S C S 2008 Parallel Comput. 34 177
- [10] Qin J and Nguyen D 1998 Adv. Eng. Softw. 29 395
- [11] Afzal A, Ansari Z, Faizabadi A and Ramis M 2017 Arch. Comput. Methods Eng. 24 337
- [12] Dagum L and Menon R 1998 IEEE Comput. Sci. Eng. 5 46
- [13] Gropp W, Lusk E and Skjellum A 1999 Using MPI: Portable Parallel Programming with the Message-Passing Interface 2nd edition, MIT Press, Cambridge
- [14] Rabenseifner R, Hager G and Jost G 2009 Proc. Euromicro Int. Conf. Parallel Distrib. Netw. Based Process. 17 427
- [15] Amritkar A, Deb S and Tafti D 2014 J. Comput. Phys. 256 501
- [16] Mavriplis D 2002 Int. J. High Perform. Comput. Appl. 16 395
- [17] Maknickas A, Kačeniauskas A, Kačianauskas R, Balevičius R and Džiugys A 2006 Informatica 17 207
- [18] Jia R and Sundén B 2004 Comput. Fluids 33 57
- [19] Mininni P, Rosenberg D, Reddy R and Pouquet A 2011 Parallel Comput. 37 316
- [20] Gropp W, Kaushik D, Keyes D and Smith B 2001 Parallel Comput. 27 337
- [21] Wawrzak K, Boguslawski A and Tyliszczak A 2015 Flow Turbul. Combust. 95 437
- [22] Aniszewski W, Boguslawski A, Marek M and Tyliszczak A 2012 J. Comput. Phys. 231 7368
- [23] Wawrzak A and Tyliszczak A 2017 Arch. Mech. 69 157
- [24] Tyliszczak A 2014 J. Comput. Phys. 276 438
Notes
PL
Record created under agreement 509/P-DUN/2018 from the funds of the Ministry of Science and Higher Education (MNiSW) allocated to science-dissemination activities (2019).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-274053bb-1d96-4fc3-aab1-87dc8dbd5023