Article title

Adding parallelism to sequential programs: a combined method

Publication languages
EN
Abstracts
EN
The article outlines a contemporary method for creating software for multi-processor computers. It describes the identification of parallelizable structures in sequential code; three such structures were identified and examined in detail. Algorithms based on static analysis determine whether particular parts of the code can be parallelized, and the accompanying techniques show how, where possible, existing sequential structures can be transformed into parallel programs. The process also includes a dynamic evaluation that assesses the efficiency of the resulting parallel programs. The algorithms have been implemented in C# as a tool for processing sequential programs, and all proposed methods are discussed using a common benchmark.
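
To illustrate the kind of transformation the abstract refers to, below is a minimal sketch (not code from the article) in C#, the language in which the authors' tool is implemented. It shows a sequential loop whose iterations have no cross-iteration dependences being rewritten with the .NET Task Parallel Library's Parallel.For; the array names and the per-iteration work are hypothetical. A static dependence check of the kind cited in the bibliography (e.g. Bernstein's conditions [3]) is what would license such a rewrite; loops whose iterations share state cannot be transformed this way.

```csharp
// Illustrative sketch only, not code from the article.
using System;
using System.Threading.Tasks;

class LoopParallelizationSketch
{
    static void Main()
    {
        const int n = 1_000_000;
        var input = new double[n];
        var output = new double[n];
        for (int i = 0; i < n; i++) input[i] = i;

        // Sequential form: iteration i reads only input[i] and writes only output[i],
        // so there are no cross-iteration data dependences.
        for (int i = 0; i < n; i++)
            output[i] = Math.Sqrt(input[i]) * 2.0;

        // Parallel form: safe because iterations touch disjoint array elements.
        Parallel.For(0, n, i =>
        {
            output[i] = Math.Sqrt(input[i]) * 2.0;
        });

        Console.WriteLine(output[n - 1]);
    }
}
```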
Authors
  • Warsaw University of Technology, Institute of Computer Science, Warsaw, Poland
  • Fayetteville State University, Department of Mathematics and Computer Science, Fayetteville, USA
  • Warsaw University of Technology, Institute of Computer Science, Warsaw, Poland
Bibliography
  • [1] C. Campbell, R. Johnson, A. Miller, and S. Toub, Parallel Programming with Microsoft .NET: Design Patterns for Decomposition and Coordination on Multicore Architectures, 1st ed. Redmond, WA: Microsoft Press, 2010. ISBN: 978-0-7356-5159-3
  • [2] ECMA International, “C# language specification,” 2002. [Online]. Available: https://www.ecma-international.org/publications-and-standards/standards/ecma-334/.
  • [3] A. J. Bernstein, “Analysis of Programs for Parallel Processing,” IEEE Trans. Electron. Comput., vol. EC-15, no. 5, pp. 757-763, Oct. 1966. doi:10.1109/PGEC.1966.264565
  • [4] G. M. Amdahl, “Validity of the single processor approach to achieving large scale computing capabilities,” in Proc. Spring Joint Computer Conference (AFIPS ’67), Atlantic City, NJ, April 18-20, 1967, p. 483. doi:10.1145/1465482.1465560
  • [5] J. L. Gustafson, “Reevaluating Amdahl’s law,” Commun. ACM, vol. 31, no. 5, pp. 532-533, May 1988. doi:10.1145/42411.42415
  • [6] M. Danelutto, J. D. Garcia, L. M. Sanchez, R. Sotomayor, and M. Torquati, “Introducing Parallelism by Using REPARA C++11 Attributes,” in 24th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP), Heraklion, Greece, 17-19 Feb 2016, 2016, pp. 354-358. doi:10.1109/PDP.2016.115
  • [7] B. Chapman, G. Jost, and R. van der Pas, Using OpenMP - Portable Shared Memory Parallel Programming. Cambridge, MA: MIT Press, 2007. ISBN: 978-0-262-53302-7
  • [8] R. Atre, A. Jannesari, and F. Wolf, “Brief Announcement: Meeting the Challenges of Parallelizing Sequential Programs,” in 29th ACM Symposium on Parallelism in Algorithms and Architectures, Washington, DC, 24-26 July 2017, 2017, pp. 363-365. doi:10.1145/3087556.3087592
  • [9] D. Dig, “A Refactoring Approach to Parallelism,” IEEE Softw., vol. 28, no. 1, pp. 17-22, Jan. 2011. doi:10.1109/MS.2011.1
  • [10] Z. Li, “Discovery of Potential Parallelism in Sequential Programs,” PhD thesis, Technische Universität Darmstadt, Department of Computer Science, 2016. [Online]. Available: https://tuprints.ulb.tu-darmstadt.de/5741/7/thesis.pdf
  • [11] F. Allen, M. Burke, R. Cytron, J. Ferrante, and W. Hsieh, “A framework for determining useful parallelism,” in 2nd international conference on Supercomputing - ICS ’88, St. Malo, France, 1 June 1988, 1988, pp. 207-215. doi:10.1145/55364.55385
  • [12] T.-W. Huang, C.-X. Lin, G. Guo, and M. Wong, “Cpp-Taskflow: Fast Task-Based Parallel Programming Using Modern C++,” in 2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS), Rio de Janeiro, Brazil, 20-24 May 2019, 2019, pp. 974-983. doi:10.1109/IPDPS.2019.00105
  • [13] H. Zhong, M. Mehrara, S. Lieberman, and S. Mahlke, “Uncovering hidden loop level parallelism in sequential applications,” in 2008 IEEE 14th International Symposium on High Performance Computer Architecture, Salt Lake City, UT, 16-20 Feb 2008, 2008, pp. 290-301. doi:10.1109/HPCA.2008.4658647
  • [14] A. W. Lim and M. S. Lam, “Maximizing parallelism and minimizing synchronization with affine partitions,” Parallel Comput., vol. 24, no. 3-4, pp. 445-475, May 1998. doi:10.1016/S0167-8191(98)00021-0
  • [15] Z. Li, R. Atre, Z. Huda, A. Jannesari, and F. Wolf, “Unveiling parallelization opportunities in sequential programs,” J. Syst. Softw., vol. 117, pp. 282-295, Jul. 2016. doi:10.1016/j.jss.2016.03.045
  • [16] G. Tagliavini, D. Cesarini, and A. Marongiu, “Unleashing Fine-Grained Parallelism on Embedded Many-Core Accelerators with Lightweight OpenMP Tasking,” IEEE Trans. Parallel Distrib. Syst., vol. 29, no. 9, pp. 2150-2163, Sep. 2018. doi:10.1109/TPDS.2018.2814602
  • [17] S. Rul, H. Vandierendonck, and K. De Bosschere, “A profile-based tool for finding pipeline parallelism in sequential programs,” Parallel Comput., vol. 36, no. 9, pp. 531-551, Sep. 2010. doi:10.1016/j.parco.2010.05.006
  • [18] A. Fonseca, B. Cabral, J. Rafael, and I. Correia, “Automatic Parallelization: Executing Sequential Programs on a Task-Based Parallel Runtime,” Int. J. Parallel Program., vol. 44, no. 6, pp. 1337-1358, Dec. 2016. doi:10.1007/s10766-016-0426-5
  • [19] Z.-H. Du, C.-C. Lim, X.-F. Li, C. Yang, Q. Zhao, and T.-F. Ngai, “A cost-driven compilation framework for speculative parallelization of sequential programs,” ACM SIGPLAN Not., vol. 39, no. 6, pp. 71-81, Jun. 2004. doi:10.1145/996893.996852
  • [20] C. Tian, M. Feng, V. Nagarajan, and R. Gupta, “Copy or Discard execution model for speculative parallelization on multicores,” in 41st IEEE/ACM International Symposium on Microarchitecture, Como, Italy, 08-12 Nov. 2008, 2008, pp. 330-341. doi:10.1109/MICRO.2008.4771802
  • [21] Microsoft, “Thread-Safe Collections.” [Online]. Available: https://learn.microsoft.com/en-us/dotnet/standard/collections/thread-safe/.
  • [22] M. Harman and R. Hierons, “An overview of program slicing,” Softw. Focus, vol. 2, no. 3, pp. 85-92, 2001. doi:10.1002/swf.41
  • [23] Y. Shen, M. Peng, S. Wang, and Q. Wu, “Towards parallelism detection of sequential programs with graph neural network,” Future Gener. Comput. Syst., vol. 125, pp. 515-525, Dec. 2021. doi:10.1016/j.future.2021.07.001
  • [24] OpenAI, “ChatGPT: Optimizing Language Models for Dialogue.” [Online]. Available: https://openai.com/blog/chatgpt/.
  • [25] A. Barredo Arrieta et al., “Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI,” Inf. Fusion, vol. 58, pp. 82-115, Jun. 2020. doi:10.1016/j.inffus.2019.12.012
  • [26] D. B. Czejdo, W. B. Daszczuk, and W. Grześkowiak, “Practical Approach to Introducing Parallelism in Sequential Programs,” in 18th International Conference on Dependability of Computer Systems DepCoS-RELCOMEX, Brunów, Poland, 3-7 July 2023, 2023, pp. 13-27. doi:10.1007/978-3-031-37720-4_2
Notes
1. The record was prepared using funds of the Ministry of Science and Higher Education (MNiSW), agreement no. SONP/SP/546092/2022, under the programme “Social Responsibility of Science” - module: Popularisation of science and promotion of sport (2024).
2. This article is an extended version of the paper presented at the DepCoS-RELCOMEX 2023 conference [26].
YADDA identifier
bwmeta1.element.baztech-6bf5cede-0a62-4524-afb9-2f545b372bf5