The implementation of DSLs using compiler-based tools is difficult, complicated, and usually requires a significant learning curve. This is even harder for those who are not familiar with compiler technology. Therefore, our motivation is to simplify this path for other researchers (experts in their domain) with support tools (our tool is CINCLE) to create high-level and productive DSLs through powerful and aggressive source-to-source transformations. In fact, parallel programmers can use their expertise without having to design and implement low-level code. The main goal of this thesis was to create a DSL and support tools for high-level stream parallelism in the context of a programming framework that is compiler-based and domain-oriented. SPar supports the software developer with productivity, performance, and code portability, while CINCLE provides sufficient support to generate new DSLs. Also, SPar targets source-to-source transformation, producing parallel pattern code built on top of FastFlow and MPI. Finally, we provide a full set of experiments showing that SPar provides better coding productivity without significant performance degradation in multi-core systems, as well as transformation rules that are able to achieve code portability (for cluster architectures) through its generalized attributes.

Multiple dispatch – the selection of a function to be invoked based on the dynamic type of two or more arguments – is a solution to several classical problems in object-oriented programming. Open multi-methods generalize multiple dispatch towards open-class extensions, which improve separation of concerns and provisions for retroactive design. We present the rationale, design, implementation, performance, programming guidelines, and experiences of working with a language feature, called open multi-methods, for C++. Our open multi-methods support both repeated and virtual inheritance.
Stream-based systems are representative of several application domains, including video, audio, networking, and graphic processing. Stream programs may run on different kinds of parallel architectures (desktops, servers, cell phones, and supercomputers) and represent significant workloads on our current computing systems. Nevertheless, most of them are still not parallelized. Moreover, when new software has to be developed, programmers often face a trade-off between coding productivity, code portability, and performance. To solve this problem, we provide a new Domain-Specific Language (DSL) that naturally/on-the-fly captures and represents parallelism for stream-based applications. The aim is to offer a set of attributes (through annotations) that preserves the program's source code and is not architecture-dependent for annotating parallelism. We used the C++ attribute mechanism to design a "de-facto" standard C++ embedded DSL named SPar.

Automatic introduction of OpenMP for sequential applications has attracted significant attention recently because of the proliferation of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has focused only on C and Fortran applications operating on primitive data types. Modern applications using high-level abstractions, such as C++ STL containers and complex user-defined class types, are largely ignored due to the lack of research compilers that are readily able to recognize high-level object-oriented abstractions and leverage their associated semantics. In this paper, we use a source-to-source compiler infrastructure, ROSE, to explore compiler techniques to recognize high-level abstractions and to exploit their semantics for automatic parallelization. Several representative parallelization candidate kernels are used to study semantic-aware parallelization strategies for high-level abstractions, combined with extended compiler analyses. Preliminary results have shown that semantics of abstractions can help extend the applicability of automatic parallelization to modern applications and expose more opportunities to take advantage of multicore processors.