

Architecture Independent Parallel Design Tool: A Refinement-Based Methodology for the Design and Development of Parallel Software

July 1996


A Design Methodology for Data-Parallel Applications

by


Lars Nyland, Jan Prins
Dept. of Computer Science
Univ. of North Carolina
Chapel Hill, NC 27599

Allen Goldberg
Kestrel Institute
3260 Hillview Av.
Palo Alto, CA 95060

Peter Mills, John Reif, Robert Wagner
Dept. of Computer Science
Duke University
Durham, NC 27708



Report for Task II
CDRL A005

for contract #F30602-94-C-0037

UNC Technical Report TR96-032

Abstract
1. Introduction
1.1 Overview of the Design Methodology
1.2 Taxonomy of Parallel Architectures
1.3 Demonstration Problem
1.4 Tools and System Support
2. Methodology
2.1 Details of the Design Methodology
2.2 Translation of Prototypes to a Lower Level Language
2.3 Managing the Results of the Design Process
3. Comparison of the Proposed Methodology to Existing Parallel Software Development Tools
3.1 Design Tool Support for Parallel Software
3.2 Post-Development Support of Parallel Software: Analysis Tools
4. Case Studies: Multi-target Tracking
4.1 The Multi-target Tracking Problem
4.2 Undocumented Characteristics about Parallelism in MTT Studies
4.3 What Support Exists (Templates, Libraries, Pre-Developed Software)?
4.4 Development Hierarchy to Solve Multi-target Tracking Problems
4.5 Further Data-parallel Refinements of the JPDA Recursive Matrix Permanent Formulation
4.6 The Proteus Programming Language, Briefly Explained
4.7 The JPDA, as Described by Games et al. of MITRE and Bar-Shalom
4.8 The ZB-JPDAF, as Described by Zhou & Bose
4.9 A Message-passing Study of the JPDA
4.10 Conclusion of MTT Case Studies
5. Tools to Support Parallel Software Development
5.1 Powerful Parallel Programming Prototyping Language
5.2 Repository Version Manager
5.3 Program Transformation Tools
5.4 Multi-lingual Programs
5.5 Information Gathering Tools
5.6 Tools for Equational Verification and Refinement
5.7 Portable Compilation Targets
6. Conclusions
7. References

Abstract

Data-parallelism is a relatively well-understood form of parallel computation, yet developing even simple applications can involve substantial effort to express the problem in low-level data-parallel notations. We describe a software development process for data-parallel applications that starts from high-level specifications and generates repeated refinements of designs to match different architectural models and performance constraints, supporting the development activity with cost-benefit analysis. The primary issues are algorithm choice, correctness, and efficiency, followed by data decomposition, load balancing, and message-passing coordination. The development of a data-parallel multi-target tracking application is used as a case study, showing the progression from high-level to low-level refinements. We conclude by describing tool support for the process.

1. Introduction

Data-parallelism can be generally defined as the concurrent application of an arbitrary function to all items in a collection of data, yielding a degree of parallelism that typically scales with problem size. This definition permits many computationally intensive problems to be expressed in a data-parallel fashion, but it is far more general than the data-parallel constructs found in typical parallel programming notations. As a result, the development of a data-parallel application can involve substantial effort to recast the problem to meet the limitations of the programming notation and target architecture. From a methodological point of view, the problems faced in developing data-parallel applications are:

Target Architecture. Different target parallel architectures may require substantially different algorithms to achieve good performance, hence the target architecture has an early and pervasive effect on application development.

Multiplicity Of Target Architectures. For the same reasons just cited, the frequent requirement that an application must operate on a variety of different architectures (parallel or sequential) substantially complicates the development.

Changes in Problem Specification or Target Architecture(s). Changes in problem specification and/or target environment must be accommodated in a systematic fashion, because either has a large impact on a parallel application.

By far, the most important reason for developing data-parallel applications is the potential to increase performance by scaling the parallelism to a larger number of processors. Even in the face of longer development times, the payoff of such increased performance is often worth the effort. For problems that are extremely time-consuming, the time taken to achieve results can be reduced by factors of tens or hundreds with sufficient processors (week-long weather simulations take an afternoon). Real-time applications that are not possible on sequential computers become achievable with data-parallel hardware (for instance, the 33-millisecond cycle of video signals represents a hard deadline). Other reasons for using parallel processing [Prins and Goldberg 1995], such as fault tolerance or multi-function computing, do not drive data-parallel development, due to the paradigm shift necessary to express such applications.

Data-parallel programs may themselves be part of a larger process-parallel context in which their execution and mutual interactions are managed. Thus problems that can be decomposed into a number of successive data-parallel stages can be arranged into a pipeline to process successive problem instances concurrently. In such a context, latency (the time between presentation of the input and delivery of the output for a problem instance) and throughput (the total rate at which problem instances are solved) are the important measures. For example, in video image processing or tracking, the throughput determines whether the system operates in real-time, and the latency determines the freshness of the resulting analysis. The interaction of data-parallel design with process-parallel design is the subject of a future study.
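As a brief illustration of these two measures (the stage count and per-stage times below are invented for the example, not taken from the report), the latency of a pipeline of data-parallel stages is the sum of the stage times, while its steady-state throughput is limited by the slowest stage:

```python
# Toy model of a pipeline of data-parallel stages (illustrative values only).
# Latency: time from input to output for one problem instance.
# Throughput: instances completed per second once the pipeline is full,
# which is bounded by the slowest stage.

stage_times = [0.010, 0.025, 0.015]  # hypothetical per-stage times in seconds

latency = sum(stage_times)           # ~0.05 s for one instance end to end
slowest = max(stage_times)
throughput = 1.0 / slowest           # instances per second at steady state

print(latency)
print(throughput)
```

Note that adding pipeline stages can raise throughput (by shortening the slowest stage) while leaving latency unchanged or even increasing it, which is why the two measures must be tracked separately.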

A refinement methodology is proposed that generates a tree of data-parallel applications whose level of development effort varies with the parallel architecture(s) targeted and the required level of performance. The methodology explicates activities whose time and expense can be regulated via a cost-benefit analysis. The main features of the methodology are:

High-Level Design Capture, where an executable description of the design can be evaluated for its complexity and scalability.

A Data-Parallel Design Notation that supports a fully generalized definition of data-parallelism and eliminates dependence on specific architectures.

Analysis and Refinement of the design based on fundamental considerations of complexity, communication, locality, and target architecture.

Prototyping. Disciplined experimentation with the design at various levels of abstraction to gain information used to further direct the refinement process.

1.1 Overview of the Design Methodology

1.1.1 High-level Expression

In current practice, parallel architectures typically influence software design early and heavily, and in doing so, overconstrain it. Since parallel architectures are rapidly evolving, architecture-specific software has a limited lifetime. Alternatively, architecture-independent environments allow the exploration of families of designs without the constraints imposed by specific architectures. Once developed without constraint, designs can be gradually refined toward classes of architectures (such as vector or message-passing), and can eventually be transformed into architecture-specific instances. Then, as architectures change and newer machines become faster, the original higher-level designs and prototypes remain useful as a basis for developing new versions. Thus central to this methodology is the high-level, architecture-independent expression of the design in a design notation.

Many defense-critical computational problems, such as sensor processing, filtering, data fusion, track association, signal processing, and fluid flow, are naturally cast as data-parallel computations. Solutions to these problems use complex, sophisticated algorithms that are subject to improvements and refinements. Thus not only is the target architecture undergoing rapid and drastic changes, but so are the algorithms. By starting the design activity at a high level, many variations and alternative algorithms can be explored, and new algorithms considered. This exploration is not haphazard or aimless, but directed: it answers questions prescribed by the methodology, which then focus subsequent design and development.

The effect of architecture dependence in current practice is readily apparent; one relevant example is the Rome Laboratory-sponsored sensor-processing case studies report from MITRE [Games, et al. 1994], where the authors comment on one of their designs: "... partitioning for the different MPL versions were all done 'by hand'...". This indicates not only the use of an architecture-specific language but also a choice of manual data decomposition specific to the hardware chosen. While their work was no doubt initially guided by high-level dataflow analysis, the fact that they experimented with partitioning 'by hand' in a single-architecture programming language to achieve good results is a recent demonstration of the pervasive effect architecture has on software design.

Parallel architectures are rapidly evolving, and their limited lifetimes further reduce the applicability of architecture-dependent parallel software. Again, this is shown in the MITRE report by their choice of programming languages: MPL for programming the MasPar MP1 and C* for programming the Thinking Machines CM-2. The machines on which these languages are supported, while still available at some computation centers, are no longer preferred, due to the difficulty of programming them for only modest performance gains over RISC-based workstations. The specific programming models used are of diminishing importance, as they are confined to the two architectures on which they run. The Sisal implementation described in the MITRE report, however, is not bound to a particular architecture and can run on a variety of machines available today, with performance comparable to the highest-performance programming models available.

The proposed methodology eliminates dependence on specific parallel architectures, putting great emphasis on high-level design where there is much greater leverage. Much better performance can be obtained by good algorithm choice and refinement than by low-level optimization of a poor algorithm.

1.1.2 Data Parallel Design Notations

There is a general consensus on concise high-level notations for data-parallel problems. These notations are expression-oriented, as exemplified by Backus's FP [Backus 1978] and the Bird-Meertens notation, and posit an algebra of predefined data-parallel operations. Proteus, Nesl, Sisal [Blelloch 1996, Cann 1993, Levin and Nyland 1993] and other programming languages support this style by providing a data-parallel constructor. These operations act directly on data aggregates such as sets or sequences. Example operations are map and reduce. Map takes an aggregate (e.g., a sequence) and a function, and creates a new sequence computed by applying the function to each element of the initial sequence. An example is [x in V: -x], a Proteus expression that negates every element of a sequence V. Reduce takes an aggregate whose elements have a particular base type and a function that combines two elements of that type, producing a value of the same type. An example is +/V, which sums the elements of V. In addition to reduction with standard operators, user-defined combining functions may also be used.
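The Proteus expressions above are not directly executable here, but map and reduce have close analogues in most languages; as a rough Python sketch (an illustrative stand-in, not the notation used in this report):

```python
from functools import reduce

V = [3, -1, 4, 1, 5]

# Map: apply a function to every element of a sequence.
# Proteus: [x in V: -x]
negated = [-x for x in V]

# Reduce: combine all elements with a binary operator.
# Proteus: +/V
total = reduce(lambda a, b: a + b, V)

# Reduction with a user-defined combining function,
# here taking the pairwise maximum.
largest = reduce(lambda a, b: a if a > b else b, V)

print(negated)  # [-3, 1, -4, -1, -5]
print(total)    # 12
print(largest)  # 5
```

Each of these operations is data-parallel in the sense defined above: the function applications in the map are independent, and the reduction can be combined in a tree of concurrent steps when the operator is associative.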

There are two variants of data-parallel notations: one uses iterators that introduce bound variables ranging over an index set (e.g., a set former); in the other, index sets are implicit. This paper uses the iterator notation of the Proteus language (described in section 4.6). Parallelism is implicit in this notation; it has simple semantics and can be refined by algebraic methods.

For all but the most trivial problems, specifications will employ nested parallel data structures. That is, the elements of aggregate data structures are themselves aggregates, and sequence operations are applied to sequence values within larger sequences to compute a result. By creating nested sequence expressions, large amounts of work are specified with few or no ordering restrictions. Algorithms operating on such data structures are called nested data-parallel (NDP) algorithms.
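A minimal Python sketch of a nested data-parallel expression (again a stand-in for the notation, with invented data): the outer map ranges over subsequences, and each iteration performs a reduction over an inner sequence. All of the inner reductions are independent, so both levels expose parallelism:

```python
# Nested data parallelism: a map whose body is itself a reduction.
# Each inner sum is independent of the others and could run concurrently,
# as could the additions within each sum.
rows = [[1, 2, 3], [4, 5], [6]]

row_sums = [sum(row) for row in rows]  # map of reductions
grand_total = sum(row_sums)            # reduction over the map's result

print(row_sums)     # [6, 9, 6]
print(grand_total)  # 21
```

Note that the inner sequences need not have equal lengths; this irregularity is exactly what flat array notations handle awkwardly.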

In a refinement/translation context, lower-level notations introduce other aggregate abstractions. For example, the aggregate data structures of HPF, C* and MPL are arrays, shapes, and plural variables, respectively, where additional bookkeeping data is required to provide the functionality of the higher-level sets and sequences. Lower-level languages also allow further refinement choices to be made explicit, for example communication and processor scheduling.
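One common refinement of this kind, sketched here in Python under our own naming (the report does not prescribe this particular representation), flattens a nested sequence into a flat data vector plus a segment-length descriptor, the "additional bookkeeping data" mentioned above; a segmented reduction then recovers the per-subsequence results:

```python
# Flatten a nested sequence into (values, lengths): the kind of bookkeeping
# a lower-level target needs in place of true nested sequences.
nested = [[1, 2, 3], [4, 5], [6]]

values = [x for seg in nested for x in seg]   # flat data vector
lengths = [len(seg) for seg in nested]        # segment descriptor

def segmented_sum(values, lengths):
    """Reduce each segment of the flat vector independently."""
    sums, start = [], 0
    for n in lengths:
        sums.append(sum(values[start:start + n]))
        start += n
    return sums

print(values)                          # [1, 2, 3, 4, 5, 6]
print(lengths)                         # [3, 2, 1]
print(segmented_sum(values, lengths))  # [6, 9, 6]
```

The flat vector maps naturally onto the arrays of HPF or the plural variables of MPL, while the descriptor carries the structure that the higher-level sequence type provided implicitly.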

1.1.3 Prototyping

Our methodology is based on the spiral model of software development with prototyping, a model favored by the software engineering community. During the risk-analysis phase, development relies on concise, high-level, executable notations for data-parallel problems, such as FP, Sisal, Nesl and Proteus. The next phase is migration and integration (perhaps automatic) to achieve an efficient application that can be tested and deployed. As new requirements or target architectures are introduced, the development repeats the cycle.

Information discovered in later stages of the design process often leads to revision of previous design choices or even modification of requirements. Prototyping is an effective means of gathering such information early in the design process, thus limiting the impact of changes. The design notation for data-parallel computation discussed above is succinct and executable, making prototyping an attractive part of the methodology.

