
Leveraging Multi-core Processors Through Parallel Programming.

Executive Summary

Swift and inexorable advances in hardware have historically challenged software developers to maximize system capabilities and meet users' soaring expectations.

This is especially true in microprocessor technology, where the state of the art has quickly moved from dual-, tri-, quad-, hexa- and octo-core chips to units with tens or even hundreds of cores. The good news is that processors are going to continue to become more powerful. The flip side is that, at least in the short term, growth will come mostly in directions that do not take most current applications along for their customary free ride.

This white paper offers an outlook for parallel software development, focusing in particular on application development for shared-memory multi-core systems using Ateji® PX, an open-source preprocessor for Java.

There is also the prospect of an exponential rise in trade volumes, which will put further pressure on trading systems to meet the need for accuracy and speed while keeping costs low.

Ateji® PX provides a smooth transition path from today's sequential programming languages to future parallel languages, which are likely to be markedly different and to require new thinking and new techniques for designing programs. Code written in Ateji® PX is both compatible with today's languages and ready for tomorrow's hardware.

Why Parallel Computing?

In 1965, Intel co-founder Gordon Moore observed that the number of transistors that could be placed on an integrated circuit would double approximately every 18 to 24 months. Clock speeds, however, have largely stopped rising, so those additional transistors now appear as additional cores rather than as faster single cores, and software must run in parallel to benefit from them. Today's multi-core and chip multi-threading processor designs are making that level of scalability more easily attainable and affordable.

Parallel Thinking

Flynn's taxonomy is a classification of parallel computer architectures based on the number of concurrent instruction streams (single or multiple) and data streams (single or multiple) available in the architecture.

The first dimension is the number of instruction streams that a particular computer architecture can process at a single point in time; the second is the number of data streams that can be processed at a single point in time. Combining the two yields four classes: SISD (single instruction, single data), SIMD (single instruction, multiple data), MISD (multiple instruction, single data) and MIMD (multiple instruction, multiple data). In this way, any given computing system can be described in terms of how it processes instructions and data.
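
To make the two most widely used categories concrete, the sketch below contrasts SIMD-style data parallelism (one operation applied across many data elements) with MIMD-style task parallelism (independent tasks running at the same time) using standard Java constructs. It is illustrative only; the class, task and data names are hypothetical and not taken from this paper.

```java
import java.util.Arrays;
import java.util.concurrent.CompletableFuture;

public class FlynnSketch {
    public static void main(String[] args) {
        double[] prices = {10.0, 20.0, 30.0};

        // SIMD-style data parallelism: a single instruction stream (apply a 5% markup)
        // over multiple data elements, expressed here as a parallel stream.
        double[] marked = Arrays.stream(prices).parallel()
                                .map(p -> p * 1.05)
                                .toArray();

        // MIMD-style task parallelism: independent instruction streams
        // working on independent data at the same time.
        CompletableFuture<Void> logTask   = CompletableFuture.runAsync(() -> System.out.println("writing log"));
        CompletableFuture<Void> statsTask = CompletableFuture.runAsync(() -> System.out.println("computing stats"));
        CompletableFuture.allOf(logTask, statsTask).join();

        System.out.println(Arrays.toString(marked));
    }
}
```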

What This Means for Developers

The traditional approach to programming (sequential programming) does not take advantage of multi-core systems efficiently. Sequential programming served well when computers had single-core architectures, but on multi-core systems it leaves most of the hardware idle. To fully exploit the capability of multi-core machines, developers need to redesign applications so that the work is expressed as multiple threads of execution that can run on different cores. One way to achieve this is parallel programming, as illustrated in the sketch below.
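
As a rough illustration of that redesign, the following sketch runs two independent pieces of work first sequentially and then as separate threads that the operating system is free to schedule on different cores. The class and task names are hypothetical, and the example uses only plain java.lang.Thread.

```java
public class TwoThreads {
    public static void main(String[] args) throws InterruptedException {
        Runnable renderUi = () -> System.out.println("rendering UI on " + Thread.currentThread().getName());
        Runnable loadData = () -> System.out.println("loading data on " + Thread.currentThread().getName());

        // Sequential version: the second task cannot start until the first finishes,
        // so any additional core sits idle.
        renderUi.run();
        loadData.run();

        // Threaded version: each task becomes its own thread of execution,
        // so the two can genuinely overlap on a multi-core machine.
        Thread t1 = new Thread(renderUi);
        Thread t2 = new Thread(loadData);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```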

What Is Parallel Programming?

Parallel programming [5] is a form of computation in which program instructions are divided among multiple processors (cores, computers) working in combination to solve a single problem, thus running the program in less time.
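
A minimal sketch of this idea, assuming a simple array-summing problem and using the standard java.util.concurrent executor framework rather than Ateji® PX syntax, might look like the following; the class name and the choice of problem are illustrative, not taken from the paper.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        long[] data = new long[10_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        // One worker thread per available core.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Divide the single problem into one chunk of the index range per core.
        int chunk = (data.length + cores - 1) / cores;
        List<Callable<Long>> tasks = new ArrayList<>();
        for (int c = 0; c < cores; c++) {
            final int from = c * chunk;
            final int to = Math.min(data.length, from + chunk);
            tasks.add(() -> {
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                return sum;
            });
        }

        // Run the chunks in parallel and combine the partial results.
        long total = 0;
        for (Future<Long> partial : pool.invokeAll(tasks)) total += partial.get();
        pool.shutdown();

        System.out.println("total = " + total);
    }
}
```

Splitting the work, running the pieces and recombining the results by hand is precisely the manual effort described next.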

Designing and developing parallel programs has typically been a very manual process. The programmer is usually responsible for both identifying and actually implementing parallelism.

Parallel programming presents many pitfalls. Race conditions are the most common and the most difficult multi-threaded programming problems to find and fix. Other potential issues include deadlock and misuse of mutual exclusion. Overhead due to thread synchronization and load balancing can severely impact run-time performance and can be very hard to remedy. As a result, manually developing parallel code is very often a time-consuming, complex, error-prone and iterative process.
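
The sketch below shows the most common of these pitfalls, a race condition on a shared counter, alongside one standard repair using an atomic variable. It is illustrative only; as the paragraph above notes, the atomic update is correct but not free, which is exactly the kind of synchronization overhead that can hurt run-time performance.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.IntStream;

public class RaceConditionDemo {
    static long unsafeCounter = 0;                           // updated with no synchronization
    static final AtomicLong safeCounter = new AtomicLong();  // updated atomically

    public static void main(String[] args) {
        IntStream.range(0, 2_000_000).parallel().forEach(i -> {
            // Data race: the read-modify-write of unsafeCounter++ is not atomic,
            // so concurrent updates can be lost and the final value is usually wrong.
            unsafeCounter++;
            // Correct, but each atomic increment carries synchronization overhead.
            safeCounter.incrementAndGet();
        });

        System.out.println("unsafe counter: " + unsafeCounter);
        System.out.println("safe counter:   " + safeCounter.get());
    }
}
```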
