message passing interface (MPI)

By Alexander S. Gillis

What is the message passing interface (MPI)?

The message passing interface (MPI) is a standardized means of exchanging messages between multiple computers running a parallel program across distributed memory.

In parallel computing, multiple computers, or even multiple processor cores within the same computer, are called nodes. Each node in the parallel arrangement typically works on a portion of the overall computing problem. The challenge is to synchronize the actions of each parallel node, exchange data between nodes and provide command and control over the entire parallel cluster. The message passing interface defines a standard suite of functions for these tasks. The term message passing itself typically refers to sending a message to an object, parallel process, subroutine, function or thread, which is then used to start another process.
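To make nodes and their coordination concrete, the following is a minimal sketch of an MPI program in C, assuming a standard MPI installation. Each parallel process discovers its own identifier (rank) and the total process count, then reports in:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                      /* start the MPI environment */

    int world_size, world_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);  /* total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);  /* this process's ID, or rank */

    printf("Process %d of %d reporting\n", world_rank, world_size);

    MPI_Finalize();                              /* shut down MPI cleanly */
    return 0;
}

With a typical implementation, this would be compiled with the mpicc wrapper and launched with a command such as mpirun -np 4 ./hello, which starts four copies of the program as separate processes.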

MPI isn't endorsed as an official standard by any standards organization, such as the Institute of Electrical and Electronics Engineers (IEEE) or the International Organization for Standardization (ISO), but it's generally considered the industry standard, and it forms the basis for most communication interfaces adopted by parallel computing programmers. Several implementations of the standard exist, including the open source MPICH and Open MPI.

MPI defines the syntax and semantics of a library of portable message passing routines, with official bindings for C and Fortran; earlier versions of the standard also defined C++ bindings, and third-party bindings exist for languages such as Java.

Benefits of the message passing interface

The message passing interface provides the following benefits:

- Standardization. MPI has displaced earlier message passing libraries and is the generally accepted model that parallel computing programmers code against.
- Portability. MPI has been implemented for most distributed memory architectures, so applications need little or no source code change to move between platforms that support the standard.
- Performance. Each implementation can be optimized for the hardware it runs on, such as a specific network interconnect.
- Functionality. The standard defines hundreds of routines covering point-to-point messaging, collective operations, process groups and more.

Some organizations are also able to offload MPI processing to dedicated network hardware to make their programming models and libraries faster.

MPI terminology: Key concepts and commands

The following list includes some basic key MPI concepts and commands, with a short example after the list showing how they fit together:

- Communicator. A communicator is an object that groups a set of MPI processes so they can communicate with one another; MPI_COMM_WORLD is the predefined communicator containing every process in the job.
- Rank. Within a communicator, each process has a unique integer ID called its rank, which is used to address messages to it.
- MPI_Init and MPI_Finalize. These calls start and shut down the MPI environment and must bracket all other MPI calls.
- MPI_Comm_size and MPI_Comm_rank. These return the number of processes in a communicator and the calling process's rank within it.
- MPI_Send and MPI_Recv. These are the basic blocking point-to-point operations for sending a message from one process and receiving it on another.
- Collective operations. Routines such as MPI_Bcast (broadcast from one process to all others), MPI_Reduce (combine values from all processes into one result) and MPI_Barrier (synchronize all processes) involve every process in a communicator.
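As a sketch of how these commands fit together, again assuming a standard MPI installation, the following C program has the process with rank 0 send an integer to the process with rank 1 using the blocking point-to-point calls; the tag value 0 and the payload are arbitrary choices for the example:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int payload = 42;  /* arbitrary example value */
        /* send one int to the process with rank 1, using message tag 0 */
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        /* receive one int from rank 0 with the matching tag 0 */
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", payload);
    }

    MPI_Finalize();
    return 0;
}

The program must be launched with at least two processes, for example mpirun -np 2 ./send_recv, or the message to rank 1 has no destination.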

History and versions of MPI

A small group of researchers in Austria began discussing the concept of a message passing interface in 1991. A Workshop on Standards for Message Passing in a Distributed Memory Environment, sponsored by the Center for Research on Parallel Computation, was held a year later in Williamsburg, Va. There, a working group was established to carry out the standardization process.

In November 1992, a draft of MPI-1 was created, and in 1993 the standard was presented at the Supercomputing '93 conference. With additional feedback and changes, MPI version 1.0 was released in 1994. Since then, MPI has been open to all members of the high-performance computing community, including more than 40 participating organizations.

The older MPI 1.3 standard, dubbed MPI-1, provides over 115 functions. The later MPI 2.2 standard, or MPI-2, offers over 500 functions and is largely backward compatible with MPI-1. However, not all MPI libraries provide a full implementation of MPI-2, which added parallel I/O, dynamic process management and remote memory operations. The MPI-3 standard, released in 2012, improves scalability, enhances performance, includes multicore and cluster support, and interoperates with more applications. In 2021, the MPI Forum released MPI 4.0, which introduced partitioned communications, a new tool interface, persistent collectives and other additions. MPI 5.0 is currently under development.
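To illustrate one of these later additions, the following hedged C sketch uses MPI-3's nonblocking broadcast, MPI_Ibcast, which returns immediately so a process can overlap other work with the communication before waiting for it to complete; the payload value is arbitrary:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = (rank == 0) ? 123 : 0;  /* example payload, set only on the root */
    MPI_Request request;

    /* MPI-3 nonblocking broadcast from rank 0; the call returns immediately */
    MPI_Ibcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD, &request);

    /* ... a real program could do unrelated computation here ... */

    MPI_Wait(&request, MPI_STATUS_IGNORE);  /* block until the broadcast completes */
    printf("Rank %d now has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}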

29 Jul 2022
