MPI_Iallgatherv

Definition

MPI_Iallgatherv is the non-blocking version of MPI_Allgatherv; it collects data from all processes in a given communicator and stores the collected data in the receive buffer of each process, allowing the messages received to have different lengths and to be stored at arbitrary locations in the receive buffer. Unlike MPI_Allgatherv, however, it does not wait for the collection to complete and returns immediately instead. The user must therefore check for completion with MPI_Wait or MPI_Test before the buffers passed can be safely reused. MPI_Iallgatherv is a collective operation; all processes in the communicator must invoke this routine. Other variants of MPI_Iallgatherv are MPI_Igather, MPI_Igatherv and MPI_Iallgather. Refer to MPI_Allgatherv to see the blocking counterpart of MPI_Iallgatherv.
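
As noted above, completion can be checked with MPI_Wait, as in the example below, or with MPI_Test. A minimal sketch of the MPI_Test polling pattern, assuming an MPI_Iallgatherv has already been issued and its handle stored in a variable named request:

// Sketch: poll for completion with MPI_Test while overlapping other work.
// Assumes "request" holds the handle filled in by MPI_Iallgatherv.
int completed = 0;
while(!completed)
{
    MPI_Test(&request, &completed, MPI_STATUS_IGNORE);
    if(!completed)
    {
        // The gather is still in progress; do other useful work here.
    }
}
// The buffers passed to MPI_Iallgatherv can now be reused safely.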

int MPI_Iallgatherv(const void* buffer_send,
                    int count_send,
                    MPI_Datatype datatype_send,
                    void* buffer_recv,
                    const int* count_recv,
                    const int* displacements,
                    MPI_Datatype datatype_recv,
                    MPI_Comm communicator,
                    MPI_Request* request);

Parameters

buffer_send
The buffer containing the data to send.
count_send
The number of elements in the send buffer.
datatype_send
The type of one send buffer element.
buffer_recv
The buffer in which to store the gathered data. Unlike MPI_Igatherv, MPI_Iallgatherv has no root: every process receives the full gathered data, so the receive parameters like this one are significant on all processes.
count_recv
An array containing the number of elements in the message to receive from each process, not the total number of elements to receive from all processes altogether.
displacements
An array containing the displacement at which to store, in the receive buffer, the message received from each process. Displacements are expressed in number of elements, not bytes; a sketch showing how to derive them from the receive counts follows this parameter list.
datatype_recv
The type of one receive buffer element.
communicator
The communicator in which the gather takes place.
request
The variable in which to store the handle on the non-blocking operation.
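
For the common case where the gathered messages are packed back to back in the receive buffer, the displacements can be derived from the receive counts as an exclusive prefix sum. A minimal sketch with illustrative values (the example below instead uses arbitrary displacements to leave gaps):

// Sketch: contiguous packing; each displacement is the sum of the receive
// counts of all lower-ranked processes, expressed in elements, not bytes.
int counts[3] = {1, 1, 2};
int displacements[3];
displacements[0] = 0;
for(int i = 1; i < 3; i++)
{
    displacements[i] = displacements[i - 1] + counts[i - 1];
}
// Result: displacements = {0, 1, 2}; the message from process i starts at
// element displacements[i] of the receive buffer.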

Returned value

MPI_SUCCESS
The routine successfully completed.
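
Note that, by default, errors on a communicator are fatal (MPI_ERRORS_ARE_FATAL), so this return code is only observable after changing the error handler. A minimal sketch, reusing the parameter names from the signature above:

// Sketch: make MPI return error codes instead of aborting, then check the
// code returned by MPI_Iallgatherv. Parameter names are assumed declared.
MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
int error = MPI_Iallgatherv(buffer_send, count_send, datatype_send,
                            buffer_recv, count_recv, displacements,
                            datatype_recv, MPI_COMM_WORLD, &request);
if(error != MPI_SUCCESS)
{
    // Handle the failure; the gather may not have been started.
}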

Example

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/**
 * @brief Illustrates how to use the non-blocking variable-count version of an
 * all gather (MPI_Iallgatherv).
 * @details This application is meant to be run with 3 MPI processes. Every MPI
 * process begins with one or two values; each process gathers all these values
 * and moves on to another job while the gather progresses. Once the gather
 * completes, every process prints the data collected. The example is designed
 * to cover all cases:
 * - Different displacements
 * - Different receive counts
 * It can be visualised as follows:
 *
 * +-----------+ +-----------+ +-------------------+ 
 * | Process 0 | | Process 1 | |     Process 2     |
 * +-+-------+-+ +-+-------+-+ +-+-------+-------+-+
 *   | Value |     | Value |     | Value | Value |
 *   |  100  |     |  101  |     |  102  |  103  |
 *   +-------+     +-------+     +-------+-------+
 *      |                |            |     |
 *      |                |            |     |
 *      |                |            |     |
 *      |                |            |     |
 *      |                |            |     |
 *      |                |            |     |
 *   +-----+-----+-----+-----+-----+-----+-----+
 *   | 100 |  0  |  0  | 101 |  0  | 102 | 103 |
 *   +-----+-----+-----+-----+-----+-----+-----+
 *   |              Every process              |
 *   +-----------------------------------------+
 **/
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    // Get number of processes and check only 3 processes are used
    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if(size != 3)
    {
        printf("This application is meant to be run with 3 processes.\n");
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }

    // Get my rank
    int my_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    // Define the receive counts
    int counts[3] = {1, 1, 2};

    // Define the displacements
    int displacements[3] = {0, 3, 5};

    // Buffer in which to receive the data collected
    int buffer[7] = {0};

    // Buffer containing our data to send
    int* my_values;
    int my_values_count;

    switch(my_rank)
    {
        case 0:
        {
            // Define my values
            my_values_count = 1;
            my_values = (int*)malloc(sizeof(int) * my_values_count);
            *my_values = 100;
            printf("Value sent by process %d: %d.\n", my_rank, *my_values);
            break;
        }
        case 1:
        {
            // Define my values
            my_values_count = 1;
            my_values = (int*)malloc(sizeof(int) * my_values_count);
            *my_values = 101;
            printf("Value sent by process %d: %d.\n", my_rank, *my_values);
            break;
        }
        case 2:
        {
            // Define my values
            my_values_count = 2;
            my_values = (int*)malloc(sizeof(int) * my_values_count);
            my_values[0] = 102;
            my_values[1] = 103;
            printf("Values sent by process %d: %d and %d.\n", my_rank, my_values[0], my_values[1]);
            break;
        }
    }

    MPI_Request request;
    MPI_Iallgatherv(my_values, my_values_count, MPI_INT, buffer, counts, displacements, MPI_INT, MPI_COMM_WORLD, &request);

    // Do another job while the non-blocking variable-count all gather progresses
    // ...

    // Wait for the completion
    MPI_Wait(&request, MPI_STATUS_IGNORE);
    printf("Values gathered in the buffer on process %d:", my_rank);
    for(int i = 0; i < 7; i++)
    {
        printf(" %d", buffer[i]);
    }
    printf("\n");
    free(my_values);

    MPI_Finalize();

    return EXIT_SUCCESS;
}
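
With a typical MPI installation, the example above can be compiled and run as follows; the wrapper and launcher names, and the source file name, are illustrative and may differ between implementations:

mpicc iallgatherv.c -o iallgatherv
mpirun -n 3 ./iallgatherv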