Collectives


MPI_Gatherv

Definition

MPI_Gatherv is a variant of MPI_Gather; it collects data from all processes in a given communicator and concatenates it in the receive buffer of the specified root process. Unlike MPI_Gather, however, MPI_Gatherv allows the messages received to have different lengths and to be stored at arbitrary locations in the root process's buffer. MPI_Gatherv is a collective operation; all processes in the communicator must invoke this routine. Other variants of MPI_Gatherv are MPI_Gather, MPI_Allgather and MPI_Allgatherv. Refer to MPI_Igatherv for the non-blocking counterpart of MPI_Gatherv.


int MPI_Gatherv(const void* buffer_send,
                int count_send,
                MPI_Datatype datatype_send,
                void* buffer_recv,
                const int* counts_recv,
                const int* displacements,
                MPI_Datatype datatype_recv,
                int root,
                MPI_Comm communicator);

Parameters

buffer_send

The buffer containing the data to send. The “in place” option for intra-communicators is specified by passing MPI_IN_PLACE as the value of buffer_send at the root. In such a case, count_send and datatype_send are ignored, and the contribution of the root to the gathered vector is assumed to be already in the correct place in the receive buffer.
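
As a minimal sketch of the in-place option (variable names such as buffer_recv, counts_recv, my_values and my_count are illustrative, and the root's own elements are assumed to already sit at its displacement in the receive buffer):

// Sketch of the MPI_IN_PLACE option; variable names are illustrative.
if(my_rank == root_rank)
{
    // count_send and datatype_send are ignored when MPI_IN_PLACE is used.
    MPI_Gatherv(MPI_IN_PLACE, 0, MPI_INT,
                buffer_recv, counts_recv, displacements, MPI_INT,
                root_rank, MPI_COMM_WORLD);
}
else
{
    // Non-root processes send their data as usual.
    MPI_Gatherv(my_values, my_count, MPI_INT,
                NULL, NULL, NULL, MPI_INT,
                root_rank, MPI_COMM_WORLD);
}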

count_send

The number of elements in the send buffer.

datatype_send

The type of one send buffer element.

buffer_recv

The buffer in which to store the gathered data on the root process. For non-root processes, the receiving parameters like this one are ignored.

counts_recv

An array containing the number of elements in the message to receive from each process, not the total number of elements to receive from all processes altogether. For non-root processes, the receiving parameters like this one are ignored.

displacements

An array containing the displacement to apply to the message received by each process. Displacements are expressed in number of elements, not bytes. For non-root processes, the receiving parameters like this one are ignored.
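
For illustration, when the received messages are meant to be packed back to back in the receive buffer, the displacements are often computed as an exclusive prefix sum of the receive counts; a short sketch, assuming comm_size and counts_recv are already known at the root:

// Sketch: pack the gathered messages contiguously in the receive buffer.
// comm_size and counts_recv are assumed to be already known at the root.
int displacements[comm_size];
displacements[0] = 0;
for(int i = 1; i < comm_size; i++)
{
    displacements[i] = displacements[i - 1] + counts_recv[i - 1];
}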

datatype_recv

The type of one receive buffer element. For non-root processes, the receiving parameters like this one are ignored.

root

The rank of the root process, which will collect the data gathered.

communicator

The communicator in which the gather takes place.

Return value

The error code returned from the variable gather; MPI_SUCCESS indicates that the routine completed successfully.
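
Note that, by default, errors on a communicator are fatal (MPI_ERRORS_ARE_FATAL), so a failing call aborts rather than returns. A hedged sketch of how the return value could be inspected after switching the error handler to MPI_ERRORS_RETURN (parameter names reused from the prototype above for illustration):

// Illustration: make errors on MPI_COMM_WORLD return instead of aborting.
MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

int error = MPI_Gatherv(buffer_send, count_send, datatype_send,
                        buffer_recv, counts_recv, displacements,
                        datatype_recv, root, communicator);
if(error != MPI_SUCCESS)
{
    // Retrieve a human-readable description of the error code.
    char message[MPI_MAX_ERROR_STRING];
    int length;
    MPI_Error_string(error, message, &length);
    printf("MPI_Gatherv failed: %s\n", message);
}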

Example


#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/**
 * @brief Illustrates how to use the variable version of a gather.
 * @details Every MPI process begins with a value, the MPI process 0 will gather
 * all these values and print them. The example is designed to cover all cases:
 * - Different displacements
 * - Different receive counts
 * This application is meant to be run with 3 processes.
 * It can be visualised as follows:
 *
 * +-----------+ +-----------+ +-------------------+ 
 * | Process 0 | | Process 1 | |     Process 2     |
 * +-+-------+-+ +-+-------+-+ +-+-------+-------+-+
 *   | Value |     | Value |     | Value | Value |
 *   |  100  |     |  101  |     |  102  |  103  |
 *   +-------+     +-------+     +-------+-------+
 *      |                |            |     |
 *      |                |            |     |
 *      |                |            |     |
 *      |                |            |     |
 *      |                |            |     |
 *      |                |            |     |
 *   +-----+-----+-----+-----+-----+-----+-----+
 *   | 100 |  0  |  0  | 101 |  0  | 102 | 103 |
 *   +-----+-----+-----+-----+-----+-----+-----+
 *   |                Process 0                |
 *   +-----------------------------------------+
 **/
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    // Get number of processes and check only 3 processes are used
    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if(size != 3)
    {
        printf("This application is meant to be run with 3 processes.\n");
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }

    // Get my rank
    int my_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    // Determine root's process rank
    int root_rank = 0;

    switch(my_rank)
    {
        case 0:
        {
            // Define my value
            int my_value = 100;

            // Define the receive counts
            int counts[3] = {1, 1, 2};

            // Define the displacements
            int displacements[3] = {0, 3, 5};

            int* buffer = (int*)calloc(7, sizeof(int));
            printf("Process %d, my value = %d.\n", my_rank, my_value);
            MPI_Gatherv(&my_value, 1, MPI_INT, buffer, counts, displacements, MPI_INT, root_rank, MPI_COMM_WORLD);
            printf("Values gathered in the buffer on process %d:", my_rank);
            for(int i = 0; i < 7; i++)
            {
                printf(" %d", buffer[i]);
            }
            printf("\n");
            free(buffer);
            break;
        }
        case 1:
        {
            // Define my value
            int my_value = 101;

            printf("Process %d, my value = %d.\n", my_rank, my_value);
            MPI_Gatherv(&my_value, 1, MPI_INT, NULL, NULL, NULL, MPI_INT, root_rank, MPI_COMM_WORLD);
            break;
        }
        case 2:
        {
            // Define my values
            int my_values[2] = {102, 103};

            printf("Process %d, my values = %d %d.\n", my_rank, my_values[0], my_values[1]);
            MPI_Gatherv(my_values, 2, MPI_INT, NULL, NULL, NULL, MPI_INT, root_rank, MPI_COMM_WORLD);
            break;
        }
    }

    MPI_Finalize();

    return EXIT_SUCCESS;
}
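
With a typical MPI installation, the example can be compiled and launched along these lines (the compiler wrapper and launcher names, as well as the file name gatherv_example.c, depend on the installation):

mpicc gatherv_example.c -o gatherv_example
mpirun -n 3 ./gatherv_example

The order in which the ranks print is not deterministic, but the last line printed by process 0 should read: Values gathered in the buffer on process 0: 100 0 0 101 0 102 103.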