Crummy performance with MPI

I am learning MPI, and I have a question about the almost nonexistent performance gain with the simple implementation below.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
        int mpirank, mpisize;
        int tabsize = atoi(*(argv + 1));

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &mpirank);
        MPI_Comm_size(MPI_COMM_WORLD, &mpisize);

        unsigned long int sum = 0;
        int rcvsize = tabsize / mpisize;
        int *rcvbuf = malloc(rcvsize * sizeof(int));
        int *tab = malloc(tabsize * sizeof(int));
        unsigned long int totalsum = 0;

        if(mpirank == 0){
            for(int i=0; i < tabsize; i++){
               *(tab + i) = 1;
            }
        }
        MPI_Scatter(tab, tabsize/mpisize, MPI_INT, rcvbuf, tabsize/mpisize, MPI_INT, 0, MPI_COMM_WORLD);

        for(int i=0; i < tabsize/mpisize; i++){
                sum += *(rcvbuf + i);
        }

        MPI_Reduce(&sum, &totalsum, 1, MPI_UNSIGNED_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if(mpirank == 0){
                printf("The totalsum = %lu\n", totalsum);
        }
        }

        MPI_Finalize();

        return 0;
}

The execution times of the above implementation are:

$ /usr/bin/time mpirun -np 1 test1 2000000000 
The totalsum = 2000000000
13.76user 3.31system 0:17.30elapsed 98%CPU (0avgtext+0avgdata 15629824maxresident)k 0inputs+8outputs (0major+21720minor)pagefaults 0swaps
$ /usr/bin/time mpirun -np 1 test1 2000000000
The totalsum = 2000000000
13.78user 3.29system 0:17.31elapsed 98%CPU (0avgtext+0avgdata 15629824maxresident)k 0inputs+8outputs (0major+21717minor)pagefaults 0swaps
$ /usr/bin/time mpirun -np 1 test1 2000000000
The totalsum = 2000000000
13.78user 3.32system 0:17.33elapsed 98%CPU (0avgtext+0avgdata 15629828maxresident)k 0inputs+8outputs (0major+20697minor)pagefaults 0swaps

$ /usr/bin/time mpirun -np 20 test1 2000000000
The totalsum = 2000000000
218.42user 6.10system 0:12.99elapsed 1727%CPU (0avgtext+0avgdata 8209484maxresident)k 0inputs+17400outputs (118major+82587minor)pagefaults 0swaps
$ /usr/bin/time mpirun -np 20 test1 2000000000
The totalsum = 2000000000
216.17user 6.37system 0:12.89elapsed 1726%CPU (0avgtext+0avgdata 8209488maxresident)k 0inputs+17168outputs (126major+81092minor)pagefaults 0swaps
$ /usr/bin/time mpirun -np 20 test1 2000000000
The totalsum = 2000000000
216.16user 6.09system 0:12.88elapsed 1724%CPU (0avgtext+0avgdata 8209492maxresident)k 0inputs+17192outputs (111major+81665minor)pagefaults 0swaps

This gives only about a 25% performance gain. My guess here is that the bottleneck may be caused by the processes competing for memory access. So I tried the same thing, but without using memory to get at the data.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
        int mpirank, mpisize;
        int tabsize = atoi(*(argv + 1));

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &mpirank);
        MPI_Comm_size(MPI_COMM_WORLD, &mpisize);

        unsigned long int sum = 0;
        unsigned long int totalsum = 0;

        for(int i=0; i < tabsize/mpisize; i++){
                sum += 1;
        }

        MPI_Reduce(&sum, &totalsum, 1, MPI_UNSIGNED_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if(mpirank == 0){
                printf("The totalsum = %lu\n", totalsum);
        }

        MPI_Finalize();

        return 0;
}

The results are as follows:

$ /usr/bin/time mpirun -np 1 test2 2000000000  
The totalsum = 2000000000
6.17user 0.11system 0:06.49elapsed 96%CPU (0avgtext+0avgdata 5660maxresident)k 0inputs+8outputs (0major+4005minor)pagefaults 0swaps 
$ /usr/bin/time mpirun -np 1 test2 2000000000 
The totalsum = 2000000000
6.16user 0.12system 0:06.49elapsed 96%CPU (0avgtext+0avgdata 5660maxresident)k 0inputs+8outputs (0major+4007minor)pagefaults 0swaps 
$ /usr/bin/time mpirun -np 1 test2 2000000000 
The totalsum = 2000000000
6.15user 0.11system 0:06.47elapsed 96%CPU (0avgtext+0avgdata 5664maxresident)k 0inputs+8outputs (0major+4005minor)pagefaults 0swaps

$ /usr/bin/time mpirun -np 20 test2 2000000000
The totalsum = 2000000000
8.67user 2.41system 0:01.06elapsed 1040%CPU (0avgtext+0avgdata 6020maxresident)k 0inputs+16824outputs (128major+49952minor)pagefaults 0swaps
$ /usr/bin/time mpirun -np 20 test2 2000000000
The totalsum = 2000000000
8.59user 2.74system 0:01.05elapsed 1076%CPU (0avgtext+0avgdata 6028maxresident)k 0inputs+16792outputs (131major+49960minor)pagefaults 0swaps
$ /usr/bin/time mpirun -np 20 test2 2000000000
The totalsum = 2000000000
8.65user 2.61system 0:01.06elapsed 1058%CPU (0avgtext+0avgdata 6024maxresident)k 0inputs+16792outputs (116major+50002minor)pagefaults 0swaps

This shows about an 83% performance gain and would confirm my guess. So can you tell me whether my guess is correct, and if so, are there any ways to improve the first implementation with respect to memory access?

The code was run on a machine with 20 physical cores.

EDIT1: additional results of the first implementation for 2, 5, and 10 processes:

$ /usr/bin/time mpirun -np 2 test1 2000000000
The totalsum = 2000000000
24.05user 3.40system 0:14.03elapsed 195%CPU (0avgtext+0avgdata 11724552maxresident)k 0inputs+960outputs (6major+23195minor)pagefaults 0swaps

$ /usr/bin/time mpirun -np 5 test1 2000000000
The totalsum = 2000000000
55.27user 3.54system 0:12.88elapsed 456%CPU (0avgtext+0avgdata 9381132maxresident)k 0inputs+4512outputs (26major+31614minor)pagefaults 0swaps

$ /usr/bin/time mpirun -np 10 test1 2000000000
The totalsum = 2000000000
106.43user 4.07system 0:12.44elapsed 887%CPU (0avgtext+0avgdata 8599952maxresident)k 0inputs+8720outputs (51major+50059minor)pagefaults 0swaps

EDIT2:

I have used MPI_Wtime() to measure the MPI_Scatter part of the first implementation, as follows:

...
                for(int i=0; i < tabsize; i++){
                        *(tab + i) = 1;
                }
        }

        MPI_Barrier(MPI_COMM_WORLD);
        double start = MPI_Wtime();

        MPI_Scatter(tab, tabsize/mpisize, MPI_INT, rcvbuf, tabsize/mpisize, MPI_INT, 0, MPI_COMM_WORLD);

        MPI_Barrier(MPI_COMM_WORLD);
        double end = MPI_Wtime();

        for(int i=0; i < tabsize/mpisize; i++){
                sum += *(rcvbuf + i);
...

This produced the following results:

$ /usr/bin/time mpirun -np 1 test1 400000000
The MPI_Scatter time = 0.576 (14% of total)
3.13user 0.74system 0:04.08elapsed 95%CPU 
$ /usr/bin/time mpirun -np 2 test1 400000000
The MPI_Scatter time = 0.580 (18% of total)
5.19user 0.79system 0:03.25elapsed 183%CPU 
$ /usr/bin/time mpirun -np 4 test1 400000000
The MPI_Scatter time = 0.693 (22.5% of total)
9.99user 1.05system 0:03.07elapsed 360%CPU
$ /usr/bin/time mpirun -np 5 test1 400000000
The MPI_Scatter time = 0.669 (22.3% of total)
12.41user 1.01system 0:03.00elapsed 446%CPU 
$ /usr/bin/time mpirun -np 8 test1 400000000
The MPI_Scatter time = 0.696 (23.7% of total)
19.67user 1.25system 0:02.95elapsed 709%CPU 
$ /usr/bin/time mpirun -np 10 test1 400000000
The MPI_Scatter time = 0.701 (24% of total)
24.21user 1.45system 0:02.92elapsed 876%CPU

$ /usr/bin/time mpirun -np 1 test1 1000000000
The MPI_Scatter time = 1.434 (15% of total)
7.64user 1.71system 0:09.57elapsed 97%CPU
$ /usr/bin/time mpirun -np 2 test1 1000000000
The MPI_Scatter time = 1.441 (19% of total)
12.72user 1.75system 0:07.52elapsed 192%CPU 
$ /usr/bin/time mpirun -np 4 test1 1000000000
The MPI_Scatter time = 1.710 (25% of total)
24.16user 1.93system 0:06.84elapsed 381%CPU
$ /usr/bin/time mpirun -np 5 test1 1000000000
The MPI_Scatter time = 1.675 (25% of total)
30.29user 2.10system 0:06.81elapsed 475%CPU 
$ /usr/bin/time mpirun -np 10 test1 1000000000
The MPI_Scatter time = 1.753 (26.6% of total)
59.89user 2.47system 0:06.60elapsed 943%CPU

$ /usr/bin/time mpirun -np 10 test1 100000000
The MPI_Scatter time = 0.182 (15.8% of total)
6.75user 1.07system 0:01.15elapsed 679%CPU 
$ /usr/bin/time mpirun -np 10 test1 200000000
The MPI_Scatter time = 0.354 (20% of total)
12.50user 1.12system 0:01.71elapsed 796%CPU 
$ /usr/bin/time mpirun -np 10 test1 300000000
The MPI_Scatter time = 0.533 (22.8% of total)
18.54user 1.30system 0:02.33elapsed 849%CPU
$ /usr/bin/time mpirun -np 10 test1 400000000
The MPI_Scatter time = 0.702 (23.95% of total)
24.38user 1.37system 0:02.93elapsed 879%CPU 
$ /usr/bin/time mpirun -np 10 test1 1000000000
The MPI_Scatter time = 1.762 (26% of total)
60.17user 2.42system 0:06.62elapsed 944%CPU

Which gives only about 25% performance gain. My guess here is that the bottleneck may be caused by processes that compete to access the memory. (..)

Your code is mostly communication- and CPU-bound. Moreover, looking at your results for 2, 5, and 10 processes:

$ /usr/bin/time mpirun -np 2 test1 2000000000
The totalsum = 2000000000
24.05user 3.40system 0:14.03elapsed 195%CPU (0avgtext+0avgdata 11724552maxresident)k 0inputs+960outputs (6major+23195minor)pagefaults 0swaps

$ /usr/bin/time mpirun -np 5 test1 2000000000
The totalsum = 2000000000
55.27user 3.54system 0:12.88elapsed 456%CPU (0avgtext+0avgdata 9381132maxresident)k 0inputs+4512outputs (26major+31614minor)pagefaults 0swaps

$ /usr/bin/time mpirun -np 10 test1 2000000000
The totalsum = 2000000000
106.43user 4.07system 0:12.44elapsed 887%CPU (0avgtext+0avgdata 8599952maxresident)k 0inputs+8720outputs (51major+50059minor)pagefaults 0swaps

the code stops scaling at around five processes, a point at which the memory bandwidth is unlikely to be saturated yet.

Then I tried same but without using memory to get to data. (..) This shows about 83% performance gain and would confirm my guesses.

But you have also removed the MPI_Scatter call, thereby reducing the communication overhead while keeping basically the same amount of work to be performed in parallel.

I have profiled your code on my machine (2 physical cores; 4 logical cores). To measure the time, I used MPI_Wtime() as follows:

int main(int argc, char **argv)
{
        int mpirank, mpisize;
        int tabsize = atoi(*(argv + 1));

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &mpirank);
        MPI_Comm_size(MPI_COMM_WORLD, &mpisize);

        MPI_Barrier(MPI_COMM_WORLD);
        double start = MPI_Wtime();
        ...
        if(mpirank == 0){
                printf("The totalsum = %li\n", totalsum);
        }
        MPI_Barrier(MPI_COMM_WORLD);
        double end = MPI_Wtime();
        if(mpirank == 0)
          printf("Time:%f\n",end-start);
}

For the same input as yours (2000000000), the results were:

1 process : 25.158740 seconds
2 processes : 19.116490 seconds
4 processes : 15.971734 seconds 

That is an improvement of around 40%, and the memory hierarchy of my machine should be much worse than that of a machine with 20 physical cores.

Now let us significantly reduce the input size, and consequently the memory footprint, from 2000000000 (8 GB) to 250000000 (1 GB), and retest:

1 process : 1.312354 seconds
2 processes : 1.229174 seconds
4 processes : 1.232522 seconds 

An improvement of only about 6%; if the bottleneck were processes competing for memory, I would not expect such a drop in speedup after reducing the memory footprint. Nonetheless, this drop can easily be explained by the fact that by decreasing the input size I increased the ratio of communication per computation.

Let us go back to the tests with 2000000000 elements, but this time measuring the time spent on the MPI_Scatter communication routine (the one that you removed):

2 processes : 7.487354 seconds
4 processes : 8.728969 seconds 

As one can see, with 2 and 4 processes, approximately 40% (7.487354 / 19.116490) and 54% (8.728969 / 15.971734) of the application's execution time, respectively, was spent on MPI_Scatter alone. That is why, when you removed that routine, you noticed an improvement in speedup.

Now the same tests for the input 250000000 (1 GB):

2 processes : 0.679913 seconds (55% of the time)
4 processes : 0.691987 seconds (56% of the time)

As you can see, even with a smaller memory footprint, the overhead of MPI_Scatter remains about the same percentage-wise (for 4 processes). The conclusion is that the more processes there are, the less computation per process and, consequently, the higher the ratio of communication per computation -- excluding other overheads that might pop up with a higher number of processes running. Moreover, in your code, memory usage does not grow linearly with the number of processes: aside from the main process (which holds the entire array), the remaining processes hold only the data scattered to them.

Typically, a good MPI_Scatter implementation has a time complexity of O(n log p), with n being the size of the input and p the number of processes. Therefore, the overhead of MPI_Scatter will increase faster by increasing the input size than by increasing the number of processes involved in that communication. However, by increasing the input size you also get more computation per process being executed in parallel, whereas by increasing the number of processes you decrease the computation per process being performed.
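Taking the stated O(n log p) scatter cost at face value, the tradeoff in the previous paragraph can be written as a rough cost model (c_1 and c_2 are hypothetical machine-dependent constants, not measured values):

```latex
T(n, p) \;\approx\; \underbrace{c_1\, n \log p}_{\text{MPI\_Scatter}} \;+\; \underbrace{c_2\, \frac{n}{p}}_{\text{local summation}}
```

Increasing p shrinks only the computation term while the communication term keeps growing, so the communication-per-computation ratio rises with the process count, which matches the percentages measured above.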

Bear in mind, however, that the tests I performed are not the most accurate: the environment I am running in and my MPI implementation may differ from yours, and so on. Nonetheless, I believe that if you perform the same tests on your setup you will reach the same conclusions.