Python CNTK speed comparison of 1-bit SGD vs. normal SGD on 4 GPUs
I installed version 2.0.beta7 of CNTK on an Azure NC24 GPU VM running Ubuntu (Python 3.4). The machine has 4 NVIDIA K80 GPUs. Build info:
Build type: release
Build target: GPU
With 1bit-SGD: yes
With ASGD: yes
Math lib: mkl
CUDA_PATH: /usr/local/cuda-8.0
CUB_PATH: /usr/local/cub-1.4.1
CUDNN_PATH: /usr/local
Build Branch: HEAD
Build SHA1: 8e8b5ff92eff4647be5d41a5a515956907567126
Built by Source/CNTK/buildinfo.h on bbdadbf3455d
Build Path: /home/philly/jenkins/workspace/CNTK-Build-Linux
I ran the CIFAR example in distributed mode:
mpiexec -n 4 python TrainResNet_CIFAR10_Distributed.py -n resnet20 -q 32
Finished Epoch [1]: [Training] loss = 1.675002 * 50176, metric = 62.5% * 50176 112.019s (447.9 samples per second)
Finished Epoch [1]: [Training] loss = 1.675002 * 50176, metric = 62.5% * 50176 112.019s (447.9 samples per second)
Finished Epoch [1]: [Training] loss = 1.675002 * 50176, metric = 62.5% * 50176 112.018s (447.9 samples per second)
Finished Epoch [1]: [Training] loss = 1.675002 * 50176, metric = 62.5% * 50176 112.019s (447.9 samples per second)
Finished Epoch [2]: [Training] loss = 1.247423 * 50176, metric = 45.4% * 50176 8.210s (6111.3 samples per second)
Finished Epoch [2]: [Training] loss = 1.247423 * 50176, metric = 45.4% * 50176 8.210s (6111.4 samples per second)
Finished Epoch [2]: [Training] loss = 1.247423 * 50176, metric = 45.4% * 50176 8.210s (6111.8 samples per second)
Finished Epoch [2]: [Training] loss = 1.247423 * 50176, metric = 45.4% * 50176 8.210s (6111.6 samples per second)
...
...
Finished Epoch [160]: [Training] loss = 0.037745 * 49664, metric = 1.2% * 49664 7.883s (6300.4 samples per second)
Finished Epoch [160]: [Training] loss = 0.037745 * 49664, metric = 1.2% * 49664 7.883s (6299.7 samples per second)
Finished Epoch [160]: [Training] loss = 0.037745 * 49664, metric = 1.2% * 49664 7.884s (6299.7 samples per second)
Finished Epoch [160]: [Training] loss = 0.037745 * 49664, metric = 1.2% * 49664 7.884s (6299.2 samples per second)
However, when I run it with 1-bit SGD I get:
mpiexec -n 4 python TrainResNet_CIFAR10_Distributed.py -n resnet20 -q 1 -a 50000
...
Finished Epoch [160]: [Training] loss = 0.059290 * 49664, metric = 2.1% * 49664 10.055s (4939.1 samples per second)
Finished Epoch [160]: [Training] loss = 0.059290 * 49664, metric = 2.1% * 49664 10.056s (4938.9 samples per second)
Finished Epoch [160]: [Training] loss = 0.059290 * 49664, metric = 2.1% * 49664 10.056s (4938.9 samples per second)
Finished Epoch [160]: [Training] loss = 0.059290 * 49664, metric = 2.1% * 49664 10.056s (4938.9 samples per second)
As explained here, 1-bit SGD should be faster than its normal counterpart. Any help is appreciated.
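For reference, the -q and -a switches correspond to CNTK's distributed-learner settings. Below is a minimal sketch, assuming the cntk.train.distributed module of CNTK 2.0 (earlier betas exposed it as cntk.distributed); the stand-in model and schedules are illustrative, not taken from the example script:

import cntk as C
from cntk.train.distributed import data_parallel_distributed_learner, Communicator

# Tiny stand-in model (the real script builds a ResNet-20).
x = C.input_variable((3, 32, 32))
y = C.input_variable(10)
z = C.layers.Dense(10)(x)
loss = C.cross_entropy_with_softmax(z, y)

lr_schedule = C.learning_rate_schedule(0.01, C.UnitType.minibatch)
mm_schedule = C.momentum_schedule(0.9)
local_learner = C.momentum_sgd(z.parameters, lr_schedule, mm_schedule)

# -q maps to num_quantization_bits: 32 = plain data-parallel SGD,
# 1 = 1-bit SGD with error feedback.
# -a maps to distributed_after: start quantized aggregation only after this
# many samples have been seen (warm start), e.g. 50000 ~ one CIFAR epoch.
learner = data_parallel_distributed_learner(local_learner,
                                            num_quantization_bits=1,
                                            distributed_after=50000)

trainer = C.Trainer(z, (loss, C.classification_error(z, y)), [learner])
# ... the train_minibatch(...) loop runs under mpiexec, one rank per GPU ...
Communicator.finalize()  # every MPI rank must call this before exiting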
1-bit SGD is an effective strategy when the time spent communicating between GPUs is large compared to the time spent computing a minibatch.
Your experiment above has two "issues": the model you are training has few parameters (there is not much computation per minibatch), and the 4 GPUs are inside a single machine (as opposed to communicating over, say, a network). Furthermore, within a machine CNTK uses NVIDIA NCCL, which is much better optimized than the generic MPI implementation that 1-bit SGD uses. Update: at the time this comment was written, NCCL was not used by default.
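The point can be made concrete with a rough back-of-envelope estimate (the parameter count, link bandwidth, and per-worker minibatch size below are assumptions for illustration, not measured values):

# Rough communication-vs-computation estimate for this experiment.
params = 0.27e6            # ResNet-20 has roughly 0.27M parameters (assumption)
grad_bytes = params * 4    # ~1.1 MB of fp32 gradients to aggregate per step
bandwidth = 10e9           # ~10 GB/s intra-machine link (assumption)
comm_time = grad_bytes / bandwidth   # ~0.1 ms spent communicating per step

# From the 32-bit logs above: ~49664 samples in ~7.9 s per epoch; assume
# 128 samples per worker per minibatch across 4 workers.
steps = 49664 / (128 * 4)            # ~97 synchronization steps per epoch
compute_time = 7.9 / steps           # ~81 ms of computation per step

print("comm ~{:.2f} ms vs. compute ~{:.0f} ms per step".format(
    comm_time * 1e3, compute_time * 1e3))
# Communication is well under 1% of each step, so cutting gradient traffic
# by 32x with 1-bit quantization cannot pay off; the quantization overhead
# itself explains why the 1-bit run above is slower.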