Arithmetic in R is faster on numerics than on integers. What's going on?
I was converting some code that primarily uses numeric data (i.e. doubles) to integers and did a quick benchmark to see how much efficiency I gained.
To my surprise it was slower... by about 20%. I thought I had done something wrong, but the original code was only a few basic arithmetical operations on moderately sized vectors, so I knew it wasn't that. Maybe my environment was messed up? I restarted with a fresh session and got the same result... integers were less efficient.
This started a series of tests and a dive down the rabbit hole. Here is my first test. We sum one million elements using base R's sum. Note that with R version 3.5.0 the timings are quite a bit different, while with v3.5.1 they are about the same (still not what one would expect):
set.seed(123)
int1e6 <- sample(1:10, 1e6, TRUE)
dbl1e6 <- runif(1e6, 1, 10)
head(int1e6)
# [1] 5 3 6 8 6 2
class(int1e6)
# [1] "integer"
head(dbl1e6)
# [1] 5.060628 2.291397 2.992889 5.299649 5.217105 9.769613
class(dbl1e6)
#[1] "numeric"
mean(dbl1e6)
# [1] 5.502034
mean(int1e6)
# [1] 5.505185
## R 3.5.0
library(microbenchmark)
microbenchmark(intSum = sum(int1e6), dblSum = sum(dbl1e6), times = 1000)
Unit: microseconds
expr min lq mean median uq max neval
intSum 1033.677 1043.991 1147.9711 1111.438 1200.725 2723.834 1000
dblSum 817.719 835.486 945.6553 890.529 998.946 2736.024 1000
## R 3.5.1
Unit: microseconds
expr min lq mean median uq max neval
intSum 836.243 877.7655 966.4443 950.1525 997.9025 2077.257 1000
dblSum 866.939 904.7945 1015.3445 986.4770 1046.4120 2541.828 1000
class(sum(int1e6))
# [1] "integer"
class(sum(dbl1e6))
#[1] "numeric"
From here on out, the results for versions 3.5.0 and 3.5.1 are nearly identical.
Here is our first dive into the rabbit hole. Along with the documentation for sum (see ?sum), we see that sum is simply a generic function that is dispatched via standardGeneric. Digging deeper, we see it eventually calls R_execMethod here on line 516. This is where I get lost. It looks to me like R_execClosure is called next, followed by many different possible branches. I think the standard path is to call eval next, but I'm not sure. My guess is that eventually a function is called in arithmetic.c, but I can't find anything that specifically sums a vector of numbers. Either way, based on my limited knowledge of method dispatch and C in general, my naive assumption is that a function like the following gets called:
template <typename T>
T sum(vector<T> x) {
T mySum = 0;
for (std::size_t i = 0; i < x.size(); ++i)
mySum += x[i];
return mySum;
}
I know there is no function overloading or vectors in C, but you get the idea. My belief is that eventually, a bunch of elements of the same type get added to elements of the same type and the result is returned. In Rcpp we would have something like:
template <typename typeReturn, typename typeRcpp>
typeReturn sumRcpp(typeRcpp x) {
typeReturn mySum = 0;
unsigned long int mySize = x.size();
for (std::size_t i = 0; i < mySize; ++i)
mySum += x[i];
return mySum;
}
// [[Rcpp::export]]
SEXP mySumTest(SEXP Rx) {
switch(TYPEOF(Rx)) {
case INTSXP: {
IntegerVector xInt = as<IntegerVector>(Rx);
int resInt = sumRcpp<int>(xInt);
return wrap(resInt);
}
case REALSXP: {
NumericVector xNum = as<NumericVector>(Rx);
double resDbl = sumRcpp<double>(xNum);
return wrap(resDbl);
}
default: {
Rcpp::stop("Only integers and numerics are supported");
}
}
}
The benchmarks confirm my intuition about the inherent efficiency advantage of integers:
microbenchmark(mySumTest(int1e6), mySumTest(dbl1e6))
Unit: microseconds
expr min lq mean median uq max neval
mySumTest(int1e6) 103.455 160.776 185.2529 180.2505 200.3245 326.950 100
mySumTest(dbl1e6) 1160.501 1166.032 1278.1622 1233.1575 1347.1660 1644.494 100
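An aside of my own (not in the original post): for plain atomic vectors sum is a primitive, so the standardGeneric machinery discussed above only comes into play for classed objects and the summation itself happens directly in C:
is.primitive(sum)
# [1] TRUE
sum
# function (..., na.rm = FALSE)  .Primitive("sum")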
Binary operators
This got me thinking further. Maybe it is the complexity surrounding standardGeneric that makes the different data types behave strangely. So, let's skip all that jazz and go straight to the binary operators (+, -, *, /, %/%):
set.seed(321)
int1e6Two <- sample(1:10, 1e6, TRUE)
dbl1e6Two <- runif(1e6, 1, 10)
## addition
microbenchmark(intPlus = int1e6 + int1e6Two,
dblPlus = dbl1e6 + dbl1e6Two, times = 1000)
Unit: milliseconds
expr min lq mean median uq max neval
intPlus 2.531220 3.214673 3.970903 3.401631 3.668878 82.11871 1000
dblPlus 1.299004 2.045720 3.074367 2.139489 2.275697 69.89538 1000
## subtraction
microbenchmark(intSub = int1e6 - int1e6Two,
dblSub = dbl1e6 - dbl1e6Two, times = 1000)
Unit: milliseconds
expr min lq mean median uq max neval
intSub 2.280881 2.985491 3.748759 3.166262 3.379755 79.03561 1000
dblSub 1.302704 2.107817 3.252457 2.208293 2.382188 70.24451 1000
## multiplication
microbenchmark(intMult = int1e6 * int1e6Two,
dblMult = dbl1e6 * dbl1e6Two, times = 1000)
Unit: milliseconds
expr min lq mean median uq max neval
intMult 2.913680 3.573557 4.380174 3.772987 4.077219 74.95485 1000
dblMult 1.303688 2.020221 3.078500 2.119648 2.299145 10.86589 1000
## division
microbenchmark(intDiv = int1e6 %/% int1e6Two,
dblDiv = dbl1e6 / dbl1e6Two, times = 1000)
Unit: milliseconds
expr min lq mean median uq max neval
intDiv 2.892297 3.210666 3.720360 3.228242 3.373456 62.12020 1000
dblDiv 1.228171 1.809902 2.558428 1.842272 1.990067 64.82425 1000
The classes are preserved as well:
unique(c(class(int1e6 + int1e6Two), class(int1e6 - int1e6Two),
class(int1e6 * int1e6Two), class(int1e6 %/% int1e6Two)))
# [1] "integer"
unique(c(class(dbl1e6 + dbl1e6Two), class(dbl1e6 - dbl1e6Two),
class(dbl1e6 * dbl1e6Two), class(dbl1e6 / dbl1e6Two)))
# [1] "numeric"
In every case we see a 40% - 70% speedup for arithmetic on the numeric data type. What is really strange is that we get an even larger difference when the two vectors being operated on are identical:
microbenchmark(intPlus = int1e6 + int1e6,
dblPlus = dbl1e6 + dbl1e6, times = 1000)
Unit: microseconds
expr min lq mean median uq max neval
intPlus 2522.774 3148.464 3894.723 3304.189 3531.310 73354.97 1000
dblPlus 977.892 1703.865 2710.602 1767.801 1886.648 77738.47 1000
microbenchmark(intSub = int1e6 - int1e6,
dblSub = dbl1e6 - dbl1e6, times = 1000)
Unit: microseconds
expr min lq mean median uq max neval
intSub 2236.225 2854.068 3467.062 2994.091 3214.953 11202.06 1000
dblSub 893.819 1658.032 2789.087 1730.981 1873.899 74034.62 1000
microbenchmark(intMult = int1e6 * int1e6,
dblMult = dbl1e6 * dbl1e6, times = 1000)
Unit: microseconds
expr min lq mean median uq max neval
intMult 2852.285 3476.700 4222.726 3658.599 3926.264 78026.18 1000
dblMult 973.640 1679.887 2638.551 1754.488 1875.058 10866.52 1000
microbenchmark(intDiv = int1e6 %/% int1e6,
dblDiv = dbl1e6 / dbl1e6, times = 1000)
Unit: microseconds
expr min lq mean median uq max neval
intDiv 2879.608 3355.015 4052.564 3531.762 3797.715 11781.39 1000
dblDiv 945.519 1627.203 2706.435 1701.512 1829.869 72215.51 1000
unique(c(class(int1e6 + int1e6), class(int1e6 - int1e6),
class(int1e6 * int1e6), class(int1e6 %/% int1e6)))
# [1] "integer"
unique(c(class(dbl1e6 + dbl1e6), class(dbl1e6 - dbl1e6),
class(dbl1e6 * dbl1e6), class(dbl1e6 / dbl1e6)))
# [1] "numeric"
That is an increase of nearly 100% for every operator type!!!
What about regular for loops in base R?
funInt <- function(v) {
mySumInt <- 0L
for (element in v)
mySumInt <- mySumInt + element
mySumInt
}
funDbl <- function(v) {
mySumDbl <- 0
for (element in v)
mySumDbl <- mySumDbl + element
mySumDbl
}
microbenchmark(funInt(int1e6), funDbl(dbl1e6))
Unit: milliseconds
expr min lq mean median uq max neval
funInt(int1e6) 25.44143 25.75075 26.81548 26.09486 27.60330 32.29436 100
funDbl(dbl1e6) 24.48309 24.82219 25.68922 25.13742 26.49816 29.36190 100
class(funInt(int1e6))
# [1] "integer"
class(funDbl(dbl1e6))
# [1] "numeric"
The difference isn't amazing, but one would still expect the integer sum to outperform the double sum. I really don't know what to think about this.
So my question is:
Why exactly do numeric data types outperform integer data types on basic arithmetical operations in base R?
Edit. I forgot to mention this:
sessionInfo()
R version 3.5.1 (2018-07-02)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS High Sierra 10.13.6
The "random guess" by F.Privé in the comments is really good! The function do_arith seems to be the starting point within arithmetic.c. First, for scalars we see that the case of REALSXP is simple: e.g., the standard + is used. For INTSXP there is a dispatch to, for example, R_integer_plus, which does indeed check for integer overflow:
static R_INLINE int R_integer_plus(int x, int y, Rboolean *pnaflag)
{
if (x == NA_INTEGER || y == NA_INTEGER)
return NA_INTEGER;
if (((y > 0) && (x > (R_INT_MAX - y))) ||
((y < 0) && (x < (R_INT_MIN - y)))) {
if (pnaflag != NULL)
*pnaflag = TRUE;
return NA_INTEGER;
}
return x + y;
}
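We can see this check in action from the R prompt (a small illustration of my own, not part of the original answer): an integer addition that would overflow returns NA with a warning, while the double version simply carries on in floating point:
.Machine$integer.max + 1L
# [1] NA
# Warning message: NAs produced by integer overflow
.Machine$integer.max + 1
# [1] 2147483648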
The other binary operations are similar, and it is also similar for vectors: in integer_binary there is a dispatch to the same method, while in real_binary the standard operations are used without any checks.
We can see the effect of this using the following Rcpp code:
#include <Rcpp.h>
// [[Rcpp::plugins(cpp11)]]
#include <cstdint>
using namespace Rcpp;
// [[Rcpp::export]]
IntegerVector sumInt(IntegerVector a, IntegerVector b) {
IntegerVector result(no_init(a.size()));
std::transform(a.begin(), a.end(), b.begin(), result.begin(),
[] (int32_t x, int32_t y) {return x + y;});
return result;
}
// [[Rcpp::export]]
IntegerVector sumIntOverflow(IntegerVector a, IntegerVector b) {
IntegerVector result(no_init(a.size()));
std::transform(a.begin(), a.end(), b.begin(), result.begin(),
[] (int32_t x, int32_t y) {
if (x == NA_INTEGER || y == NA_INTEGER)
return NA_INTEGER;
if (((y > 0) && (x > (INT32_MAX - y))) ||
((y < 0) && (x < (INT32_MIN - y))))
return NA_INTEGER;
return x + y;
});
return result;
}
// [[Rcpp::export]]
NumericVector sumReal(NumericVector a, NumericVector b) {
NumericVector result(no_init(a.size()));
std::transform(a.begin(), a.end(), b.begin(), result.begin(),
[] (double x, double y) {return x + y;});
return result;
}
/*** R
set.seed(123)
int1e6 <- sample(1:10, 1e6, TRUE)
int1e6two <- sample(1:10, 1e6, TRUE)
dbl1e6 <- runif(1e6, 1, 10)
dbl1e6two <- runif(1e6, 1, 10)
microbenchmark::microbenchmark(int1e6 + int1e6two,
sumInt(int1e6, int1e6two),
sumIntOverflow(int1e6, int1e6two),
dbl1e6 + dbl1e6two,
sumReal(dbl1e6, dbl1e6two),
times = 1000)
*/
Result:
Unit: microseconds
expr min lq mean median uq max neval
int1e6 + int1e6two 1999.698 2046.2025 2232.785 2061.7625 2126.970 5461.816 1000
sumInt 812.560 846.1215 1128.826 861.9305 892.089 44723.313 1000
sumIntOverflow 1664.351 1690.2455 1901.472 1702.6100 1760.218 4868.182 1000
dbl1e6 + dbl1e6two 1444.172 1501.9100 1997.924 1526.0695 1641.103 47277.955 1000
sumReal 1459.224 1505.2715 1887.869 1530.5995 1675.594 5124.468 1000
Introducing the overflow check into the C++ code leads to a significant decrease in performance, even though it is not as bad as the standard +. So if you know that your integers are "well behaved", you can gain quite a bit of performance by skipping R's error checking and going straight to C/C++. This reminds me of a similar conclusion reached elsewhere: the error checking performed by R can be costly.
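As a quick sanity check (my addition, assuming the C++ functions above have been compiled with Rcpp::sourceCpp()), the unchecked version agrees with base R whenever the inputs really are "well behaved":
identical(sumInt(int1e6, int1e6two), int1e6 + int1e6two)
# [1] TRUE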
For the case with identical vectors I get the benchmark results below.
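The call itself is not shown here; by analogy with the one above it would look like this (my reconstruction):
microbenchmark::microbenchmark(int1e6 + int1e6,
                               sumInt(int1e6, int1e6),
                               sumIntOverflow(int1e6, int1e6),
                               dbl1e6 + dbl1e6,
                               sumReal(dbl1e6, dbl1e6),
                               times = 1000)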
Unit: microseconds
expr min lq mean median uq max neval
int1e6 + int1e6 1761.285 2000.720 2191.541 2011.5710 2029.528 47397.029 1000
sumInt 648.151 761.787 1002.662 767.9885 780.129 46673.632 1000
sumIntOverflow 1408.109 1647.926 1835.325 1655.6705 1670.495 44958.840 1000
dbl1e6 + dbl1e6 1081.079 1119.923 1443.582 1137.8360 1173.807 44469.509 1000
sumReal 1076.791 1118.538 1456.917 1137.2025 1250.850 5141.558 1000
There is a significant increase in performance for the doubles (both R and C++). For integers there is also some performance gain, but not as substantial as for the doubles.