Performance difference due to type instability?

I am trying the following code in Julia 0.4-prerelease, which computes the matrix exponential in two different ways (exactly, via diagonalization, and via a series expansion). I tried several ways of getting the array dimension n and of setting up the identity matrices with eye( n ).

function test()
    A = [ 1.0 -1.0 ; -1.0  1.0 ]

    lam, U = eig( A )                         # diagonalization: A U = U diagm(lam)

    Bref = U * diagm( exp(lam) ) * U'         # Bref = exp(A) (in matrix sense)

    #[ Get the dimension n ]                            
    n = length( lam )                                                  # slow (1a)
    # const n = length( lam )                                          # slow (1b)
    # n::Int = length( lam )                                           # fast (1c)
    # const n::Int = length( lam )                                     # fast (1d)
    # n = size( A, 1 )                                                 # fast (1e)

    #[ Set unit matrices to B and X ]
    B = eye( n ); X = eye( n )                                         # slow with (1a) (2-1)
    # B = eye( 2 ); X = eye( 2 )                                       # fast (2-2)
    # B = eye( n::Int ); X = eye( n::Int )                             # fast (2-3) 
    # B::Array{Float64,2} = eye( n ); X::Array{Float64,2} = eye( n )   # fast (2-4)
    # B = eye( A ); X = eye( A )                                       # fast (2-5)

    #[ Calc B = exp(A) with Taylor expansion ]
    @time for k = 1:20
        X[:,:] = X * A / float( k )
        B[:,:] += X
    end

    #[ Check error ]
    @show norm( B - Bref )
end

test()

Here I observed that the code becomes much slower than otherwise when n is a dynamic variable (with no type annotation). For example, the combination of (1a) and (2-1) gives the "slow" result below, while the other combinations give the "fast" result (more than 1000 times faster).

slow case => elapsed time: 0.043822985 seconds (1 MB allocated)
fast case => elapsed time: 1.1702e-5 seconds (16 kB allocated)

Is this because "type instability" occurs in the for loop? I am puzzled, because eye( n ) is always an Array{Float64,2} (used only for initialization) and there seems to be no (implicit) type change. It is also puzzling that the combination of (1e) and (2-1) is fast, where the dynamic n is obtained with size() rather than length(). Overall, is it better to annotate array-dimension variables explicitly to get good performance?
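
For reference, a stripped-down sketch that isolates just the loop might look like this (taylor_typed and taylor_any are made-up names; the only intended difference is whether B and X get a concretely inferred type):

function taylor_typed( A )
    B = eye( 2 ); X = eye( 2 )            # inferred as Array{Float64,2}
    for k = 1:20
        X[:,:] = X * A / float( k )
        B[:,:] += X
    end
    return B
end

function taylor_any( A )
    n = length( eig( A )[1] )             # n is only inferred as Any
    B = eye( n ); X = eye( n )            # so B and X are inferred as Any too
    for k = 1:20
        X[:,:] = X * A / float( k )       # operations dispatched at run time
        B[:,:] += X
    end
    return B
end

A = [ 1.0 -1.0 ; -1.0  1.0 ]
taylor_typed( A ); taylor_any( A )        # compile both once
@time taylor_typed( A )
@time taylor_any( A )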

I think the difference is mostly compilation time. If I add two more calls to test(), I get the following:

(2-1) with (1a):

  73.599 milliseconds (70583 allocations: 3537 KB)
norm(B - Bref) = 4.485301019485633e-14
  15.165 microseconds (200 allocations: 11840 bytes)
norm(B - Bref) = 4.485301019485633e-14
  10.844 microseconds (200 allocations: 11840 bytes)
norm(B - Bref) = 4.485301019485633e-14

(2-2) with (1a):

   8.662 microseconds (180 allocations: 11520 bytes)
norm(B - Bref) = 4.485301019485633e-14
   7.968 microseconds (180 allocations: 11520 bytes)
norm(B - Bref) = 4.485301019485633e-14
   7.654 microseconds (180 allocations: 11520 bytes)
norm(B - Bref) = 4.485301019485633e-14

The difference in compilation time does come from different code being compiled, though. That, and some of the small remaining difference in run time, really is due to the type instability. Look at this part of the @code_warntype test() output for the 1a version:

  GenSym(0) = (Base.LinAlg.__eig#214__)(GenSym(19),A::Array{Float64,2})::Tuple{Any,Any}
  #s8 = 1
  GenSym(22) = (Base.getfield)(GenSym(0),1)::Any
  GenSym(23) = (Base.box)(Base.Int,(Base.add_int)(1,1)::Any)::Int64
  lam = GenSym(22)
  #s8 = GenSym(23)
  GenSym(24) = (Base.getfield)(GenSym(0),2)::Any
  GenSym(25) = (Base.box)(Base.Int,(Base.add_int)(2,1)::Any)::Int64
  U = GenSym(24)
  #s8 = GenSym(25) # line 7:
  Bref = U * (Main.diagm)((Main.exp)(lam)::Any)::Any * (Main.ctranspose)(U)::Any::Any # line 9:
  n = (Main.length)(lam)::Any # line 11:
  B = (Main.eye)(n)::Any # line 11:
  X = (Main.eye)(n)::Any # line 13: # util.jl, line 170:

What I read from this is that type inference fails to figure out the return type of eig. That then propagates to B and X. If you add n::Int, the last lines change to

  n = (top(typeassert))((top(convert))(Main.Int,(Main.length)(lam)::Any)::Any,Main.Int)::Int64 # line 11:
  B = (Base.eye)(Base.Float64,n::Int64,n::Int64)::Array{Float64,2} # line 11:
  X = (Base.eye)(Base.Float64,n::Int64,n::Int64)::Array{Float64,2} # line 13: # util.jl, line 170:

So B and X are typed correctly. An issue about this exact subject was raised recently; if you want the best performance, it seems there is not much choice but to annotate yourself.
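
If you do annotate yourself, a minimal sketch of the two usual patterns might look like the following (the names test_annotated, taylor_exp and test_barrier are made up for illustration, and the assertions assume the eigenvalues come back real, which they do for this A): either assert the types of eig's results once, or put the loop behind a function barrier so it is compiled for the concrete types it actually receives.

# (a) Assert the types of eig's results once; everything downstream is then
#     inferred concretely.
function test_annotated()
    A = [ 1.0 -1.0 ; -1.0  1.0 ]
    vals, vecs = eig( A )                 # only inferred as Tuple{Any,Any} in 0.4
    lam = vals::Vector{Float64}           # assumes real eigenvalues
    U   = vecs::Matrix{Float64}
    n = length( lam )                     # n, B, X now have concrete inferred types
    B = eye( n ); X = eye( n )
    for k = 1:20
        X[:,:] = X * A / float( k )
        B[:,:] += X
    end
    return B
end

# (b) Function barrier: do the type-unstable call in one function and hand the
#     results to another, which is compiled for the types it actually receives.
function taylor_exp( A, lam )
    n = length( lam )
    B = eye( n ); X = eye( n )
    for k = 1:20
        X[:,:] = X * A / float( k )
        B[:,:] += X
    end
    return B
end

function test_barrier()
    A = [ 1.0 -1.0 ; -1.0  1.0 ]
    lam, U = eig( A )                     # lam, U are only inferred as Any here,
    return taylor_exp( A, lam )           # but taylor_exp specializes on their runtime types
end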