Speed up or vectorize pandas apply function - require a conditional application of a function

I want to apply a function to a dataframe row by row, like the following:

name  value
'foo' 2
'bar' 4
'bar' 3
'foo' 1
  .   .
  .   .
  .   .
'bar' 8

Speed is important to me, since I'm running this on multiple 90 GB datasets, so I've been trying to vectorize the following operation for use with df.apply:

Conditional on 'name', I want to plug 'value' into a separate function, do some arithmetic on the result, and write it to a new column, 'output'. Something like:

funcs = {'foo': <FunctionObject>, 'bar': <FunctionObject>}

def masterFunc(row):
    correctFunction = funcs[row['name']]
    row['output'] = correctFunction(row['value']) + 3*row['value']
    return row

df = df.apply(masterFunc, axis=1)

In my actual problem, I have 32 different functions that may be applied to 'value' depending on 'name'. Each of these individual functions (fooFunc, barFunc, zooFunc, etc.) is already vectorized; they are scipy.interpolate.interp1d functions, built like this:

separateFunc = scipy.interpolate.interp1d([2, 3, 4], [3, 5, 7])
# separateFunc now behaves like the linear function y = 2x - 1. Use case:
y = separateFunc(3.5)  # y == 6.0
# it also accepts whole arrays: separateFunc(np.array([2.5, 3.5])) -> [4., 6.]

However, I'm not sure how to vectorize masterFunc itself. It seems that choosing which function to 'pull out' and apply to 'value' is very expensive, since it requires a memory access on every iteration (with my current approach of storing the functions in a hash table). The alternative, though, seems to be just a pile of if-then statements, and I don't see how those could be vectorized either. How can I speed this up?

Actual code, with repetitive parts removed for brevity:

interpolationFunctions = {}
# 'interpolate.emissionsFunctions' is a separate function which does some scipy stuff
interpolationFunctions[2] = interpolate.emissionsFunctions('./roadtype_2_curve.csv')
interpolationFunctions[3] = interpolate.emissionsFunctions('./roadtype_3_curve.csv')

def compute_pollutants(row):
    # look up the set of interpolators for this row's road type
    funcs = interpolationFunctions[row['roadtype']]
    speed = row['speed']
    length = row['length']
    row['CO2-Atm'] = funcs['CO2-Atm'](speed)*length*speed*0.00310686368
    row['CO2-Eq'] = funcs['CO2-Eq'](speed)*length*speed*0.00310686368
    return row

I've tried to create a reproducible example that generalizes your problem. You can run the code with different row counts to compare the results of the different approaches, and it wouldn't be hard to extend one of them with cython or multiprocessing for even more speed. You mention that your data is very large; I haven't tested the memory usage of each approach, so it's worth trying on your own machine.

import numpy as np
import pandas as pd
import time as t

# Example Functions
def foo(x):
    return x + x

def bar(x):
    return x * x

# Example Functions for multiple columns
def foo2(x, y):
    return x + y

def bar2(x, y):
    return x * y

# Create function dictionary
funcs = {'foo': foo, 'bar': bar}
funcs2 = {'foo': foo2, 'bar': bar2}

n_rows = 1000000
# Generate Sample Data
names = np.random.choice(list(funcs.keys()), size=n_rows)
values = np.random.normal(100, 20, size=n_rows)
df = pd.DataFrame()
df['name'] = names
df['value'] = values

# Create copy for comparison using different methods
df_copy = df.copy()

# Modified original master function
def masterFunc(row, functs):
    correctFunction = functs[row['name']]
    return correctFunction(row['value']) + 3*row['value']

t1 = t.time()
df['output'] = df.apply(lambda x: masterFunc(x, funcs), axis=1)
t2 = t.time()
print("Time for all rows/functions: ", t2 - t1)


# For functions that can be vectorized directly with numpy
t3 = t.time()
output_dataframe_list = []
for func_name, func in funcs.items():
    # select all rows for this function at once; .copy() avoids SettingWithCopyWarning
    df_subset = df_copy.loc[df_copy['name'] == func_name, :].copy()
    df_subset['output'] = func(df_subset['value'].values) + 3 * df_subset['value'].values
    output_dataframe_list.append(df_subset)

output_df = pd.concat(output_dataframe_list)

t4 = t.time()
print("Time for all rows/functions: ", t4 - t3)


# A plain for loop over a numpy array of values is still faster than DataFrame.apply;
# this variant also shows functions that take multiple arguments (funcs2)
t5 = t.time()
output_dataframe_list2 = []
for func_name, func in funcs2.items():
    df_subset = df_copy.loc[df_copy['name'] == func_name, :].copy()
    col1_values = df_subset['value'].values
    outputs = np.zeros(len(col1_values))
    for i, v in enumerate(col1_values):
        outputs[i] = func(v, v) + 3 * v

    df_subset['output'] = outputs
    output_dataframe_list2.append(df_subset)

output_df2 = pd.concat(output_dataframe_list2)

t6 = t.time()
print("Time for all rows/functions: ", t6 - t5)