What is the fastest and most efficient way to append rows to a DataFrame?
I have a large dataset that I have to convert to .csv format; it consists of 29 columns and 1M+ rows. I figured that as the DataFrame grows, appending rows to it becomes more and more time consuming. I'd like to know if there is a faster way; the relevant snippet from my code is below.
Any suggestions are welcome.
import time

import pandas as pd
import requests

df = pd.DataFrame()
for startID in range(0, 100000, 1000):
    s1 = time.time()
    tempdf = pd.DataFrame()
    url = f'https://******/products?startId={startID}&size=1000'
    r = requests.get(url, headers={'****-Token': 'xxxxxx', 'Merchant-Id': '****'})
    jsonList = r.json()  # datatype = list, contains = dict
    normalized = pd.json_normalize(jsonList)  # type(normalized) = pandas.DataFrame
    print(startID / 1000)  # status indicator
    for _, series in normalized.iterrows():  # iterrows yields (index, Series) tuples
        offers = series['offers']
        series = series.drop(labels='offers')  # Series.drop takes labels, not columns
        for offer in offers:
            n = pd.json_normalize(offer).squeeze()  # squeeze() casts the one-row DataFrame into a Series
            concatenated = pd.concat([series, n]).to_frame().transpose()
            tempdf = tempdf.append(concatenated, ignore_index=True)  # slow: copies tempdf every iteration
    del normalized
    df = df.append(tempdf)  # DataFrame.append was deprecated in pandas 1.4 and removed in 2.0
    f1 = time.time()
    print(f1 - s1, ' seconds')
df.to_csv('out.csv')
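For what it's worth, the nested `iterrows()` loop can often be avoided entirely: `pandas.json_normalize` can explode a nested list field into one row per element in a single call via `record_path` and `meta`. A minimal sketch, assuming a hypothetical product shape with an `offers` list (the field names below are stand-ins, not the real API's):

```python
import pandas as pd

# Hypothetical data mirroring the question's JSON: products with nested offers.
products = [
    {"id": 1, "name": "widget",
     "offers": [{"price": 9.99, "seller": "A"}, {"price": 8.99, "seller": "B"}]},
    {"id": 2, "name": "gadget",
     "offers": [{"price": 19.99, "seller": "C"}]},
]

# record_path explodes each offer into its own row; meta carries the parent
# product fields along, so no Python-level row loop is needed.
flat = pd.json_normalize(products, record_path="offers", meta=["id", "name"])
print(flat.shape)  # (3, 4): one row per offer, offer fields plus meta fields
```

Whether this applies directly depends on the real response schema, but when it does, it removes both inner loops and all the per-row `append` calls.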
As suggested by Mohit Motwani, the fastest way is to collect the data into dictionaries and then load them all into a DataFrame at once. Below are some example speed measurements:
import pandas as pd
import numpy as np
import time
import random
end_value = 10000
Measurement for creating a list of dictionaries and loading everything into a DataFrame at the end:
start_time = time.time()
dictionary_list = []
for i in range(end_value):
    dictionary_data = {k: random.random() for k in range(30)}
    dictionary_list.append(dictionary_data)
df_final = pd.DataFrame.from_dict(dictionary_list)
end_time = time.time()
print('Execution time = %.6f seconds' % (end_time - start_time))
Execution time = 0.090153 seconds
Measurement for appending data to a list and concatenating into a DataFrame at the end:
start_time = time.time()
appended_data = []
for i in range(end_value):
    data = pd.DataFrame(np.random.randint(0, 100, size=(1, 30)), columns=list('A' * 30))
    appended_data.append(data)
appended_data = pd.concat(appended_data, axis=0)
end_time = time.time()
print('Execution time = %.6f seconds' % (end_time - start_time))
Execution time = 4.183921 seconds
Measurement for appending DataFrames:
start_time = time.time()
df_final = pd.DataFrame()
for i in range(end_value):
    df = pd.DataFrame(np.random.randint(0, 100, size=(1, 30)), columns=list('A' * 30))
    df_final = df_final.append(df)  # DataFrame.append was deprecated in pandas 1.4 and removed in 2.0
end_time = time.time()
print('Execution time = %.6f seconds' % (end_time - start_time))
Execution time = 11.085888 seconds
Measurement for inserting data with loc:
start_time = time.time()
df = pd.DataFrame(columns=list('A' * 30))
for i in range(end_value):
    df.loc[i] = list(np.random.randint(0, 100, size=30))
end_time = time.time()
print('Execution time = %.6f seconds' % (end_time - start_time))
Execution time = 21.029176 seconds
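Applied back to the question's loop, the fastest pattern above means: flatten each offer into a plain dict inside the loop and build the DataFrame exactly once at the end. A sketch under assumed data shapes (`fetch_page` stands in for the real `requests.get(...).json()` call, and the field names are hypothetical):

```python
import pandas as pd

def fetch_page(start_id):
    # Stand-in for the real API call; returns a list of product dicts,
    # each with a nested list of offers.
    return [{"id": start_id + i, "offers": [{"price": float(i)}]} for i in range(3)]

rows = []
for start_id in range(0, 6, 3):  # two pages in this toy example
    for product in fetch_page(start_id):
        base = {k: v for k, v in product.items() if k != "offers"}
        for offer in product["offers"]:
            rows.append({**base, **offer})  # one flat dict per offer

df = pd.DataFrame(rows)  # single construction, no repeated append
print(len(df))  # 6: 2 pages x 3 products x 1 offer each
```

Because `rows` is a plain Python list, appending to it is O(1), and pandas allocates the final DataFrame's columns once instead of copying the whole frame on every `append` call.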