Is there a faster way to get multiple keys from a dictionary?

I have a dictionary:

d = {'a':1, 'b':2, 'c':3, 'd':4}

And then I have a list of keys:

l = ['a', 'b', 'z']

The result I want is:

[1, 2, None]

What I'm currently doing is:

[d.get(k) for k in l]

Is there a faster way? Perhaps without a for loop?

You could use:

>>> list(map(d.get, l))
[1, 2, None]

It has two advantages:

  • It does the d.get look-up only once, not once per iteration (a sketch of applying the same trick to the comprehension follows this list)
  • Only CPython: because dict.get is implemented in C and map is implemented in C, it can avoid the Python layer in the function call (roughly speaking, the details are a bit more complicated).
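
For comparison, the first advantage can also be bolted onto the list comprehension by binding the bound method to a local name once. This is just a minimal sketch (dget is an arbitrary name), but it avoids the repeated attribute look-up:

d = {'a':1, 'b':2, 'c':3, 'd':4}
l = ['a', 'b', 'z']

# Look up the bound method once instead of once per iteration.
dget = d.get
result = [dget(k) for k in l]
print(result)  # [1, 2, None]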

As for the timings (performed with Python 3.6 in a Jupyter notebook):

d = {'a':1, 'b':2, 'c':3, 'd':4}
l = ['a', 'b', 'z']

%timeit list(map(d.get, l))
594 ns ± 41.7 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

%timeit [d.get(k) for k in l]
508 ns ± 17.1 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

Note that in this case it's actually slower! That's because for short iterables the map and list overhead dominate. So if you want to be faster on short iterables, stick with your approach.

The longer l gets, the more list(map(...)) pulls ahead:

import random
import string

d = {'a':1, 'b':2, 'c':3, 'd':4}
l = [random.choice(string.ascii_lowercase) for _ in range(10000)]

%timeit list(map(d.get, l))
663 µs ± 64.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit [d.get(k) for k in l]
1.13 ms ± 7.55 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

However, even that is still "just" a factor of 2 faster.
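
If you want to find the break-even point for your own data, a sketch along these lines (the list lengths and iteration counts are arbitrary choices) times both approaches across sizes:

import random
import string
from timeit import timeit

d = {'a':1, 'b':2, 'c':3, 'd':4}

for n in (3, 10, 100, 1000, 10000):
    l = [random.choice(string.ascii_lowercase) for _ in range(n)]
    # Time 1000 runs of each approach on the same key list.
    t_map = timeit(lambda: list(map(d.get, l)), number=1000)
    t_comp = timeit(lambda: [d.get(k) for k in l], number=1000)
    print(f'n={n:>6}  map: {t_map:.6f}s  comprehension: {t_comp:.6f}s')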

An important thing to note is that the bottleneck here isn't the size of the dictionary but the size of the list of keys (and, of course, the method look-up time, the time spent hashing the keys being looked up, and so on).

Consider the following:

from timeit import Timer

d = {1: 'a'}
keys = [1 for _ in range(1000)]

def _map():
    return list(map(d.get, keys))

def _map_single_lookup():
    g = d.get
    return list(map(g, keys))

def _list_comp():
    return [d.get(key) for key in keys]

def _list_comp_single_lookup():
    g = d.get
    return [g(key) for key in keys]

print(min(Timer(_map).repeat(100, 100)))
print(min(Timer(_map_single_lookup).repeat(100, 100)))
print(min(Timer(_list_comp).repeat(100, 100)))
print(min(Timer(_list_comp_single_lookup).repeat(100, 100)))

#  0.009307396466818774
#  0.009261678214412816
#  0.018456645101335933
#  0.011634828724497837

A significantly faster alternative is to use itemgetter together with a defaultdict (because itemgetter doesn't support a default value the way dict.get does, in case a key doesn't exist):

from collections import defaultdict
from timeit import Timer
from operator import itemgetter

d = defaultdict(lambda: None)
d[1] = 'a'
keys = [1 for _ in range(1000)]

def _map():
    return list(map(d.get, keys))

def _getter():
    return list(itemgetter(*keys)(d))

print(min(Timer(_map).repeat(100, 100)))
print(min(Timer(_getter).repeat(100, 100)))

# 0.0074976040767260055
# 0.0021861597102568187
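
One caveat with this trick: itemgetter(*keys) returns a bare value rather than a 1-tuple when keys contains exactly one element, so wrapping the result in list(...) breaks for that edge case. A minimal guard (the helper name get_many is made up for illustration) could look like this:

from operator import itemgetter

def get_many(mapping, keys):
    # itemgetter(*keys) returns a single value for one key
    # and a tuple for two or more, so special-case short inputs.
    if not keys:
        return []
    if len(keys) == 1:
        return [mapping[keys[0]]]
    return list(itemgetter(*keys)(mapping))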

Edit: Added timings for non-existing keys (both int and string). No significant impact on performance.

from collections import defaultdict
from timeit import Timer
from operator import itemgetter

d = defaultdict(lambda: None)
d[1] = 'a'
non_existing_keys_int = [2 for _ in range(1000)]
non_existing_keys_string = ['a' for _ in range(1000)]

def get_non_existing_keys_int():
    return list(itemgetter(*non_existing_keys_int)(d))

def get_non_existing_keys_string():
    return list(itemgetter(*non_existing_keys_string)(d))

print(min(Timer(get_non_existing_keys_int).repeat(100, 100)))
print(min(Timer(get_non_existing_keys_string).repeat(100, 100)))

#  0.002647169132724954
#  0.002408539023506795
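
Also note a side effect of the defaultdict approach: looking up a missing key through __getitem__ (which is what itemgetter does under the hood) inserts that key into the dictionary, so the benchmarks above actually grow d. A small demonstration:

from collections import defaultdict
from operator import itemgetter

d = defaultdict(lambda: None)
d[1] = 'a'

itemgetter(2, 3)(d)  # missing keys go through __missing__ ...
print(dict(d))       # ... and are inserted: {1: 'a', 2: None, 3: None}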

getpy is a library that allows fast parallel hashmap operations, based on a C++ lib by what better name than parallel-hashmap? Here is an ipython session showing that fetching 500 values with a single parallel read is more than 200 times faster than looping!

In [1]: import numpy as np
   ...: import getpy as gp

In [2]: key_type = np.dtype('u8')
   ...: value_type = np.dtype('u8')

In [3]: keys = np.random.randint(1, 1000, size=10**2, dtype=key_type)
   ...: values = np.random.randint(1, 1000, size=10**2, dtype=value_type)
   ...: 
   ...: gp_dict = gp.Dict(key_type, value_type, default_value=42)
   ...: gp_dict[keys] = values
   ...: 
   ...: random_keys = np.random.randint(1, 1000, size=500, dtype=key_type)

In [4]: %timeit random_values = gp_dict[random_keys]
2.19 µs ± 11.6 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)

In [7]: %timeit [gp_dict[k] for k in random_keys]
491 µs ± 3.51 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)