Do coroutines require locks when reading/writing a shared resource?

For example:

import asyncio

shared = {}

async def coro1():
    ...  # do r/w stuff with shared

async def coro2():
    ...  # do r/w stuff with shared

async def main():
    task1 = asyncio.create_task(coro1())
    task2 = asyncio.create_task(coro2())
    await task1
    await task2

asyncio.run(main())

If coro1 and coro2 both access a dictionary/variable, reading and writing it concurrently, do I need some kind of mutex/lock? Or is that unnecessary because everything in asyncio happens on a single thread?

Yes, you still need a lock. Concurrent modification doesn't become safe just because it happens through coroutines rather than threads.

asyncio has its own dedicated asyncio.Lock, along with its own versions of the other synchronization primitives, because a thread-aware lock won't protect coroutines from one another, and waiting for the lock needs to happen through the event loop rather than by blocking the thread.

shared = {}
lock = asyncio.Lock()

async def coro1():
    ...
    async with lock:
        await do_stuff_with(shared)
    ...

async def coro2():
    ...
    async with lock:
        await do_stuff_with(shared)
    ...
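Here's a minimal runnable sketch of the same pattern (the names `incrementer` and the counter dict are invented for illustration): each update reads the value, yields at an await point, then writes it back, so without the lock the two coroutines would overwrite each other's increments; with the lock the read-modify-write stays atomic.

```python
import asyncio

shared = {"count": 0}

async def incrementer(lock):
    for _ in range(100):
        async with lock:
            value = shared["count"]
            await asyncio.sleep(0)       # yield point inside the critical section
            shared["count"] = value + 1  # without the lock, this write could be lost

async def main():
    lock = asyncio.Lock()                # created inside the running event loop
    await asyncio.gather(incrementer(lock), incrementer(lock))
    return shared["count"]

print(asyncio.run(main()))               # prints 200: no increments lost
```

If you delete the `async with lock:` line, both coroutines interleave at the `await` and read stale values, so increments get lost.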

That said, because coroutines are based on cooperative multitasking rather than preemptive, you can sometimes guarantee that a lock isn't needed in situations where threads would need one. For example, if no coroutine has any point within the critical section where it can yield control, you don't need a lock.

For example, this needs a lock:

async def coro1():
    async with lock:
        for key in shared:
            shared[key] = await do_something_that_could_yield(shared[key])

async def coro2():
    async with lock:
        for key in shared:
            shared[key] = await do_something_that_could_yield(shared[key])

while this technically does not:

async def coro1():
    for key in shared:
        shared[key] = do_something_that_cant_yield(shared[key])

async def coro2():
    for key in shared:
        shared[key] = do_something_that_cant_yield(shared[key])
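The no-yield case can be sketched as a runnable example (the `doubler` name is invented): the loop body contains no await, so the event loop cannot switch coroutines mid-loop, and each coroutine sees and leaves the dict in a consistent state without any lock.

```python
import asyncio

shared = {"a": 1, "b": 2}

async def doubler():
    for key in shared:
        shared[key] = shared[key] * 2  # no await: cannot be interrupted

async def main():
    await asyncio.gather(doubler(), doubler())
    return dict(shared)

print(asyncio.run(main()))             # prints {'a': 4, 'b': 8}: each doubler ran atomically
```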

But leaving out the lock risks introducing bugs as the code changes, especially because the following does need the lock, in both coroutines:

async def coro1():
    async with lock:
        for key in shared:
            shared[key] = await do_something_that_could_yield(shared[key])

async def coro2():
    async with lock:
        for key in shared:
            shared[key] = do_something_that_cant_yield(shared[key])

如果两个协程都没有锁,coro2 可能会中断 coro1coro1 需要独占访问共享资源。