
My multi-threading code with a Django connection pool doesn't show any improvement

I am struggling with multi-threading and connection pooling in Django.

I know Python threads are limited by the GIL, but I thought threads should still be enough to improve performance when most of the work is DB I/O.
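
To illustrate that reasoning, here is a minimal sketch that is not the real test: the DB round-trip is replaced by a hypothetical time.sleep() stand-in, so it only shows that threads can overlap I/O-style waits despite the GIL.

import time
from multiprocessing import pool

def fake_db_insert(i):
    # Stand-in for a blocking DB round-trip; the GIL is released while sleeping.
    time.sleep(0.1)
    return i

def run(num_threads, num_tasks=100):
    tPool = pool.ThreadPool(processes=num_threads)
    start = time.time()
    for i in range(num_tasks):
        tPool.apply_async(func=fake_db_insert, args=(i,))
    tPool.close()
    tPool.join()
    return time.time() - start

if __name__ == "__main__":
    # With purely I/O-bound tasks, 10 threads should finish close to 10x faster than 1.
    print("1 thread  : %.1f sec" % run(1))
    print("10 threads: %.1f sec" % run(10))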

First I wrote a small piece of code to test this idea.

To explain briefly, the code dispatches the work with threadPool.apply_async() and relies on the persistent database connections configured through CONN_MAX_AGE in settings.py.

I ran the code repeatedly while varying the number of worker threads.

from multiprocessing            import pool
from threadPoolTestWithDB_IO    import models
from django.db                  import transaction
import django
import datetime
import logging
import g2sType


def addEgm(pre, id_):
    """
    @summary: This function only inserts a bundle of records tied by a foreign key 
    """
    try:
        with transaction.atomic():

            egmId = pre + "_" + str(id_)
            egm = models.G2sEgm(egmId=egmId, egmLocation="localhost")
            egm.save()

            device = models.Device(egm=egm,
                          deviceId=1,
                          deviceClass=g2sType.t_deviceClass.G2S_eventHandler,
                          deviceActive=True)
            device.save()

            models.EventHandlerProfile(device=device, queueBehavior="a").save()
            models.EventHandlerStatus(device=device).save()

            for i2 in range(1, 200):
                models.EventReportData(device=device,
                                       deviceClass=g2sType.t_deviceClass.G2S_communications,
                                       deviceId=1,
                                       eventCode="TEST",
                                       eventText="",
                                       eventId=i2,
                                       transactionId=0
                                       ).save()

            print "Done %d" % id_


    except Exception as e:
        logging.root.exception(e)


if __name__ == "__main__":

    django.setup()
    logging.basicConfig()

    print "Start test"

    tPool = pool.ThreadPool(processes=1)    # Set the number of worker threads

    s = datetime.datetime.now()
    for i in range(100):                    #Set the number of record bundles
        tPool.apply_async(func=addEgm, args=("a", i))

    print "Wait worker processes"
    tPool.close()                           
    tPool.join()

    e = datetime.datetime.now()
    print "End test"

    print "Time Measurement : %s" % (e-s,)

    models.G2sEgm.objects.all().delete()    # remove all records inserted during the test
--------------------------
# settings.py


DATABASES = {
             'default': {
                         'ENGINE': 'django.db.backends.oracle',
                         'NAME': 'orcl',
                         'USER': 'test',
                         'PASSWORD': '1123',
                         'HOST': '192.168.0.90',
                         'PORT': '1521',
                         'CONN_MAX_AGE': 100,
                         'OPTIONS': {'threaded': True}
                         }
             }
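
For reference, a quick way to check that CONN_MAX_AGE really gives each worker thread its own persistent connection (rather than funnelling everything through a single connection) is a small diagnostic like the one below. It is only a sketch assuming the same project settings; show_connection is a hypothetical helper, not part of the test above.

import threading
import django
from multiprocessing import pool
from django.db import connection

def show_connection(i):
    # Force Django to open (or reuse) this thread's persistent connection,
    # then report which underlying DB-API connection object the thread holds.
    connection.ensure_connection()
    print("task %d, thread %s, connection id %s"
          % (i, threading.current_thread().name, id(connection.connection)))

if __name__ == "__main__":
    django.setup()
    tPool = pool.ThreadPool(processes=10)
    for i in range(20):
        tPool.apply_async(func=show_connection, args=(i,))
    tPool.close()
    tPool.join()

If the printed connection ids differ per thread, the connections themselves are not what serializes the work.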

However, the results show hardly any difference between a single worker thread and multiple worker threads.

For example, 10 threads take 30.6 sec and 1 thread takes 30.4 sec.

Where did I go wrong?

Either you have a problem at the database level. You can check for blocking sessions by running this query:

select /*+ rule */
    s1.username || '@' || s1.machine
    || ' ( SID=' || s1.sid || ' ' || s1.program || ' ) is blocking '
    || s2.username || '@' || s2.machine
    || ' ( SID=' || s2.sid || ' ' || s2.program || ' ) ' AS blocking_status
    from v$lock l1, v$session s1, v$lock l2, v$session s2
    where s1.sid = l1.sid and s2.sid = l2.sid
    and l1.BLOCK = 1 and l2.request > 0
    and l1.id1 = l2.id1
    and l1.id2 = l2.id2;

Or a thread is blocked inside Python (possibly at the database driver level). Attach gdb to the Python process and run thread apply all bt.

Then you will see what is going on.
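
As an in-process alternative to attaching gdb, Python can also dump every thread's stack itself; a minimal sketch using the standard sys._current_frames() and traceback modules:

import sys
import threading
import traceback

def dump_all_thread_stacks():
    # Map thread ids to readable names, then print each thread's current stack.
    names = {t.ident: t.name for t in threading.enumerate()}
    for thread_id, frame in sys._current_frames().items():
        print("Thread %s (id %s)" % (names.get(thread_id, "?"), thread_id))
        traceback.print_stack(frame)
        print("")

If most worker threads are sitting in the database driver's execute call, the bottleneck is in the database or the driver rather than in your own code.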