Filtering log levels
I'm trying to split the log levels into separate files (one per level). I currently have one file defined per level, but with my current configuration records of the higher levels also propagate into the lower-level files.
My logging configuration is:
version: 1

formatters:
  standard:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
  error:
    format: "%(levelname)s <PID %(process)d:%(processName)s> %(name)s.%(funcName)s(): %(message)s"

handlers:
  console:
    class: logging.StreamHandler
    formatter: standard
    level: DEBUG

  debug_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: standard
    level: DEBUG
    filename: logs/debug.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1

  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: standard
    level: INFO
    filename: logs/info.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1

  warning_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: standard
    level: WARNING
    filename: logs/warning.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1

  error_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: error
    level: ERROR
    filename: logs/error.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1

  critical_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: error
    level: CRITICAL
    filename: logs/critical.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1

loggers:
  development:
    handlers: [ console, debug_file_handler ]
    propagate: false
  production:
    handlers: [ info_file_handler, warning_file_handler, error_file_handler, critical_file_handler ]
    propagate: false

root:
  handlers: [ debug_file_handler, info_file_handler, warning_file_handler, error_file_handler, critical_file_handler ]
Then I load the configuration and set up the logger like this:
with open(path_log_config_file, 'r') as config_file:
    config = yaml.safe_load(config_file.read())
    logging.config.dictConfig(config)

logger = logging.getLogger(LOGS_MODE)
logger.setLevel(LOGS_LEVEL)
where LOGS_MODE and LOGS_LEVEL are defined in my project's configuration file:
# Available loggers: development, production
LOGS_MODE = 'production'
# Available levels: CRITICAL = 50, ERROR = 40, WARNING = 30, INFO = 20, DEBUG = 10
LOGS_LEVEL = 20
And when I want to use the logger, I do:
from src.logger import logger
I found these answers where using filters is mentioned: #1 #2, but both of them say to use different handlers and specify the level for each one. With that approach I would have to import different loggers in some cases instead of only one. Is this the only way to achieve it?
Regards.
Update 1:
As I load the logger configuration from a YAML file, I found this answer #3. So I defined the filters in my logger.py file:
with open(path_log_config_file, 'rt') as config_file:
    config = yaml.safe_load(config_file.read())
    logging.config.dictConfig(config)


class InfoFilter(logging.Filter):
    def __init__(self):
        super().__init__()

    def filter(self, record):
        return record.levelno == logging.INFO


class WarningFilter(logging.Filter):
    def __init__(self):
        super().__init__()

    def filter(self, record):
        return record.levelno == logging.WARNING


class ErrorFilter(logging.Filter):
    def __init__(self):
        super().__init__()

    def filter(self, record):
        return record.levelno == logging.ERROR


class CriticalFilter(logging.Filter):
    def __init__(self):
        super().__init__()

    def filter(self, record):
        return record.levelno == logging.CRITICAL


logger = logging.getLogger(LOGS_MODE)
logger.setLevel(LOGS_LEVEL)
And in the YAML file:
filters:
  info_filter:
    (): src.logger.InfoFilter
  warning_filter:
    (): src.logger.WarningFilter
  error_filter:
    (): src.logger.ErrorFilter
  critical_filter:
    (): src.logger.CriticalFilter

handlers:
  console:
    class: logging.StreamHandler
    formatter: standard
    level: DEBUG

  debug_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: standard
    level: DEBUG
    filename: logs/debug.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1

  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: standard
    level: INFO
    filename: logs/info.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
    filters: [ info_filter ]

  warning_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: standard
    level: WARNING
    filename: logs/warning.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
    filters: [ warning_filter ]

  error_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: error
    level: ERROR
    filename: logs/error.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
    filters: [ error_filter ]

  critical_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: error
    level: CRITICAL
    filename: logs/critical.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
    filters: [ critical_filter ]
My problem now is with the filters section: I don't know how to specify the name of each class. In answer #3 he uses __main__. because he runs the script directly, not as a module, and it doesn't say what to do when working with modules.

Reading the User-defined objects doc reference, I've tried to use ext:// as described in the Access to external objects section, but I get the same error as when specifying the hierarchy with src.logger.InfoFilter:
logging.config.dictConfig(config)
File "/usr/lib/python3.8/logging/config.py", line 808, in dictConfig
dictConfigClass(config).configure()
File "/usr/lib/python3.8/logging/config.py", line 553, in configure
raise ValueError('Unable to configure '
ValueError: Unable to configure filter 'info_filter'
python-BaseException
My project tree is (showing only the relevant parts):
.
├── resources
│   ├── log.yaml
│   └── properties.py
├── src
│   ├── main.py
│   └── logger.py
└── ...
I think you misunderstood.
both of them say to use different handlers and specify the level for each one
Correct.
but with this approach I'll have to import different loggers in some cases instead of only one
No, you can add as many handlers as you want to a single logger. That's why the method is called Logger.addHandler, and why each logger object keeps a list of handlers (its .handlers member).
You only need one logger for your 5 handlers.
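For illustration only (this is a minimal sketch, not your configuration; the logger name and file paths are placeholders), here is what a single logger with one level-exclusive handler per file looks like when wired up in plain Python:

import logging

def exact_level(level):
    """Build a filter callable that accepts only records of exactly `level`."""
    def _filter(record):
        return record.levelno == level
    return _filter

logger = logging.getLogger("production")  # one logger...
logger.setLevel(logging.DEBUG)            # ...that passes everything on to its handlers

# ...with one handler per level, all attached to that same logger
# (assumes a logs/ directory already exists)
for level, filename in [(logging.INFO, "logs/info.log"),
                        (logging.WARNING, "logs/warning.log"),
                        (logging.ERROR, "logs/error.log"),
                        (logging.CRITICAL, "logs/critical.log")]:
    handler = logging.FileHandler(filename, mode="w", encoding="utf8")
    handler.setLevel(level)                # lower bound: drops records below `level`
    handler.addFilter(exact_level(level))  # drops records above `level`
                                           # (plain callables are accepted as filters since Python 3.2)
    logger.addHandler(handler)

logger.warning("lands in logs/warning.log only")

Your YAML filters achieve the same thing declaratively, and a single from src.logger import logger keeps working everywhere.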
I'm submitting another answer because your question changed significantly with your Update 1.
Notes on reproducing:
- I recreated your tree structure
- my PYTHONPATH points only to the root directory (the parent of src/ and ressources/)
- I ran the script from the root directory (the current directory)
- I created a logs/ directory at the top level (otherwise I got ValueError: Unable to configure handler 'critical_file_handler': [Errno 2] No such file or directory: 'C:\PycharmProjects\so69336121\logs\critical.log')
The problem you're having is caused by a circular import. When the logger module is imported, it starts by loading the YAML file, which requires instantiating some src.logger.*Filter objects, which cannot be found because the file has not finished initializing yet. I suggest putting the effective code into a function that your main function can call at startup.
Here is what I have:
# file: src/logger.py
import logging.config

import yaml  # by `pip install pyyaml`

path_log_config_file = "ressources/log.yml"
LOGS_LEVEL = logging.ERROR
LOGS_MODE = "production"


def setup_logging():
    with open(path_log_config_file, 'rt') as config_file:
        config = yaml.safe_load(config_file.read())
        logging.config.dictConfig(config)

# ... the rest of the file you provided
# file: src/main.py
from src.logger import setup_logging, logger
setup_logging()
logger.debug("DEBUG")
logger.info("INFO")
logger.warning("WARNING")
logger.error("ERROR")
logger.critical("CRITICAL")
Then I got an error:
ValueError: dictionary doesn't specify a version
which was solved by adding this line at the top of the YAML file:
version: 1
Then I got this error:
ValueError: Unable to configure handler 'console': Unable to set formatter 'standard': 'standard'
because your formatters are not defined (you probably made a copy-paste mistake). Here you go, add this to your YAML file:
formatters:
  standard:
    format: '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
  error:
    format: 'ERROR %(asctime)s [%(levelname)s] %(name)s: %(message)s'
It then ran without errors, but nothing got written to the logs. A quick debugger breakpoint showed that the *Filter.filter methods were never called. I inspected the logger object, and indeed it had no handlers attached. That can also be added in the YAML:
loggers:
  production:
    handlers: [ debug_file_handler, info_file_handler, warning_file_handler, error_file_handler, critical_file_handler ]
    propagate: False
Now it works.
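One side note, separate from the answer above: the four *Filter classes differ only in the level they compare against, so a single parameterized filter could replace them. Below is a minimal sketch under that assumption (LevelOnlyFilter is a hypothetical name); it drives dictConfig from a plain Python dict to stay self-contained, but the same keys work from YAML, since any other key placed alongside '()' is passed to the factory as a keyword argument:

import logging
import logging.config

class LevelOnlyFilter(logging.Filter):
    """Accept only records whose level matches `level` exactly."""
    def __init__(self, level):
        super().__init__()
        # accept a numeric level or a name such as "INFO"
        self.level = getattr(logging, level) if isinstance(level, str) else level

    def filter(self, record):
        return record.levelno == self.level

config = {
    "version": 1,
    "formatters": {
        "standard": {"format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"},
    },
    "filters": {
        # the extra `level` key becomes LevelOnlyFilter(level=...) at configuration time
        "info_only": {"()": LevelOnlyFilter, "level": "INFO"},
        "warning_only": {"()": LevelOnlyFilter, "level": "WARNING"},
    },
    "handlers": {
        # assumes a logs/ directory already exists
        "info_file_handler": {"class": "logging.FileHandler", "formatter": "standard",
                              "level": "INFO", "filename": "logs/info.log",
                              "filters": ["info_only"]},
        "warning_file_handler": {"class": "logging.FileHandler", "formatter": "standard",
                                 "level": "WARNING", "filename": "logs/warning.log",
                                 "filters": ["warning_only"]},
    },
    "loggers": {
        "production": {"handlers": ["info_file_handler", "warning_file_handler"],
                       "level": "DEBUG", "propagate": False},
    },
}

logging.config.dictConfig(config)
logging.getLogger("production").warning("ends up in logs/warning.log only")

In a YAML file the filter entries would keep the string form, e.g. (): src.logger.LevelOnlyFilter plus level: INFO, which avoids defining one class per level.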