What are the last four tokens of the Python 3.5 grammar?
https://docs.python.org/3.5/library/token.html
token.OP
is a generalization of the operator tokens. This is also mentioned in the tokenize module documentation:

To simplify token stream handling, all operator and delimiter tokens are returned using the generic token.OP token type. The exact type can be determined by checking the exact_type property on the named tuple returned from tokenize.tokenize().
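As a quick illustration using the standard tokenize and token modules, the exact_type of an OP token can be inspected like this:

```python
import io
import token
import tokenize

# Tokenize a tiny expression. Operators come back with the generic
# type token.OP; the precise kind lives in the exact_type attribute.
source = b"a + b"
toks = list(tokenize.tokenize(io.BytesIO(source).readline))

for t in toks:
    if t.type == token.OP:
        # The "+" token has type OP but exact_type PLUS.
        print(t.string, tokenize.tok_name[t.type], tokenize.tok_name[t.exact_type])
        # prints: + OP PLUS
```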
token.ERRORTOKEN
is used to mark errors during tokenization. It is mainly used for syntax errors that abort the parsing process. This is also mentioned in the tokenize documentation:
Note that unclosed single-quoted strings do not cause an error to be raised. They are tokenized as ERRORTOKEN, followed by the tokenization of their contents.
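A sketch of that behavior is below. Note that this is version-dependent: since Python 3.12 the tokenize module was re-based on the C tokenizer and raises an error for unterminated strings instead, so the example handles both cases:

```python
import io
import tokenize

# An unclosed single-quoted string. Historically this yields an
# ERRORTOKEN for the stray quote, followed by the tokens of the
# contents; Python 3.12+ raises an error instead.
source = b"'abc"
try:
    names = [tokenize.tok_name[t.exact_type]
             for t in tokenize.tokenize(io.BytesIO(source).readline)]
except (tokenize.TokenError, SyntaxError):
    names = ["error raised"]
print(names)
```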
token.N_TOKENS
is simply the number of tokens defined. It is used in the parser to iterate over the list of tokens.
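You can see this at the Python level: the token module's tok_name dictionary maps each numeric token code back to its name, and the codes of the tokens defined by the grammar all lie below N_TOKENS:

```python
import token

# tok_name maps numeric token codes to their names; filtering on
# N_TOKENS skips the special N_TOKENS/NT_OFFSET entries themselves.
print(token.N_TOKENS)
for code in sorted(token.tok_name):
    if code < token.N_TOKENS:
        print(code, token.tok_name[code])
```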
token.NT_OFFSET
is used in token.h as follows:
/* Special definitions for cooperation with parser */
#define NT_OFFSET 256
#define ISTERMINAL(x) ((x) < NT_OFFSET)
#define ISNONTERMINAL(x) ((x) >= NT_OFFSET)
It basically separates terminal and non-terminal tokens.
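The same split is exposed at the Python level: the token module ships ISTERMINAL and ISNONTERMINAL helpers that mirror these C macros:

```python
import token

# Terminal token codes sit below NT_OFFSET (256); codes at or above
# it are reserved for the parser's non-terminal grammar symbols.
print(token.NT_OFFSET)                        # 256
print(token.ISTERMINAL(token.OP))             # True: OP is a terminal
print(token.ISNONTERMINAL(token.NT_OFFSET))   # True: first non-terminal code
```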