python-uncompyle6 (mirror of https://github.com/rocky/python-uncompyle6.git)
Revise "ingest" docstring
@@ -45,6 +45,24 @@ class Scanner37(Scanner37Base):
    def ingest(
        self, co, classname=None, code_objects={}, show_asm=None
    ) -> Tuple[list, dict]:
        """
        Create "tokens" from the bytecode of a Python code object. Largely these
        are the opcode names, but in some cases the names have been modified to
        make parsing easier.
        Returns a list of uncompyle6 Tokens and a "customize" dictionary.

        Some transformations are made to assist the deparsing grammar:
        - the various types of LOAD_CONST are categorized in terms of what they load
        - COME_FROM instructions are added to assist parsing control structures
        - operands with stack argument counts or flag masks are appended to the opcode name, e.g.:
          * BUILD_LIST, BUILD_SET
          * MAKE_FUNCTION and FUNCTION_CALLS append the number of positional arguments
        - EXTENDED_ARGS instructions are removed

        Also, when we encounter certain tokens, we add them to a set which will cause custom
        grammar rules. Specifically, variable-argument tokens like MAKE_FUNCTION or BUILD_LIST
        cause specific rules for the specific number of arguments they take.
        """
        tokens, customize = Scanner37Base.ingest(self, co, classname, code_objects, show_asm)
        new_tokens = []
        for i, t in enumerate(tokens):
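
The kind of transformation this docstring describes can be illustrated with a small standalone sketch. The code below is not uncompyle6's implementation: it uses only the stdlib dis module, and the VARIABLE_ARG_OPS set and annotate_variable_arg_ops helper are hypothetical names chosen for illustration. It shows one of the ideas above: appending an operand count to a variable-argument opcode name (e.g. BUILD_LIST_3) and recording that specialized name in a "customize" dict so a parser could later add an arity-specific grammar rule.

    # Standalone sketch, NOT uncompyle6's actual ingest(): it only illustrates
    # annotating variable-argument opcodes with their operand counts.
    import dis
    from typing import Dict, List, Tuple

    # Hypothetical set of opcodes whose operand is an argument count we want
    # reflected in the token name.
    VARIABLE_ARG_OPS = {"BUILD_LIST", "BUILD_SET", "BUILD_TUPLE", "BUILD_MAP"}

    def annotate_variable_arg_ops(co) -> Tuple[List[str], Dict[str, int]]:
        """Return (token names, customize dict) for a code object."""
        token_names: List[str] = []
        customize: Dict[str, int] = {}
        for inst in dis.get_instructions(co):
            name = inst.opname
            if name in VARIABLE_ARG_OPS:
                # e.g. BUILD_LIST with operand 3 becomes the token BUILD_LIST_3
                name = "%s_%d" % (name, inst.arg)
                customize[name] = inst.arg
            token_names.append(name)
        return token_names, customize

    if __name__ == "__main__":
        code = compile("x = [a, b, c]", "<example>", "exec")
        names, customize = annotate_variable_arg_ops(code)
        print(names)      # includes something like 'BUILD_LIST_3' on CPython 3.x
        print(customize)  # {'BUILD_LIST_3': 3}

A grammar that consumed such tokens could then generate a rule for exactly that arity (list ::= expr expr expr BUILD_LIST_3), which is the purpose of the "customize" dictionary that the real ingest() returns alongside the token list.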