Compare commits

...

111 Commits

Author SHA1 Message Date
rocky
0b24eca8d7 Merge branch 'master' into python-2.4 2018-04-08 05:39:28 -04:00
rocky
ab414d3d9c Get ready for release 3.1.2 2018-04-08 05:34:25 -04:00
rocky
3116ac8323 Merge branch 'master' into python-2.4 2018-04-08 05:27:16 -04:00
rocky
ede6eabc40 Slightly improve Python 3.x handling of subclasses...
which are created via a call to create a subclass.

Should be more general though.
2018-04-08 05:22:35 -04:00
rocky
61e2b3b635 Can run on 3.1. Fix some 3.1 function-call bugs 2018-04-08 04:11:01 -04:00
rocky
23fb07b1c9 Update test 2018-04-07 07:21:22 -04:00
rocky
1bbb72a6ce Handle class with one kwarg subclass 2018-04-07 07:13:49 -04:00
rocky
17361a9baa Administrivia 2018-04-06 22:11:30 -04:00
rocky
68821efdb0 Improve 3.5+ BUILD_MAP_UNPACK...
And add build_tuple_unpack runtime test from a previous commit.

We are far from out of the woods, as there is more to do and
we've uncovered more bugs in handling this.
2018-04-06 21:34:31 -04:00
rocky
e9ee671874 Testing administrivia 2018-04-06 19:06:11 -04:00
rocky
9593043432 Add more stdlib run test coverage 2018-04-06 14:23:56 -04:00
rocky
1c95eb7b4e Make sure we call 'expr' to set precedence right 2018-04-06 14:04:58 -04:00
rocky
ff9ae4e792 Better handling of BUILD_TUPLE_UNPACK 2018-04-06 11:35:41 -04:00
rocky
d9eb5c5b09 Start folding in 3.5 vararg ops as varargs ops 2018-04-05 23:02:45 -04:00
rocky
e7b7de8842 Bump xdis version 2018-04-04 23:55:01 -04:00
rocky
3f26589bf1 More testing 2018-04-04 22:43:19 -04:00
rocky
30ce3a8bea Small tweaks 2018-04-04 22:36:26 -04:00
rocky
341e17f62c Split out Python 3 semantic-action customization...
And remove duplicate customization code in pysource.
2018-04-04 21:54:09 -04:00
rocky
b561b0090c Increase testing 2018-04-04 20:32:54 -04:00
rocky
ca41ea99f2 Fix 3.2 to 3.3 make_function more properly 2018-04-04 14:30:34 -04:00
rocky
e3040c78a9 3.2-3.4 Function calls/definitions yet again
And we're still not out of the woods.
2018-04-03 21:27:31 -04:00
rocky
a556e96c22 Merge branch 'master' of github.com:rocky/python-uncompyle6 2018-04-03 19:44:09 -04:00
rocky
e9c0d03b8b 3.2 mk_func tweak...
...more is needed though
2018-04-03 17:57:37 -04:00
rocky
155fd06372 More administrivia 2018-04-03 11:08:22 -04:00
rocky
acff1b6ee0 Administrivia
Adjust requirements-dev for 2.6.9
2018-04-03 11:00:16 -04:00
rocky
19bb16270d Merge conflicts 2018-04-03 10:56:27 -04:00
rocky
35c41f8065 Merge branch 'master' into python-2.4 2018-04-03 10:55:51 -04:00
rocky
1cd2d1e915 DRY scanner code more...
Expand 2.6 testing
2018-04-03 10:35:02 -04:00
rocky
e2dec73a62 3.5 CALL_FUNCTION_VAR bug 2018-04-03 05:56:45 -04:00
rocky
fad43feb3d DRY instruction building code...
There is a little more that could be done with  self.offset2inst_index
2018-04-03 04:41:36 -04:00
rocky
96d8daeae9 More pyenv testing 2018-04-01 21:19:55 -04:00
rocky
8f6a1cb10b Add 3.2 to list of supported distributions 2018-04-01 16:54:10 -04:00
rocky
9d36e7742e Merge branch 'master' into python-2.4 2018-04-01 15:17:37 -04:00
rocky
fc98bc972e Update NEWS 2018-04-01 14:53:45 -04:00
rocky
007ba4a8f3 Get ready for release 3.1.1 2018-04-01 14:17:50 -04:00
rocky
75f3624f31 Merge branch 'master' into python-2.4 2018-04-01 13:48:16 -04:00
rocky
6b78677a74 Work on 3.5+ BUILD_MAP_UNPACK...
bugs still remain, just reduced.
2018-04-01 13:41:16 -04:00
rocky
ab1dba1536 Handle 3.5+ BUILD_MAP_UNPACK used in dictionaries
A number of weaknesses have been uncovered though
2018-04-01 12:56:58 -04:00
rocky
254d0519bb More 3.6 CALL_FUNCTION argument parsing 2018-04-01 11:26:46 -04:00
rocky
120412f5a8 Add Python 3.6 setcomp and another call bug 2018-04-01 07:09:24 -04:00
rocky
b54be24e14 3.6 argument parsing 2018-03-31 23:07:06 -04:00
rocky
535df1592e Another 3.6 control-flow bug...
and add source to some previous bytecode tests
2018-03-31 19:28:35 -04:00
rocky
64ffa5f6ab Add semantic action rule for except_return 2018-03-29 22:50:25 -04:00
rocky
9be4908c9c Python 3.6 MAKE_FUNCTION yet again...
And we'll eventually have to do more down the line
2018-03-29 22:04:46 -04:00
rocky
f18ce71e91 Replace all_instrs with inst_matches...
which works on 3.6+. Still should write a pytest for this.
2018-03-29 21:23:26 -04:00
rocky
362a353e03 Merge branch 'master' of github.com:rocky/python-uncompyle6 2018-03-29 17:21:51 -04:00
rocky
7d110f17bc 3.6 decompilation problems 2018-03-29 17:21:22 -04:00
R. Bernstein
04f4f3c25f Merge pull request #166 from rocky/grammer-reduce
Some 3.x grammar reduction...
2018-03-29 11:59:37 -04:00
rocky
1d5f4b0a05 Some 3.x grammar reduction...
Add 3.2 to grammar testing
2018-03-28 21:19:27 -04:00
rocky
dc3e6b31ca Limit coverage on 3.6 for now 2018-03-28 13:47:30 -04:00
rocky
94a81a36b7 3.5, 3.6 loop if/continue handling 2018-03-28 09:23:34 -04:00
rocky
e568d68baa Reinstate a test 2018-03-28 07:52:04 -04:00
R. Bernstein
8c22d57979 Merge pull request #165 from rocky/grammar-cleanup2
Grammar reduction for 2.6/2.7,3.x
2018-03-27 20:11:29 -04:00
rocky
bf0f5715a3 Adjust grammar-checking test 2018-03-27 19:47:08 -04:00
rocky
d90c44b454 3.5+ handle then before "if" jump going to loop 2018-03-27 19:24:29 -04:00
rocky
9d807501af Grammar reduction for 2.6/2.7,3.x 2018-03-27 17:02:03 -04:00
rocky
aa4416571b grammar-cover run-and-email fixup 2018-03-27 14:21:28 -04:00
rocky
d38395334c grammar-cover administrivia 2018-03-27 04:33:01 -04:00
rocky
2e78c007ee Merge branch 'master' into python-2.4 2018-03-27 04:10:47 -04:00
rocky
516c7a0e9a Python 3.6 CALL_FUNCTION_EX fixes 2018-03-27 04:10:11 -04:00
rocky
681588f12d 3.5 CALL_FUNCTION_EX 2018-03-26 20:56:17 -04:00
rocky
f5a10ed5d0 Merge branch 'master' into python-2.4 2018-03-26 19:41:20 -04:00
rocky
3500c49daf More Python 3.4 CALL_FUNCTION_VAR 2018-03-26 19:40:33 -04:00
rocky
3d218c84b0 LOAD assert needs to be on 3.x...
Expand testing
2018-03-26 18:11:57 -04:00
rocky
de75849ae3 Merge branch 'master' into python-2.4 2018-03-26 14:52:01 -04:00
rocky
1afe1fd943 Merge branch 'master' of github.com:rocky/python-uncompyle6 2018-03-26 14:50:54 -04:00
rocky
c5f8bbf32d Remove hacky 3.x offset address arithmetic 2018-03-26 14:50:17 -04:00
rocky
6b36d14859 Limit grammar coverage of 3.5.5 for now 2018-03-26 13:33:22 -04:00
rocky
30d6dcdd69 Merge branch 'master' into python-2.4 2018-03-26 12:56:54 -04:00
rocky
ccbe8a8e2b cover all of 2.6.9 2018-03-26 12:55:43 -04:00
rocky
46c02bd352 There is no 2.8 2018-03-26 12:38:48 -04:00
rocky
30ba043000 Grammar coverage hacking 2018-03-26 11:12:56 -04:00
rocky
1f0e5f27d5 DRY grammar code 2018-03-26 11:08:27 -04:00
rocky
75245ba38c Back off full 2.7.14 testing for now 2018-03-26 09:47:18 -04:00
rocky
4889916304 Grammar coverage work 2018-03-26 09:26:24 -04:00
rocky
d1806edaad Administrivia: grammar coverage 2018-03-26 09:23:24 -04:00
rocky
c968e31be8 Administrivia: grammar coverage 2018-03-26 09:16:15 -04:00
rocky
c8870c6ed8 Grammar coverage hacking 2018-03-26 08:41:40 -04:00
rocky
23180806b4 More grammar coverage hacking 2018-03-26 08:19:03 -04:00
rocky
c48345a5c0 More grammar coverage work 2018-03-26 08:14:15 -04:00
rocky
a1cdc5e40c Grammar testing 2018-03-26 08:13:17 -04:00
rocky
661bfd4e52 Merge branch 'master' into python-2.4 2018-03-26 08:04:32 -04:00
rocky
1f835d6237 Start grammar coverage testing 2018-03-26 08:03:54 -04:00
rocky
3d072e29a6 3.5 CALL_FUNCTION_VAR runnable test 2018-03-26 07:45:34 -04:00
rocky
74f01fbe33 Python 3.5 CALL_FUNCTION_VAR handling 2018-03-26 07:42:15 -04:00
rocky
710c950965 Bang on 3.4 CALL_FUNCTION_VAR 2018-03-26 00:19:39 -04:00
rocky
7aa6ff1d9b 3.6.4 runtests.sh futzing 2018-03-25 22:48:33 -04:00
rocky
421c358f9d Put 3.5.5 back into testing 2018-03-25 22:30:36 -04:00
rocky
cfb4ad625f 3.5 *() arg without further args 2018-03-25 22:24:32 -04:00
R. Bernstein
0b622a0ad8 Merge pull request #163 from rocky/grammar-cleanup
Grammar cleanup
2018-03-25 21:30:50 -04:00
rocky
8b7d5d3270 Merge branch 'master' into grammar-cleanup 2018-03-25 20:57:51 -04:00
rocky
5c7fdf6e8f Adjust stdlib tests 2018-03-25 20:55:04 -04:00
rocky
d2c8e4e12c Adjust test_grammar for recent changes 2018-03-25 20:52:28 -04:00
rocky
626f690a5a More grammar specialization by instruction 2018-03-25 20:38:21 -04:00
rocky
6ac48bb0e1 Merge branch 'master' into python-2.4 2018-03-25 17:57:26 -04:00
rocky
39cef6a41b More raise vs. assert hacky distinctions 2018-03-25 17:56:35 -04:00
rocky
a18b4b1505 Merge branch 'master' into python-2.4 2018-03-25 17:37:04 -04:00
rocky
116fbb33e0 Merge branch 'master' of github.com:rocky/python-uncompyle6 2018-03-25 17:36:46 -04:00
rocky
631940887f Additional Python 2.x assert vs raise testing 2018-03-25 17:35:18 -04:00
rocky
47beff57b2 Increase testing 2018-03-25 14:56:48 -04:00
rocky
7fb94176b1 Less ambiguous 2.x grammar rule for BUILD_MAP 2018-03-25 12:09:42 -04:00
rocky
b2c832e19f Merge branch 'master' into python-2.4 2018-03-24 10:55:43 -04:00
rocky
2ae9cd7d08 bang on CALL_FUNCTION_EX_KW 2018-03-24 10:52:55 -04:00
rocky
1f663013ab 3.5 CALL_FUNCTION_VAR semantic handling 2018-03-24 10:24:16 -04:00
rocky
e3c7afb94d Towards handling 3.x' CALL_FUNCTION_VAR correctly 2018-03-24 08:26:45 -04:00
rocky
0d327ab0ce Fix bug introduced in last commit 2018-03-24 06:29:35 -04:00
rocky
35a60e0274 Fix parser slowness when decompiling 3.x locale.py...
And remove a grammar inefficiency in adding extraneous kwargs in <= 3.2:
kwargs was nullable, so it might not have appeared; it wasn't wrong, just inefficient.
2018-03-23 11:59:04 -04:00
rocky
1b2b45642b 3.6 try except-as bug 2018-03-22 23:54:12 -04:00
rocky
28bfb453f5 Localize call_kw precedence to 3.6 2018-03-22 14:21:36 -04:00
rocky
df55ce3212 Isolate some 3.x dictcomp grammar rules 2018-03-22 13:34:39 -04:00
rocky
fcb4409e50 Omit 2.7 test_generators.py 2018-03-21 21:01:01 -04:00
95 changed files with 1821 additions and 1359 deletions

1
.gitignore vendored
View File

@@ -21,3 +21,4 @@
ChangeLog
__pycache__
build
nohup.out

View File

@@ -27,19 +27,20 @@ check:
check-short: pytest
$(MAKE) -C test check-short
# Note for 2.6 use <=3.0.1 see requirements-dev.txt
#: Tests for Python 2.7, 3.3 and 3.4
check-2.7 check-3.3 check-3.4: pytest
check-2.6 check-2.7 check-3.3 check-3.4 check-3.5: pytest
$(MAKE) -C test $@
#: Tests for Python 3.2 and 3.5 - pytest doesn't work here
# Or rather 3.5 doesn't work not on Travis
check-3.0 check-3.1 check-3.2 check-3.5 check-3.6:
check-3.0 check-3.1 check-3.2 check-3.6:
$(MAKE) -C test $@
check-3.7: pytest
#:Tests for Python 2.6 (doesn't have pytest)
check-2.4 check-2.5 check-2.6:
#:Tests for Python 2.4-2.5 (don't have pytest)
check-2.4 check-2.5:
$(MAKE) -C test $@
#:PyPy 2.6.1 PyPy 5.0.1, or PyPy 5.8.0-beta0

21
NEWS
View File

@@ -1,3 +1,24 @@
uncompyle6 3.1.2 2018-04-08 Eastern Orthodox Easter
- Python 3.x subclass and call parsing fixes
- Allow/note running on Python 3.1
- improve 3.5+ BUILD_MAP_UNPACK
- DRY instruction building code between 2.x and 3.x
- expand testing
uncompyle6 3.1.1 2018-04-01 Easter April Fool's
Jesus on Friday's New York Times puzzle: "I'm stuck on 2A"
- fill out 3.5+ BUILD_MAP_UNPACK (more work is needed)
- fill out 3.4+ CALL_FUNCTION_... (more work is needed)
- fill out 3.5 MAKE_FUNCTION (more work is needed)
- reduce 3.5, 3.6 control-flow bugs
- reduce ambiguity in rules that lead to long (exponential?) parses
- limit/isolate some 2.6/2.7,3.x grammar rules
- more runtime testing of decompiled code
- more removal of parenthesis around calls via setting precidence
uncompyle6 3.1.0 2018-03-21 Equinox
- Add code_deparse_with_offset() fragment function.
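
For context, and not part of this diff: the BUILD_MAP_UNPACK work mentioned in these NEWS entries concerns PEP 448 dictionary unpacking such as {**a, **b}. A minimal, illustrative way to see the opcode the decompiler has to reconstruct, assuming a CPython 3.5-3.8 interpreter:

import dis
# On CPython 3.5-3.8 a dict display with ** unpacking typically compiles
# to BUILD_MAP_UNPACK; 3.9+ uses DICT_UPDATE/DICT_MERGE instead.
dis.dis(compile("{**a, **b}", "<example>", "eval"))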

View File

@@ -35,6 +35,8 @@ classifiers = ['Development Status :: 5 - Production/Stable',
'Programming Language :: Python :: 2.5',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3.1',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',

View File

@@ -5,4 +5,4 @@ if [[ $0 == ${BASH_SOURCE[0]} ]] ; then
echo "This script should be *sourced* rather than run directly through bash"
exit 1
fi
export PYVERSIONS='3.5.5 3.6.4 2.6.9 3.3.7 2.7.14 3.4.8'
export PYVERSIONS='3.5.5 3.6.4 2.6.9 3.3.7 2.7.14 3.2.6 3.1.5 3.4.8'

View File

@@ -1,8 +1,6 @@
#!/usr/bin/env python
from uncompyle6 import PYTHON_VERSION, IS_PYPY
from uncompyle6.scanner import get_scanner
from xdis.bytecode import Bytecode
from array import array
def bug(state, slotstate):
if state:
if slotstate is not None:
@@ -25,14 +23,7 @@ def test_if_in_for():
code = bug.func_code
scan = get_scanner(PYTHON_VERSION)
if 2.7 <= PYTHON_VERSION <= 3.0 and not IS_PYPY:
n = scan.setup_code(code)
bytecode = Bytecode(code, scan.opc)
scan.build_lines_data(code, n)
scan.insts = list(bytecode)
scan.offset2inst_index = {}
for i, inst in enumerate(scan.insts):
scan.offset2inst_index[inst.offset] = i
scan.build_prev_op(n)
scan.build_instructions(code)
fjt = scan.find_jump_targets(False)
## FIXME: the data below is wrong.
@@ -47,14 +38,7 @@ def test_if_in_for():
# {'start': 62, 'end': 63, 'type': 'for-else'}]
code = bug_loop.__code__
n = scan.setup_code(code)
bytecode = Bytecode(code, scan.opc)
scan.build_lines_data(code, n)
scan.insts = list(bytecode)
scan.build_prev_op(n)
scan.offset2inst_index = {}
for i, inst in enumerate(scan.insts):
scan.offset2inst_index[inst.offset] = i
scan.build_instructions(code)
fjt = scan.find_jump_targets(False)
assert{64: [42], 67: [42, 42], 42: [16, 41], 19: [6]} == fjt
assert scan.structs == [
@@ -68,14 +52,7 @@ def test_if_in_for():
{'start': 48, 'end': 67, 'type': 'while-loop'}]
elif 3.2 < PYTHON_VERSION <= 3.4:
bytecode = Bytecode(code, scan.opc)
scan.code = array('B', code.co_code)
scan.lines = scan.build_lines_data(code)
scan.build_prev_op()
scan.insts = list(bytecode)
scan.offset2inst_index = {}
for i, inst in enumerate(scan.insts):
scan.offset2inst_index[inst.offset] = i
scan.build_instructions(code)
fjt = scan.find_jump_targets(False)
assert {69: [66], 63: [18]} == fjt
assert scan.structs == \
@@ -85,5 +62,6 @@ def test_if_in_for():
{'end': 59, 'type': 'for-loop', 'start': 31},
{'end': 63, 'type': 'for-else', 'start': 62}]
else:
assert True, "FIXME: should note fixed"
print("FIXME: should fix for %s" % PYTHON_VERSION)
assert True
return

View File

@@ -18,40 +18,44 @@ def test_grammar():
right_recursive, dup_rhs) = p.check_sets()
# We have custom rules that create the below
expect_lhs = set(['expr1024', 'pos_arg', 'get_iter', 'attribute'])
expect_lhs = set(['pos_arg', 'get_iter', 'attribute'])
unused_rhs = set(['list', 'mkfunc',
'mklambda',
'unpack',])
expect_right_recursive = set([('designList',
('store', 'DUP_TOP', 'designList'))])
if PYTHON3:
expect_lhs.add('load_genexpr')
if PYTHON_VERSION > 2.6:
expect_lhs.add('kvlist')
expect_lhs.add('kv3')
unused_rhs.add('dict')
if PYTHON3:
expect_lhs.add('load_genexpr')
unused_rhs = unused_rhs.union(set("""
except_pop_except generator_exp classdefdeco2
dict
except_pop_except generator_exp
""".split()))
if PYTHON_VERSION >= 3.0:
expect_lhs.add("annotate_arg")
expect_lhs.add("annotate_tuple")
unused_rhs.add("mkfunc_annotate")
unused_rhs.add('call')
unused_rhs.add("dict_comp")
unused_rhs.add("classdefdeco1")
if PYTHON_VERSION < 3.6:
# 3.6 has at least one non-custom call rule
# the others don't
unused_rhs.add('call')
if PYTHON_VERSION == 3.5:
expect_right_recursive.add((('l_stmts',
('lastl_stmt', 'COME_FROM', 'l_stmts'))))
('lastl_stmt', 'come_froms', 'l_stmts'))))
pass
pass
else:
expect_right_recursive.add((('l_stmts',
('lastl_stmt', 'COME_FROM', 'l_stmts'))))
# expect_lhs.add('kwargs1')
pass
pass
pass
@@ -85,6 +89,8 @@ def test_grammar():
""".split())
if 2.6 <= PYTHON_VERSION <= 2.7:
opcode_set = set(s.opc.opname).union(ignore_set)
if PYTHON_VERSION == 2.6:
opcode_set.add("THEN")
check_tokens(tokens, opcode_set)
elif PYTHON_VERSION == 3.4:
ignore_set.add('LOAD_CLASSNAME')

View File

@@ -1,6 +1,6 @@
from uncompyle6 import PYTHON_VERSION, deparse_code
if PYTHON_VERSION >= 2.5:
if PYTHON_VERSION >= 2.6:
def test_single_mode():
single_expressions = (
'i = 1',

View File

@@ -1,3 +1,3 @@
pytest>=3.0.0
pytest>=3.0.0,<=3.0.1
flake8
hypothesis<=3.8.3
hypothesis<=3.0.0

1
test/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
/nohup.out

View File

@@ -3,9 +3,9 @@ PHONY=check clean dist distclean test test-unit test-functional rmChangeLog clea
check-bytecode-2.2 check-byteocde-2.3 check-bytecode-2.4 \
check-short check-2.6 check-2.7 check-3.0 check-3.1 check-3.2 check-3.3 \
check-3.4 check-3.5 check-5.6 5.6 5.8 \
grammar-coverage-2.5 grammar-coverage-2.6 grammarcoverage-2.7 \
grammar-coverage-3.1 grammar-coverage-3.2 grammarcoverage-3.3 \
grammar-coverage-3.4 grammar-coverage-3.5 grammarcoverage-3.6
grammar-coverage-2.5 grammar-coverage-2.6 grammar-coverage-2.7 \
grammar-coverage-3.1 grammar-coverage-3.2 grammar-coverage-3.3 \
grammar-coverage-3.4 grammar-coverage-3.5 grammar-coverage-3.6
GIT2CL ?= git2cl
@@ -37,18 +37,22 @@ check-3.0: check-bytecode
#: Run working tests from Python 3.1
check-3.1: check-bytecode
$(PYTHON) test_pythonlib.py --bytecode-3.1 --weak-verify $(COMPILE)
$(PYTHON) test_pythonlib.py --bytecode-3.1-run --verify-run
#: Run working tests from Python 3.2
check-3.2: check-bytecode
$(PYTHON) test_pythonlib.py --bytecode-3.2 --weak-verify $(COMPILE)
$(PYTHON) test_pythonlib.py --bytecode-3.2-run --verify-run
#: Run working tests from Python 3.3
check-3.3: check-bytecode
$(PYTHON) test_pythonlib.py --bytecode-3.3 --weak-verify $(COMPILE)
$(PYTHON) test_pythonlib.py --bytecode-3.3-run --verify-run
#: Run working tests from Python 3.4
check-3.4: check-bytecode check-3.4-ok check-2.7-ok
$(PYTHON) test_pythonlib.py --bytecode-3.4 --weak-verify $(COMPILE)
$(PYTHON) test_pythonlib.py --bytecode-3.4-run --verify-run
#: Run working tests from Python 3.5
check-3.5: check-bytecode
@@ -117,26 +121,26 @@ check-bytecode-2.5:
#: Get grammar coverage for Python 2.4
grammar-coverage-2.4:
-rm $(COVER_DIR)/spark-grammar-24.cover
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-24.cover $(PYTHON) test_pythonlib.py --bytecode-2.4
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-24.cover $(PYTHON) test_pyenvlib.py --2.4.6
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-2.4.cover $(PYTHON) test_pythonlib.py --bytecode-2.4
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-2.4.cover $(PYTHON) test_pyenvlib.py --2.4.6 --max= 800
#: Get grammar coverage for Python 2.5
grammar-coverage-2.5:
-rm $(COVER_DIR)/spark-grammar-25.cover
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-25.cover $(PYTHON) test_pythonlib.py --bytecode-2.5
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-25.cover $(PYTHON) test_pyenvlib.py --2.5.6
-rm $(COVER_DIR)/spark-grammar-2.5.cover || true
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-2.5.cover $(PYTHON) test_pythonlib.py --bytecode-2.5
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-2.5.cover $(PYTHON) test_pyenvlib.py --2.5.6 --max=800
#: Get grammar coverage for Python 2.6
grammar-coverage-2.6:
-rm $(COVER_DIR)/spark-grammar-26.cover
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-26.cover $(PYTHON) test_pythonlib.py --bytecode-2.6
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-26.cover $(PYTHON) test_pyenvlib.py --2.6.9
-rm $(COVER_DIR)/spark-grammar-2.6.cover || true
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-2.6.cover $(PYTHON) test_pythonlib.py --bytecode-2.6
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-2.6.cover $(PYTHON) test_pyenvlib.py --2.6.9 --max=800
#: Get grammar coverage for Python 2.7
grammar-coverage-2.7:
-rm $(COVER_DIR)/spark-grammar-27.cover
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-27.cover $(PYTHON) test_pythonlib.py --bytecode-2.7
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-27.cover $(PYTHON) test_pyenvlib.py --2.7.13
-rm $(COVER_DIR)/spark-grammar-2.7.cover || true
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-2.7.cover $(PYTHON) test_pythonlib.py --bytecode-2.7
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-2.7.cover $(PYTHON) test_pyenvlib.py --2.7.14 --max=600
#: Get grammar coverage for Python 3.0
grammar-coverage-3.0:
@@ -147,33 +151,39 @@ SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-30.cover $(PYTHON) test_pythonl
#: Get grammar coverage for Python 3.1
grammar-coverage-3.1:
-rm $(COVER_DIR)/spark-grammar-31.cover
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-31.cover $(PYTHON) test_pythonlib.py --bytecode-3.1
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-31.cover $(PYTHON) test_pyenvlib.py --3.1.5
-rm $(COVER_DIR)/spark-grammar-3.1.cover
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.1.cover $(PYTHON) test_pythonlib.py --bytecode-3.1
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.1.cover $(PYTHON) test_pyenvlib.py --3.1.5
#: Get grammar coverage for Python 3.2
grammar-coverage-3.2:
-rm $(COVER_DIR)/spark-grammar-32.cover
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-32.cover $(PYTHON) test_pythonlib.py --bytecode-3.2
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-32.cover $(PYTHON) test_pyenvlib.py --3.2.6
-rm $(COVER_DIR)/spark-grammar-3.2.cover || true
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.2.cover $(PYTHON) test_pythonlib.py --bytecode-3.2
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.2.cover $(PYTHON) test_pyenvlib.py --3.2.6
#: Get grammar coverage for Python 3.3
grammar-coverage-3.3:
-rm $(COVER_DIR)/spark-grammar-33.cover
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-33.cover $(PYTHON) test_pythonlib.py --bytecode-3.3
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-33.cover $(PYTHON) test_pyenvlib.py --3.3.6
-rm $(COVER_DIR)/spark-grammar-3.3.cover || true
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.3.cover $(PYTHON) test_pythonlib.py --bytecode-3.3
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.3.cover $(PYTHON) test_pyenvlib.py --3.3.7 --max=800
#: Get grammar coverage for Python 3.4
grammar-coverage-3.4:
-rm $(COVER_DIR)/spark-grammar-34.cover
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-34.cover $(PYTHON) test_pythonlib.py --bytecode-3.4
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-34.cover $(PYTHON) test_pyenvlib.py --3.4.2
-rm $(COVER_DIR)/spark-grammar-3.4.cover || true
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.4.cover $(PYTHON) test_pythonlib.py --bytecode-3.4
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.4.cover $(PYTHON) test_pyenvlib.py --3.4.8 --max=800
#: Get grammar coverage for Python 3.5
grammar-coverage-3.5:
rm $(COVER_DIR)/spark-grammar-35.cover || /bin/true
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-35.cover $(PYTHON) test_pythonlib.py --bytecode-3.5
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-35.cover $(PYTHON) test_pyenvlib.py --3.5.3
rm $(COVER_DIR)/spark-grammar-3.5.cover || /bin/true
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.5.cover $(PYTHON) test_pythonlib.py --bytecode-3.5
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.5.cover $(PYTHON) test_pyenvlib.py --3.5.5 --max=450
#: Get grammar coverage for Python 3.6
grammar-coverage-3.6:
rm $(COVER_DIR)/spark-grammar-3.6.cover || /bin/true
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.6.cover $(PYTHON) test_pythonlib.py --bytecode-3.6
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.6.cover $(PYTHON) test_pyenvlib.py --3.6.4 --max=280
#: Check deparsing Python 2.6
check-bytecode-2.6:
@@ -192,10 +202,12 @@ check-bytecode-3.0:
#: Check deparsing Python 3.1
check-bytecode-3.1:
$(PYTHON) test_pythonlib.py --bytecode-3.1 --weak-verify
$(PYTHON) test_pythonlib.py --bytecode-3.1-run --verify-run
#: Check deparsing Python 3.2
check-bytecode-3.2:
$(PYTHON) test_pythonlib.py --bytecode-3.2 --weak-verify
$(PYTHON) test_pythonlib.py --bytecode-3.2-run --verify-run
#: Check deparsing Python 3.3
check-bytecode-3.3:

(29 binary files changed; contents not shown)

1
test/grammar-cover/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
/.python-version

View File

@@ -0,0 +1 @@
Code in this directory gets statistics on grammar coverage

5
test/grammar-cover/convert.sh Executable file
View File

@@ -0,0 +1,5 @@
#!/bin/bash
for VERS in 2{4,5,6,7} 3{2,3,4,5} ; do
GRAMMAR_TXT=grammar-${VERS}.txt
spark-parser-coverage --max-count 3000 --path spark-grammar-${VERS}.cover > $GRAMMAR_TXT
done

View File

@@ -0,0 +1,2 @@
#!/bin/bash
$SHELL ./grammar.sh 2.4 2.5 2.6 2.7 3.2 3.3 3.4 3.5 3.6

44
test/grammar-cover/grammar.sh Executable file
View File

@@ -0,0 +1,44 @@
#!/bin/bash
# Remake Python grammar statistics
typeset -A ALL_VERS=([2.4]=2.4.6 [2.5]=2.5.6 [2.6]=2.6.9 [2.7]=2.7.14 [3.2]=3.2.6 [3.3]=3.3.6 [3.4]=3.4.8 [3.5]=3.5.5 [3.6]=3.6.4)
if (( $# == 0 )); then
echo 1>&2 "usage: $0 two-digit-version"
exit 1
fi
me=${BASH_SOURCE[0]}
workdir=$(dirname $me)
cd $workdir
workdir=$(pwd)
while [[ -n $1 ]] ; do
SHORT_VERSION=$1; shift
LONG_VERSION=${ALL_VERS[$SHORT_VERSION]}
if [[ -z ${LONG_VERSION} ]] ; then
echo 1>&2 "Version $SHORT_VERSION not known"
exit 2
fi
tmpdir=$workdir/../../tmp/grammar-cover
COVER_FILE=${tmpdir}/spark-grammar-${SHORT_VERSION}.cover
[[ -d $tmpdir ]] || mkdir $tmpdir
cd $workdir/../..
if [[ $SHORT_VERSION > 2.5 ]] ; then
source ./admin-tools/setup-master.sh
else
source ./admin-tools/setup-python-2.4.sh
fi
GRAMMAR_TXT=$tmpdir/grammar-${SHORT_VERSION}.txt
(cd ../.. && pyenv local ${LONG_VERSION})
cd ./test
if [[ -r $COVER_FILE ]]; then
rm $COVER_FILE
fi
if [[ -r $GRAMMAR_TXT ]]; then
GRAMMAR_SAVE_TXT=${tmpdir}/grammar-${SHORT_VERSION}-save.txt
cp $GRAMMAR_TXT $GRAMMAR_SAVE_TXT
fi
make grammar-coverage-${SHORT_VERSION};
spark-parser-coverage --max-count=3000 --path $COVER_FILE > $GRAMMAR_TXT
done

View File

@@ -0,0 +1,13 @@
#!/bin/bash
USER=${USER:-rocky}
EMAIL=${EMAIL:-rb@dustyfeet.com}
SUBJECT_PREFIX="grammar cover testing for"
LOGFILE=/tmp/grammar-cover-$$.log
/bin/bash ./grammar-all.sh >$LOGFILE 2>&1
rc=$?
if ((rc == 0)); then
tail -v $LOGFILE | mail -s "$SUBJECT_PREFIX ok" ${USER}@localhost
else
tail -v $LOGFILE | mail -s "$SUBJECT_PREFIX not ok" ${USER}@localhost
tail -v $LOGFILE | mail -s "$SUBJECT_PREFIX not ok" $EMAIL
fi

View File

@@ -14,7 +14,7 @@ function displaytime {
printf '%d seconds\n' $S
}
PYVERSION=${PYVERSION:-"3.5.5 2.7.14 3.4.8 2.6.9"}
PYVERSION=${PYVERSION:-"3.5.5 2.7.14 3.2.6 3.3.7 3.4.8 2.6.9 3.6.4"}
# PYVERSION=${PYVERSION:-"3.5.5"}
USER=${USER:-rocky}
@@ -28,6 +28,10 @@ for VERSION in $PYVERSION ; do
if [[ $VERSION == '3.5.5' ]] ; then
MAX_TESTS=224
elif [[ $VERSION == '3.2.6' ]] ; then
MAX_TESTS=700
elif [[ $VERSION == '3.6.4' ]] ; then
MAX_TESTS=400
else
MAX_TESTS=800
fi
@@ -42,7 +46,7 @@ for VERSION in $PYVERSION ; do
rc=$?
echo Python Version $(pyenv local) >> $LOGFILE
echo "" >>LOGFILE
echo "" >>$LOGFILE
typeset -i ALL_FILES_ENDTIME=$(date +%s)
(( time_diff = ALL_FILES_ENDTIME - ALL_FILES_STARTTIME))

View File

@@ -0,0 +1,18 @@
# Bug found in 2.4 test_math.py
# Bug was turning last try/except/else into try/else
import math
def test_exceptions():
try:
x = math.exp(-1000000000)
except:
raise RuntimeError
x = 1
try:
x = math.sqrt(-1.0)
except ValueError:
return x
else:
raise RuntimeError
test_exceptions()

View File

@@ -1,5 +1,5 @@
# From 2.7 test_argparse.py
# Bug was turnning assert into an "or raise" statement
# Bug was turning assert into an "or raise" statement
def __call__(arg, dest):
try:
assert arg == 'spam', 'dest: %s' % dest
@@ -15,3 +15,17 @@ def refactor_doctest(clipped, new):
if not new:
new += u"\n"
return
# From 2.7.14 test_hashlib.py
# The bug was turning assert into an "if"
# statement which isn't wrong, but we got the
# range of the if incorrect. When we have
# better control flow analysis we can revisit.
def test_threaded_hashing():
for threadnum in xrange(1):
result = 1
assert result > 0
result = 2
return result
assert test_threaded_hashing() == 2

View File

@@ -1,5 +1,5 @@
# From 2.7 test_itertools.py
# Bug was in 2.7 decompiling like the commented out
# Bug was in 2.7 decompiling the target assignment
# code below
from itertools import izip_longest
for args in [

View File

@@ -0,0 +1,18 @@
# Extracted from Python 3.5 test_abc.py
# Bug is class having only a single kwarg
# subclass.
import abc
import unittest
from inspect import isabstract
def test_abstractmethod_integration(self):
for abstractthing in [abc.abstractmethod]:
class C(metaclass=abc.ABCMeta):
@abstractthing
def foo(self): pass # abstract
def bar(self): pass # concrete
assert C.__abstractmethods__, {"foo"}
assert isabstract(C)
pass
test_abstractmethod_integration(None)

View File

@@ -1,5 +1,6 @@
# Python 3.6 subprocess.py bug
# Bug is getting params correct: timeout before **kwargs
import subprocess
def call(*popenargs, timeout=None, **kwargs):
return
@@ -14,6 +15,9 @@ def subprocess_shell(self, protocol_factory, cmd, *, stdin=subprocess.PIPE,
# From 3.4 asyncio/locks.py
# Bug was handling" "value=1, *"
class Semaphore:
pass
class BoundedSemaphore(Semaphore):
def __init__(self, value=1, *, loop=None):
super().__init__(value, loop=loop)

View File

@@ -0,0 +1,13 @@
# From 3.6.4 pathlib.py
# Bug was handling "continue" as last statement of "if"
# RUNNABLE!
def parse_parts(it, parts):
for part in it:
if not part:
continue
parts = 1
return parts
assert parse_parts([], 5) == 5
assert parse_parts([True], 6) == 1
assert parse_parts([False], 6) == 6

View File

@@ -1,9 +1,19 @@
# Python 3.5+ PEP 448 - Additional Unpacking Generalizations for dictionaries
{**{}}
{**{'a': 1, 'b': 2}}
## {**{'x': 1}, **{'y': 2}}
# RUNNABLE!
b = {**{}}
assert b == {}
c = {**{'a': 1, 'b': 2}}
assert c == {'a': 1, 'b': 2}
d = {**{'x': 1}, **{'y': 2}}
assert d == {'x': 1, 'y': 2}
# {'c': 1, {'d': 2}, **{'e': 3}}
[*[]]
{**{0:0 for a in b}}
## {**{}, **{}}
## {**{}, **{}, **{}}
assert {0: 0} == {**{0:0 for a in c}}
# FIXME: assert deparsing is incorrect for:
# {**{}, **{}}
# assert {} == {**{}, **{}, **{}}
# {**{}, **{}, **{}}
# assert {} == {**{}, **{}, **{}}

View File

@@ -0,0 +1,23 @@
# From python 3.5.5 telnetlib
# The bug is the end of a "then" jumping
# back to the loop which could look like
# a "continue" and also not like a then/else
# break
def process_rawq(self, cmd, cmd2):
while self.rawq:
if self.iacseq:
if cmd:
pass
elif cmd2:
if self.option_callback:
self.option = 2
else:
self.option = 3
# From python 3.5.5 telnetlib
def listener(data):
while 1:
if data:
data = 1
else:
data = 2

View File

@@ -1,5 +1,8 @@
# From sql/schema.py and 3.5 _strptime.py
# Note that kwargs comes before "positional" args
# Bug was code not knowing which Python versions
# have kwargs coming before positional args in code.
# RUNNABLE!
def tometadata(self, metadata, schema, Table, args, name=None):
table = Table(
@@ -10,3 +13,69 @@ def tometadata(self, metadata, schema, Table, args, name=None):
def _strptime_datetime(cls, args):
return cls(*args)
# From 3.5.5 imaplib
# Bug is in parsing *date_time[:6] parameter
from datetime import datetime, timezone, timedelta
import time
def Time2Internaldate(date_time):
delta = timedelta(seconds=0)
return datetime(*date_time[:6], tzinfo=timezone(delta))
assert Time2Internaldate(time.localtime())
# From 3.5.5 tkinter/dialog.py
def test_varargs0_ext():
try:
{}.__contains__(*())
except TypeError:
pass
test_varargs0_ext()
# From 3.4.6 tkinter/dialog.py
# Bug is in position of *cnf.
def __init__(self, cnf={}):
self.num = self.tk.call(
'tk_dialog', self._w,
cnf['title'], cnf['text'],
cnf['bitmap'], cnf['default'],
*cnf['strings'])
# From python 3.4.8 multiprocessing/context.py
def Value(self, fn, typecode_or_type, *args, lock=True):
return fn(typecode_or_type, *args, lock=lock,
ctx=self.get_context())
# From 3.6.4 heapq.py
def merge(*iterables, key=None, reverse=False):
return
def __call__(self, *args, **kwds):
pass
# From 3.6.4 shutil
def unpack_archive(func, filename, dict, format_info, extract_dir=None):
func(filename, extract_dir, **dict(format_info[2]))
# From 3.5.5 test_xrdrlib.py
import xdrlib
def assertRaisesConversion(self, *args):
self.assertRaises(xdrlib.ConversionError, *args)
# From 3.2.6 _pyio.py
class BlockingIOError(IOError):
def __init__(self, errno, strerror, characters_written=5):
super().__init__(errno, strerror)
# From urllib/parse.py
# Bug was using a subclass made from a call (to namedtuple)
from collections import namedtuple
class ResultMixin(object):
pass
class SplitResult(namedtuple('SplitResult', 'scheme netloc path query fragment'), ResultMixin):
pass

View File

@@ -0,0 +1,12 @@
# From 3.6.4 pdb.py
# Bug was not having a semantic action for "except_return" tree
def do_commands(self, arg):
if not arg:
bnum = 1
else:
try:
bnum = int(arg)
except:
self.error("Usage:")
return
self.commands_bnum = bnum

View File

@@ -0,0 +1,10 @@
# Adapted from Python 3.6 trace.py
# Bug was in handling BUID_TUPLE_UNPACK created via
# *opts.arguments
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('filename', nargs='?')
parser.add_argument('arguments', nargs=argparse.REMAINDER)
opts = parser.parse_args(["foo", "a", "b"])
argv = opts.filename, *opts.arguments
assert argv == ('foo', 'a', 'b')

View File

@@ -31,3 +31,18 @@ def handle_read(self):
return why
return data
# From 3.6 contextlib
# Bug is indentation of "return exc"
# Also there are extra statements to remove exec,
# which we hide (unless doing fragments).
# Note: The indentation bug may be a result of using improper
# grammar.
def __exit__(self, type, value, traceback):
try:
value()
except StopIteration as exc:
return exc
except RuntimeError as exc:
return exc
return

View File

@@ -0,0 +1,13 @@
# From 3.6.4 configparser.py
# Bug in 3.6 was handling "else" with compound
# if. there is no POP_BLOCK and
# there are several COME_FROMs before the else
def _read(self, fp, a, value, f):
for line in fp:
for prefix in a:
fp()
if (value and fp and
prefix > 5):
f()
else:
f()

View File

@@ -0,0 +1,18 @@
# From 3.6.4 test_argparse.py
# Bug was in parsing ** args
import argparse
def test_namespace_starkwargs_notidentifier(self):
ns = argparse.Namespace(**{'"': 'quote'})
string = """Namespace(**{'"': 'quote'})"""
assert ns == string
def test_namespace_kwargs_and_starkwargs_notidentifier(self):
ns = argparse.Namespace(a=1, **{'"': 'quote'})
string = """Namespace(a=1, **{'"': 'quote'})"""
assert ns == string
def test_namespace(self):
ns = argparse.Namespace(foo=42, bar='spam')
string = "Namespace(bar='spam', foo=42)"
assert ns == string

View File

@@ -0,0 +1,16 @@
# Adapted from Python 3.3 idlelib/PyParse.py
# Bug is continue flowing back to while messing up the determination
# that it is inside an "if".
# RUNNABLE!
def _study1(i, n, ch):
while i == 3:
i = 4
if ch:
i = 10
assert i < 5
continue
if n:
return n
assert _study1(3, 4, False) == 4

View File

@@ -1,3 +1,5 @@
# RUNNABLE!
# Tests:
# 2.7:
# assert ::= assert_expr jmp_true LOAD_ASSERT RAISE_VARARGS_1
@@ -16,7 +18,7 @@ for method_name in ['a']:
if method_name in ('b',):
method = 'a'
else:
assert 0, "instance installed"
assert True, "instance installed"
methods = 'b'
@@ -25,5 +27,9 @@ for method_name in ['a']:
# if not not do_setlocal:
# raise AssertError
# Hmmm.. this isn't strickly a bug
def getpreferredencoding(do_setlocale=True):
assert not do_setlocale
getpreferredencoding(False)

1
test/stdlib/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
/.python-version

View File

@@ -2,7 +2,7 @@
USER=${USER:-rocky}
EMAIL=${EMAIL:-rb@dustyfeet.com}
SUBJECT_PREFIX="stdlib unit testing for"
for VERSION in 2.7.14 2.6.9 ; do
for VERSION in 2.6.9 2.7.14 3.4.8 3.5.5 3.6.4 ; do
typeset -i rc=0
LOGFILE=/tmp/runtests-$VERSION-$$.log
if ! pyenv local $VERSION ; then

View File

@@ -78,7 +78,8 @@ case $PYVERSION in
[test_curses.py]=1 # Possibly fails on its own but not detected
[test_dis.py]=1 # We change line numbers - duh!
[test_doctest.py]=1 # Fails on its own
[test_grammar.py]=1 # Too many stmts. Handle large stmts
[test_generators.py]=1 # control flow. uncompyle2 has problem here too
[test_grammar.py]=1 # Too many stmts. Handle large stmts
[test_io.py]=1 # Test takes too long to run
[test_ioctl.py]=1 # Test takes too long to run
[test_itertools.py]=1 # Fix erroneous reduction to "conditional_true".
@@ -100,13 +101,16 @@ case $PYVERSION in
3.5)
SKIP_TESTS=(
[test_decorators.py]=1 # Control flow wrt "if elif"
[test_quopri.py]=1 # Fails in crontab environment?
)
;;
3.6)
SKIP_TESTS=(
[test_contains.py]=1 # Code "while False: yield None" is optimized away in compilation
[test_decorators.py]=1 # Control flow wrt "if elif"
[test_pow.py]=1 # Control flow wrt "continue"
[test_quopri.py]=1 # Only fails on POWER
)
;;
*)

View File

@@ -176,6 +176,7 @@ def main(in_base, out_base, files, codes, outfile=None,
for filename in files:
infile = os.path.join(in_base, filename)
# print("XXX", infile)
if not os.path.exists(infile):
sys.stderr.write("File '%s' doesn't exist. Skipped\n"
% infile)

View File

@@ -565,9 +565,6 @@ class PythonParser(GenericASTBuilder):
# Positional arguments in make_function
pos_arg ::= expr
expr32 ::= expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr expr
expr1024 ::= expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32 expr32
'''
def p_store(self, args):

View File

@@ -107,11 +107,8 @@ class Python2Parser(PythonParser):
_mklambda ::= load_closure mklambda
kwarg ::= LOAD_CONST expr
kvlist ::= kvlist kv3
kv3 ::= expr expr STORE_MAP
dict ::= BUILD_MAP kvlist
classdef ::= buildclass store
buildclass ::= LOAD_CONST expr mkfunc
@@ -296,19 +293,8 @@ class Python2Parser(PythonParser):
# The order of opname listed is roughly sorted below
if opname_base in ('BUILD_LIST', 'BUILD_SET', 'BUILD_TUPLE'):
v = token.attr
thousands = (v//1024)
thirty32s = ((v//32) % 32)
if thirty32s > 0:
rule = "expr32 ::=%s" % (' expr' * 32)
self.add_unique_rule(rule, opname_base, v, customize)
self.seen32 = True
if thousands > 0:
self.add_unique_rule("expr1024 ::=%s" % (' expr32' * 32),
opname_base, v, customize)
self.seen1024 = True
collection = opname_base[opname_base.find('_')+1:].lower()
rule = (('%s ::= ' % collection) + 'expr1024 '*thousands +
'expr32 '*thirty32s + 'expr '*(v % 32) + opname)
rule = '%s ::= %s%s' % (collection, (token.attr * 'expr '), opname)
self.add_unique_rules([
"expr ::= %s" % collection,
rule], customize)
@@ -328,11 +314,9 @@ class Python2Parser(PythonParser):
'dict_comp_func', 0, customize)
else:
kvlist_n = "kvlist_%s" % token.attr
self.add_unique_rules([
(kvlist_n + " ::=" + ' kv3' * token.attr),
"dict ::= %s %s" % (opname, kvlist_n)
], customize)
kvlist_n = ' kv3' * token.attr
rule = "dict ::= %s%s" % (opname, kvlist_n)
self.addRule(rule, nop_func)
continue
elif opname_base == 'BUILD_SLICE':
slice_num = token.attr
@@ -518,7 +502,7 @@ class Python2Parser(PythonParser):
self.addRule(rule, nop_func)
pass
self.check_reduce['aug_assign1'] = 'AST'
self.check_reduce['raise_stmt1'] = 'tokens'
self.check_reduce['aug_assign2'] = 'AST'
self.check_reduce['or'] = 'AST'
# self.check_reduce['_stmts'] = 'AST'
@@ -538,7 +522,12 @@ class Python2Parser(PythonParser):
if lhs in ('aug_assign1', 'aug_assign2') and ast[0] and ast[0][0] in ('and', 'or'):
return True
if rule == ('or', ('expr', 'jmp_true', 'expr', '\\e_come_from_opt')):
elif lhs in ('raise_stmt1',):
# We will assme 'LOAD_ASSERT' will be handled by an assert grammar rule
return (tokens[first] == 'LOAD_ASSERT' and
(last >= len(tokens) or tokens[last] not in
('COME_FROM', 'JUMP_BACK','JUMP_FORWARD')))
elif rule == ('or', ('expr', 'jmp_true', 'expr', '\\e_come_from_opt')):
expr2 = ast[2]
return expr2 == 'expr' and expr2[0] == 'LOAD_ASSERT'
return False
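
Aside, not part of the diff: the simplified customization above emits one flat grammar rule per BUILD_* opcode instead of the old expr1024/expr32 chunking. A sketch of the string it produces for a hypothetical BUILD_LIST_3 token whose attr is 3:

opname_base, opname, attr = "BUILD_LIST", "BUILD_LIST_3", 3
collection = opname_base[opname_base.find('_') + 1:].lower()  # "list"
rule = '%s ::= %s%s' % (collection, attr * 'expr ', opname)
print(rule)  # list ::= expr expr expr BUILD_LIST_3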

View File

@@ -73,7 +73,6 @@ class Python25Parser(Python26Parser):
classdefdeco1 ::= expr classdefdeco2 CALL_FUNCTION_1
classdefdeco2 ::= LOAD_CONST expr mkfunc CALL_FUNCTION_0 BUILD_CLASS
kv3 ::= expr expr STORE_MAP
kvlist ::= kvlist kv3
ret_cond ::= expr jmp_false_then expr RETURN_END_IF POP_TOP ret_expr_or_cond
return_if_lambda ::= RETURN_END_IF_LAMBDA POP_TOP
return_if_stmt ::= ret_expr RETURN_END_IF POP_TOP

View File

@@ -261,6 +261,9 @@ class Python26Parser(Python2Parser):
def p_misc26(self, args):
"""
dict ::= BUILD_MAP kvlist
kvlist ::= kvlist kv3
conditional ::= expr jmp_false expr jf_cf_pop expr come_from_opt
and ::= expr JUMP_IF_FALSE POP_TOP expr JUMP_IF_FALSE POP_TOP

View File

@@ -38,8 +38,6 @@ class Python27Parser(Python2Parser):
comp_for ::= expr for_iter store comp_iter JUMP_BACK
comp_iter ::= comp_if
comp_iter ::= comp_if_not
comp_if_not ::= expr jmp_true comp_iter
comp_iter ::= comp_body
dict_comp_body ::= expr expr MAP_ADD

View File

@@ -84,8 +84,6 @@ class Python3Parser(PythonParser):
stmt ::= dict_comp_func
dict_comp_func ::= BUILD_MAP_0 LOAD_FAST FOR_ITER store
comp_iter JUMP_BACK RETURN_VALUE RETURN_LAST
dict_comp ::= LOAD_DICTCOMP LOAD_CONST MAKE_FUNCTION_0 expr
GET_ITER CALL_FUNCTION_1
comp_iter ::= comp_if
comp_iter ::= comp_if_not
@@ -113,21 +111,25 @@ class Python3Parser(PythonParser):
continues ::= continue
kwarg ::= LOAD_CONST expr
kwargs ::= kwarg*
kwargs1 ::= kwarg+
kwarg ::= LOAD_CONST expr
kwargs ::= kwarg+
classdef ::= build_class store
# FIXME: we need to add these because don't detect this properly
# in custom rules. Specifically if one of the exprs is CALL_FUNCTION
# then we'll mistake that for the final CALL_FUNCTION.
# We can fix by triggering on the CALL_FUNCTION op
# Python3 introduced LOAD_BUILD_CLASS
# Other definitions are in a custom rule
build_class ::= LOAD_BUILD_CLASS mkfunc expr call CALL_FUNCTION_3
build_class ::= LOAD_BUILD_CLASS mkfunc expr call expr CALL_FUNCTION_4
stmt ::= classdefdeco
classdefdeco ::= classdefdeco1 store
classdefdeco1 ::= expr classdefdeco1 CALL_FUNCTION_1
classdefdeco1 ::= expr classdefdeco2 CALL_FUNCTION_1
expr ::= LOAD_ASSERT
assert ::= assert_expr jmp_true LOAD_ASSERT RAISE_VARARGS_1 COME_FROM
assert_expr ::= expr
@@ -393,9 +395,6 @@ class Python3Parser(PythonParser):
'''
load_genexpr ::= LOAD_GENEXPR
load_genexpr ::= BUILD_TUPLE_1 LOAD_GENEXPR LOAD_CONST
# Is there something general going on here?
dict_comp ::= load_closure LOAD_DICTCOMP LOAD_CONST MAKE_CLOSURE_0 expr GET_ITER CALL_FUNCTION_1
'''
def p_expr3(self, args):
@@ -430,7 +429,7 @@ class Python3Parser(PythonParser):
LOAD_CONST CALL_FUNCTION_n
build_class ::= LOAD_BUILD_CLASS mkfunc
expr
call_function
call
CALL_FUNCTION_3
'''
# FIXME: I bet this can be simplified
@@ -511,7 +510,8 @@ class Python3Parser(PythonParser):
self.add_unique_rule(rule, token.kind, uniq_param, customize)
if possible_class_decorator:
if next_token == 'CALL_FUNCTION' and next_token.attr == 1:
if (next_token == 'CALL_FUNCTION' and next_token.attr == 1
and args_pos > 1):
rule = ('classdefdeco2 ::= LOAD_BUILD_CLASS mkfunc %s%s_%d'
% (('expr ' * (args_pos-1)), opname, args_pos))
self.add_unique_rule(rule, token.kind, uniq_param, customize)
@@ -531,7 +531,7 @@ class Python3Parser(PythonParser):
subclassing is, well, is pretty base. And we want it that way: lean and
mean so that parsing will go faster.
Here, we add additional grammra rules based on specific instructions
Here, we add additional grammar rules based on specific instructions
that are in the instruction/token stream. In classes that
inherit from from here and other versions, grammar rules may
also be removed.
@@ -575,6 +575,11 @@ class Python3Parser(PythonParser):
seen_LOAD_BUILD_CLASS = False
seen_GET_AWAITABLE_YIELD_FROM = False
# This is used in parse36.py as well as here
self.seen_LOAD_DICTCOMP = False
self.seen_LOAD_SETCOMP = False
# Loop over instructions adding custom grammar rules based on
# a specific instruction seen.
@@ -623,13 +628,11 @@ class Python3Parser(PythonParser):
self.addRule(rule, nop_func)
elif opname.startswith('BUILD_LIST_UNPACK'):
v = token.attr
rule = ('build_list_unpack ::= ' + 'expr1024 ' * int(v//1024) +
'expr32 ' * int((v//32) % 32) +
'expr ' * (v % 32) + opname)
rule = 'build_list_unpack ::= %s%s' % ('expr ' * v, opname)
self.addRule(rule, nop_func)
rule = 'expr ::= build_list_unpack'
self.addRule(rule, nop_func)
elif opname_base == 'BUILD_MAP':
elif opname_base in ('BUILD_MAP', 'BUILD_MAP_UNPACK'):
kvlist_n = "kvlist_%s" % token.attr
if opname == 'BUILD_MAP_n':
# PyPy sometimes has no count. Sigh.
@@ -644,21 +647,29 @@ class Python3Parser(PythonParser):
self.add_unique_rule(rule, 'kvlist_n', 1, customize)
rule = "dict ::= BUILD_MAP_n kvlist_n"
elif self.version >= 3.5:
if opname != 'BUILD_MAP_WITH_CALL':
if opname == 'BUILD_MAP_UNPACK':
rule = kvlist_n + ' ::= ' + 'expr ' * (token.attr*2)
if not opname.startswith('BUILD_MAP_WITH_CALL'):
# FIXME: Use the attr
# so this doesn't run into exponential parsing time.
if opname.startswith('BUILD_MAP_UNPACK'):
self.add_unique_rule(rule, opname, token.attr, customize)
rule = 'dict_entry ::= ' + 'expr ' * (token.attr*2)
self.add_unique_rule(rule, opname, token.attr, customize)
rule = 'dict ::= ' + 'dict_entry ' * token.attr
self.add_unique_rule(rule, opname, token.attr, customize)
rule = ('unmap_dict ::= ' +
('dict ' * token.attr) +
'BUILD_MAP_UNPACK')
# FIXME: start here. The LHS should be unmap_dict, not dict.
# FIXME: really we need a combination of dict_entry-like things.
# It just so happens the most common case is not to mix
# dictionary comphensions with dictionary, elements
if self.seen_LOAD_DICTCOMP:
rule = 'dict ::= %s%s' % ('dict_comp ' * token.attr, opname)
self.addRule(rule, nop_func)
rule = """
expr ::= unmap_dict
unmap_dict ::= %s%s
""" % ('expr ' * token.attr, opname)
else:
rule = kvlist_n + ' ::= ' + 'expr ' * (token.attr*2)
rule = "%s ::= %s %s" % (kvlist_n, 'expr ' * (token.attr*2), opname)
self.add_unique_rule(rule, opname, token.attr, customize)
rule = "dict ::= %s %s" % (kvlist_n, opname)
rule = "dict ::= %s" % kvlist_n
else:
rule = kvlist_n + ' ::= ' + 'expr expr STORE_MAP ' * token.attr
self.add_unique_rule(rule, opname, token.attr, customize)
@@ -666,15 +677,15 @@ class Python3Parser(PythonParser):
self.add_unique_rule(rule, opname, token.attr, customize)
elif opname.startswith('BUILD_MAP_UNPACK_WITH_CALL'):
v = token.attr
rule = ('build_map_unpack_with_call ::= ' + 'expr1024 ' * int(v//1024) +
'expr32 ' * int((v//32) % 32) +
'expr ' * (v % 32) + opname)
rule = 'build_map_unpack_with_call ::= %s%s' % ('expr ' * v, opname)
self.addRule(rule, nop_func)
elif opname.startswith('BUILD_TUPLE_UNPACK_WITH_CALL'):
v = token.attr
rule = ('starred ::= %s %s' % ('expr ' * v, opname))
self.addRule(rule, nop_func)
elif opname_base in ('BUILD_LIST', 'BUILD_SET', 'BUILD_TUPLE'):
elif opname_base in ('BUILD_LIST', 'BUILD_SET', 'BUILD_TUPLE',
'BUILD_TUPLE_UNPACK'):
v = token.attr
is_LOAD_CLOSURE = False
@@ -691,9 +702,7 @@ class Python3Parser(PythonParser):
self.add_unique_rule(rule, opname, token.attr, customize)
if not is_LOAD_CLOSURE or v == 0:
collection = opname_base[opname_base.find('_')+1:].lower()
rule = (('%s ::= ' % collection) + 'expr1024 ' * int(v//1024) +
'expr32 ' * int((v//32) % 32) +
'expr ' * (v % 32) + opname)
rule = '%s ::= %s%s' % (collection, 'expr ' * v, opname)
self.add_unique_rules([
'expr ::= %s' % collection,
rule], customize)
@@ -716,6 +725,19 @@ class Python3Parser(PythonParser):
'CALL_FUNCTION_VAR',
'CALL_FUNCTION_VAR_KW'))
or opname.startswith('CALL_FUNCTION_KW')):
if opname == 'CALL_FUNCTION' and token.attr == 1:
rule = """
dict_comp ::= LOAD_DICTCOMP LOAD_CONST MAKE_FUNCTION_0 expr
GET_ITER CALL_FUNCTION_1
classdefdeco1 ::= expr classdefdeco2 CALL_FUNCTION_1
"""
if self.version < 3.5:
rule += """
classdefdeco1 ::= expr classdefdeco1 CALL_FUNCTION_1
"""
self.addRule(rule, nop_func)
self.custom_classfunc_rule(opname, token, customize,
seen_LOAD_BUILD_CLASS,
seen_GET_AWAITABLE_YIELD_FROM, tokens[i+1])
@@ -784,6 +806,7 @@ class Python3Parser(PythonParser):
self.addRule("expr ::= LOAD_CLASSNAME", nop_func)
custom_ops_seen.add(opname)
elif opname == 'LOAD_DICTCOMP':
self.seen_LOAD_DICTCOMP = True
if has_get_iter_call_function1:
rule_pat = ("dict_comp ::= LOAD_DICTCOMP %sMAKE_FUNCTION_0 expr "
"GET_ITER CALL_FUNCTION_1")
@@ -798,6 +821,7 @@ class Python3Parser(PythonParser):
elif opname == 'LOAD_LISTCOMP':
self.add_unique_rule("expr ::= listcomp", opname, token.attr, customize)
elif opname == 'LOAD_SETCOMP':
self.seen_LOAD_SETCOMP = True
# Should this be generalized and put under MAKE_FUNCTION?
if has_get_iter_call_function1:
self.addRule("expr ::= set_comp", nop_func)
@@ -816,6 +840,17 @@ class Python3Parser(PythonParser):
# DRY with MAKE_FUNCTION
# Note: this probably doesn't handle kwargs proprerly
if opname == 'MAKE_CLOSURE_0' and self.seen_LOAD_DICTCOMP:
# Is there something general going on here?
# Note that 3.6+ doesn't do this, but we'll remove
# this rule in parse36.py
rule = """
dict_comp ::= load_closure LOAD_DICTCOMP LOAD_CONST
MAKE_CLOSURE_0 expr
GET_ITER CALL_FUNCTION_1
"""
self.addRule(rule, nop_func)
args_pos, args_kw, annotate_args = token.attr
# FIXME: Fold test into add_make_function_rule
@@ -854,13 +889,13 @@ class Python3Parser(PythonParser):
opname, token.attr, customize)
if args_kw > 0:
kwargs_str = 'kwargs1 '
kwargs_str = 'kwargs '
else:
kwargs_str = ''
# Note order of kwargs and pos args changed between 3.3-3.4
if self.version <= 3.2:
rule = ('mkfunc ::= %s%sload_closure LOAD_CONST kwargs %s'
rule = ('mkfunc ::= %s%sload_closure LOAD_CONST %s'
% (kwargs_str, 'expr ' * args_pos, opname))
elif self.version == 3.3:
rule = ('mkfunc ::= %s%sload_closure LOAD_CONST LOAD_CONST %s'
@@ -870,9 +905,11 @@ class Python3Parser(PythonParser):
% ('expr ' * args_pos, kwargs_str, opname))
self.add_unique_rule(rule, opname, token.attr, customize)
rule = ('mkfunc ::= %sload_closure load_genexpr %s'
% ('pos_arg ' * args_pos, opname))
self.add_unique_rule(rule, opname, token.attr, customize)
if args_kw == 0:
rule = ('mkfunc ::= %sload_closure load_genexpr %s'
% ('pos_arg ' * args_pos, opname))
self.add_unique_rule(rule, opname, token.attr, customize)
if self.version < 3.4:
rule = ('mkfunc ::= %sload_closure LOAD_CONST %s'
@@ -968,30 +1005,41 @@ class Python3Parser(PythonParser):
opname))
self.add_make_function_rule(rule_pat, opname, token.attr, customize)
if args_kw == 0:
kwargs = 'no_kwargs'
self.add_unique_rule("no_kwargs ::=", opname, token.attr, customize)
else:
kwargs = 'kwargs'
if self.version < 3.3:
# positional args after keyword args
rule = ('mkfunc ::= kwargs %s%s %s' %
rule = ('mkfunc ::= %s %s%s%s' %
(kwargs, 'pos_arg ' * args_pos, 'LOAD_CONST ',
opname))
self.add_unique_rule(rule, opname, token.attr, customize)
rule = ('mkfunc ::= %s%s%s' %
('pos_arg ' * args_pos, 'LOAD_CONST ',
opname))
elif self.version == 3.3:
# positional args after keyword args
rule = ('mkfunc ::= kwargs %s%s %s' %
('pos_arg ' * args_pos, 'LOAD_CONST '*2,
rule = ('mkfunc ::= %s %s%s%s' %
(kwargs, 'pos_arg ' * args_pos, 'LOAD_CONST '*2,
opname))
elif self.version > 3.5:
# positional args before keyword args
rule = ('mkfunc ::= %skwargs1 %s %s' %
('pos_arg ' * args_pos, 'LOAD_CONST '*2,
rule = ('mkfunc ::= %s%s %s%s' %
('pos_arg ' * args_pos, kwargs, 'LOAD_CONST '*2,
opname))
elif self.version > 3.3:
# positional args before keyword args
rule = ('mkfunc ::= %skwargs %s %s' %
('pos_arg ' * args_pos, 'LOAD_CONST '*2,
rule = ('mkfunc ::= %s%s %s%s' %
('pos_arg ' * args_pos, kwargs, 'LOAD_CONST '*2,
opname))
else:
rule = ('mkfunc ::= kwargs %sexpr %s' %
('pos_arg ' * args_pos, opname))
rule = ('mkfunc ::= %s%sexpr %s' %
(kwargs, 'pos_arg ' * args_pos, opname))
self.add_unique_rule(rule, opname, token.attr, customize)
if opname.startswith('MAKE_FUNCTION_A'):
if self.version >= 3.6:
rule = ('mkfunc_annotate ::= %s%sannotate_tuple LOAD_CONST LOAD_CONST %s' %

View File

@@ -31,6 +31,9 @@ class Python34Parser(Python33Parser):
expr ::= LOAD_ASSERT
# passtmt is needed for semantic actions to add "pass"
suite_stmts_opt ::= pass
# Seems to be needed starting 3.4.4 or so
while1stmt ::= SETUP_LOOP l_stmts
COME_FROM JUMP_BACK POP_BLOCK COME_FROM_LOOP

View File

@@ -32,7 +32,7 @@ class Python35Parser(Python34Parser):
# ...
# the end of the if will jump back to the loop and there will be a COME_FROM
# after the jump
l_stmts ::= lastl_stmt COME_FROM l_stmts
l_stmts ::= lastl_stmt come_froms l_stmts
# Python 3.5+ Await statement
expr ::= await_expr
@@ -101,7 +101,20 @@ class Python35Parser(Python34Parser):
return_if_stmt ::= ret_expr RETURN_END_IF POP_BLOCK
jb_else ::= JUMP_BACK ELSE
ifelsestmtc ::= testexpr c_stmts_opt JUMP_FORWARD else_suitec
ifelsestmtl ::= testexpr c_stmts_opt jb_else else_suitel
# 3.5 Has jump optimization which can route the end of an
# "if/then" back to to a loop just before an else.
jump_absolute_else ::= jb_else
jump_absolute_else ::= CONTINUE ELSE
# Our hacky "ELSE" determination doesn't do a good job and really
# determine the start of an "else". It could also be the end of an
# "if-then" which ends in a "continue". Perhaps with real control-flow
# analysis we'll sort this out. Or call "ELSE" something more appropriate.
_ifstmts_jump ::= c_stmts_opt ELSE
# ifstmt ::= testexpr c_stmts_opt
@@ -209,7 +222,6 @@ class Python35Parser(Python34Parser):
self.add_unique_rule(rule, token.kind, uniq_param, customize)
self.add_unique_rule('expr ::= async_call', token.kind, uniq_param, customize)
uniq_param = args_kw + args_pos
if opname.startswith('CALL_FUNCTION_VAR'):
# Python 3.5 changes the stack position of *args. KW args come
# after *args.
@@ -225,7 +237,10 @@ class Python35Parser(Python34Parser):
rule = ('call ::= expr expr ' +
('pos_arg ' * args_pos) +
('kwarg ' * args_kw) + kw + token.kind)
self.add_unique_rule(rule, token.kind, uniq_param, customize)
# Note: semantic actions make use of the fact of wheter "args_pos"
# zero or not in creating a template rule.
self.add_unique_rule(rule, token.kind, args_pos, customize)
else:
super(Python35Parser, self).custom_classfunc_rule(opname, token, customize,
seen_LOAD_BUILD_CLASS,
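
For context, not part of the diff: the comment above refers to CPython 3.5 pushing *args after the positional arguments, so a call such as f(a, *b, **c) compiles to a CALL_FUNCTION_VAR_KW-style opcode that these rules must match. An illustrative check, assuming a 3.5 interpreter:

import dis
# On CPython 3.5 this typically shows CALL_FUNCTION_VAR_KW;
# on 3.6+ the same call compiles to CALL_FUNCTION_EX instead.
dis.dis(compile("f(a, *b, **c)", "<example>", "eval"))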

View File

@@ -35,9 +35,6 @@ class Python36Parser(Python35Parser):
# 3.6 redoes how return_closure works. FIXME: Isolate to LOAD_CLOSURE
return_closure ::= LOAD_CLOSURE DUP_TOP STORE_NAME RETURN_VALUE RETURN_LAST
# Is there something general going on here? FIXME: Isolate to LOAD_DICTCOMP
dict_comp ::= load_closure LOAD_DICTCOMP LOAD_CONST MAKE_FUNCTION_8 expr GET_ITER CALL_FUNCTION_1
stmt ::= conditional_lambda
conditional_lambda ::= expr jmp_false expr return_if_lambda
return_stmt_lambda LAMBDA_MARKER
@@ -51,8 +48,9 @@ class Python36Parser(Python35Parser):
whilestmt ::= SETUP_LOOP testexpr l_stmts_opt
JUMP_BACK come_froms POP_BLOCK COME_FROM_LOOP
# This might be valid in < 3.6
# A COME_FROM is dropped off because of JUMP-to-JUMP optimization
and ::= expr jmp_false expr
and ::= expr jmp_false expr jmp_false
jf_cf ::= JUMP_FORWARD COME_FROM
conditional ::= expr jmp_false expr jf_cf expr COME_FROM
@@ -62,6 +60,9 @@ class Python36Parser(Python35Parser):
except_suite ::= c_stmts_opt COME_FROM POP_EXCEPT jump_except COME_FROM
jb_cfs ::= JUMP_BACK come_froms
ifelsestmtl ::= testexpr c_stmts_opt jb_cfs else_suitel
# In 3.6+, A sequence of statements ending in a RETURN can cause
# JUMP_FORWARD END_FINALLY to be omitted from try middle
@@ -105,6 +106,23 @@ class Python36Parser(Python35Parser):
fstring_single ::= expr FORMAT_VALUE
"""
self.add_unique_doc_rules(rules_str, customize)
elif opname == 'MAKE_FUNCTION_8':
if self.seen_LOAD_DICTCOMP:
# Is there something general going on here?
rule = """
dict_comp ::= load_closure LOAD_DICTCOMP LOAD_CONST
MAKE_FUNCTION_8 expr
GET_ITER CALL_FUNCTION_1
"""
self.addRule(rule, nop_func)
elif self.seen_LOAD_SETCOMP:
rule = """
set_comp ::= load_closure LOAD_SETCOMP LOAD_CONST
MAKE_FUNCTION_8 expr
GET_ITER CALL_FUNCTION_1
"""
self.addRule(rule, nop_func)
elif opname == 'BEFORE_ASYNC_WITH':
rules_str = """
stmt ::= async_with_stmt
@@ -195,11 +213,9 @@ class Python36Parser(Python35Parser):
self.add_unique_rule('expr ::= async_call', token.kind, uniq_param, customize)
if opname.startswith('CALL_FUNCTION_KW'):
self.addRule("expr ::= call_kw", nop_func)
self.addRule("expr ::= call_kw36", nop_func)
values = 'expr ' * token.attr
rule = 'call_kw ::= expr kwargs_36 %s' % token.kind
self.addRule(rule, nop_func)
rule = 'kwargs_36 ::= %s LOAD_CONST' % values
rule = "call_kw36 ::= expr %s LOAD_CONST %s" % (values, opname)
self.add_unique_rule(rule, token.kind, token.attr, customize)
elif opname == 'CALL_FUNCTION_EX_KW':
self.addRule("""expr ::= call_ex_kw

View File

@@ -21,15 +21,22 @@ scanner/ingestion module. From here we call various version-specific
scanners, e.g. for Python 2.7 or 3.4.
"""
from array import array
import sys
from uncompyle6 import PYTHON3, IS_PYPY
from uncompyle6 import PYTHON3, IS_PYPY, PYTHON_VERSION
from uncompyle6.scanners.tok import Token
import xdis
from xdis.bytecode import instruction_size, extended_arg_val, next_offset
from xdis.bytecode import (
Bytecode, instruction_size, extended_arg_val, next_offset)
from xdis.magics import canonic_python_version
from xdis.util import code2num
if PYTHON_VERSION < 2.6:
from xdis.namedtuple24 import namedtuple
else:
from collections import namedtuple
# The byte code versions we support.
# Note: these all have to be floats
PYTHON_VERSIONS = frozenset((1.5,
@@ -88,11 +95,73 @@ class Scanner(object):
# FIXME: This weird Python2 behavior is not Python3
self.resetTokenClass()
def opname_for_offset(self, offset):
return self.opc.opname[self.code[offset]]
def build_instructions(self, co):
"""
Create a list of instructions (a structured object rather than
an array of bytes) and store that in self.insts
"""
# FIXME: remove this when all subsidiary functions have been removed.
# We should be able to get everything from the self.insts list.
self.code = array('B', co.co_code)
def op_name(self, op):
return self.opc.opname[op]
bytecode = Bytecode(co, self.opc)
self.build_prev_op()
self.insts = self.remove_extended_args(list(bytecode))
self.lines = self.build_lines_data(co)
self.offset2inst_index = {}
for i, inst in enumerate(self.insts):
self.offset2inst_index[inst.offset] = i
return bytecode
def build_lines_data(self, code_obj):
"""
Generate various line-related helper data.
"""
# Offset: lineno pairs, only for offsets which start line.
# Locally we use list for more convenient iteration using indices
linestarts = list(self.opc.findlinestarts(code_obj))
self.linestarts = dict(linestarts)
# 'List-map' which shows line number of current op and offset of
# first op on following line, given offset of op as index
lines = []
LineTuple = namedtuple('LineTuple', ['l_no', 'next'])
# Iterate through available linestarts, and fill
# the data for all code offsets encountered until
# last linestart offset
_, prev_line_no = linestarts[0]
offset = 0
for start_offset, line_no in linestarts[1:]:
while offset < start_offset:
lines.append(LineTuple(prev_line_no, start_offset))
offset += 1
prev_line_no = line_no
# Fill remaining offsets with reference to last line number
# and code length as start offset of following non-existing line
codelen = len(self.code)
while offset < codelen:
lines.append(LineTuple(prev_line_no, codelen))
offset += 1
return lines
def build_prev_op(self):
"""
Compose a 'list-map' which allows jumping to the previous
op, given the offset of the current op as index.
"""
code = self.code
codelen = len(code)
# 2.x uses prev; 3.x uses prev_op. Sigh.
# Until we get this sorted out.
self.prev = self.prev_op = [0]
for offset in self.op_range(0, codelen):
op = code[offset]
for _ in range(instruction_size(op, self.opc)):
self.prev_op.append(offset)
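A rough standalone sketch of what build_lines_data computes, using the stdlib dis.findlinestarts in place of self.opc.findlinestarts (the helper name line_map is hypothetical; the real method also special-cases offsets before the first line start):

import dis

def line_map(code):
    """Map each byte offset to (line_no, offset where the next line starts)."""
    starts = list(dis.findlinestarts(code))    # [(offset, lineno), ...]
    sentinel = [(len(code.co_code), None)]     # pretend a line starts after the end
    table = {}
    for (off, line), (next_off, _) in zip(starts, starts[1:] + sentinel):
        for o in range(off, next_off):
            table[o] = (line, next_off)
    return table

# e.g. line_map(line_map.__code__)[0] gives the first instruction's line info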
def is_jump_forward(self, offset):
"""
@@ -242,17 +311,57 @@ class Scanner(object):
pass
return result_offset
def all_instr(self, start, end, instr, target=None, include_beyond_target=False):
def inst_matches(self, start, end, instr, target=None, include_beyond_target=False):
"""
Find all <instr> in the block from start to end.
<instr> is any python bytecode instruction or a list of opcodes
If <instr> is an opcode with a target (like a jump), a target
Find all `instr` in the block from start to end.
`instr` is a Python opcode or a list of opcodes
If `instr` is an opcode with a target (like a jump), a target
destination can be specified which must match precisely.
Return a list with indexes to them or [] if none found.
"""
try:
None in instr
except:
instr = [instr]
# FIXME: this is broken on 3.6+. Revise to use instructions self.insts
first = self.offset2inst_index[start]
result = []
for inst in self.insts[first:]:
if inst.opcode in instr:
if target is None:
result.append(inst.offset)
else:
t = self.get_target(inst.offset)
if include_beyond_target and t >= target:
result.append(inst.offset)
elif t == target:
result.append(inst.offset)
pass
pass
pass
if inst.offset >= end:
break
pass
# FIXME: put in a test
# check = self.all_instr(start, end, instr, target, include_beyond_target)
# assert result == check
return result
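Callers elsewhere in this changeset use the new method exactly as they used all_instr; a typical call, lifted from the scanner3 changes below and shown here only as a usage fragment, is:

jump_ifs = self.inst_matches(start, self.next_stmt[offset],
                             self.opc.POP_JUMP_IF_FALSE)
# or, restricted to jumps that land on a specific target:
matches = self.inst_matches(start, end, self.opc.POP_JUMP_IF_FALSE, target)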
# FIXME: this is broken on 3.6+. Replace remaining (2.x-based) calls
# with inst_matches
def all_instr(self, start, end, instr, target=None, include_beyond_target=False):
"""
Find all `instr` in the block from start to end.
`instr` is any Python opcode or a list of opcodes
If `instr` is an opcode with a target (like a jump), a target
destination can be specified which must match precisely.
Return a list with indexes to them or [] if none found.
"""
code = self.code
assert(start >= 0 and end <= len(code))
@@ -290,6 +399,12 @@ class Scanner(object):
return result
def opname_for_offset(self, offset):
return self.opc.opname[self.code[offset]]
def op_name(self, op):
return self.opc.opname[op]
def op_range(self, start, end):
"""
Iterate through positions of opcodes, skipping
@@ -299,11 +414,50 @@ class Scanner(object):
yield start
start += instruction_size(self.code[start], self.opc)
def remove_extended_args(self, instructions):
"""Go through instructions removing extended ARG.
get_instruction_bytes previously adjusted the operand values
to account for these"""
new_instructions = []
last_was_extarg = False
n = len(instructions)
for i, inst in enumerate(instructions):
if (inst.opname == 'EXTENDED_ARG' and
i+1 < n and instructions[i+1].opname != 'MAKE_FUNCTION'):
last_was_extarg = True
starts_line = inst.starts_line
is_jump_target = inst.is_jump_target
offset = inst.offset
continue
if last_was_extarg:
# j = self.stmts.index(inst.offset)
# self.lines[j] = offset
new_inst= inst._replace(starts_line=starts_line,
is_jump_target=is_jump_target,
offset=offset)
inst = new_inst
if i < n:
new_prev = self.prev_op[instructions[i].offset]
j = instructions[i+1].offset
old_prev = self.prev_op[j]
while self.prev_op[j] == old_prev and j < n:
self.prev_op[j] = new_prev
j += 1
last_was_extarg = False
new_instructions.append(inst)
return new_instructions
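For context on what is being folded away here (a sketch; the shift amounts are the documented CPython encoding, 8 bits per EXTENDED_ARG for 3.6+ wordcode and 16 bits for earlier bytecode, and combined_arg is a hypothetical name): an EXTENDED_ARG only widens the operand of the instruction that follows it, so once xdis has combined the operands the prefix instruction is redundant apart from the offset, line and jump-target attributes that _replace() carries over above.

def combined_arg(ext_arg, following_arg, wordcode=True):
    # EXTENDED_ARG contributes its operand shifted left by one operand width.
    shift = 8 if wordcode else 16
    return (ext_arg << shift) | following_arg

# EXTENDED_ARG 1 followed by JUMP_ABSOLUTE 44 encodes a jump target of 300.
assert combined_arg(1, 44) == 300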
def remove_mid_line_ifs(self, ifs):
"""
Go through passed offsets, filtering ifs
located somewhere mid-line.
"""
# FIXME: this doesn't work for Python 3.6+
filtered = []
for i in ifs:
# For each offset, if line number of current and next op
@@ -371,7 +525,7 @@ def get_scanner(version, is_pypy=False, show_asm=None):
if __name__ == "__main__":
import inspect, uncompyle6
co = inspect.currentframe().f_code
scanner = get_scanner('2.7.13', True)
scanner = get_scanner(sys.version[:5], False)
# scanner = get_scanner('2.7.13', True)
# scanner = get_scanner(sys.version[:5], False)
scanner = get_scanner(uncompyle6.PYTHON_VERSION, IS_PYPY, True)
tokens, customize = scanner.ingest(co, {})
tokens, customize = scanner.ingest(co, {}, show_asm='after')

View File

@@ -35,21 +35,15 @@ Finally we save token information.
from uncompyle6 import PYTHON_VERSION
if PYTHON_VERSION < 2.6:
from xdis.namedtuple24 import namedtuple
else:
from collections import namedtuple
from array import array
from copy import copy
from xdis.code import iscode
from xdis.bytecode import (
Bytecode, op_has_argument, instruction_size,
op_has_argument, instruction_size,
_get_const_info)
from xdis.util import code2num
from uncompyle6.scanner import Scanner
from uncompyle6.scanner import Scanner, Token
class Scanner2(Scanner):
def __init__(self, version, show_asm=None, is_pypy=False):
@@ -61,6 +55,57 @@ class Scanner2(Scanner):
self.genexpr_name = '<genexpr>'
self.load_asserts = set([])
# Create opcode classification sets
# Note: super initialization above initializes self.opc
# Ops that start SETUP_ ... We will generate COME_FROMs with these names.
# Also some block and END_ statements; any of these can start
# a new statement.
self.statement_opcodes = frozenset([
self.opc.SETUP_LOOP, self.opc.BREAK_LOOP,
self.opc.SETUP_FINALLY, self.opc.END_FINALLY,
self.opc.SETUP_EXCEPT, self.opc.POP_BLOCK,
self.opc.STORE_FAST, self.opc.DELETE_FAST,
self.opc.STORE_DEREF, self.opc.STORE_GLOBAL,
self.opc.DELETE_GLOBAL, self.opc.STORE_NAME,
self.opc.DELETE_NAME, self.opc.STORE_ATTR,
self.opc.DELETE_ATTR, self.opc.STORE_SUBSCR,
self.opc.DELETE_SUBSCR, self.opc.RETURN_VALUE,
self.opc.RAISE_VARARGS, self.opc.POP_TOP,
self.opc.PRINT_EXPR, self.opc.PRINT_ITEM,
self.opc.PRINT_NEWLINE, self.opc.PRINT_ITEM_TO,
self.opc.PRINT_NEWLINE_TO, self.opc.CONTINUE_LOOP,
self.opc.JUMP_ABSOLUTE, self.opc.EXEC_STMT,
])
# Opcodes that can start a "store" non-terminal.
# FIXME: JUMP_ABSOLUTE is weird. What's up with that?
self.designator_ops = frozenset([
self.opc.STORE_FAST, self.opc.STORE_NAME,
self.opc.STORE_GLOBAL, self.opc.STORE_DEREF, self.opc.STORE_ATTR,
self.opc.STORE_SLICE_0, self.opc.STORE_SLICE_1, self.opc.STORE_SLICE_2,
self.opc.STORE_SLICE_3, self.opc.STORE_SUBSCR, self.opc.UNPACK_SEQUENCE,
self.opc.JUMP_ABSOLUTE
])
# Python 2.7 has POP_JUMP_IF_{TRUE,FALSE}_OR_POP but < 2.7 doesn't
# Add an empty set to make processing more uniform.
self.pop_jump_if_or_pop = frozenset([])
# opcodes which expect a variable number of pushed values whose
# count is in the opcode. For parsing we generally change the
# opcode name to include that number.
self.varargs_ops = frozenset([
self.opc.BUILD_LIST, self.opc.BUILD_TUPLE,
self.opc.BUILD_SLICE, self.opc.UNPACK_SEQUENCE,
self.opc.MAKE_FUNCTION, self.opc.CALL_FUNCTION,
self.opc.MAKE_CLOSURE, self.opc.CALL_FUNCTION_VAR,
self.opc.CALL_FUNCTION_KW, self.opc.CALL_FUNCTION_VAR_KW,
self.opc.DUP_TOPX, self.opc.RAISE_VARARGS])
@staticmethod
def unmangle_name(name, classname):
"""Remove __ from the end of _name_ if it starts with __classname__
@@ -110,7 +155,8 @@ class Scanner2(Scanner):
if not show_asm:
show_asm = self.show_asm
bytecode = Bytecode(co, self.opc)
bytecode = self.build_instructions(co)
# show_asm = 'after'
if show_asm in ('both', 'before'):
for instr in bytecode.get_instructions(co):
@@ -121,21 +167,10 @@ class Scanner2(Scanner):
# "customize" is in the process of going away here
customize = {}
if self.is_pypy:
customize['PyPy'] = 0
Token = self.Token # shortcut
codelen = self.setup_code(co)
self.build_lines_data(co, codelen)
self.build_prev_op(codelen)
self.insts = list(bytecode)
self.offset2inst_index = {}
for i, inst in enumerate(self.insts):
self.offset2inst_index[inst.offset] = i
codelen = len(self.code)
free, names, varnames = self.unmangle_code_names(co, classname)
self.names = names
@@ -146,13 +181,11 @@ class Scanner2(Scanner):
self.load_asserts = set()
for i in self.op_range(0, codelen):
self.offset2inst_index[inst.offset] = i
# We need to detect the difference between:
# raise AssertionError
# and
# assert ...
# Below we use the heuristic that it is preceded by a POP_JUMP.
# Below we use the heuristic that an "assert" is preceded by a POP_JUMP.
# however we could also use followed by RAISE_VARARGS
# or for PyPy there may be a JUMP_IF_NOT_DEBUG before.
# FIXME: remove uses of PJIF, and PJIT
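The two source forms the heuristic above has to tell apart look like this (illustrative probe, function names hypothetical); disassembling both shows that only the assert form guards the AssertionError load with a conditional POP_JUMP, while the explicit raise is unconditional:

import dis

def uses_assert(x):
    assert x, "boom"              # conditional jump over the raise

def uses_raise(x):
    raise AssertionError("boom")  # raised unconditionally

dis.dis(uses_assert)
dis.dis(uses_raise)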
@@ -318,7 +351,7 @@ class Scanner2(Scanner):
if (offset in self.stmts and
self.code[offset+3] not in (self.opc.END_FINALLY,
self.opc.POP_BLOCK)):
if ((offset in self.linestartoffsets and
if ((offset in self.linestarts and
self.code[self.prev[offset]] == self.opc.JUMP_ABSOLUTE)
or self.code[target] == self.opc.FOR_ITER
or offset not in self.not_continue):
@@ -331,10 +364,7 @@ class Scanner2(Scanner):
if offset in self.return_end_ifs:
op_name = 'RETURN_END_IF'
if offset in self.linestartoffsets:
linestart = self.linestartoffsets[offset]
else:
linestart = None
linestart = self.linestarts.get(offset, None)
if offset not in replace:
tokens.append(Token(
@@ -353,63 +383,6 @@ class Scanner2(Scanner):
print()
return tokens, customize
def setup_code(self, co):
"""
Creates Python-independent bytecode structure (byte array) in
self.code and records previous instruction in self.prev
The size of self.code is returned
"""
self.code = array('B', co.co_code)
n = -1
for i in self.op_range(0, len(self.code)):
if self.code[i] in (self.opc.RETURN_VALUE, self.opc.END_FINALLY):
n = i + 1
pass
pass
assert n > -1, "Didn't find RETURN_VALUE or END_FINALLY"
self.code = array('B', co.co_code[:n])
return n
def build_prev_op(self, n):
self.prev = [0]
# mapping addresses of instruction & argument
for i in self.op_range(0, n):
op = self.code[i]
self.prev.append(i)
if op_has_argument(op, self.opc):
self.prev.append(i)
self.prev.append(i)
pass
pass
def build_lines_data(self, co, n):
"""
Initializes self.lines and self.linestartoffsets
"""
self.lines = []
linetuple = namedtuple('linetuple', ['l_no', 'next'])
# self.linestarts is a tuple of (offset, line number).
# Turn that into a hash (dict) that we can index
self.linestarts = list(self.opc.findlinestarts(co))
self.linestartoffsets = {}
for offset, lineno in self.linestarts:
self.linestartoffsets[offset] = lineno
j = 0
(prev_start_byte, prev_line_no) = self.linestarts[0]
for (start_byte, line_no) in self.linestarts[1:]:
while j < start_byte:
self.lines.append(linetuple(prev_line_no, start_byte))
j += 1
prev_line_no = start_byte
while j < n:
self.lines.append(linetuple(prev_line_no, n))
j+=1
return
def build_statement_indices(self):
code = self.code
start = 0
@@ -420,7 +393,7 @@ class Scanner2(Scanner):
(self.opc.PJIT, self.opc.JUMP_FORWARD),
(self.opc.PJIT, self.opc.JUMP_ABSOLUTE)])
prelim = self.all_instr(start, end, self.stmt_opcodes)
prelim = self.all_instr(start, end, self.statement_opcodes)
stmts = self.stmts = set(prelim)
pass_stmts = set()
@@ -976,7 +949,8 @@ class Scanner2(Scanner):
'end': pre_rtarget})
# FIXME: this is yet another case where we need dominators.
if pre_rtarget not in self.linestartoffsets or self.version < 2.7:
if (pre_rtarget not in self.linestarts
or self.version < 2.7):
self.not_continue.add(pre_rtarget)
if rtarget < end_offset:
@@ -1165,6 +1139,19 @@ class Scanner2(Scanner):
return targets
def patch_continue(self, tokens, offset, op):
if op in (self.opc.JUMP_FORWARD, self.opc.JUMP_ABSOLUTE):
# FIXME: this is a hack to catch stuff like:
# for ...
# try: ...
# except: continue
# the "continue" is not on a new line.
n = len(tokens)
if (n > 2 and
tokens[-1].kind == 'JUMP_BACK' and
self.code[offset+3] == self.opc.END_FINALLY):
tokens[-1].kind = intern('CONTINUE')
# FIXME: combine with scanner3.py code and put into scanner.py
def rem_or(self, start, end, instr, target=None, include_beyond_target=False):
"""
@@ -1204,3 +1191,17 @@ class Scanner2(Scanner):
instr_offsets = filtered
filtered = []
return instr_offsets
if __name__ == "__main__":
from uncompyle6 import PYTHON_VERSION
if 2.0 <= PYTHON_VERSION < 3.0:
import inspect
co = inspect.currentframe().f_code
from uncompyle6 import PYTHON_VERSION
tokens, customize = Scanner2(PYTHON_VERSION).ingest(co)
for t in tokens:
print(t)
else:
print("Need to be Python 2.x to demo; I am %s." %
PYTHON_VERSION)
pass

View File

@@ -32,59 +32,21 @@ from uncompyle6.scanner import L65536
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_26
from xdis.bytecode import Bytecode
from xdis.bytecode import _get_const_info
from uncompyle6.scanner import Token
JUMP_OPS = opcode_26.JUMP_OPS
class Scanner26(scan.Scanner2):
def __init__(self, show_asm=False):
super(Scanner26, self).__init__(2.6, show_asm)
self.stmt_opcodes = frozenset([
self.opc.SETUP_LOOP, self.opc.BREAK_LOOP,
self.opc.SETUP_FINALLY, self.opc.END_FINALLY,
self.opc.SETUP_EXCEPT, self.opc.POP_BLOCK,
self.opc.STORE_FAST, self.opc.DELETE_FAST,
self.opc.STORE_DEREF, self.opc.STORE_GLOBAL,
self.opc.DELETE_GLOBAL, self.opc.STORE_NAME,
self.opc.DELETE_NAME, self.opc.STORE_ATTR,
self.opc.DELETE_ATTR, self.opc.STORE_SUBSCR,
self.opc.DELETE_SUBSCR, self.opc.RETURN_VALUE,
self.opc.RAISE_VARARGS, self.opc.POP_TOP,
self.opc.PRINT_EXPR, self.opc.PRINT_ITEM,
self.opc.PRINT_NEWLINE, self.opc.PRINT_ITEM_TO,
self.opc.PRINT_NEWLINE_TO, self.opc.CONTINUE_LOOP,
self.opc.JUMP_ABSOLUTE, self.opc.EXEC_STMT,
])
# "setup" opcodes
self.setup_ops = frozenset([
self.opc.SETUP_EXCEPT, self.opc.SETUP_FINALLY,
])
# opcodes which expect a variable number of pushed values whose
# count is in the opcode. For parsing we generally change the
# opcode name to include that number.
self.varargs_ops = frozenset([
self.opc.BUILD_LIST, self.opc.BUILD_TUPLE,
self.opc.BUILD_SLICE, self.opc.UNPACK_SEQUENCE,
self.opc.MAKE_FUNCTION, self.opc.CALL_FUNCTION,
self.opc.MAKE_CLOSURE, self.opc.CALL_FUNCTION_VAR,
self.opc.CALL_FUNCTION_KW, self.opc.CALL_FUNCTION_VAR_KW,
self.opc.DUP_TOPX, self.opc.RAISE_VARARGS])
# opcodes that store values into a variable
self.designator_ops = frozenset([
self.opc.STORE_FAST, self.opc.STORE_NAME,
self.opc.STORE_GLOBAL, self.opc.STORE_DEREF, self.opc.STORE_ATTR,
self.opc.STORE_SLICE_0, self.opc.STORE_SLICE_1, self.opc.STORE_SLICE_2,
self.opc.STORE_SLICE_3, self.opc.STORE_SUBSCR, self.opc.UNPACK_SEQUENCE,
self.opc.JUMP_ABSOLUTE
])
# Python 2.7 has POP_JUMP_IF_{TRUE,FALSE}_OR_POP but < 2.7 doesn't
# Add an empty set to make processing more uniform.
self.pop_jump_if_or_pop = frozenset([])
return
def ingest(self, co, classname=None, code_objects={}, show_asm=None):
@@ -106,7 +68,8 @@ class Scanner26(scan.Scanner2):
if not show_asm:
show_asm = self.show_asm
bytecode = Bytecode(co, self.opc)
bytecode = self.build_instructions(co)
# show_asm = 'after'
if show_asm in ('both', 'before'):
for instr in bytecode.get_instructions(co):
@@ -119,17 +82,7 @@ class Scanner26(scan.Scanner2):
if self.is_pypy:
customize['PyPy'] = 1
Token = self.Token # shortcut
codelen = self.setup_code(co)
self.build_lines_data(co, codelen)
self.build_prev_op(codelen)
self.insts = list(bytecode)
self.offset2inst_index = {}
for i, inst in enumerate(self.insts):
self.offset2inst_index[inst.offset] = i
codelen = len(self.code)
free, names, varnames = self.unmangle_code_names(co, classname)
self.names = names
@@ -288,7 +241,7 @@ class Scanner26(scan.Scanner2):
if (offset in self.stmts
and self.code[offset+3] not in (self.opc.END_FINALLY,
self.opc.POP_BLOCK)):
if ((offset in self.linestartoffsets and
if ((offset in self.linestarts and
tokens[-1].kind == 'JUMP_BACK')
or offset not in self.not_continue):
op_name = 'CONTINUE'
@@ -309,10 +262,7 @@ class Scanner26(scan.Scanner2):
if offset in self.return_end_ifs:
op_name = 'RETURN_END_IF'
if offset in self.linestartoffsets:
linestart = self.linestartoffsets[offset]
else:
linestart = None
linestart = self.linestarts.get(offset, None)
if offset not in replace:
tokens.append(Token(

View File

@@ -23,28 +23,15 @@ class Scanner27(Scanner2):
super(Scanner27, self).__init__(2.7, show_asm, is_pypy)
# opcodes that start statements
self.stmt_opcodes = frozenset([
self.opc.SETUP_LOOP, self.opc.BREAK_LOOP,
self.opc.SETUP_FINALLY, self.opc.END_FINALLY,
self.opc.SETUP_EXCEPT,
self.opc.POP_BLOCK, self.opc.STORE_FAST, self.opc.DELETE_FAST,
self.opc.STORE_DEREF, self.opc.STORE_GLOBAL,
self.opc.DELETE_GLOBAL, self.opc.STORE_NAME,
self.opc.DELETE_NAME, self.opc.STORE_ATTR,
self.opc.DELETE_ATTR, self.opc.STORE_SUBSCR,
self.opc.DELETE_SUBSCR, self.opc.RETURN_VALUE,
self.opc.RAISE_VARARGS, self.opc.POP_TOP,
self.opc.PRINT_EXPR, self.opc.PRINT_ITEM,
self.opc.PRINT_NEWLINE, self.opc.PRINT_ITEM_TO,
self.opc.PRINT_NEWLINE_TO, self.opc.CONTINUE_LOOP,
self.opc.JUMP_ABSOLUTE, self.opc.EXEC_STMT,
# New in 2.7
self.opc.SETUP_WITH,
self.opc.STORE_SLICE_0, self.opc.STORE_SLICE_1,
self.opc.STORE_SLICE_2, self.opc.STORE_SLICE_3,
self.opc.DELETE_SLICE_0, self.opc.DELETE_SLICE_1,
self.opc.DELETE_SLICE_2, self.opc.DELETE_SLICE_3,
])
self.statement_opcodes = frozenset(
self.statement_opcodes | set([
# New in 2.7
self.opc.SETUP_WITH,
self.opc.STORE_SLICE_0, self.opc.STORE_SLICE_1,
self.opc.STORE_SLICE_2, self.opc.STORE_SLICE_3,
self.opc.DELETE_SLICE_0, self.opc.DELETE_SLICE_1,
self.opc.DELETE_SLICE_2, self.opc.DELETE_SLICE_3,
]))
# opcodes which expect a variable number of pushed values and whose
# count is in the opcode. For parsing we generally change the
@@ -83,19 +70,6 @@ class Scanner27(Scanner2):
return
def patch_continue(self, tokens, offset, op):
if op in (self.opc.JUMP_FORWARD, self.opc.JUMP_ABSOLUTE):
# FIXME: this is a hack to catch stuff like:
# for ...
# try: ...
# except: continue
# the "continue" is not on a new line.
n = len(tokens)
if (n > 2 and
tokens[-1].kind == 'JUMP_BACK' and
self.code[offset+3] == self.opc.END_FINALLY):
tokens[-1].kind = intern('CONTINUE')
pass
if __name__ == "__main__":

View File

@@ -40,10 +40,8 @@ if PYTHON_VERSION < 2.6:
else:
from collections import namedtuple
from array import array
from xdis.code import iscode
from xdis.bytecode import Bytecode, instruction_size, _get_const_info
from xdis.bytecode import instruction_size, _get_const_info
from uncompyle6.scanner import Token, parse_fn_counts
import xdis
@@ -104,7 +102,7 @@ class Scanner3(Scanner):
self.statement_opcodes = frozenset(statement_opcodes) | self.setup_ops_no_loop
# Opcodes that can start a designator non-terminal.
# Opcodes that can start a "store" non-terminal.
# FIXME: JUMP_ABSOLUTE is weird. What's up with that?
self.designator_ops = frozenset([
self.opc.STORE_FAST, self.opc.STORE_NAME, self.opc.STORE_GLOBAL,
@@ -138,6 +136,7 @@ class Scanner3(Scanner):
(self.opc.JUMP_FORWARD,),
(self.opc.JUMP_ABSOLUTE,)]
# FIXME: remove this and instead use info from xdis.
# Opcodes that take a variable number of arguments
# (expr's)
varargs_ops = set([
@@ -148,52 +147,21 @@ class Scanner3(Scanner):
if is_pypy:
varargs_ops.add(self.opc.CALL_METHOD)
if self.version >= 3.6:
varargs_ops.add(self.opc.BUILD_CONST_KEY_MAP)
# Below is in bit order: "default" = bit 0, ..., "closure" = bit 3
self.MAKE_FUNCTION_FLAGS = tuple("""
default keyword-only annotation closure""".split())
if self.version >= 3.5:
varargs_ops |= set([self.opc.BUILD_SET_UNPACK,
self.opc.BUILD_MAP_UNPACK, # we will handle this later
self.opc.BUILD_LIST_UNPACK,
self.opc.BUILD_TUPLE_UNPACK])
if self.version >= 3.6:
varargs_ops.add(self.opc.BUILD_CONST_KEY_MAP)
# Below is in bit order: "default" = bit 0, ..., "closure" = bit 3
self.MAKE_FUNCTION_FLAGS = tuple("""
default keyword-only annotation closure""".split())
self.varargs_ops = frozenset(varargs_ops)
# FIXME: remove the above in favor of:
# self.varargs_ops = frozenset(self.opc.hasvargs)
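To make the bit layout above concrete (a sketch; the values are the documented Python 3.6 MAKE_FUNCTION flag bits, and mkfunc_operands is a hypothetical name):

MAKE_FUNCTION_FLAGS = ('default', 'keyword-only', 'annotation', 'closure')

def mkfunc_operands(flags):
    """Names of the extra stack items a 3.6 MAKE_FUNCTION with `flags` pops."""
    return [name for bit, name in enumerate(MAKE_FUNCTION_FLAGS)
            if flags & (1 << bit)]

# MAKE_FUNCTION 9 (0b1001): positional defaults plus a closure tuple.
assert mkfunc_operands(9) == ['default', 'closure']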
def remove_extended_args(self, instructions):
"""Go through instructions removing extended ARG.
get_instruction_bytes previously adjusted the operand values
to account for these"""
new_instructions = []
last_was_extarg = False
n = len(instructions)
for i, inst in enumerate(instructions):
if (inst.opname == 'EXTENDED_ARG' and
i+1 < n and instructions[i+1].opname != 'MAKE_FUNCTION'):
last_was_extarg = True
starts_line = inst.starts_line
is_jump_target = inst.is_jump_target
offset = inst.offset
continue
if last_was_extarg:
# j = self.stmts.index(inst.offset)
# self.lines[j] = offset
new_inst= inst._replace(starts_line=starts_line,
is_jump_target=is_jump_target,
offset=offset)
inst = new_inst
if i < n:
new_prev = self.prev_op[instructions[i].offset]
j = instructions[i+1].offset
old_prev = self.prev_op[j]
while self.prev_op[j] == old_prev and j < n:
self.prev_op[j] = new_prev
j += 1
last_was_extarg = False
new_instructions.append(inst)
return new_instructions
def ingest(self, co, classname=None, code_objects={}, show_asm=None):
"""
Pick out tokens from an uncompyle6 code object, and transform them,
@@ -211,15 +179,15 @@ class Scanner3(Scanner):
cause specific rules for the specific number of arguments they take.
"""
# FIXME: remove this when all subsidiary functions have been removed.
# We should be able to get everything from the self.insts list.
self.code = array('B', co.co_code)
bytecode = Bytecode(co, self.opc)
if not show_asm:
show_asm = self.show_asm
# show_asm = 'both'
if not show_asm:
show_asm = self.show_asm
bytecode = self.build_instructions(co)
# show_asm = 'after'
if show_asm in ('both', 'before'):
for instr in bytecode.get_instructions(co):
print(instr.disassemble())
@@ -233,22 +201,14 @@ class Scanner3(Scanner):
if self.is_pypy:
customize['PyPy'] = 0
self.lines = self.build_lines_data(co)
self.build_prev_op()
# FIXME: put as its own method?
# Scan for assertions. Later we will
# turn 'LOAD_GLOBAL' to 'LOAD_ASSERT'.
# 'LOAD_ASSERT' is used in assert statements.
self.load_asserts = set()
self.insts = self.remove_extended_args(list(bytecode))
self.offset2inst_index = {}
n = len(self.insts)
for i, inst in enumerate(self.insts):
self.offset2inst_index[inst.offset] = i
# We need to detect the difference between:
# raise AssertionError
# and
@@ -434,6 +394,7 @@ class Scanner3(Scanner):
.opname == 'FOR_ITER'
and self.insts[i+1].opname == 'JUMP_FORWARD')
if (is_continue or
(inst.offset in self.stmts and
(self.version != 3.0 or (hasattr(inst, 'linestart'))) and
@@ -487,53 +448,6 @@ class Scanner3(Scanner):
print()
return tokens, customize
def build_lines_data(self, code_obj):
"""
Generate various line-related helper data.
"""
# Offset: lineno pairs, only for offsets which start line.
# Locally we use list for more convenient iteration using indices
linestarts = list(self.opc.findlinestarts(code_obj))
self.linestarts = dict(linestarts)
# Plain set with offsets of first ops on line
self.linestart_offsets = set(a for (a, _) in linestarts)
# 'List-map' which shows line number of current op and offset of
# first op on following line, given offset of op as index
lines = []
LineTuple = namedtuple('LineTuple', ['l_no', 'next'])
# Iterate through available linestarts, and fill
# the data for all code offsets encountered until
# last linestart offset
_, prev_line_no = linestarts[0]
offset = 0
for start_offset, line_no in linestarts[1:]:
while offset < start_offset:
lines.append(LineTuple(prev_line_no, start_offset))
offset += 1
prev_line_no = line_no
# Fill remaining offsets with reference to last line number
# and code length as start offset of following non-existing line
codelen = len(self.code)
while offset < codelen:
lines.append(LineTuple(prev_line_no, codelen))
offset += 1
return lines
def build_prev_op(self):
"""
Compose a 'list-map' which allows jumping to the previous
op, given the offset of the current op as index.
"""
code = self.code
codelen = len(code)
# 2.x uses prev; 3.x uses prev_op. Sigh.
# Until we get this sorted out.
self.prev = self.prev_op = [0]
for offset in self.op_range(0, codelen):
op = code[offset]
for _ in range(instruction_size(op, self.opc)):
self.prev_op.append(offset)
def find_jump_targets(self, debug):
"""
Detect all offsets in a byte code which are jump targets
@@ -616,7 +530,7 @@ class Scanner3(Scanner):
# Compose preliminary list of indices with statements,
# using plain statement opcodes
prelim = self.all_instr(start, end, self.statement_opcodes)
prelim = self.inst_matches(start, end, self.statement_opcodes)
# Initialize final container with statements with
# preliminary data
@@ -767,15 +681,17 @@ class Scanner3(Scanner):
if self.get_target(jump_back) >= next_line_byte:
jump_back = self.last_instr(start, end, self.opc.JUMP_ABSOLUTE, start, False)
# This is wrong for 3.6+
if end > jump_back+4 and self.is_jump_forward(end):
if self.is_jump_forward(jump_back+4):
if self.get_target(jump_back+4) == self.get_target(end):
self.fixed_jumps[offset] = jump_back+4
end = jump_back+4
jb_inst = self.get_inst(jump_back)
jb_next_offset = self.next_offset(jb_inst.opcode, jump_back)
if end > jb_next_offset and self.is_jump_forward(end):
if self.is_jump_forward(jb_next_offset):
if self.get_target(jb_next_offset) == self.get_target(end):
self.fixed_jumps[offset] = jb_next_offset
end = jb_next_offset
elif target < offset:
self.fixed_jumps[offset] = jump_back+4
end = jump_back+4
self.fixed_jumps[offset] = jb_next_offset
end = jb_next_offset
target = self.get_target(jump_back)
@@ -877,11 +793,12 @@ class Scanner3(Scanner):
pass
else:
fix = None
jump_ifs = self.all_instr(start, self.next_stmt[offset],
self.opc.POP_JUMP_IF_FALSE)
jump_ifs = self.inst_matches(start, self.next_stmt[offset],
self.opc.POP_JUMP_IF_FALSE)
last_jump_good = True
for j in jump_ifs:
if target == self.get_target(j):
# FIXME: remove magic number
if self.lines[j].next == j + 3 and last_jump_good:
fix = j
break
@@ -914,7 +831,8 @@ class Scanner3(Scanner):
if offset in self.ignore_if:
return
if (code[pre_rtarget] == self.opc.JUMP_ABSOLUTE and
rtarget_is_ja = code[pre_rtarget] == self.opc.JUMP_ABSOLUTE
if ( rtarget_is_ja and
pre_rtarget in self.stmts and
pre_rtarget != offset and
prev_op[pre_rtarget] != offset and
@@ -934,10 +852,13 @@ class Scanner3(Scanner):
# or a conditional assignment like:
# x = 1 if x else 2
#
# For 3.5, in addition the JUMP_FORWARD above we could have
# JUMP_BACK or CONTINUE
#
# There are other contexts we may need to consider
# like whether the target is "END_FINALLY"
# or if the condition jump is to a forward location
if self.is_jump_forward(pre_rtarget):
if self.is_jump_forward(pre_rtarget) or (rtarget_is_ja and self.version >= 3.5):
if_end = self.get_target(pre_rtarget)
# If the jump target is back, we are looping
@@ -1139,13 +1060,14 @@ class Scanner3(Scanner):
assert(start>=0 and end<=len(self.code) and start <= end)
# Find all offsets of requested instructions
instr_offsets = self.all_instr(start, end, instr, target, include_beyond_target)
instr_offsets = self.inst_matches(start, end, instr, target,
include_beyond_target)
# Get all POP_JUMP_IF_TRUE (or) offsets
if self.version == 3.0:
jump_true_op = self.opc.JUMP_IF_TRUE
else:
jump_true_op = self.opc.POP_JUMP_IF_TRUE
pjit_offsets = self.all_instr(start, end, jump_true_op)
pjit_offsets = self.inst_matches(start, end, jump_true_op)
filtered = []
for pjit_offset in pjit_offsets:
pjit_tgt = self.get_target(pjit_offset) - 3

View File

@@ -193,11 +193,12 @@ class Scanner30(Scanner3):
pass
else:
fix = None
jump_ifs = self.all_instr(start, self.next_stmt[offset],
opc.JUMP_IF_FALSE)
jump_ifs = self.inst_matches(start, self.next_stmt[offset],
opc.JUMP_IF_FALSE)
last_jump_good = True
for j in jump_ifs:
if target == self.get_target(j):
# FIXME: remove magic number
if self.lines[j].next == j + 3 and last_jump_good:
fix = j
break

View File

@@ -288,7 +288,10 @@ TABLE_DIRECT = {
'except': ( '%|except:\n%+%c%-', 3 ),
'except_cond1': ( '%|except %c:\n', 1 ),
'except_suite': ( '%+%c%-%C', 0, (1, maxint, '') ),
# In Python 3.6, this is more complicated in the presence of "returns"
'except_suite_finalize': ( '%+%c%-%C', 1, (3, maxint, '') ),
'pass': ( '%|pass\n', ),
'STORE_FAST': ( '%{pattr}', ),
'kv': ( '%c: %c', 3, 1 ),
@@ -381,7 +384,6 @@ PRECEDENCE = {
'ret_cond_not': 28,
'_mklambda': 30,
'call_kw': 100, # 100 seems to be module/function precedence
'yield': 101,
'yield_from': 101
}

View File

@@ -17,17 +17,10 @@
"""
from uncompyle6.semantics.consts import (
INDENT_PER_LEVEL, TABLE_R, TABLE_DIRECT)
TABLE_R, TABLE_DIRECT)
from uncompyle6.semantics.make_function import (
make_function3_annotate,
)
from xdis.util import COMPILER_FLAG_BIT
from xdis.code import iscode
from uncompyle6.parsers.astnode import AST
from uncompyle6.scanners.tok import Token
from uncompyle6.semantics.helper import flatten_list
def customize_for_version(self, is_pypy, version):
if is_pypy:
@@ -188,520 +181,6 @@ def customize_for_version(self, is_pypy, version):
})
if version >= 3.0:
TABLE_DIRECT.update({
'function_def_annotate': ( '\n\n%|def %c%c\n', -1, 0),
'store_locals': ( '%|# inspect.currentframe().f_locals = __locals__\n', ),
})
def n_mkfunc_annotate(node):
if self.version >= 3.3 or node[-2] == 'kwargs':
# LOAD_CONST code object ..
# LOAD_CONST 'x0' if >= 3.3
# EXTENDED_ARG
# MAKE_FUNCTION ..
code = node[-4]
elif node[-3] == 'expr':
code = node[-3][0]
else:
# LOAD_CONST code object ..
# MAKE_FUNCTION ..
code = node[-3]
self.indent_more()
for annotate_last in range(len(node)-1, -1, -1):
if node[annotate_last] == 'annotate_tuple':
break
# FIXME: the real situation is that when derived from
# function_def_annotate the name has already been filled in.
# But when derived from funcdefdeco it hasn't. Would like a better
# way to distinguish.
if self.f.getvalue()[-4:] == 'def ':
self.write(code.attr.co_name)
# FIXME: handle and pass full annotate args
make_function3_annotate(self, node, is_lambda=False,
codeNode=code, annotate_last=annotate_last)
if len(self.param_stack) > 1:
self.write('\n\n')
else:
self.write('\n\n\n')
self.indent_less()
self.prune() # stop recursing
self.n_mkfunc_annotate = n_mkfunc_annotate
if version >= 3.4:
########################
# Python 3.4+ Additions
#######################
TABLE_DIRECT.update({
'LOAD_CLASSDEREF': ( '%{pattr}', ),
})
########################
# Python 3.5+ Additions
#######################
if version >= 3.5:
TABLE_DIRECT.update({
'await_expr': ( 'await %c', 0),
'await_stmt': ( '%|%c\n', 0),
'async_for_stmt': (
'%|async for %c in %c:\n%+%c%-\n\n', 9, 1, 25 ),
'async_forelse_stmt': (
'%|async for %c in %c:\n%+%c%-%|else:\n%+%c%-\n\n', 9, 1, 25, 28 ),
'async_with_stmt': (
'%|async with %c:\n%+%c%-', 0, 7),
'async_with_as_stmt': (
'%|async with %c as %c:\n%+%c%-', 0, 6, 7),
'unmap_dict': ( '{**%C}', (0, -1, ', **') ),
# 'unmapexpr': ( '{**%c}', 0), # done by n_unmapexpr
})
def async_call(node):
self.f.write('async ')
node.kind == 'call'
p = self.prec
self.prec = 80
self.template_engine(('%c(%P)', 0, (1, -4, ', ',
100)), node)
self.prec = p
node.kind == 'async_call'
self.prune()
self.n_async_call = async_call
self.n_build_list_unpack = self.n_list
if version == 3.5:
def n_call(node):
mapping = self._get_mapping(node)
table = mapping[0]
key = node
for i in mapping[1:]:
key = key[i]
pass
if key.kind.startswith('CALL_FUNCTION_VAR_KW'):
# Python 3.5 changes the stack position of *args. kwargs come
# after *args whereas in earlier Pythons, *args is at the end
# which simplifies things from our perspective.
# Python 3.6+ replaces CALL_FUNCTION_VAR_KW with CALL_FUNCTION_EX
# We will just swap the order to make it look like earlier Python 3.
entry = table[key.kind]
kwarg_pos = entry[2][1]
args_pos = kwarg_pos - 1
# Put last node[args_pos] after subsequent kwargs
while node[kwarg_pos] == 'kwarg' and kwarg_pos < len(node):
# swap node[args_pos] with node[kwargs_pos]
node[kwarg_pos], node[args_pos] = node[args_pos], node[kwarg_pos]
args_pos = kwarg_pos
kwarg_pos += 1
elif key.kind.startswith('CALL_FUNCTION_VAR'):
nargs = node[-1].attr & 0xFF
if nargs > 0:
template = ('%c(%C, ', 0, (1, nargs+1, ', '))
else:
template = ('%c(', 0)
self.template_engine(template, node)
args_node = node[-2]
if args_node == 'pos_arg':
args_node = args_node[0]
if args_node == 'expr':
args_node = args_node[0]
if args_node == 'build_list_unpack':
template = ('*%P)', (0, len(args_node)-1, ', *', 100))
self.template_engine(template, args_node)
else:
template = ('*%c)', -2)
self.template_engine(template, node)
self.prune()
self.default(node)
self.n_call = n_call
def n_function_def(node):
if self.version == 3.6:
code_node = node[0][0]
else:
code_node = node[0][1]
is_code = hasattr(code_node, 'attr') and iscode(code_node.attr)
if (is_code and
(code_node.attr.co_flags & COMPILER_FLAG_BIT['COROUTINE'])):
self.template_engine(('\n\n%|async def %c\n',
-2), node)
else:
self.template_engine(('\n\n%|def %c\n', -2),
node)
self.prune()
self.n_function_def = n_function_def
def unmapexpr(node):
last_n = node[0][-1]
for n in node[0]:
self.preorder(n)
if n != last_n:
self.f.write(', **')
pass
pass
self.prune()
pass
self.n_unmapexpr = unmapexpr
if version >= 3.6:
########################
# Python 3.6+ Additions
#######################
TABLE_DIRECT.update({
'tryfinally36': ( '%|try:\n%+%c%-%|finally:\n%+%c%-\n\n',
(1, 'returns'), 3 ),
'fstring_expr': ( "{%c%{conversion}}", 0),
# FIXME: the below assumes the format strings
# don't have ''' in them. Fix this properly
'fstring_single': ( "f'''{%c%{conversion}}'''", 0),
'fstring_multi': ( "f'''%c'''", 0),
'func_args36': ( "%c(**", 0),
'try_except36': ( '%|try:\n%+%c%-%c\n\n', 1, 2 ),
'unpack_list': ( '*%c', (0, 'list') ),
'call_ex' : (
'%c(%c)',
(0, 'expr'), 1),
'call_ex_kw' : (
'%c(%c)',
(0, 'expr'), 2),
})
TABLE_R.update({
'CALL_FUNCTION_EX': ('%c(*%P)', 0, (1, 2, ', ', 100)),
# Not quite right
'CALL_FUNCTION_EX_KW': ('%c(**%C)', 0, (2, 3, ',')),
})
def build_unpack_tuple_with_call(node):
if node[0] == 'expr':
tup = node[0][0]
else:
tup = node[0]
pass
assert tup == 'tuple'
self.call36_tuple(tup)
buwc = node[-1]
assert buwc.kind.startswith('BUILD_TUPLE_UNPACK_WITH_CALL')
for n in node[1:-1]:
self.f.write(', *')
self.preorder(n)
pass
self.prune()
return
self.n_build_tuple_unpack_with_call = build_unpack_tuple_with_call
def build_unpack_map_with_call(node):
n = node[0]
if n == 'expr':
n = n[0]
if n == 'dict':
self.call36_dict(n)
first = 1
sep = ', **'
else:
first = 0
sep = '**'
for n in node[first:-1]:
self.f.write(sep)
self.preorder(n)
sep = ', **'
pass
self.prune()
return
self.n_build_map_unpack_with_call = build_unpack_map_with_call
def call_ex_kw2(node):
"""Handle CALL_FUNCTION_EX 2 (have KW) but with
BUILD_{MAP,TUPLE}_UNPACK_WITH_CALL"""
# This is weird shit. Thanks Python!
self.preorder(node[0])
self.write('(')
assert node[1] == 'build_tuple_unpack_with_call'
btuwc = node[1]
tup = btuwc[0]
if tup == 'expr':
tup = tup[0]
assert tup == 'tuple'
self.call36_tuple(tup)
assert node[2] == 'build_map_unpack_with_call'
self.write(', ')
d = node[2][0]
if d == 'expr':
d = d[0]
assert d == 'dict'
self.call36_dict(d)
args = btuwc[1]
self.write(', *')
self.preorder(args)
self.write(', **')
star_star_args = node[2][1]
if star_star_args == 'expr':
star_star_args = star_star_args[0]
self.preorder(star_star_args)
self.write(')')
self.prune()
self.n_call_ex_kw2 = call_ex_kw2
def call_ex_kw3(node):
"""Handle CALL_FUNCTION_EX 1 (have KW) but without
BUILD_MAP_UNPACK_WITH_CALL"""
self.preorder(node[0])
self.write('(')
args = node[1][0]
if args == 'expr':
args = args[0]
if args == 'tuple':
if self.call36_tuple(args) > 0:
self.write(', ')
pass
pass
self.write('*')
self.preorder(node[1][1])
self.write(', ')
kwargs = node[2]
if kwargs == 'expr':
kwargs = kwargs[0]
self.write('**')
self.preorder(kwargs)
self.write(')')
self.prune()
self.n_call_ex_kw3 = call_ex_kw3
def call_ex_kw4(node):
"""Handle CALL_FUNCTION_EX 2 (have KW) but without
BUILD_{MAP,TUPLE}_UNPACK_WITH_CALL"""
self.preorder(node[0])
self.write('(')
args = node[1][0]
if args == 'tuple':
if self.call36_tuple(args) > 0:
self.write(', ')
pass
pass
else:
self.write('*')
self.preorder(args)
self.write(', ')
pass
kwargs = node[2]
if kwargs == 'expr':
kwargs = kwargs[0]
self.write('**')
self.preorder(kwargs)
self.write(')')
self.prune()
self.n_call_ex_kw4 = call_ex_kw4
def call36_tuple(node):
"""
A tuple used in a call; these are like normal tuples, but they
don't have the enclosing parentheses.
"""
assert node == 'tuple'
# Note: don't iterate over last element which is a
# BUILD_TUPLE...
flat_elems = flatten_list(node[:-1])
self.indent_more(INDENT_PER_LEVEL)
sep = ''
for elem in flat_elems:
if elem in ('ROT_THREE', 'EXTENDED_ARG'):
continue
assert elem == 'expr'
line_number = self.line_number
value = self.traverse(elem)
if line_number != self.line_number:
sep += '\n' + self.indent + INDENT_PER_LEVEL[:-1]
self.write(sep, value)
sep = ', '
self.indent_less(INDENT_PER_LEVEL)
return len(flat_elems)
self.call36_tuple = call36_tuple
def call36_dict(node):
"""
A dict used in a call_ex_kw2, i.e. dictionary items expressed
in a call. This should format to:
a=1, b=2
In other words, no braces, no quotes around keys, and ":" becomes
"=".
We will use source-code line breaks to guide us on when to break.
"""
p = self.prec
self.prec = 100
self.indent_more(INDENT_PER_LEVEL)
sep = INDENT_PER_LEVEL[:-1]
line_number = self.line_number
if node[0].kind.startswith('kvlist'):
# Python 3.5+ style key/value list in dict
kv_node = node[0]
l = list(kv_node)
i = 0
# Respect line breaks from source
while i < len(l):
self.write(sep)
name = self.traverse(l[i], indent='')
# Strip off beginning and trailing quotes in name
name = name[1:-1]
if i > 0:
line_number = self.indent_if_source_nl(line_number,
self.indent + INDENT_PER_LEVEL[:-1])
line_number = self.line_number
self.write(name, '=')
value = self.traverse(l[i+1], indent=self.indent+(len(name)+2)*' ')
self.write(value)
sep = ", "
if line_number != self.line_number:
sep += "\n" + self.indent + INDENT_PER_LEVEL[:-1]
line_number = self.line_number
i += 2
pass
elif node[-1].kind.startswith('BUILD_CONST_KEY_MAP'):
keys_node = node[-2]
keys = keys_node.attr
# from trepan.api import debug; debug()
assert keys_node == 'LOAD_CONST' and isinstance(keys, tuple)
for i in range(node[-1].attr):
self.write(sep)
self.write(keys[i], '=')
value = self.traverse(node[i], indent='')
self.write(value)
sep = ", "
if line_number != self.line_number:
sep += "\n" + self.indent + INDENT_PER_LEVEL[:-1]
line_number = self.line_number
pass
pass
else:
assert False, "Don't known to to untangle dictionary"
self.prec = p
self.indent_less(INDENT_PER_LEVEL)
return
self.call36_dict = call36_dict
FSTRING_CONVERSION_MAP = {1: '!s', 2: '!r', 3: '!a'}
def n_formatted_value(node):
if node[0] == 'LOAD_CONST':
self.write(node[0].attr)
self.prune()
else:
self.default(node)
self.n_formatted_value = n_formatted_value
def f_conversion(node):
node.conversion = FSTRING_CONVERSION_MAP.get(node.data[1].attr, '')
def fstring_expr(node):
f_conversion(node)
self.default(node)
self.n_fstring_expr = fstring_expr
def fstring_single(node):
f_conversion(node)
self.default(node)
self.n_fstring_single = fstring_single
# def kwargs_only_36(node):
# keys = node[-1].attr
# num_kwargs = len(keys)
# values = node[:num_kwargs]
# for i, (key, value) in enumerate(zip(keys, values)):
# self.write(key + '=')
# self.preorder(value)
# if i < num_kwargs:
# self.write(',')
# self.prune()
# return
# self.n_kwargs_only_36 = kwargs_only_36
def kwargs_36(node):
self.write('(')
keys = node[-1].attr
num_kwargs = len(keys)
num_posargs = len(node) - (num_kwargs + 1)
n = len(node)
assert n >= len(keys)+1, \
'not enough parameters keyword-tuple values'
# try:
# assert n >= len(keys)+1, \
# 'not enough parameters keyword-tuple values'
# except:
# from trepan.api import debug; debug()
sep = ''
# FIXME: adjust output for line breaks?
for i in range(num_posargs):
self.write(sep)
self.preorder(node[i])
sep = ', '
i = num_posargs
j = 0
# FIXME: adjust output for line breaks?
while i < n-1:
self.write(sep)
self.write(keys[j] + '=')
self.preorder(node[i])
sep=', '
i += 1
j += 1
self.write(')')
self.prune()
return
self.n_kwargs_36 = kwargs_36
def starred(node):
l = len(node)
assert l > 0
pos_args = node[0]
if pos_args == 'expr':
pos_args = pos_args[0]
if pos_args == 'tuple':
star_start = 1
template = '%C', (0, -1, ', ')
self.template_engine(template, pos_args)
self.write(', ')
else:
star_start = 0
if l > 1:
template = ( '*%C', (star_start, -1, ', *') )
else:
template = ( '*%c', (star_start, 'expr') )
self.template_engine(template, node)
self.prune()
self.n_starred = starred
def return_closure(node):
# Nothing should be output here
self.prune()
return
self.n_return_closure = return_closure
pass # version >= 3.6
pass # version >= 3.4
pass # version >= 3.0
from uncompyle6.semantics.customize3 import customize_for_version3
customize_for_version3(self, version)
return

View File

@@ -0,0 +1,759 @@
# Copyright (c) 2018 by Rocky Bernstein
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""Isolate Python 3 version-specific semantic actions here.
"""
from uncompyle6.semantics.consts import (
INDENT_PER_LEVEL, PRECEDENCE, TABLE_DIRECT, TABLE_R)
from xdis.code import iscode
from xdis.util import COMPILER_FLAG_BIT
from spark_parser.ast import GenericASTTraversalPruningException
from uncompyle6.scanners.tok import Token
from uncompyle6.semantics.helper import flatten_list
from uncompyle6.semantics.make_function import make_function3_annotate
def customize_for_version3(self, version):
TABLE_DIRECT.update({
'function_def_annotate': ( '\n\n%|def %c%c\n', -1, 0),
'store_locals': ( '%|# inspect.currentframe().f_locals = __locals__\n', ),
})
if version >= 3.3:
def n_yield_from(node):
self.write('yield from')
self.write(' ')
if 3.3 <= self.version <= 3.4:
self.preorder(node[0][0][0][0])
elif self.version >= 3.5:
self.preorder(node[0])
else:
assert False, "dunno about this python version"
self.prune() # stop recursing
self.n_yield_from = n_yield_from
if 3.2 <= version <= 3.4:
def n_call(node):
mapping = self._get_mapping(node)
key = node
for i in mapping[1:]:
key = key[i]
pass
if key.kind.startswith('CALL_FUNCTION_VAR_KW'):
# We may want to fill this in...
# But it is distinct from CALL_FUNCTION_VAR below
pass
elif key.kind.startswith('CALL_FUNCTION_VAR'):
# CALL_FUNCTION_VAR's top element of the stack contains
# the variable argument list, then comes
# annotation args, then keyword args.
# The positional args are in the least-top-most stack entry,
# but at position 1 in node order.
argc = node[-1].attr
nargs = argc & 0xFF
kwargs = (argc >> 8) & 0xFF
# FIXME: handle annotation args
if kwargs != 0:
# kwargs == 0 is handled by the table entry
# Should probably handle it here though.
if nargs == 0:
template = ('%c(*%c, %C)',
0, -2, (1, kwargs+1, ', '))
else:
template = ('%c(%C, *%c, %C)',
0, (1, nargs+1, ', '),
-2, (-2-kwargs, -2, ', '))
self.template_engine(template, node)
self.prune()
self.default(node)
self.n_call = n_call
def n_mkfunc_annotate(node):
if self.version >= 3.3 or node[-2] == 'kwargs':
# LOAD_CONST code object ..
# LOAD_CONST 'x0' if >= 3.3
# EXTENDED_ARG
# MAKE_FUNCTION ..
code = node[-4]
elif node[-3] == 'expr':
code = node[-3][0]
else:
# LOAD_CONST code object ..
# MAKE_FUNCTION ..
code = node[-3]
self.indent_more()
for annotate_last in range(len(node)-1, -1, -1):
if node[annotate_last] == 'annotate_tuple':
break
# FIXME: the real situation is that when derived from
# function_def_annotate the name has already been filled in.
# But when derived from funcdefdeco it hasn't. Would like a better
# way to distinguish.
if self.f.getvalue()[-4:] == 'def ':
self.write(code.attr.co_name)
# FIXME: handle and pass full annotate args
make_function3_annotate(self, node, is_lambda=False,
codeNode=code, annotate_last=annotate_last)
if len(self.param_stack) > 1:
self.write('\n\n')
else:
self.write('\n\n\n')
self.indent_less()
self.prune() # stop recursing
self.n_mkfunc_annotate = n_mkfunc_annotate
if version >= 3.4:
########################
# Python 3.4+ Additions
#######################
TABLE_DIRECT.update({
'LOAD_CLASSDEREF': ( '%{pattr}', ),
})
########################
# Python 3.5+ Additions
#######################
if version >= 3.5:
TABLE_DIRECT.update({
'await_expr': ( 'await %c', 0),
'await_stmt': ( '%|%c\n', 0),
'async_for_stmt': (
'%|async for %c in %c:\n%+%c%-\n\n', 9, 1, 25 ),
'async_forelse_stmt': (
'%|async for %c in %c:\n%+%c%-%|else:\n%+%c%-\n\n', 9, 1, 25, 28 ),
'async_with_stmt': (
'%|async with %c:\n%+%c%-', 0, 7),
'async_with_as_stmt': (
'%|async with %c as %c:\n%+%c%-', 0, 6, 7),
'unmap_dict': ( '{**%C}', (0, -1, ', **') ),
# 'unmapexpr': ( '{**%c}', 0), # done by n_unmapexpr
})
def async_call(node):
self.f.write('async ')
node.kind == 'call'
p = self.prec
self.prec = 80
self.template_engine(('%c(%P)', 0, (1, -4, ', ',
100)), node)
self.prec = p
node.kind == 'async_call'
self.prune()
self.n_async_call = async_call
self.n_build_list_unpack = self.n_list
if version == 3.5:
def n_call(node):
mapping = self._get_mapping(node)
table = mapping[0]
key = node
for i in mapping[1:]:
key = key[i]
pass
if key.kind.startswith('CALL_FUNCTION_VAR_KW'):
# Python 3.5 changes the stack position of
# *args: kwargs come after *args whereas
# in earlier Pythons, *args is at the end
# which simplifies things from our
# perspective. Python 3.6+ replaces
# CALL_FUNCTION_VAR_KW with
# CALL_FUNCTION_EX We will just swap the
# order to make it look like earlier
# Python 3.
entry = table[key.kind]
kwarg_pos = entry[2][1]
args_pos = kwarg_pos - 1
# Put last node[args_pos] after subsequent kwargs
while node[kwarg_pos] == 'kwarg' and kwarg_pos < len(node):
# swap node[args_pos] with node[kwargs_pos]
node[kwarg_pos], node[args_pos] = node[args_pos], node[kwarg_pos]
args_pos = kwarg_pos
kwarg_pos += 1
elif key.kind.startswith('CALL_FUNCTION_VAR'):
# CALL_FUNCTION_VAR's top element of the stack contains
# the variable argument list, then comes
# annotation args, then keyword args.
# The positional args are in the least-top-most stack entry,
# but at position 1 in node order.
argc = node[-1].attr
nargs = argc & 0xFF
kwargs = (argc >> 8) & 0xFF
# FIXME: handle annotation args
if nargs > 0:
template = ('%c(%C, ', 0, (1, nargs+1, ', '))
else:
template = ('%c(', 0)
self.template_engine(template, node)
args_node = node[-2]
if args_node in ('pos_arg', 'expr'):
args_node = args_node[0]
if args_node == 'build_list_unpack':
template = ('*%P)', (0, len(args_node)-1, ', *', 100))
self.template_engine(template, args_node)
else:
if len(node) - nargs > 3:
template = ('*%c, %C)', nargs+1, (nargs+kwargs+1, -1, ', '))
else:
template = ('*%c)', nargs+1)
self.template_engine(template, node)
self.prune()
self.default(node)
self.n_call = n_call
def n_function_def(node):
if self.version == 3.6:
code_node = node[0][0]
else:
code_node = node[0][1]
is_code = hasattr(code_node, 'attr') and iscode(code_node.attr)
if (is_code and
(code_node.attr.co_flags & COMPILER_FLAG_BIT['COROUTINE'])):
self.template_engine(('\n\n%|async def %c\n',
-2), node)
else:
self.template_engine(('\n\n%|def %c\n', -2),
node)
self.prune()
self.n_function_def = n_function_def
def unmapexpr(node):
last_n = node[0][-1]
for n in node[0]:
self.preorder(n)
if n != last_n:
self.f.write(', **')
pass
pass
self.prune()
pass
self.n_unmapexpr = unmapexpr
# FIXME: start here
def n_list_unpack(node):
"""
prettyprint an unpacked list or tuple
"""
p = self.prec
self.prec = 100
lastnode = node.pop()
lastnodetype = lastnode.kind
# If this build list is inside a CALL_FUNCTION_VAR,
# then the first * has already been printed.
# Until I have a better way to check for CALL_FUNCTION_VAR,
# will assume that is the case if the text ends in '*'.
last_was_star = self.f.getvalue().endswith('*')
if lastnodetype.startswith('BUILD_LIST'):
self.write('['); endchar = ']'
elif lastnodetype.startswith('BUILD_TUPLE'):
# Tuples can appear in places that can NOT
# have parentheses around them, like array
# subscripts. We check for that by seeing
# if a tuple item is some sort of slice.
no_parens = False
for n in node:
if n == 'expr' and n[0].kind.startswith('build_slice'):
no_parens = True
break
pass
if no_parens:
endchar = ''
else:
self.write('('); endchar = ')'
pass
elif lastnodetype.startswith('BUILD_SET'):
self.write('{'); endchar = '}'
elif lastnodetype.startswith('BUILD_MAP_UNPACK'):
self.write('{*'); endchar = '}'
elif lastnodetype.startswith('ROT_TWO'):
self.write('('); endchar = ')'
else:
raise TypeError('Internal Error: n_build_list expects list, tuple, set, or unpack')
flat_elems = flatten_list(node)
self.indent_more(INDENT_PER_LEVEL)
sep = ''
for elem in flat_elems:
if elem in ('ROT_THREE', 'EXTENDED_ARG'):
continue
assert elem == 'expr'
line_number = self.line_number
value = self.traverse(elem)
if elem[0] == 'tuple':
assert value[0] == '('
assert value[-1] == ')'
value = value[1:-1]
if value[-1] == ',':
# singleton tuple
value = value[:-1]
else:
value = '*' + value
if line_number != self.line_number:
sep += '\n' + self.indent + INDENT_PER_LEVEL[:-1]
else:
if sep != '': sep += ' '
if not last_was_star:
pass
else:
last_was_star = False
self.write(sep, value)
sep = ','
if lastnode.attr == 1 and lastnodetype.startswith('BUILD_TUPLE'):
self.write(',')
self.write(endchar)
self.indent_less(INDENT_PER_LEVEL)
self.prec = p
self.prune()
return
self.n_tuple_unpack = n_list_unpack
if version >= 3.6:
########################
# Python 3.6+ Additions
#######################
# Value 100 is important; it is exactly
# module/function precedence.
PRECEDENCE['call_kw'] = 100
PRECEDENCE['call_kw36'] = 100
PRECEDENCE['call_ex'] = 100
PRECEDENCE['call_ex_kw'] = 100
PRECEDENCE['call_ex_kw2'] = 100
PRECEDENCE['call_ex_kw3'] = 100
PRECEDENCE['call_ex_kw4'] = 100
PRECEDENCE['unmap_dict'] = 0
TABLE_DIRECT.update({
'tryfinally36': ( '%|try:\n%+%c%-%|finally:\n%+%c%-\n\n',
(1, 'returns'), 3 ),
'fstring_expr': ( "{%c%{conversion}}", 0),
# FIXME: the below assumes the format strings
# don't have ''' in them. Fix this properly
'fstring_single': ( "f'''{%c%{conversion}}'''", 0),
'fstring_multi': ( "f'''%c'''", 0),
'func_args36': ( "%c(**", 0),
'try_except36': ( '%|try:\n%+%c%-%c\n\n', 1, 2 ),
'except_return': ( '%|except:\n%+%c%-', 3 ),
'unpack_list': ( '*%c', (0, 'list') ),
'call_ex' : (
'%c(%p)',
(0, 'expr'), (1, 100)),
'call_ex_kw' : (
'%c(%p)',
(0, 'expr'), (2, 100)),
})
TABLE_R.update({
'CALL_FUNCTION_EX': ('%c(*%P)', 0, (1, 2, ', ', 100)),
# Not quite right
'CALL_FUNCTION_EX_KW': ('%c(**%C)', 0, (2, 3, ',')),
})
def build_unpack_tuple_with_call(node):
if node[0] == 'expr':
tup = node[0][0]
else:
tup = node[0]
pass
assert tup == 'tuple'
self.call36_tuple(tup)
buwc = node[-1]
assert buwc.kind.startswith('BUILD_TUPLE_UNPACK_WITH_CALL')
for n in node[1:-1]:
self.f.write(', *')
self.preorder(n)
pass
self.prune()
return
self.n_build_tuple_unpack_with_call = build_unpack_tuple_with_call
def build_unpack_map_with_call(node):
n = node[0]
if n == 'expr':
n = n[0]
if n == 'dict':
self.call36_dict(n)
first = 1
sep = ', **'
else:
first = 0
sep = '**'
for n in node[first:-1]:
self.f.write(sep)
self.preorder(n)
sep = ', **'
pass
self.prune()
return
self.n_build_map_unpack_with_call = build_unpack_map_with_call
def call_ex_kw2(node):
"""Handle CALL_FUNCTION_EX 2 (have KW) but with
BUILD_{MAP,TUPLE}_UNPACK_WITH_CALL"""
# This is weird shit. Thanks Python!
self.preorder(node[0])
self.write('(')
assert node[1] == 'build_tuple_unpack_with_call'
btuwc = node[1]
tup = btuwc[0]
if tup == 'expr':
tup = tup[0]
assert tup == 'tuple'
self.call36_tuple(tup)
assert node[2] == 'build_map_unpack_with_call'
self.write(', ')
d = node[2][0]
if d == 'expr':
d = d[0]
assert d == 'dict'
self.call36_dict(d)
args = btuwc[1]
self.write(', *')
self.preorder(args)
self.write(', **')
star_star_args = node[2][1]
if star_star_args == 'expr':
star_star_args = star_star_args[0]
self.preorder(star_star_args)
self.write(')')
self.prune()
self.n_call_ex_kw2 = call_ex_kw2
def call_ex_kw3(node):
"""Handle CALL_FUNCTION_EX 1 (have KW) but without
BUILD_MAP_UNPACK_WITH_CALL"""
self.preorder(node[0])
self.write('(')
args = node[1][0]
if args == 'expr':
args = args[0]
if args == 'tuple':
if self.call36_tuple(args) > 0:
self.write(', ')
pass
pass
self.write('*')
self.preorder(node[1][1])
self.write(', ')
kwargs = node[2]
if kwargs == 'expr':
kwargs = kwargs[0]
if kwargs == 'dict':
self.call36_dict(kwargs)
else:
self.write('**')
self.preorder(kwargs)
self.write(')')
self.prune()
self.n_call_ex_kw3 = call_ex_kw3
def call_ex_kw4(node):
"""Handle CALL_FUNCTION_EX {1 or 2} but without
BUILD_{MAP,TUPLE}_UNPACK_WITH_CALL"""
self.preorder(node[0])
self.write('(')
args = node[1][0]
if args == 'tuple':
if self.call36_tuple(args) > 0:
self.write(', ')
pass
pass
else:
self.write('*')
self.preorder(args)
self.write(', ')
pass
kwargs = node[2]
if kwargs == 'expr':
kwargs = kwargs[0]
call_function_ex = node[-1]
assert call_function_ex == 'CALL_FUNCTION_EX_KW'
# FIXME: decide if the test below should be on kwargs == 'dict'
if (call_function_ex.attr & 1 and
(not isinstance(kwargs, Token) and kwargs != 'attribute')
and not kwargs[0].kind.startswith('kvlist')):
self.call36_dict(kwargs)
else:
self.write('**')
self.preorder(kwargs)
self.write(')')
self.prune()
self.n_call_ex_kw4 = call_ex_kw4
def call36_tuple(node):
"""
A tuple used in a call; these are like normal tuples, but they
don't have the enclosing parentheses.
"""
assert node == 'tuple'
# Note: don't iterate over last element which is a
# BUILD_TUPLE...
flat_elems = flatten_list(node[:-1])
self.indent_more(INDENT_PER_LEVEL)
sep = ''
for elem in flat_elems:
if elem in ('ROT_THREE', 'EXTENDED_ARG'):
continue
assert elem == 'expr'
line_number = self.line_number
value = self.traverse(elem)
if line_number != self.line_number:
sep += '\n' + self.indent + INDENT_PER_LEVEL[:-1]
self.write(sep, value)
sep = ', '
self.indent_less(INDENT_PER_LEVEL)
return len(flat_elems)
self.call36_tuple = call36_tuple
def call36_dict(node):
"""
A dict used in a call_ex_kw2, i.e. dictionary items expressed
in a call. This should format to:
a=1, b=2
In other words, no braces, no quotes around keys, and ":" becomes
"=".
We will use source-code line breaks to guide us on when to break.
"""
p = self.prec
self.prec = 100
self.indent_more(INDENT_PER_LEVEL)
sep = INDENT_PER_LEVEL[:-1]
line_number = self.line_number
if node[0].kind.startswith('kvlist'):
# Python 3.5+ style key/value list in dict
kv_node = node[0]
l = list(kv_node)
i = 0
length = len(l)
# FIXME: Parser-speed improved grammars will have BUILD_MAP
# at the end. So in the future when everything is
# complete, we can do an "assert" instead of "if".
if kv_node[-1].kind.startswith("BUILD_MAP"):
length -= 1
# Respect line breaks from source
while i < length:
self.write(sep)
name = self.traverse(l[i], indent='')
# Strip off beginning and trailing quotes in name
name = name[1:-1]
if i > 0:
line_number = self.indent_if_source_nl(line_number,
self.indent + INDENT_PER_LEVEL[:-1])
line_number = self.line_number
self.write(name, '=')
value = self.traverse(l[i+1], indent=self.indent+(len(name)+2)*' ')
self.write(value)
sep = ", "
if line_number != self.line_number:
sep += "\n" + self.indent + INDENT_PER_LEVEL[:-1]
line_number = self.line_number
i += 2
pass
elif node[-1].kind.startswith('BUILD_CONST_KEY_MAP'):
keys_node = node[-2]
keys = keys_node.attr
# from trepan.api import debug; debug()
assert keys_node == 'LOAD_CONST' and isinstance(keys, tuple)
for i in range(node[-1].attr):
self.write(sep)
self.write(keys[i], '=')
value = self.traverse(node[i], indent='')
self.write(value)
sep = ", "
if line_number != self.line_number:
sep += "\n" + self.indent + INDENT_PER_LEVEL[:-1]
line_number = self.line_number
pass
pass
else:
self.write("**")
try:
self.default(node)
except GenericASTTraversalPruningException:
pass
self.prec = p
self.indent_less(INDENT_PER_LEVEL)
return
self.call36_dict = call36_dict
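# Illustrative sketch of why the BUILD_CONST_KEY_MAP branch above reads its
# keys from a single LOAD_CONST tuple: on CPython 3.6+ a mapping whose keys
# are all constants compiles the keys into one tuple constant (an assumption
# to verify on your interpreter):
import dis, sys
code = compile("{'a': 1, 'b': 2}", "<example>", "eval")
if sys.version_info >= (3, 6):
    print(('a', 'b') in code.co_consts)  # expected: True
dis.dis(code)  # expect BUILD_CONST_KEY_MAP 2 on 3.6+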
FSTRING_CONVERSION_MAP = {1: '!s', 2: '!r', 3: '!a'}
def n_except_suite_finalize(node):
if node[1] == 'returns' and self.hide_internal:
# Process node[1] only.
# The code after "returns", e.g. node[3], is dead code.
# Adding it would be wrong, as it dedents and adds another
# exception handler "except_stmt" afterwards.
# Note: it is also possible that the grammar is wrong here
# and this should not be "except_stmt".
self.indent_more()
self.preorder(node[1])
self.indent_less()
else:
self.default(node)
self.prune()
self.n_except_suite_finalize = n_except_suite_finalize
def n_formatted_value(node):
if node[0] == 'LOAD_CONST':
self.write(node[0].attr)
self.prune()
else:
self.default(node)
self.n_formatted_value = n_formatted_value
def f_conversion(node):
node.conversion = FSTRING_CONVERSION_MAP.get(node.data[1].attr, '')
def fstring_expr(node):
f_conversion(node)
self.default(node)
self.n_fstring_expr = fstring_expr
def fstring_single(node):
f_conversion(node)
self.default(node)
self.n_fstring_single = fstring_single
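# Illustrative only; needs a 3.6+ interpreter since it uses f-strings.
# These are the three conversions that FSTRING_CONVERSION_MAP maps the
# FORMAT_VALUE argument to (an assumption about which child carries it):
_x = 'py'
assert f"{_x!s}" == str(_x)
assert f"{_x!r}" == repr(_x)
assert f"{_x!a}" == ascii(_x)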
# def kwargs_only_36(node):
# keys = node[-1].attr
# num_kwargs = len(keys)
# values = node[:num_kwargs]
# for i, (key, value) in enumerate(zip(keys, values)):
# self.write(key + '=')
# self.preorder(value)
# if i < num_kwargs:
# self.write(',')
# self.prune()
# return
# self.n_kwargs_only_36 = kwargs_only_36
def n_call_kw36(node):
self.template_engine(("%c(", 0), node)
keys = node[-2].attr
num_kwargs = len(keys)
num_posargs = len(node) - (num_kwargs + 2)
n = len(node)
assert n >= len(keys)+1, \
'not enough parameters for keyword-tuple values'
sep = ''
line_number = self.line_number
for i in range(1, num_posargs):
self.write(sep)
self.preorder(node[i])
if line_number != self.line_number:
sep = ",\n" + self.indent + " "
else:
sep = ", "
line_number = self.line_number
i = num_posargs
j = 0
# FIXME: adjust output for line breaks?
while i < n-2:
self.write(sep)
self.write(keys[j] + '=')
self.preorder(node[i])
if line_number != self.line_number:
sep = ",\n" + self.indent + " "
else:
sep = ", "
i += 1
j += 1
self.write(')')
self.prune()
return
self.n_call_kw36 = n_call_kw36
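# Illustrative sketch of why n_call_kw36 takes its keyword names from
# node[-2].attr: on CPython 3.6/3.7 the keyword names of a call are pushed
# as one constant tuple immediately before CALL_FUNCTION_KW (hedged; check
# the output on your interpreter):
import dis
dis.dis(compile("f(1, 2, a=3, b=4)", "<example>", "eval"))
# expect: ... LOAD_CONST ('a', 'b') ... CALL_FUNCTION_KW 4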
def starred(node):
l = len(node)
assert l > 0
pos_args = node[0]
if pos_args == 'expr':
pos_args = pos_args[0]
if pos_args == 'tuple':
build_tuple = pos_args[0]
if build_tuple.kind.startswith('BUILD_TUPLE'):
tuple_len = 0
else:
tuple_len = len(node) - 1
star_start = 1
template = '%C', (0, -1, ', ')
self.template_engine(template, pos_args)
if tuple_len == 0:
self.write("*()")
# That's it
self.prune()
self.write(', ')
else:
star_start = 0
if l > 1:
template = ( '*%C', (star_start, -1, ', *') )
else:
template = ( '*%c', (star_start, 'expr') )
self.template_engine(template, node)
self.prune()
self.n_starred = starred
def return_closure(node):
# Nothing should be output here
self.prune()
return
self.n_return_closure = return_closure
pass # version >= 3.6
pass # version >= 3.4
return

View File

@@ -428,10 +428,7 @@ def make_function2(self, node, is_lambda, nested=1, codeNode=None):
code, self.version)
# Python 2 doesn't support the "nonlocal" statement
try:
assert self.version >= 3.0 or not nonlocals
except:
from trepan.api import debug; debug()
assert self.version >= 3.0 or not nonlocals
for g in sorted((all_globals & self.mod_globs) | globals):
self.println(self.indent, 'global ', g)
@@ -509,18 +506,39 @@ def make_function3(self, node, is_lambda, nested=1, codeNode=None):
lambda_index = None
args_node = node[-1]
# Get a list of tree nodes that constitute the values for the "default
# parameters"; these are default values that appear before any *, and are
# not to be confused with keyword parameters which may appear after *.
if isinstance(args_node.attr, tuple):
pos_args, kw_args, annotate_argc = args_node.attr
# FIXME: there is probably a better way to classify this.
if (self.version <= 3.3 and len(node) > 2 and
node[lambda_index] != 'LOAD_LAMBDA' and
(node[0].kind.startswith('kwarg') or node[-4].kind != 'load_closure')):
# args are after kwargs; kwargs are bundled as one node
defparams = node[1:args_node.attr[0]+1]
have_kwargs = node[0].kind.startswith('kwarg') or node[0] == 'no_kwargs'
if len(node) >= 4:
lc_index = -4
else:
# args are before kwargs; kwargs are bundled as one node
lc_index = -3
pass
if (3.1 <= self.version <= 3.3 and len(node) > 2 and
node[lambda_index] != 'LOAD_LAMBDA' and
(have_kwargs or node[lc_index].kind != 'load_closure')):
# Find the index in "node" where the first default
# parameter value is located. Note this is in contrast to
# key-word arguments, pairs of (name, value), which appear after "*".
# "default_values_start" is this location.
default_values_start = 0
if node[0] == 'no_kwargs':
default_values_start += 1
# args are after kwargs; kwargs are bundled as one node
if node[default_values_start] == 'kwargs':
default_values_start += 1
defparams = node[default_values_start:default_values_start+args_node.attr[0]]
else:
# args are first, before kwargs. Or there simply are no kwargs.
defparams = node[:args_node.attr[0]]
pos_args, kw_args, annotate_argc = args_node.attr
pass
else:
if self.version < 3.6:
defparams = node[:args_node.attr]
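# Illustrative sketch of the terminology in the hunk above, using a
# hypothetical function: b and c carry "default parameter" values (they
# appear before "*"), while d is a keyword-only parameter that is handled
# separately as a (name, value) pair.
def _example_h(a, b=1, c=2, *, d=3):
    return a + b + c + d
assert _example_h(10) == 16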
@@ -528,8 +546,10 @@ def make_function3(self, node, is_lambda, nested=1, codeNode=None):
else:
default, kw_args, annotate, closure = args_node.attr
if default:
assert node[0] == 'expr', "expecting mkfunc default node to be an expr"
expr_node = node[0]
if node[0] == 'pos_arg':
expr_node = expr_node[0]
assert expr_node == 'expr', "expecting mkfunc default node to be an expr"
if (expr_node[0] == 'LOAD_CONST' and
isinstance(expr_node[0].attr, tuple)):
defparams = [repr(a) for a in expr_node[0].attr]
@@ -537,7 +557,21 @@ def make_function3(self, node, is_lambda, nested=1, codeNode=None):
defparams = [self.traverse(n, indent='') for n in expr_node[0][:-1]]
else:
defparams = []
# FIXME: handle kw, annotate and closure
i = -4
kw_pairs = 0
if closure:
# FIXME: fill in
i -= 1
if annotate:
# FIXME: fill in
i -= 1
if kw_args:
kw_node = node[i]
if kw_node == 'expr':
kw_node = kw_node[0]
if kw_node == 'dict':
kw_pairs = kw_node[-1].attr
pass
if 3.0 <= self.version <= 3.2:
@@ -561,7 +595,7 @@ def make_function3(self, node, is_lambda, nested=1, codeNode=None):
paramnames = list(scanner_code.co_varnames[:argc])
# defaults are for last n parameters, thus reverse
if not 3.0 <= self.version <= 3.1 or self.version >= 3.6:
if not 3.0 == self.version or self.version >= 3.6:
paramnames.reverse(); defparams.reverse()
try:
@@ -576,10 +610,10 @@ def make_function3(self, node, is_lambda, nested=1, codeNode=None):
return
if self.version >= 3.0:
kw_pairs = args_node.attr[1]
if self.version < 3.6:
kw_pairs = args_node.attr[1]
else:
kw_pairs = 0
indent = self.indent
# build parameters
params = []
@@ -591,7 +625,7 @@ def make_function3(self, node, is_lambda, nested=1, codeNode=None):
else:
params = paramnames
if not 3.0 <= self.version <= 3.1 or self.version >= 3.6:
if not 3.0 == self.version or self.version >= 3.6:
params.reverse() # back to correct order
if code_has_star_arg(code):
@@ -695,7 +729,7 @@ def make_function3(self, node, is_lambda, nested=1, codeNode=None):
for n in node:
if n == 'pos_arg':
continue
elif self.version >= 3.4 and not (n.kind in ('kwargs', 'kwargs1', 'kwarg')):
elif self.version >= 3.4 and not (n.kind in ('kwargs', 'no_kwargs', 'kwarg')):
continue
else:
self.preorder(n)

View File

@@ -135,7 +135,7 @@ from spark_parser import GenericASTTraversal, DEFAULT_DEBUG as PARSER_DEFAULT_DE
from uncompyle6.scanner import Code, get_scanner
import uncompyle6.parser as python_parser
from uncompyle6.semantics.make_function import (
make_function2, make_function3, make_function3_annotate,
make_function2, make_function3
)
from uncompyle6.semantics.parser_error import ParserError
from uncompyle6.semantics.check_ast import checker
@@ -147,7 +147,7 @@ from uncompyle6.scanners.tok import Token
from uncompyle6.semantics.consts import (
LINE_LENGTH, RETURN_LOCALS, NONE, RETURN_NONE, PASS,
ASSIGN_DOC_STRING, NAME_MODULE, TAB,
INDENT_PER_LEVEL, TABLE_R, TABLE_DIRECT, MAP_DIRECT,
INDENT_PER_LEVEL, TABLE_R, MAP_DIRECT,
MAP, PRECEDENCE, ASSIGN_TUPLE_PARAM, escape, minint)
@@ -256,290 +256,6 @@ class SourceWalker(GenericASTTraversal, object):
self.write("\n" + self.indent + INDENT_PER_LEVEL[:-1])
return self.line_number
def customize_for_version(self, is_pypy, version):
if is_pypy:
########################
# PyPy changes
#######################
TABLE_DIRECT.update({
'assert_pypy': ( '%|assert %c\n' , 1 ),
'assert2_pypy': ( '%|assert %c, %c\n' , 1, 4 ),
'try_except_pypy': ( '%|try:\n%+%c%-%c\n\n', 1, 2 ),
'tryfinallystmt_pypy': ( '%|try:\n%+%c%-%|finally:\n%+%c%-\n\n', 1, 3 ),
'assign3_pypy': ( '%|%c, %c, %c = %c, %c, %c\n', 5, 4, 3, 0, 1, 2 ),
'assign2_pypy': ( '%|%c, %c = %c, %c\n', 3, 2, 0, 1),
})
else:
########################
# Without PyPy
#######################
TABLE_DIRECT.update({
'assert': ( '%|assert %c\n' , 0 ),
'assert2': ( '%|assert %c, %c\n' , 0, 3 ),
'try_except': ( '%|try:\n%+%c%-%c\n\n', 1, 3 ),
'assign2': ( '%|%c, %c = %c, %c\n', 3, 4, 0, 1 ),
'assign3': ( '%|%c, %c, %c = %c, %c, %c\n', 5, 6, 7, 0, 1, 2 ),
})
if version < 3.0:
TABLE_R.update({
'STORE_SLICE+0': ( '%c[:]', 0 ),
'STORE_SLICE+1': ( '%c[%p:]', 0, (1, 100) ),
'STORE_SLICE+2': ( '%c[:%p]', 0, (1, 100) ),
'STORE_SLICE+3': ( '%c[%p:%p]', 0, (1, 100), (2, 100) ),
'DELETE_SLICE+0': ( '%|del %c[:]\n', 0 ),
'DELETE_SLICE+1': ( '%|del %c[%c:]\n', 0, 1 ),
'DELETE_SLICE+2': ( '%|del %c[:%c]\n', 0, 1 ),
'DELETE_SLICE+3': ( '%|del %c[%c:%c]\n', 0, 1, 2 ),
})
TABLE_DIRECT.update({
'raise_stmt2': ( '%|raise %c, %c\n', 0, 1),
})
else:
TABLE_DIRECT.update({
# Gotta love Python for its futzing around with syntax like this
'raise_stmt2': ( '%|raise %c from %c\n', 0, 1),
})
if version >= 3.2:
TABLE_DIRECT.update({
'del_deref_stmt': ( '%|del %c\n', 0),
'DELETE_DEREF': ( '%{pattr}', 0 ),
})
if version <= 2.4:
TABLE_DIRECT.update({
'importmultiple': ( '%|import %c%c\n', 2, 3),
'import_cont' : ( ', %c', 2),
'tryfinallystmt': ( '%|try:\n%+%c%-%|finally:\n%+%c%-',
(1, 'suite_stmts_opt') ,
(5, 'suite_stmts_opt') )
})
if version == 2.3:
TABLE_DIRECT.update({
'if1_stmt': ( '%|if 1\n%+%c%-', 5 )
})
global NAME_MODULE
NAME_MODULE = AST('stmt',
[ AST('assign',
[ AST('expr',
[Token('LOAD_GLOBAL', pattr='__name__',
offset=0, has_arg=True)]),
AST('store',
[ Token('STORE_NAME', pattr='__module__',
offset=3, has_arg=True)])
])])
pass
if version <= 2.3:
if version <= 2.1:
TABLE_DIRECT.update({
'importmultiple': ( '%c', 2 ),
# FIXME: not quite right. We have individual imports
# when there is in fact one: "import a, b, ..."
'imports_cont': ( '%C%,', (1, 100, '\n') ),
})
pass
pass
pass
elif version >= 2.5:
########################
# Import style for 2.5+
########################
TABLE_DIRECT.update({
'importmultiple': ( '%|import %c%c\n', 2, 3 ),
'import_cont' : ( ', %c', 2 ),
# With/as is allowed as "from future" thing in 2.5
# Note: It is safe to put the variables after "as" in parenthesis,
# and sometimes it is needed.
'withstmt': ( '%|with %c:\n%+%c%-', 0, 3),
'withasstmt': ( '%|with %c as (%c):\n%+%c%-', 0, 2, 3),
})
# In 2.5+ "except" handlers and the "finally" can appear in one
# "try" statement. So the below has the effect of combining the
# "tryfinally" with statement with the "try_except" statement
def tryfinallystmt(node):
if len(node[1][0]) == 1 and node[1][0][0] == 'stmt':
if node[1][0][0][0] == 'try_except':
node[1][0][0][0].kind = 'tf_try_except'
if node[1][0][0][0] == 'tryelsestmt':
node[1][0][0][0].kind = 'tf_tryelsestmt'
self.default(node)
self.n_tryfinallystmt = tryfinallystmt
########################################
# Python 2.6+
# except <condition> as <var>
# vs. older:
# except <condition> , <var>
#
# For 2.6 we use the older syntax which
# matches how we parse this in bytecode
########################################
if version > 2.6:
TABLE_DIRECT.update({
'except_cond2': ( '%|except %c as %c:\n', 1, 5 ),
})
else:
TABLE_DIRECT.update({
'except_cond3': ( '%|except %c, %c:\n', 1, 6 ),
'testtrue_then': ( 'not %p', (0, 22) ),
})
if 2.4 <= version <= 2.6:
TABLE_DIRECT.update({
'comp_for': ( ' for %c in %c', 3, 1 ),
})
else:
TABLE_DIRECT.update({
'comp_for': ( ' for %c in %c%c', 2, 0, 3 ),
})
if version >= 3.0:
TABLE_DIRECT.update({
'function_def_annotate': ( '\n\n%|def %c%c\n', -1, 0),
'store_locals': ( '%|# inspect.currentframe().f_locals = __locals__\n', ),
})
def n_mkfunc_annotate(node):
if self.version >= 3.3 or node[-2] == 'kwargs':
# LOAD_CONST code object ..
# LOAD_CONST 'x0' if >= 3.3
# EXTENDED_ARG
# MAKE_FUNCTION ..
code = node[-4]
elif node[-3] == 'expr':
code = node[-3][0]
else:
# LOAD_CONST code object ..
# MAKE_FUNCTION ..
code = node[-3]
self.indent_more()
for annotate_last in range(len(node)-1, -1, -1):
if node[annotate_last] == 'annotate_tuple':
break
# FIXME: the real situation is that when derived from
# function_def_annotate the name has been filled in,
# but when derived from funcdefdeco it hasn't. Would like a better
# way to distinguish the two.
if self.f.getvalue()[-4:] == 'def ':
self.write(code.attr.co_name)
# FIXME: handle and pass full annotate args
make_function3_annotate(self, node, is_lambda=False,
codeNode=code, annotate_last=annotate_last)
if len(self.param_stack) > 1:
self.write('\n\n')
else:
self.write('\n\n\n')
self.indent_less()
self.prune() # stop recursing
self.n_mkfunc_annotate = n_mkfunc_annotate
if version >= 3.4:
########################
# Python 3.4+ Additions
#######################
TABLE_DIRECT.update({
'LOAD_CLASSDEREF': ( '%{pattr}', ),
})
########################
# Python 3.5+ Additions
#######################
if version >= 3.5:
TABLE_DIRECT.update({
'await_expr': ( 'await %c', 0),
'await_stmt': ( '%|%c\n', 0),
'async_for_stmt': (
'%|async for %c in %c:\n%+%c%-\n\n', 9, 1, 25 ),
'async_forelse_stmt': (
'%|async for %c in %c:\n%+%c%-%|else:\n%+%c%-\n\n', 9, 1, 25, 28 ),
'async_with_stmt': (
'%|async with %c:\n%+%c%-', 0, 7),
'async_with_as_stmt': (
'%|async with %c as %c:\n%+%c%-', 0, 6, 7),
'unmap_dict': ( '{**%C}', (0, -1, ', **') ),
# 'unmapexpr': ( '{**%c}', 0), # done by n_unmapexpr
})
def async_call(node):
self.f.write('async ')
node.kind = 'call'
p = self.prec
self.prec = 80
self.template_engine(('%c(%P)', 0,
(1, -4, ', ', 100)), node)
self.prec = p
node.kind = 'async_call'
self.prune()
self.n_async_call = async_call
self.n_build_list_unpack = self.n_list
if version == 3.5:
def n_call(node):
mapping = self._get_mapping(node)
table = mapping[0]
key = node
for i in mapping[1:]:
key = key[i]
pass
if key.kind.startswith('CALL_FUNCTION_VAR_KW'):
# Python 3.5 changes the stack position of *args: kwargs come
# after *args, whereas in earlier Pythons *args is at the end,
# which simplifies things from our perspective.
# Python 3.6+ replaces CALL_FUNCTION_VAR_KW with CALL_FUNCTION_EX
# We will just swap the order to make it look like earlier Python 3.
entry = table[key.kind]
kwarg_pos = entry[2][1]
args_pos = kwarg_pos - 1
# Move node[args_pos] past the subsequent kwarg nodes
while kwarg_pos < len(node) and node[kwarg_pos] == 'kwarg':
# swap node[args_pos] with node[kwarg_pos]
node[kwarg_pos], node[args_pos] = node[args_pos], node[kwarg_pos]
args_pos = kwarg_pos
kwarg_pos += 1
self.default(node)
self.n_call = n_call
def n_function_def(node):
if self.version == 3.6:
code_node = node[0][0]
else:
code_node = node[0][1]
is_code = hasattr(code_node, 'attr') and iscode(code_node.attr)
if (is_code and
(code_node.attr.co_flags & COMPILER_FLAG_BIT['COROUTINE'])):
self.template_engine(('\n\n%|async def %c\n',
-2), node)
else:
self.template_engine(('\n\n%|def %c\n', -2),
node)
self.prune()
self.n_function_def = n_function_def
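# Illustrative, needs Python 3.5+ to define a coroutine: the COROUTINE
# compiler flag tested by n_function_def is the same bit that
# inspect.iscoroutinefunction checks. Names below are hypothetical.
import inspect
async def _example_coro():
    pass
assert inspect.iscoroutinefunction(_example_coro)
assert _example_coro.__code__.co_flags & inspect.CO_COROUTINE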
def unmapexpr(node):
last_n = node[0][-1]
for n in node[0]:
self.preorder(n)
if n != last_n:
self.f.write(', **')
pass
pass
self.prune()
pass
self.n_unmapexpr = unmapexpr
pass # version >= 3.4
pass # version >= 3.0
return
f = property(lambda s: s.params['f'],
lambda s, x: s.params.__setitem__('f', x),
lambda s: s.params.__delitem__('f'),
@@ -691,26 +407,14 @@ class SourceWalker(GenericASTTraversal, object):
self.prune() # stop recursing
def n_yield(self, node):
self.write('yield')
if node != AST('yield', [NONE, Token('YIELD_VALUE')]):
self.write(' ')
self.preorder(node[0])
self.template_engine(( 'yield %c', 0), node)
elif self.version <= 2.4:
# Early versions of Python don't allow a plain "yield"
self.write(' None')
self.prune() # stop recursing
# In Python 3.3+ only
def n_yield_from(self, node):
self.write('yield from')
self.write(' ')
if 3.3 <= self.version <= 3.4:
self.preorder(node[0][0][0][0])
elif self.version >= 3.5:
self.preorder(node[0])
self.write('yield None')
else:
assert False, "dunno about this python version"
self.write('yield')
self.prune() # stop recursing
def n_build_slice3(self, node):
@@ -1041,7 +745,7 @@ class SourceWalker(GenericASTTraversal, object):
def n_mkfunc(self, node):
if self.version >= 3.3 or node[-2] == 'kwargs':
if self.version >= 3.3 or node[-2] in ('kwargs', 'no_kwargs'):
# LOAD_CONST code object ..
# LOAD_CONST 'x0' if >= 3.3
# MAKE_FUNCTION ..
@@ -1057,7 +761,7 @@ class SourceWalker(GenericASTTraversal, object):
self.write(func_name)
self.indent_more()
self.make_function(node, is_lambda=False, codeNode=code_node)
self.make_function(node, is_lambda=False, code_node=code_node)
if len(self.param_stack) > 1:
self.write('\n\n')
@@ -1067,14 +771,14 @@ class SourceWalker(GenericASTTraversal, object):
self.prune() # stop recursing
def make_function(self, node, is_lambda, nested=1,
codeNode=None, annotate=None):
code_node=None, annotate=None):
if self.version >= 3.0:
make_function3(self, node, is_lambda, nested, codeNode)
make_function3(self, node, is_lambda, nested, code_node)
else:
make_function2(self, node, is_lambda, nested, codeNode)
make_function2(self, node, is_lambda, nested, code_node)
def n_mklambda(self, node):
self.make_function(node, is_lambda=True, codeNode=node[-2])
self.make_function(node, is_lambda=True, code_node=node[-2])
self.prune() # stop recursing
def n_list_comp(self, node):
@@ -1563,7 +1267,7 @@ class SourceWalker(GenericASTTraversal, object):
assert 'mkfunc' == build_class[1]
mkfunc = build_class[1]
if mkfunc[0] == 'kwargs':
if mkfunc[0] in ('kwargs', 'no_kwargs'):
if 3.0 <= self.version <= 3.2:
for n in mkfunc:
if hasattr(n, 'attr') and iscode(n.attr):
@@ -1608,8 +1312,15 @@ class SourceWalker(GenericASTTraversal, object):
subclass_info = node
subclass_code = build_class[1][0].attr
elif not subclass_info:
subclass_code = build_class[1][0].attr
subclass_info = node[0]
if mkfunc[0] in ('no_kwargs', 'kwargs'):
subclass_code = mkfunc[1].attr
else:
subclass_code = mkfunc[0].attr
if node == 'classdefdeco2':
subclass_info = node
else:
subclass_info = node[0]
else:
if node == 'classdefdeco2':
build_class = node
@@ -1680,11 +1391,17 @@ class SourceWalker(GenericASTTraversal, object):
self.write(')')
def print_super_classes3(self, node):
n = len(node)-1
n = len(node) - 1
if node.kind != 'expr':
assert node[n].kind.startswith('CALL_FUNCTION')
if node == 'kwarg':
self.write('(')
self.template_engine(('%[0]{pattr}=%c', 1), node)
self.write(')')
return
kwargs = None
assert node[n].kind.startswith('CALL_FUNCTION')
if node[n].kind.startswith('CALL_FUNCTION_KW'):
# 3.6+ does this
kwargs = node[n-1].attr
@@ -1692,12 +1409,13 @@ class SourceWalker(GenericASTTraversal, object):
i = n - (len(kwargs)+1)
j = 1 + n - node[n].attr
else:
for i in range(n-2, 0, -1):
if not node[i].kind in ['expr', 'LOAD_CLASSNAME']:
start = n-2
for i in range(start, 0, -1):
if not node[i].kind in ['expr', 'call', 'LOAD_CLASSNAME']:
break
pass
if i == n-2:
if i == start:
return
i += 2
@@ -1756,7 +1474,8 @@ class SourceWalker(GenericASTTraversal, object):
self.indent_more(INDENT_PER_LEVEL)
sep = INDENT_PER_LEVEL[:-1]
self.write('{')
if node[0] != 'dict_entry':
self.write('{')
line_number = self.line_number
if self.version >= 3.0 and not self.is_pypy:
@@ -1764,9 +1483,13 @@ class SourceWalker(GenericASTTraversal, object):
# Python 3.5+ style key/value list in dict
kv_node = node[0]
l = list(kv_node)
length = len(l)
if kv_node[-1].kind.startswith("BUILD_MAP"):
length -= 1
i = 0
# Respect line breaks from source
while i < len(l):
while i < length:
self.write(sep)
name = self.traverse(l[i], indent='')
if i > 0:
@@ -1776,7 +1499,7 @@ class SourceWalker(GenericASTTraversal, object):
self.write(name, ': ')
value = self.traverse(l[i+1], indent=self.indent+(len(name)+2)*' ')
self.write(value)
sep = ","
sep = ", "
if line_number != self.line_number:
sep += "\n" + self.indent + INDENT_PER_LEVEL[:-1]
line_number = self.line_number
@@ -1803,7 +1526,7 @@ class SourceWalker(GenericASTTraversal, object):
self.write(name, ': ')
value = self.traverse(l[i], indent=self.indent+(len(name)+2)*' ')
self.write(value)
sep = ","
sep = ", "
if line_number != self.line_number:
sep += "\n" + self.indent + INDENT_PER_LEVEL[:-1]
line_number = self.line_number
@@ -1823,7 +1546,7 @@ class SourceWalker(GenericASTTraversal, object):
line_number = self.line_number
self.write(':')
self.write(self.traverse(value[0]))
sep = ","
sep = ", "
if line_number != self.line_number:
sep += "\n" + self.indent + INDENT_PER_LEVEL[:-1]
line_number = self.line_number
@@ -1834,15 +1557,38 @@ class SourceWalker(GenericASTTraversal, object):
if sep.startswith(",\n"):
self.write(sep[1:])
pass
elif node[0].kind.startswith('dict_entry'):
assert self.version >= 3.5
template = ("%C", (0, len(node[0]), ", **"))
self.template_engine(template, node[0])
sep = ''
elif (node[-1].kind.startswith('BUILD_MAP_UNPACK')
or node[-1].kind.startswith('dict_entry')):
assert self.version >= 3.5
# FIXME: I think we can intermingle dict_comp's with other
# dictionary kinds of things. The most common though is
# a sequence of dict_comp's
kwargs = node[-1].attr
template = ("**%C", (0, kwargs, ", **"))
self.template_engine(template, node)
sep = ''
pass
else:
# Python 2 style kvlist
assert node[-1].kind.startswith('kvlist')
kv_node = node[-1] # goto kvlist
# Python 2 style kvlist. Find beginning of kvlist.
if node[0].kind.startswith("BUILD_MAP"):
if len(node) > 1 and node[1].kind in ('kvlist', 'kvlist_n'):
kv_node = node[1]
else:
kv_node = node[1:]
else:
assert node[-1].kind.startswith('kvlist')
kv_node = node[-1]
first_time = True
for kv in kv_node:
assert kv in ('kv', 'kv2', 'kv3')
# kv ::= DUP_TOP expr ROT_TWO expr STORE_SUBSCR
# kv2 ::= DUP_TOP expr expr ROT_THREE STORE_SUBSCR
# kv3 ::= expr expr STORE_MAP
@@ -1882,7 +1628,7 @@ class SourceWalker(GenericASTTraversal, object):
value = self.traverse(kv[0], indent=self.indent+(len(name)+2)*' ')
pass
self.write(value)
sep = ","
sep = ", "
if line_number != self.line_number:
sep += "\n" + self.indent + " "
line_number = self.line_number
@@ -1891,7 +1637,8 @@ class SourceWalker(GenericASTTraversal, object):
pass
if sep.startswith(",\n"):
self.write(sep[1:])
self.write('}')
if node[0] != 'dict_entry':
self.write('}')
self.indent_less(INDENT_PER_LEVEL)
self.prec = p
self.prune()
@@ -1936,6 +1683,7 @@ class SourceWalker(GenericASTTraversal, object):
else:
self.write('('); endchar = ')'
pass
elif lastnodetype.startswith('BUILD_SET'):
self.write('{'); endchar = '}'
elif lastnodetype.startswith('BUILD_MAP_UNPACK'):
@@ -2182,6 +1930,10 @@ class SourceWalker(GenericASTTraversal, object):
TABLE_R[k] = ('%c(%P)', 0, (1, -1, ', ', 100))
elif op in ('CALL_FUNCTION_VAR',
'CALL_FUNCTION_VAR_KW', 'CALL_FUNCTION_KW'):
# FIXME: handle everything in customize.
# Right now some of this is handled here and some is handled there.
if v == 0:
str = '%c(%C' # '%C' is a dummy here ...
p2 = (0, 0, None) # .. because of the None in this
@@ -2196,6 +1948,15 @@ class SourceWalker(GenericASTTraversal, object):
entry = ('%c(*%C, %c)', 0, p2, -2)
elif str == '%c(%C':
entry = ('%c(*%C)', 0, (1, 100, ''))
elif self.version == 3.4:
# CALL_FUNCTION_VAR's top element of the stack contains
# the variable argument list
if v == 0:
str = '%c(*%c)'
entry = (str, 0, -2)
else:
str = '%c(%C, *%c)'
entry = (str, 0, p2, -2)
else:
str += '*%c)'
entry = (str, 0, p2, -2)

View File

@@ -12,4 +12,4 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# This file is suitable for sourcing inside bash as
# well as importing into Python
VERSION='3.1.0'
VERSION='3.1.2'