Compare commits

...

28 Commits

Author SHA1 Message Date
rocky
b51039ac1e Get ready for release 2.12.0 2017-09-26 09:59:55 -04:00
rocky
f73f0ba41c No unicode in Python3.
but we need it in Python2. The bug was probably introduced
as a result of the recent Python code type interoperability canonicalization
2017-09-26 09:43:01 -04:00
rocky
114f979555 Python 3.1: Annotation args can be unicode? 2017-09-26 09:31:04 -04:00
rocky
7b38d2f1f8 Adjust for xdis opcode JUMP_OPS. release 2.12.0 2017-09-25 20:01:31 -04:00
rocky
dfbd60231b Get ready for release 2.12.0 2017-09-25 19:11:25 -04:00
rocky
8b67f2ccd0 Python 3 compatibility 2017-09-21 11:47:42 -04:00
rocky
aadea7224d Unit test for format-specifiers
And in the process we catch some small bugs
2017-09-21 11:25:51 -04:00
rocky
da7421da1c Tidy pysource and fragments a little more 2017-09-20 19:02:56 -04:00
rocky
96ca68a6fe Tidy/regularize table entry formatting 2017-09-20 17:47:56 -04:00
rocky
147b6e1cfe Small fixes
test_pyenvlib.py: it is sys.exit(), not exit()
pysource.py: reinstate node type of async_func_call
2017-09-20 11:32:42 -04:00
rocky
d7b12f4da1 More small doc changes 2017-09-20 02:49:14 -04:00
rocky
c7b9e54e59 Update Table-driven info...
Start a pysource unit test.
2017-09-20 00:06:50 -04:00
rocky
3003070acb engine -> template_engine 2017-09-17 11:56:51 -04:00
rocky
19d6dedcf5 Need weak-verification on 3.4 for now 2017-09-13 01:09:04 -04:00
rocky
51ad3fb36e Revert one of the changes pending a better fix 2017-09-10 03:01:19 -04:00
rocky
f017acce21 More semantic action cleanup 2017-09-10 02:56:47 -04:00
rocky
5bef5683e4 Match Python 3.4's names a little better 2017-09-10 00:48:54 -04:00
rocky
4e1467adc8 Revert last revert 2017-09-09 08:08:40 -04:00
rocky
7cdf0abb43 Revert last change 2017-09-09 08:03:04 -04:00
rocky
9b336251a7 New-style Python classes only, please. 2017-09-09 07:47:21 -04:00
rocky
7844456e1e Skeletal support for Python 3.7
Largely failing though.
2017-08-31 10:12:09 -04:00
rocky
356ea6c770 Remove python versions tag
I think it's messing up PyPI's very fussy formatting
2017-08-31 09:50:48 -04:00
rocky
4d58438515 Get ready for release 2.11.5 2017-08-31 09:42:14 -04:00
rocky
f7bfe3f7b2 3.7 support 2017-08-15 21:52:43 -04:00
rocky
c54a47b15f Get ready for release 2.11.4 2017-08-15 10:57:14 -04:00
rocky
d1e02afb4b Misc cleanups...
remove code now in xdis
require at least xdis 3.5.4
PyPy tolerance in validate testing
2017-08-15 09:41:39 -04:00
rocky
f4ceb6304d Allow 3-part version string lookups, e.g. 2.7.1
We allow a float here, but if passed a string like
'2.7' or '2.7.13', accept that when looking up
either a scanner or a parser.
2017-08-13 09:17:07 -04:00
rocky
503039ab51 Link typo
The name is now trepan2, not trepan
2017-08-10 09:41:48 -04:00
40 changed files with 768 additions and 292 deletions

134
ChangeLog
View File

@@ -1,7 +1,137 @@
2017-09-26 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py: No unicode in Python3, but we need it in Python2. The bug was probably introduced as a
result of the recent Python code type interoperability canonicalization
2017-09-26 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py: Python 3.1 Annotation args can be
unicode?
2017-09-25 rocky <rb@dustyfeet.com>
* : Adjust for xdis opcode JUMP_OPS. release 2.12.0
2017-09-21 rocky <rb@dustyfeet.com>
* pytest/test_pysource.py: Python 3 compatibility
2017-09-21 rocky <rb@dustyfeet.com>
* pytest/test_pysource.py, uncompyle6/semantics/consts.py,
uncompyle6/semantics/fragments.py, uncompyle6/semantics/pysource.py:
Unit test for format-specifiers And in the process we catch some small bugs
2017-09-20 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py,
uncompyle6/semantics/pysource.py: Tidy pysource and fragments a
little more
2017-09-20 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/consts.py: Tidy/regularize table entry
formatting
2017-09-20 rocky <rb@dustyfeet.com>
* test/test_pythonlib.py, uncompyle6/semantics/pysource.py: Small
fixes test_pyenvlib.py: it is sys.exit(), not exit() pysource.py:
reinstate node type of async_func_call
2017-09-20 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/consts.py, uncompyle6/semantics/pysource.py:
More small doc changes
2017-09-20 rocky <rb@dustyfeet.com>
* pytest/test_pysource.py, uncompyle6/semantics/pysource.py: Update
Table-driven info... Start a pysource unit test.
2017-09-17 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py,
uncompyle6/semantics/pysource.py: engine -> template_engine
2017-09-13 rocky <rb@dustyfeet.com>
* test/Makefile: Need weak-verification on 3.4 for now
2017-09-10 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Revert one of the changes
pending a better fix
2017-09-10 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py,
uncompyle6/semantics/pysource.py: More semantic action cleanup
2017-09-10 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/scanner3.py, uncompyle6/scanners/tok.py: Match
Python 3.4's names a little better
2017-09-09 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/tok.py: Revert last revert
2017-09-09 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/tok.py: Revert last change
2017-09-09 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/tok.py: New-style Python classes only, please.
2017-08-31 rocky <rb@dustyfeet.com>
* uncompyle6/scanner.py, uncompyle6/scanners/scanner37.py: Skeletal
support for Python 3.7 Largely failing though.
2017-08-31 rocky <rb@dustyfeet.com>
* README.rst: Remove Python versions tag. I think it's messing up PyPI's very fussy formatting
2017-08-31 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, README.rst, __pkginfo__.py,
uncompyle6/parsers/parse37.py,
uncompyle6/semantics/make_function.py, uncompyle6/version.py: Get
ready for release 2.11.5
2017-08-15 rocky <rb@dustyfeet.com>
* Makefile: 3.7 support
2017-08-15 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, uncompyle6/version.py: Get ready for release
2.11.4
2017-08-15 rocky <rb@dustyfeet.com>
* __pkginfo__.py, pytest/validate.py, uncompyle6/parser.py,
uncompyle6/scanner.py: Misc cleanups... remove code now in xdis; require at least xdis 3.5.4; PyPy tolerance
in validate testing
2017-08-13 rocky <rb@dustyfeet.com>
* pytest/test_basic.py, uncompyle6/parser.py, uncompyle6/scanner.py:
Allow 3-part version string lookups, e.g. 2.7.1. We allow a float here, but if passed a string like '2.7' or
'2.7.13', accept that when looking up either a scanner or a parser.
2017-08-10 rocky <rb@dustyfeet.com>
* README.rst: Link typo. The name is now trepan2, not trepan
2017-08-09 rocky <rb@dustyfeet.com> 2017-08-09 rocky <rb@dustyfeet.com>
* README.rst, __pkginfo__.py, uncompyle6/version.py: Get ready for * ChangeLog, NEWS, README.rst, __pkginfo__.py,
release 2.11.3 uncompyle6/semantics/consts.py, uncompyle6/version.py: Get ready for
release 2.11.3 need xdis 3.5.1 for now. Adjust for xdis "is-not" which we need as
"is not"
2017-08-02 rocky <rb@dustyfeet.com> 2017-08-02 rocky <rb@dustyfeet.com>

View File

@@ -36,6 +36,8 @@ check-2.7 check-3.3 check-3.4: pytest
check-3.0 check-3.1 check-3.2 check-3.5 check-3.6: check-3.0 check-3.1 check-3.2 check-3.5 check-3.6:
$(MAKE) -C test $@ $(MAKE) -C test $@
check-3.7: pytest
#:Tests for Python 2.6 (doesn't have pytest) #:Tests for Python 2.6 (doesn't have pytest)
check-2.6: check-2.6:
$(MAKE) -C test $@ $(MAKE) -C test $@

25
NEWS
View File

@@ -1,4 +1,27 @@
uncompyle6 2.11.2 2017-08-09 uncompyle6 2.12.0 2017-09-26
- Use xdis 3.6.0 or greater now
- Small semantic table cleanups
- Match Python 3.4's names a little better
- Slightly more Python 3.7, but still failing a lot
uncompyle6 2.11.5 2017-08-31
- Skeletal support for Python 3.7
uncompyle6 2.11.4 2017-08-15
* scanner and parser now allow 3-part version string lookups,
e.g. 2.7.1; a float like 2.7 or a full string like '2.7.13' is accepted
* unpin xdis 3.5.1. xdis 3.5.4 has been released and fixes the problems we had. Use that.
* some routines here moved to xdis. Use the xdis version
* README.rst: Link typo. The name is now trepan2, not trepan
* xdis-forced change: adjust for COMPARE_OP "is-not" in
semantic routines. We need "is not".
* Some PyPy tolerance in validate testing.
* Some pyston tolerance
uncompyle6 2.11.3 2017-08-09
Very minor changes Very minor changes

View File

@@ -1,4 +1,4 @@
|buildstatus| |Supported Python Versions| |buildstatus|
uncompyle6 uncompyle6
========== ==========
@@ -12,7 +12,7 @@ Introduction
*uncompyle6* translates Python bytecode back into equivalent Python *uncompyle6* translates Python bytecode back into equivalent Python
source code. It accepts bytecodes from Python version 1.5, and 2.1 to source code. It accepts bytecodes from Python version 1.5, and 2.1 to
3.6 or so, including PyPy bytecode and Dropbox's Python 2.5 bytecode. 3.7 or so, including PyPy bytecode and Dropbox's Python 2.5 bytecode.
Why this? Why this?
--------- ---------
@@ -176,7 +176,7 @@ See Also
* https://github.com/rocky/python-xasm : Cross Python version assembler * https://github.com/rocky/python-xasm : Cross Python version assembler
.. _trepan: https://pypi.python.org/pypi/trepan .. _trepan: https://pypi.python.org/pypi/trepan2
.. _HISTORY: https://github.com/rocky/python-uncompyle6/blob/master/HISTORY.md .. _HISTORY: https://github.com/rocky/python-uncompyle6/blob/master/HISTORY.md
.. _debuggers: https://pypi.python.org/pypi/trepan3k .. _debuggers: https://pypi.python.org/pypi/trepan3k
.. _remake: https://bashdb.sf.net/remake .. _remake: https://bashdb.sf.net/remake
@@ -184,7 +184,5 @@ See Also
.. _this: https://github.com/rocky/python-uncompyle6/wiki/Deparsing-technology-and-its-use-in-exact-location-reporting .. _this: https://github.com/rocky/python-uncompyle6/wiki/Deparsing-technology-and-its-use-in-exact-location-reporting
.. |buildstatus| image:: https://travis-ci.org/rocky/python-uncompyle6.svg .. |buildstatus| image:: https://travis-ci.org/rocky/python-uncompyle6.svg
:target: https://travis-ci.org/rocky/python-uncompyle6 :target: https://travis-ci.org/rocky/python-uncompyle6
.. |Supported Python Versions| image:: https://img.shields.io/pypi/pyversions/uncompyle6.svg
:target: https://pypi.python.org/pypi/uncompyle6/
.. _PJOrion: http://www.koreanrandom.com/forum/topic/15280-pjorion-%D1%80%D0%B5%D0%B4%D0%B0%D0%BA%D1%82%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5-%D0%BA%D0%BE%D0%BC%D0%BF%D0%B8%D0%BB%D1%8F%D1%86%D0%B8%D1%8F-%D0%B4%D0%B5%D0%BA%D0%BE%D0%BC%D0%BF%D0%B8%D0%BB%D1%8F%D1%86%D0%B8%D1%8F-%D0%BE%D0%B1%D1%84 .. _PJOrion: http://www.koreanrandom.com/forum/topic/15280-pjorion-%D1%80%D0%B5%D0%B4%D0%B0%D0%BA%D1%82%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5-%D0%BA%D0%BE%D0%BC%D0%BF%D0%B8%D0%BB%D1%8F%D1%86%D0%B8%D1%8F-%D0%B4%D0%B5%D0%BA%D0%BE%D0%BC%D0%BF%D0%B8%D0%BB%D1%8F%D1%86%D0%B8%D1%8F-%D0%BE%D0%B1%D1%84
.. _Deobfuscator: https://github.com/extremecoders-re/PjOrion-Deobfuscator .. _Deobfuscator: https://github.com/extremecoders-re/PjOrion-Deobfuscator

View File

@@ -40,7 +40,7 @@ entry_points = {
]} ]}
ftp_url = None ftp_url = None
install_requires = ['spark-parser >= 1.6.1, < 1.7.0', install_requires = ['spark-parser >= 1.6.1, < 1.7.0',
'xdis == 3.5.1', 'six'] 'xdis >= 3.6.0, < 3.7.0', 'six']
license = 'MIT' license = 'MIT'
mailing_list = 'python-debugger@googlegroups.com' mailing_list = 'python-debugger@googlegroups.com'
modname = 'uncompyle6' modname = 'uncompyle6'

11
pytest/test_basic.py Normal file
View File

@@ -0,0 +1,11 @@
from uncompyle6.scanner import get_scanner
from uncompyle6.parser import get_python_parser
def test_get_scanner():
# See that we can retrieve a scanner using a full version number
assert get_scanner('2.7.13')
def test_get_parser():
# See that we can retrieve a parser using a full version number
assert get_python_parser('2.7.13')

168
pytest/test_pysource.py Normal file
View File

@@ -0,0 +1,168 @@
from uncompyle6 import PYTHON3
from uncompyle6.semantics.consts import (
escape, NONE,
# RETURN_NONE, PASS, RETURN_LOCALS
)
if PYTHON3:
from io import StringIO
def iteritems(d):
return d.items()
else:
from StringIO import StringIO
def iteritems(d):
return d.iteritems()
from uncompyle6.semantics.pysource import SourceWalker as SourceWalker
def test_template_engine():
s = StringIO()
sw = SourceWalker(2.7, s, None)
sw.ast = NONE
sw.template_engine(('--%c--', 0), NONE)
print(sw.f.getvalue())
assert sw.f.getvalue() == '--None--'
# FIXME: and so on...
from uncompyle6.semantics.consts import (
TABLE_DIRECT, TABLE_R,
)
from uncompyle6.semantics.fragments import (
TABLE_DIRECT_FRAGMENT,
)
skip_for_now = "DELETE_DEREF".split()
def test_tables():
for t, name, fragment in (
(TABLE_DIRECT, 'TABLE_DIRECT', False),
(TABLE_R, 'TABLE_R', False),
(TABLE_DIRECT_FRAGMENT, 'TABLE_DIRECT_FRAGMENT', True)):
for k, entry in iteritems(t):
if k in skip_for_now:
continue
fmt = entry[0]
arg = 1
i = 0
m = escape.search(fmt)
print("%s[%s]" % (name, k))
while m:
i = m.end()
typ = m.group('type') or '{'
if typ in frozenset(['%', '+', '-', '|', ',', '{']):
# No args
pass
elif typ in frozenset(['c', 'p', 'P', 'C', 'D']):
# One arg - should be int or tuple of int
if typ == 'c':
assert isinstance(entry[arg], int), (
"%s[%s][%d] type %s is '%s' should be an int but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
elif typ in frozenset(['C', 'D']):
tup = entry[arg]
assert isinstance(tup, tuple), (
"%s[%s][%d] type %s is %s should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
assert len(tup) == 3
for j, x in enumerate(tup[:-1]):
assert isinstance(x, int), (
"%s[%s][%d][%d] type %s is %s should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, j, typ, x, type(x), entry)
)
assert isinstance(tup[-1], str) or tup[-1] is None, (
"%s[%s][%d][%d] sep type %s is %s should be an string but is %s. "
"Full entry: %s" %
(name, k, arg, j, typ, tup[-1], type(x), entry)
)
elif typ == 'P':
tup = entry[arg]
assert isinstance(tup, tuple), (
"%s[%s][%d] type %s is %s should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
assert len(tup) == 4
for j, x in enumerate(tup[:-2]):
assert isinstance(x, int), (
"%s[%s][%d][%d] type %s is '%s' should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, j, typ, x, type(x), entry)
)
assert isinstance(tup[-2], str), (
"%s[%s][%d][%d] sep type %s is '%s' should be an string but is %s. "
"Full entry: %s" %
(name, k, arg, j, typ, x, type(x), entry)
)
assert isinstance(tup[1], int), (
"%s[%s][%d][%d] prec type %s is '%s' should be an int but is %s. "
"Full entry: %s" %
(name, k, arg, j, typ, x, type(x), entry)
)
else:
# Should be a tuple which contains only ints
tup = entry[arg]
assert isinstance(tup, tuple), (
"%s[%s][%d] type %s is '%s' should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
assert len(tup) == 2
for j, x in enumerate(tup):
assert isinstance(x, int), (
"%s[%s][%d][%d] type '%s' is '%s should be an int but is %s. Full entry: %s" %
(name, k, arg, j, typ, x, type(x), entry)
)
pass
arg += 1
elif typ in frozenset(['r']) and fragment:
pass
elif typ == 'b' and fragment:
assert isinstance(entry[arg], int), (
"%s[%s][%d] type %s is '%s' should be an int but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
arg += 1
elif typ == 'x' and fragment:
tup = entry[arg]
assert isinstance(tup, tuple), (
"%s[%s][%d] type %s is '%s' should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
assert len(tup) == 2
assert isinstance(tup[0], int), (
"%s[%s][%d] source type %s is '%s' should be an int but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
assert isinstance(tup[1], tuple), (
"%s[%s][%d] dest type %s is '%s' should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
for j, x in enumerate(tup[1]):
assert isinstance(x, int), (
"%s[%s][%d][%d] type %s is %s should be an int but is %s. Full entry: %s" %
(name, k, arg, j, typ, x, type(x), entry)
)
arg += 1
pass
else:
assert False, (
"%s[%s][%d] type %s is not known. Full entry: %s" %
(name, k, arg, typ, entry)
)
m = escape.search(fmt, i)
pass
assert arg == len(entry), (
"%s[%s] arg %d should be length of entry %d. Full entry: %s" %
(name, k, arg, len(entry), entry))

View File

@@ -123,7 +123,9 @@ def validate_uncompyle(text, mode='exec'):
original_text = text original_text = text
deparsed = deparse_code(PYTHON_VERSION, original_code, deparsed = deparse_code(PYTHON_VERSION, original_code,
compile_mode=mode, out=six.StringIO()) compile_mode=mode,
out=six.StringIO(),
is_pypy=IS_PYPY)
uncompyled_text = deparsed.text uncompyled_text = deparsed.text
uncompyled_code = compile(uncompyled_text, '<string>', 'exec') uncompyled_code = compile(uncompyled_text, '<string>', 'exec')

View File

@@ -39,7 +39,7 @@ check-3.3: check-bytecode
#: Run working tests from Python 3.4 #: Run working tests from Python 3.4
check-3.4: check-bytecode check-3.4-ok check-2.7-ok check-3.4: check-bytecode check-3.4-ok check-2.7-ok
$(PYTHON) test_pythonlib.py --bytecode-3.4 --verify $(COMPILE) $(PYTHON) test_pythonlib.py --bytecode-3.4 --weak-verify $(COMPILE)
#: Run working tests from Python 3.5 #: Run working tests from Python 3.5
check-3.5: check-bytecode check-3.5: check-bytecode

View File

@@ -169,13 +169,13 @@ def do_tests(src_dir, obj_patterns, target_dir, opts):
main(src_dir, target_dir, files, [], main(src_dir, target_dir, files, [],
do_verify=opts['do_verify']) do_verify=opts['do_verify'])
if failed_files != 0: if failed_files != 0:
exit(2) sys.exit(2)
elif failed_verify != 0: elif failed_verify != 0:
exit(3) sys.exit(3)
except (KeyboardInterrupt, OSError): except (KeyboardInterrupt, OSError):
print() print()
exit(1) sys.exit(1)
if test_opts['rmtree']: if test_opts['rmtree']:
parent_dir = os.path.dirname(target_dir) parent_dir = os.path.dirname(target_dir)
print("Everything good, removing %s" % parent_dir) print("Everything good, removing %s" % parent_dir)

View File

@@ -11,10 +11,10 @@ from __future__ import print_function
import sys import sys
from xdis.code import iscode from xdis.code import iscode
from xdis.magics import py_str2float
from spark_parser import GenericASTBuilder, DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG from spark_parser import GenericASTBuilder, DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
from uncompyle6.show import maybe_show_asm from uncompyle6.show import maybe_show_asm
class ParserError(Exception): class ParserError(Exception):
def __init__(self, token, offset): def __init__(self, token, offset):
self.token = token self.token = token
@@ -605,7 +605,15 @@ def get_python_parser(
explanation of the different modes. explanation of the different modes.
""" """
# If version is a string, turn that into the corresponding float.
if isinstance(version, str):
version = py_str2float(version)
# FIXME: there has to be a better way... # FIXME: there has to be a better way...
# We could do this as a table lookup, but that would force us
# to import all of the parsers all of the time. Perhaps there is
# a lazy way of doing the import?
if version < 3.0: if version < 3.0:
if version == 1.5: if version == 1.5:
import uncompyle6.parsers.parse15 as parse15 import uncompyle6.parsers.parse15 as parse15
@@ -758,6 +766,7 @@ def python_parser(version, co, out=sys.stdout, showasm=False,
if __name__ == '__main__': if __name__ == '__main__':
def parse_test(co): def parse_test(co):
from uncompyle6 import PYTHON_VERSION, IS_PYPY from uncompyle6 import PYTHON_VERSION, IS_PYPY
ast = python_parser('2.7.13', co, showasm=True, is_pypy=True)
ast = python_parser(PYTHON_VERSION, co, showasm=True, is_pypy=IS_PYPY) ast = python_parser(PYTHON_VERSION, co, showasm=True, is_pypy=IS_PYPY)
print(ast) print(ast)
return return
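A short usage sketch of the new string-version lookup (it mirrors pytest/test_basic.py above; the string is normalized through xdis.magics.py_str2float as imported in this file):
from uncompyle6.parser import get_python_parser
# '2.7.13' and 2.7 should now resolve to the same grammar.
parser_from_string = get_python_parser('2.7.13')
parser_from_float = get_python_parser(2.7)
assert parser_from_string and parser_from_float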

View File

@@ -20,6 +20,7 @@ from __future__ import print_function
from uncompyle6.parser import PythonParser, PythonParserSingle, nop_func from uncompyle6.parser import PythonParser, PythonParserSingle, nop_func
from uncompyle6.parsers.astnode import AST from uncompyle6.parsers.astnode import AST
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
from xdis import PYTHON3
class Python3Parser(PythonParser): class Python3Parser(PythonParser):
@@ -889,7 +890,11 @@ class Python3Parser(PythonParser):
elif lhs == 'annotate_tuple': elif lhs == 'annotate_tuple':
return not isinstance(tokens[first].attr, tuple) return not isinstance(tokens[first].attr, tuple)
elif lhs == 'kwarg': elif lhs == 'kwarg':
return not isinstance(tokens[first].attr, str) arg = tokens[first].attr
if PYTHON3:
return not isinstance(arg, str)
else:
return not (isinstance(arg, str) or isinstance(arg, unicode))
elif lhs == 'while1elsestmt': elif lhs == 'while1elsestmt':
# if SETUP_LOOP target spans the else part, then this is # if SETUP_LOOP target spans the else part, then this is
# not while1else. Also do for whileTrue? # not while1else. Also do for whileTrue?
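The intent of the new kwarg check, as a small stand-alone sketch (PYTHON3 here stands in for the flag imported from xdis above; the name 'unicode' exists only on Python 2):
import sys
PYTHON3 = sys.version_info[0] >= 3
def invalid_kwarg_name(arg):
    # Reject the 'kwarg' reduction unless the argument name is a string;
    # on Python 2, marshalled code objects may hand us unicode names.
    if PYTHON3:
        return not isinstance(arg, str)
    return not isinstance(arg, (str, unicode))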

View File

@@ -0,0 +1,41 @@
# Copyright (c) 2017 Rocky Bernstein
"""
spark grammar differences over Python 3.6 for Python 3.7
"""
from __future__ import print_function
from uncompyle6.parser import PythonParserSingle
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
from uncompyle6.parsers.parse36 import Python36Parser
class Python37Parser(Python36Parser):
def __init__(self, debug_parser=PARSER_DEFAULT_DEBUG):
super(Python37Parser, self).__init__(debug_parser)
self.customized = {}
class Python37ParserSingle(Python37Parser, PythonParserSingle):
pass
if __name__ == '__main__':
# Check grammar
p = Python37Parser()
p.checkGrammar()
from uncompyle6 import PYTHON_VERSION, IS_PYPY
if PYTHON_VERSION == 3.7:
lhs, rhs, tokens, right_recursive = p.checkSets()
from uncompyle6.scanner import get_scanner
s = get_scanner(PYTHON_VERSION, IS_PYPY)
opcode_set = set(s.opc.opname).union(set(
"""JUMP_BACK CONTINUE RETURN_END_IF COME_FROM
LOAD_GENEXPR LOAD_ASSERT LOAD_SETCOMP LOAD_DICTCOMP LOAD_CLASSNAME
LAMBDA_MARKER RETURN_LAST
""".split()))
remain_tokens = set(tokens) - opcode_set
import re
remain_tokens = set([re.sub('_\d+$', '', t) for t in remain_tokens])
remain_tokens = set([re.sub('_CONT$', '', t) for t in remain_tokens])
remain_tokens = set(remain_tokens) - opcode_set
print(remain_tokens)
# print(sorted(p.rule2name.items()))

View File

@@ -17,11 +17,12 @@ import sys
from uncompyle6 import PYTHON3, IS_PYPY from uncompyle6 import PYTHON3, IS_PYPY
from uncompyle6.scanners.tok import Token from uncompyle6.scanners.tok import Token
from xdis.bytecode import op_size from xdis.bytecode import op_size
from xdis.magics import py_str2float
# The byte code versions we support # The byte code versions we support
PYTHON_VERSIONS = (1.5, PYTHON_VERSIONS = (1.5,
2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7,
3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6) 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7)
# FIXME: DRY # FIXME: DRY
if PYTHON3: if PYTHON3:
@@ -55,7 +56,7 @@ class Scanner(object):
if version in PYTHON_VERSIONS: if version in PYTHON_VERSIONS:
if is_pypy: if is_pypy:
v_str = "opcode_pypy%s" % (int(version * 10)) v_str = "opcode_%spypy" % (int(version * 10))
else: else:
v_str = "opcode_%s" % (int(version * 10)) v_str = "opcode_%s" % (int(version * 10))
exec("from xdis.opcodes import %s" % v_str) exec("from xdis.opcodes import %s" % v_str)
@@ -64,6 +65,7 @@ class Scanner(object):
raise TypeError("%s is not a Python version I know about" % version) raise TypeError("%s is not a Python version I know about" % version)
self.opname = self.opc.opname self.opname = self.opc.opname
# FIXME: This weird Python2 behavior is not Python3 # FIXME: This weird Python2 behavior is not Python3
self.resetTokenClass() self.resetTokenClass()
@@ -100,7 +102,7 @@ class Scanner(object):
def print_bytecode(self): def print_bytecode(self):
for i in self.op_range(0, len(self.code)): for i in self.op_range(0, len(self.code)):
op = self.code[i] op = self.code[i]
if op in self.JUMP_OPs: if op in self.JUMP_OPS:
dest = self.get_target(i, op) dest = self.get_target(i, op)
print('%i\t%s\t%i' % (i, self.opname[op], dest)) print('%i\t%s\t%i' % (i, self.opname[op], dest))
else: else:
@@ -257,7 +259,13 @@ class Scanner(object):
def parse_fn_counts(argc): def parse_fn_counts(argc):
return ((argc & 0xFF), (argc >> 8) & 0xFF, (argc >> 16) & 0x7FFF) return ((argc & 0xFF), (argc >> 8) & 0xFF, (argc >> 16) & 0x7FFF)
def get_scanner(version, is_pypy=False, show_asm=None): def get_scanner(version, is_pypy=False, show_asm=None):
# If version is a string, turn that into the corresponding float.
if isinstance(version, str):
version = py_str2float(version)
# Pick up appropriate scanner # Pick up appropriate scanner
if version in PYTHON_VERSIONS: if version in PYTHON_VERSIONS:
v_str = "%s" % (int(version * 10)) v_str = "%s" % (int(version * 10))
@@ -284,5 +292,6 @@ def get_scanner(version, is_pypy=False, show_asm=None):
if __name__ == "__main__": if __name__ == "__main__":
import inspect, uncompyle6 import inspect, uncompyle6
co = inspect.currentframe().f_code co = inspect.currentframe().f_code
scanner = get_scanner('2.7.13', True)
scanner = get_scanner(uncompyle6.PYTHON_VERSION, IS_PYPY, True) scanner = get_scanner(uncompyle6.PYTHON_VERSION, IS_PYPY, True)
tokens, customize = scanner.ingest(co, {}) tokens, customize = scanner.ingest(co, {})
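Sketch of the new string-version path through get_scanner(); the '2.7.13' call mirrors the demo line added above and is normalized via xdis.magics.py_str2float():
from uncompyle6.scanner import get_scanner
# A full release string and its float form should select the same scanner
# class (Scanner27 here).
assert type(get_scanner('2.7.13')) == type(get_scanner(2.7))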

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein # Copyright (c) 2016-2017 by Rocky Bernstein
""" """
Python PyPy 2.7 bytecode scanner/deparser Python PyPy 2.7 bytecode scanner/deparser
@@ -10,8 +10,8 @@ information for later use in deparsing.
import uncompyle6.scanners.scanner27 as scan import uncompyle6.scanners.scanner27 as scan
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_pypy27 from xdis.opcodes import opcode_27pypy
JUMP_OPs = opcode_pypy27.JUMP_OPs JUMP_OPS = opcode_27pypy.JUMP_OPS
# We base this off of 2.6 instead of the other way around # We base this off of 2.6 instead of the other way around
# because we cleaned things up this way. # because we cleaned things up this way.

View File

@@ -8,9 +8,9 @@ make things easier for decompilation.
import uncompyle6.scanners.scanner35 as scan import uncompyle6.scanners.scanner35 as scan
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPS from here
from xdis.opcodes import opcode_35 as opc # is this right? from xdis.opcodes import opcode_35 as opc # is this right?
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs) JUMP_OPs = opc.JUMP_OPS
# We base this off of 3.5 # We base this off of 3.5
class ScannerPyPy35(scan.Scanner35): class ScannerPyPy35(scan.Scanner35):

View File

@@ -11,7 +11,7 @@ import uncompyle6.scanners.scanner21 as scan
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_15 from xdis.opcodes import opcode_15
JUMP_OPs = opcode_15.JUMP_OPs JUMP_OPS = opcode_15.JUMP_OPS
# We base this off of 2.2 instead of the other way around # We base this off of 2.2 instead of the other way around
# because we cleaned things up this way. # because we cleaned things up this way.

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein # Copyright (c) 2016-2017 by Rocky Bernstein
""" """
Python 2.1 bytecode scanner/deparser Python 2.1 bytecode scanner/deparser
@@ -11,7 +11,7 @@ import uncompyle6.scanners.scanner22 as scan
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_21 from xdis.opcodes import opcode_21
JUMP_OPs = opcode_21.JUMP_OPs JUMP_OPS = opcode_21.JUMP_OPS
# We base this off of 2.2 instead of the other way around # We base this off of 2.2 instead of the other way around
# because we cleaned things up this way. # because we cleaned things up this way.

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein # Copyright (c) 2016-2017 by Rocky Bernstein
""" """
Python 2.2 bytecode ingester. Python 2.2 bytecode ingester.
@@ -11,7 +11,7 @@ import uncompyle6.scanners.scanner23 as scan
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_22 from xdis.opcodes import opcode_22
JUMP_OPs = opcode_22.JUMP_OPs JUMP_OPS = opcode_22.JUMP_OPS
# We base this off of 2.3 instead of the other way around # We base this off of 2.3 instead of the other way around
# because we cleaned things up this way. # because we cleaned things up this way.

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein # Copyright (c) 2016-2017 by Rocky Bernstein
""" """
Python 2.3 bytecode scanner/deparser Python 2.3 bytecode scanner/deparser
@@ -10,7 +10,7 @@ import uncompyle6.scanners.scanner24 as scan
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_23 from xdis.opcodes import opcode_23
JUMP_OPs = opcode_23.JUMP_OPs JUMP_OPS = opcode_23.JUMP_OPS
# We base this off of 2.4 instead of the other way around # We base this off of 2.4 instead of the other way around
# because we cleaned things up this way. # because we cleaned things up this way.

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein # Copyright (c) 2016-2017 by Rocky Bernstein
""" """
Python 2.4 bytecode scanner/deparser Python 2.4 bytecode scanner/deparser
@@ -10,7 +10,7 @@ import uncompyle6.scanners.scanner25 as scan
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_24 from xdis.opcodes import opcode_24
JUMP_OPs = opcode_24.JUMP_OPs JUMP_OPS = opcode_24.JUMP_OPS
# We base this off of 2.5 instead of the other way around # We base this off of 2.5 instead of the other way around
# because we cleaned things up this way. # because we cleaned things up this way.

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2015-2016 by Rocky Bernstein # Copyright (c) 2015-2017 by Rocky Bernstein
""" """
Python 2.5 bytecode scanner/deparser Python 2.5 bytecode scanner/deparser
@@ -11,7 +11,7 @@ import uncompyle6.scanners.scanner26 as scan
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_25 from xdis.opcodes import opcode_25
JUMP_OPs = opcode_25.JUMP_OPs JUMP_OPS = opcode_25.JUMP_OPS
# We base this off of 2.6 instead of the other way around # We base this off of 2.6 instead of the other way around
# because we cleaned things up this way. # because we cleaned things up this way.

View File

@@ -19,7 +19,7 @@ from uncompyle6.scanner import L65536
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_26 from xdis.opcodes import opcode_26
JUMP_OPs = opcode_26.JUMP_OPs JUMP_OPS = opcode_26.JUMP_OPS
class Scanner26(scan.Scanner2): class Scanner26(scan.Scanner2):
def __init__(self, show_asm=False): def __init__(self, show_asm=False):
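A quick look at what the scanners now re-export, assuming xdis >= 3.6.0 (which, per the lines above, exposes JUMP_OPS directly on each per-version opcode module):
from xdis.opcodes import opcode_26
# The jump-opcode names that bytecode verification consults; previously
# each scanner rebuilt this list from hasjrel + hasjabs itself.
print(sorted(opcode_26.JUMP_OPS))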

View File

@@ -18,7 +18,7 @@ if PYTHON3:
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_27 from xdis.opcodes import opcode_27
JUMP_OPs = opcode_27.JUMP_OPs JUMP_OPS = opcode_27.JUMP_OPs
class Scanner27(Scanner2): class Scanner27(Scanner2):
def __init__(self, show_asm=False, is_pypy=False): def __init__(self, show_asm=False, is_pypy=False):

View File

@@ -330,7 +330,7 @@ class Scanner3(Scanner):
attr = (pos_args, name_pair_args, annotate_args) attr = (pos_args, name_pair_args, annotate_args)
tokens.append( tokens.append(
Token( Token(
type_ = opname, opname = opname,
attr = attr, attr = attr,
pattr = pattr, pattr = pattr,
offset = inst.offset, offset = inst.offset,
@@ -408,7 +408,7 @@ class Scanner3(Scanner):
last_op_was_break = opname == 'BREAK_LOOP' last_op_was_break = opname == 'BREAK_LOOP'
tokens.append( tokens.append(
Token( Token(
type_ = opname, opname = opname,
attr = argval, attr = argval,
pattr = pattr, pattr = pattr,
offset = inst.offset, offset = inst.offset,

View File

@@ -12,7 +12,7 @@ from __future__ import print_function
from xdis.opcodes import opcode_30 as opc from xdis.opcodes import opcode_30 as opc
from xdis.bytecode import op_size from xdis.bytecode import op_size
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs) JUMP_OPS = opc.JUMP_OPS
JUMP_TF = frozenset([opc.JUMP_IF_FALSE, opc.JUMP_IF_TRUE]) JUMP_TF = frozenset([opc.JUMP_IF_FALSE, opc.JUMP_IF_TRUE])

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein # Copyright (c) 2016-2017 by Rocky Bernstein
""" """
Python 3.1 bytecode scanner/deparser Python 3.1 bytecode scanner/deparser
@@ -10,7 +10,7 @@ from __future__ import print_function
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_31 as opc from xdis.opcodes import opcode_31 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs) JUMP_OPS = opc.JUMP_OPS
from uncompyle6.scanners.scanner3 import Scanner3 from uncompyle6.scanners.scanner3 import Scanner3
class Scanner31(Scanner3): class Scanner31(Scanner3):

View File

@@ -13,7 +13,7 @@ from __future__ import print_function
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_32 as opc from xdis.opcodes import opcode_32 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs) JUMP_OPS = opc.JUMP_OPS
from uncompyle6.scanners.scanner3 import Scanner3 from uncompyle6.scanners.scanner3 import Scanner3
class Scanner32(Scanner3): class Scanner32(Scanner3):

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2015-2016 by Rocky Bernstein # Copyright (c) 2015-2017 by Rocky Bernstein
""" """
Python 3.3 bytecode scanner/deparser Python 3.3 bytecode scanner/deparser
@@ -10,7 +10,7 @@ from __future__ import print_function
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_33 as opc from xdis.opcodes import opcode_33 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs) JUMP_OPS = opc.JUMP_OPS
from uncompyle6.scanners.scanner3 import Scanner3 from uncompyle6.scanners.scanner3 import Scanner3
class Scanner33(Scanner3): class Scanner33(Scanner3):

View File

@@ -14,7 +14,7 @@ from __future__ import print_function
from xdis.opcodes import opcode_34 as opc from xdis.opcodes import opcode_34 as opc
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs) JUMP_OPS = opc.JUMP_OPS
from uncompyle6.scanners.scanner3 import Scanner3 from uncompyle6.scanners.scanner3 import Scanner3

View File

@@ -15,7 +15,7 @@ from uncompyle6.scanners.scanner3 import Scanner3
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_35 as opc from xdis.opcodes import opcode_35 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs) JUMP_OPS = opc.JUMP_OPS
class Scanner35(Scanner3): class Scanner35(Scanner3):

View File

@@ -13,9 +13,9 @@ from __future__ import print_function
from uncompyle6.scanners.scanner3 import Scanner3 from uncompyle6.scanners.scanner3 import Scanner3
# bytecode verification, verify(), uses JUMP_OPs from here # bytecode verification, verify(), uses JUMP_OPS from here
from xdis.opcodes import opcode_36 as opc from xdis.opcodes import opcode_36 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs) JUMP_OPS = opc.JUMP_OPS
class Scanner36(Scanner3): class Scanner36(Scanner3):

View File

@@ -0,0 +1,38 @@
# Copyright (c) 2016-2017 by Rocky Bernstein
"""
Python 3.7 bytecode decompiler scanner
Does some additional massaging of xdis-disassembled instructions to
make things easier for decompilation.
This sets up Python 3.6's opcodes and calls a generalized
scanner routine for Python 3.
"""
from __future__ import print_function
from uncompyle6.scanners.scanner3 import Scanner3
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_36 as opc
JUMP_OPs = opc.JUMP_OPS
class Scanner37(Scanner3):
def __init__(self, show_asm=None):
Scanner3.__init__(self, 3.7, show_asm)
return
pass
if __name__ == "__main__":
from uncompyle6 import PYTHON_VERSION
if PYTHON_VERSION == 3.7:
import inspect
co = inspect.currentframe().f_code
tokens, customize = Scanner37().ingest(co)
for t in tokens:
print(t.format())
pass
else:
print("Need to be Python 3.7 to demo; I am %s." %
PYTHON_VERSION)

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein # Copyright (c) 2016-2017 by Rocky Bernstein
# Copyright (c) 2000-2002 by hartmut Goebel <h.goebel@crazy-compilers.com> # Copyright (c) 2000-2002 by hartmut Goebel <h.goebel@crazy-compilers.com>
# Copyright (c) 1999 John Aycock # Copyright (c) 1999 John Aycock
@@ -8,7 +8,7 @@ from uncompyle6 import PYTHON3
if PYTHON3: if PYTHON3:
intern = sys.intern intern = sys.intern
class Token: class Token():
""" """
Class representing a byte-code instruction. Class representing a byte-code instruction.
@@ -16,13 +16,12 @@ class Token:
the contents of one line as output by dis.dis(). the contents of one line as output by dis.dis().
""" """
# FIXME: match Python 3.4's terms: # FIXME: match Python 3.4's terms:
# type_ should be opname
# linestart = starts_line # linestart = starts_line
# attr = argval # attr = argval
# pattr = argrepr # pattr = argrepr
def __init__(self, type_, attr=None, pattr=None, offset=-1, def __init__(self, opname, attr=None, pattr=None, offset=-1,
linestart=None, op=None, has_arg=None, opc=None): linestart=None, op=None, has_arg=None, opc=None):
self.type = intern(type_) self.type = intern(opname)
self.op = op self.op = op
self.has_arg = has_arg self.has_arg = has_arg
self.attr = attr self.attr = attr
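A minimal sketch of the renamed constructor parameter; per the assignment above, the value is still interned into the .type attribute:
from uncompyle6.scanners.tok import Token
t = Token(opname='LOAD_CONST', attr=1, pattr='1', offset=0)
assert t.type == 'LOAD_CONST'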

View File

@@ -1,5 +1,5 @@
# Copyright (c) 2017 by Rocky Bernstein # Copyright (c) 2017 by Rocky Bernstein
"""Constants used in pysource.py""" """Constants and initial table values used in pysource.py and fragments.py"""
import re, sys import re, sys
from uncompyle6.parsers.astnode import AST from uncompyle6.parsers.astnode import AST
@@ -57,9 +57,7 @@ INDENT_PER_LEVEL = ' ' # additional intent per pretty-print level
TABLE_R = { TABLE_R = {
'STORE_ATTR': ( '%c.%[1]{pattr}', 0), 'STORE_ATTR': ( '%c.%[1]{pattr}', 0),
# 'STORE_SUBSCR': ( '%c[%c]', 0, 1 ),
'DELETE_ATTR': ( '%|del %c.%[-1]{pattr}\n', 0 ), 'DELETE_ATTR': ( '%|del %c.%[-1]{pattr}\n', 0 ),
# 'EXEC_STMT': ( '%|exec %c in %[1]C\n', 0, (0,maxint,', ') ),
} }
TABLE_R0 = { TABLE_R0 = {
@@ -67,6 +65,7 @@ TABLE_R0 = {
# 'BUILD_TUPLE': ( '(%C)', (0,-1,', ') ), # 'BUILD_TUPLE': ( '(%C)', (0,-1,', ') ),
# 'CALL_FUNCTION': ( '%c(%P)', 0, (1,-1,', ') ), # 'CALL_FUNCTION': ( '%c(%P)', 0, (1,-1,', ') ),
} }
TABLE_DIRECT = { TABLE_DIRECT = {
'BINARY_ADD': ( '+' ,), 'BINARY_ADD': ( '+' ,),
'BINARY_SUBTRACT': ( '-' ,), 'BINARY_SUBTRACT': ( '-' ,),
@@ -100,7 +99,7 @@ TABLE_DIRECT = {
'UNARY_POSITIVE': ( '+',), 'UNARY_POSITIVE': ( '+',),
'UNARY_NEGATIVE': ( '-',), 'UNARY_NEGATIVE': ( '-',),
'UNARY_INVERT': ( '~%c'), 'UNARY_INVERT': ( '~'),
'unary_expr': ( '%c%c', 1, 0), 'unary_expr': ( '%c%c', 1, 0),
'unary_not': ( 'not %c', 0 ), 'unary_not': ( 'not %c', 0 ),
@@ -276,7 +275,7 @@ MAP = {
} }
# Operator precidence # Operator precidence
# See https://docs.python.org/3/reference/expressions.html # See https://docs.python.org/2/reference/expressions.html
# or https://docs.python.org/3/reference/expressions.html # or https://docs.python.org/3/reference/expressions.html
# for a list. # for a list.
PRECEDENCE = { PRECEDENCE = {

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2015, 2016 by Rocky Bernstein # Copyright (c) 2015-2017 by Rocky Bernstein
# Copyright (c) 2005 by Dan Pascu <dan@windowmaker.org> # Copyright (c) 2005 by Dan Pascu <dan@windowmaker.org>
# Copyright (c) 2000-2002 by hartmut Goebel <h.goebel@crazy-compilers.com> # Copyright (c) 2000-2002 by hartmut Goebel <h.goebel@crazy-compilers.com>
# Copyright (c) 1999 John Aycock # Copyright (c) 1999 John Aycock
@@ -8,8 +8,8 @@ Creates Python source code from an uncompyle6 abstract syntax tree,
and indexes fragments which can be accessed by instruction offset and indexes fragments which can be accessed by instruction offset
address. address.
See the comments in pysource for information on the abstract sytax tree See https://github.com/rocky/python-uncompyle6/wiki/Table-driven-semantic-actions.
and how semantic actions are written. for a more complete explanation, nicely marked up and with examples.
We add some format specifiers here not used in pysource We add some format specifiers here not used in pysource
@@ -40,7 +40,8 @@ do it recursively which is where offsets are probably located.
2. %b 2. %b
----- -----
%b associates the text from the previous start node up to what we have now %b associates the text from the specified index to what we have now.
it takes an integer argument.
For example in: For example in:
'importmultiple': ( '%|import%b %c%c\n', 0, 2, 3 ), 'importmultiple': ( '%|import%b %c%c\n', 0, 2, 3 ),
@@ -95,7 +96,7 @@ TABLE_DIRECT_FRAGMENT = {
'list_for': (' for %c%x in %c%c', 2, (2, (1, )), 0, 3 ), 'list_for': (' for %c%x in %c%c', 2, (2, (1, )), 0, 3 ),
'forstmt': ( '%|for%b %c%x in %c:\n%+%c%-\n\n', 0, 3, (3, (2, )), 1, 4 ), 'forstmt': ( '%|for%b %c%x in %c:\n%+%c%-\n\n', 0, 3, (3, (2, )), 1, 4 ),
'forelsestmt': ( 'forelsestmt': (
'%|for %c in %c%x:\n%+%c%-%|else:\n%+%c%-\n\n', 3, (3, (2,)), 1, 4, -2), '%|for %c%x in %c:\n%+%c%-%|else:\n%+%c%-\n\n', 3, (3, (2,)), 1, 4, -2),
'forelselaststmt': ( 'forelselaststmt': (
'%|for %c%x in %c:\n%+%c%-%|else:\n%+%c%-', 3, (3, (2,)), 1, 4, -2), '%|for %c%x in %c:\n%+%c%-%|else:\n%+%c%-', 3, (3, (2,)), 1, 4, -2),
'forelselaststmtl': ( 'forelselaststmtl': (
@@ -421,10 +422,10 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.write(self.indent, 'if ') self.write(self.indent, 'if ')
self.preorder(node[0]) self.preorder(node[0])
self.println(':') self.println(':')
self.indentMore() self.indent_more()
node[1].parent = node node[1].parent = node
self.preorder(node[1]) self.preorder(node[1])
self.indentLess() self.indent_less()
if_ret_at_end = False if_ret_at_end = False
if len(node[2][0]) >= 3: if len(node[2][0]) >= 3:
@@ -443,17 +444,17 @@ class FragmentsWalker(pysource.SourceWalker, object):
prev_stmt_is_if_ret = False prev_stmt_is_if_ret = False
if not past_else and not if_ret_at_end: if not past_else and not if_ret_at_end:
self.println(self.indent, 'else:') self.println(self.indent, 'else:')
self.indentMore() self.indent_more()
past_else = True past_else = True
n.parent = node n.parent = node
self.preorder(n) self.preorder(n)
if not past_else or if_ret_at_end: if not past_else or if_ret_at_end:
self.println(self.indent, 'else:') self.println(self.indent, 'else:')
self.indentMore() self.indent_more()
node[2][1].parent = node node[2][1].parent = node
self.preorder(node[2][1]) self.preorder(node[2][1])
self.set_pos_info(node, start, len(self.f.getvalue())) self.set_pos_info(node, start, len(self.f.getvalue()))
self.indentLess() self.indent_less()
self.prune() self.prune()
def n_elifelsestmtr(self, node): def n_elifelsestmtr(self, node):
@@ -470,20 +471,20 @@ class FragmentsWalker(pysource.SourceWalker, object):
node[0].parent = node node[0].parent = node
self.preorder(node[0]) self.preorder(node[0])
self.println(':') self.println(':')
self.indentMore() self.indent_more()
node[1].parent = node node[1].parent = node
self.preorder(node[1]) self.preorder(node[1])
self.indentLess() self.indent_less()
for n in node[2][0]: for n in node[2][0]:
n[0].type = 'elifstmt' n[0].type = 'elifstmt'
n.parent = node n.parent = node
self.preorder(n) self.preorder(n)
self.println(self.indent, 'else:') self.println(self.indent, 'else:')
self.indentMore() self.indent_more()
node[2][1].parent = node node[2][1].parent = node
self.preorder(node[2][1]) self.preorder(node[2][1])
self.indentLess() self.indent_less()
self.set_pos_info(node, start, len(self.f.getvalue())) self.set_pos_info(node, start, len(self.f.getvalue()))
self.prune() self.prune()
@@ -527,7 +528,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.write(func_name) self.write(func_name)
self.set_pos_info(code_node, start, len(self.f.getvalue())) self.set_pos_info(code_node, start, len(self.f.getvalue()))
self.indentMore() self.indent_more()
start = len(self.f.getvalue()) start = len(self.f.getvalue())
self.make_function(node, isLambda=False, codeNode=code_node) self.make_function(node, isLambda=False, codeNode=code_node)
@@ -537,7 +538,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.write('\n\n') self.write('\n\n')
else: else:
self.write('\n\n\n') self.write('\n\n\n')
self.indentLess() self.indent_less()
self.prune() # stop recursing self.prune() # stop recursing
def n_list_compr(self, node): def n_list_compr(self, node):
@@ -977,9 +978,9 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.println(':') self.println(':')
# class body # class body
self.indentMore() self.indent_more()
self.build_class(subclass) self.build_class(subclass)
self.indentLess() self.indent_less()
self.currentclass = cclass self.currentclass = cclass
self.set_pos_info(node, start, len(self.f.getvalue())) self.set_pos_info(node, start, len(self.f.getvalue()))
@@ -1316,7 +1317,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
p = self.prec p = self.prec
self.prec = 100 self.prec = 100
self.indentMore(INDENT_PER_LEVEL) self.indent_more(INDENT_PER_LEVEL)
line_seperator = ',\n' + self.indent line_seperator = ',\n' + self.indent
sep = INDENT_PER_LEVEL[:-1] sep = INDENT_PER_LEVEL[:-1]
start = len(self.f.getvalue()) start = len(self.f.getvalue())
@@ -1393,7 +1394,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
n.parent = node n.parent = node
self.set_pos_info(n, start, finish) self.set_pos_info(n, start, finish)
self.set_pos_info(node, start, finish) self.set_pos_info(node, start, finish)
self.indentLess(INDENT_PER_LEVEL) self.indent_less(INDENT_PER_LEVEL)
self.prec = p self.prec = p
self.prune() self.prune()
@@ -1429,7 +1430,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
else: else:
flat_elems.append(elem) flat_elems.append(elem)
self.indentMore(INDENT_PER_LEVEL) self.indent_more(INDENT_PER_LEVEL)
if len(node) > 3: if len(node) > 3:
line_separator = ',\n' + self.indent line_separator = ',\n' + self.indent
else: else:
@@ -1454,14 +1455,14 @@ class FragmentsWalker(pysource.SourceWalker, object):
n.parent = node.parent n.parent = node.parent
self.set_pos_info(n, start, finish) self.set_pos_info(n, start, finish)
self.set_pos_info(node, start, finish) self.set_pos_info(node, start, finish)
self.indentLess(INDENT_PER_LEVEL) self.indent_less(INDENT_PER_LEVEL)
self.prec = p self.prec = p
self.prune() self.prune()
def engine(self, entry, startnode): def template_engine(self, entry, startnode):
"""The format template interpetation engine. See the comment at the """The format template interpetation engine. See the comment at the
beginning of this module for the how we interpret format specifications such as beginning of this module for the how we interpret format
%c, %C, and so on. specifications such as %c, %C, and so on.
""" """
# print("-----") # print("-----")
@@ -1498,8 +1499,8 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.write('%') self.write('%')
self.set_pos_info(node, start, len(self.f.getvalue())) self.set_pos_info(node, start, len(self.f.getvalue()))
elif typ == '+': self.indentMore() elif typ == '+': self.indent_more()
elif typ == '-': self.indentLess() elif typ == '-': self.indent_less()
elif typ == '|': self.write(self.indent) elif typ == '|': self.write(self.indent)
# no longer used, since BUILD_TUPLE_n is pretty printed: # no longer used, since BUILD_TUPLE_n is pretty printed:
elif typ == 'r': recurse_node = True elif typ == 'r': recurse_node = True

View File

@@ -3,7 +3,7 @@
""" """
All the crazy things we have to do to handle Python functions All the crazy things we have to do to handle Python functions
""" """
from xdis.code import iscode from xdis.code import iscode, code_has_star_arg, code_has_star_star_arg
from uncompyle6.scanner import Code from uncompyle6.scanner import Code
from uncompyle6.parsers.astnode import AST from uncompyle6.parsers.astnode import AST
from uncompyle6 import PYTHON3 from uncompyle6 import PYTHON3
@@ -45,17 +45,6 @@ def find_none(node):
return True return True
return False return False
# FIXME: put this in xdis
def code_has_star_arg(code):
"""Return True iff
the code object has a variable positional parameter (*args-like)"""
return (code.co_flags & 4) != 0
def code_has_star_star_arg(code):
"""Return True iff
The code object has a variable keyword parameter (**kwargs-like)."""
return (code.co_flags & 8) != 0
# FIXME: DRY the below code... # FIXME: DRY the below code...
def make_function3_annotate(self, node, isLambda, nested=1, def make_function3_annotate(self, node, isLambda, nested=1,

View File

@@ -11,62 +11,87 @@ and what they mean).
Upper levels of the grammar is a more-or-less conventional grammar for Upper levels of the grammar is a more-or-less conventional grammar for
Python. Python.
Semantic action rules for nonterminal symbols can be specified here by
creating a method prefaced with "n_" for that nonterminal. For
example, "n_exec_stmt" handles the semantic actions for the
"exec_smnt" nonterminal symbol. Similarly if a method with the name
of the nonterminal is suffixed with "_exit" it will be called after
all of its children are called.
Another other way to specify a semantic rule for a nonterminal is via
rule given in one of the tables MAP_R0, MAP_R, or MAP_DIRECT.
These uses a printf-like syntax to direct substitution from attributes
of the nonterminal and its children..
The rest of the below describes how table-driven semantic actions work
and gives a list of the format specifiers. The default() and engine()
methods implement most of the below.
Step 1 determines a table (T) and a path to a
table key (K) from the node type (N) (other nodes are shown as O):
N N N&K
/ | ... \ / | ... \ / | ... \
O O O O O K O O O
|
K
MAP_R0 (TABLE_R0) MAP_R (TABLE_R) MAP_DIRECT (TABLE_DIRECT)
The default is a direct mapping. The key K is then extracted from the
subtree and used to find a table entry T[K], if any. The result is a
format string and arguments (a la printf()) for the formatting engine.
Escapes in the format string are:
%c evaluate children N[A] recursively*
%C evaluate children N[A[0]]..N[A[1]-1] recursively, separate by A[2]*
%P same as %C but sets operator precedence
%D same as %C but is for left-recursive lists like kwargs which
goes to epsilon at the beginning. Using %C an extra separator
with an epsilon appears at the beginning
%, print ',' if last %C only printed one item. This is mostly for tuples
on the LHS of an assignment statement since BUILD_TUPLE_n pretty-prints
other tuples.
%| tab to current indentation level
%+ increase current indentation level
%- decrease current indentation level
%{...} evaluate ... in context of N
%% literal '%'
%p evaluate N setting precedence
* indicates an argument (A) required.
The '%' may optionally be followed by a number (C) in square brackets, which
makes the engine walk down to N[C] before evaluating the escape code.
""" """
# The below is a bit long, but still it is somewhat abbreviated.
# See https://github.com/rocky/python-uncompyle6/wiki/Table-driven-semantic-actions.
# for a more complete explanation, nicely marked up and with examples.
#
#
# Semantic action rules for nonterminal symbols can be specified here by
# creating a method prefaced with "n_" for that nonterminal. For
# example, "n_exec_stmt" handles the semantic actions for the
# "exec_stmt" nonterminal symbol. Similarly if a method with the name
# of the nonterminal is suffixed with "_exit" it will be called after
# all of its children are called.
#
# However if this were done for all of the rules, this file would be even longer
# than it is already.
#
# Another, more compact, way to specify a semantic rule for a nonterminal is via a
# rule given in one of the tables MAP_R0, MAP_R, or MAP_DIRECT.
#
# These use a printf-like syntax to direct substitution from attributes
# of the nonterminal and its children.
#
# The rest of the below describes how table-driven semantic actions work
# and gives a list of the format specifiers. The default() and
# template_engine() methods implement most of the below.
#
# Step 1 determines a table (T) and a path to a
# table key (K) from the node type (N) (other nodes are shown as O):
#
# N&K N N
# / | ... \ / | ... \ / | ... \
# O O O O O K O O O
# |
# K
# TABLE_DIRECT TABLE_R TABLE_R0
#
# The default is a "TABLE_DIRECT" mapping. The key K is then extracted from the
# subtree and used to find a table entry T[K], if any. The result is a
# format string and arguments (a la printf()) for the formatting engine.
# Escapes in the format string are:
#
# %c evaluate the node recursively. Its argument is a single
# integer representing a node index.
# %p like %c but sets the operator precedence.
# Its argument then is a tuple indicating the node
# index and the precedence value, an integer.
#
# %C evaluate children recursively, with sibling children separated by the
# given string. It needs a 3-tuple: a starting node, the maximum
# value of an end node, and a string to be inserted between sibling children
#
# %, Append ',' if last %C only printed one item. This is mostly for tuples
# on the LHS of an assignment statement since BUILD_TUPLE_n pretty-prints
# other tuples. The specifier takes no arguments
#
# %P same as %C but sets operator precedence. Its argument is a 4-tuple:
# the low and high node indices, a separator string, and the precedence
# value, an integer.
#
# %D Same as `%C`, but for left-recursive lists like kwargs, which go
# to epsilon at the beginning. It needs a 3-tuple: a starting node, the
# maximum value of an end node, and a string to be inserted between
# sibling children. If we were to use `%C` an extra separator with an
# epsilon would appear at the beginning.
#
# %| Insert spaces to the current indentation level. Takes no arguments.
#
# %+ increase current indentation level. Takes no arguments.
#
# %- decrease current indentation level. Takes no arguments.
#
# %{...} evaluate ... in context of N
#
# %% literal '%'. Takes no arguments.
#
#
# The '%' may optionally be followed by a number (C) in square
# brackets, which makes the template_engine walk down to N[C] before
# evaluating the escape code.
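A minimal sketch of how one of these specifiers drives output, following the template_engine() call exercised in pytest/test_pysource.py above ('--%c--' is an illustrative format string, not an entry from the real tables; Python 3 is assumed for the StringIO import):
from io import StringIO
from uncompyle6.semantics.consts import NONE
from uncompyle6.semantics.pysource import SourceWalker
sw = SourceWalker(2.7, StringIO(), None)
sw.ast = NONE
# %c evaluates the child node at index 0; here that is the NONE constant.
sw.template_engine(('--%c--', 0), NONE)
assert sw.f.getvalue() == '--None--'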
from __future__ import print_function from __future__ import print_function
import sys import sys
@@ -124,6 +149,29 @@ class SourceWalker(GenericASTTraversal, object):
debug_parser=PARSER_DEFAULT_DEBUG, debug_parser=PARSER_DEFAULT_DEBUG,
compile_mode='exec', is_pypy=False, compile_mode='exec', is_pypy=False,
linestarts={}): linestarts={}):
"""version is the Python version (a float) of the Python dialect
of both the AST and language we should produce.
out is an IO-like file pointer to where the output should go. It
should have a getvalue() method.
scanner is a method to call when we need to scan tokens. Sometimes
in producing output we will run across further tokens that need
to be scanned.
If showast is True, we print the AST tree.
compile_mode is either 'exec' or 'single'. It is the compile
mode that was used to create the AST and specifies a grammar variant within
a Python version to use.
is_pypy should be True if the AST was generated for PyPy.
linestarts is a dictionary of line number to bytecode offset. This
can sometimes assist in determining which kind of source-code construct
to use when there is ambiguity.
"""
GenericASTTraversal.__init__(self, ast=None) GenericASTTraversal.__init__(self, ast=None)
self.scanner = scanner self.scanner = scanner
params = { params = {
@@ -306,7 +354,7 @@ class SourceWalker(GenericASTTraversal, object):
# MAKE_FUNCTION .. # MAKE_FUNCTION ..
code = node[-3] code = node[-3]
self.indentMore() self.indent_more()
for annotate_last in range(len(node)-1, -1, -1): for annotate_last in range(len(node)-1, -1, -1):
if node[annotate_last] == 'annotate_tuple': if node[annotate_last] == 'annotate_tuple':
break break
@@ -326,7 +374,7 @@ class SourceWalker(GenericASTTraversal, object):
self.write('\n\n') self.write('\n\n')
else: else:
self.write('\n\n\n') self.write('\n\n\n')
self.indentLess() self.indent_less()
self.prune() # stop recursing self.prune() # stop recursing
self.n_mkfunc_annotate = n_mkfunc_annotate self.n_mkfunc_annotate = n_mkfunc_annotate
@@ -361,8 +409,10 @@ class SourceWalker(GenericASTTraversal, object):
node.type == 'call_function' node.type == 'call_function'
p = self.prec p = self.prec
self.prec = 80 self.prec = 80
self.engine(('%c(%P)', 0, (1, -4, ', ', 100)), node) self.template_engine(('%c(%P)', 0,
(1, -4, ', ', 100)), node)
self.prec = p self.prec = p
node.type == 'async_call_function'
self.prune() self.prune()
self.n_async_call_function = n_async_call_function self.n_async_call_function = n_async_call_function
self.n_build_list_unpack = self.n_build_list self.n_build_list_unpack = self.n_build_list
@@ -402,9 +452,11 @@ class SourceWalker(GenericASTTraversal, object):
is_code = hasattr(code_node, 'attr') and iscode(code_node.attr) is_code = hasattr(code_node, 'attr') and iscode(code_node.attr)
if (is_code and if (is_code and
(code_node.attr.co_flags & COMPILER_FLAG_BIT['COROUTINE'])): (code_node.attr.co_flags & COMPILER_FLAG_BIT['COROUTINE'])):
self.engine(('\n\n%|async def %c\n', -2), node) self.template_engine(('\n\n%|async def %c\n',
-2), node)
else: else:
self.engine(('\n\n%|def %c\n', -2), node) self.template_engine(('\n\n%|def %c\n', -2),
node)
self.prune() self.prune()
self.n_funcdef = n_funcdef self.n_funcdef = n_funcdef
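The n_funcdef handler above keys off the code object's COROUTINE compiler flag to choose between emitting "def" and "async def". A minimal, self-contained sketch of that test follows; the constant and helper names are invented for illustration and are not part of uncompyle6.

    # CPython sets this bit in co_flags for "async def" functions (3.5+);
    # xdis exposes the same bit as COMPILER_FLAG_BIT['COROUTINE'].
    CO_COROUTINE = 0x0080

    def def_keyword(code_obj):
        """Pick the keyword that should introduce the reconstructed function."""
        return 'async def' if code_obj.co_flags & CO_COROUTINE else 'def'

    # Example:
    #   async def f(): pass
    #   def_keyword(f.__code__)  -> 'async def'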
@@ -502,10 +554,10 @@ class SourceWalker(GenericASTTraversal, object):
super(SourceWalker, self).preorder(node) super(SourceWalker, self).preorder(node)
self.set_pos_info(node) self.set_pos_info(node)
def indentMore(self, indent=TAB): def indent_more(self, indent=TAB):
self.indent += indent self.indent += indent
def indentLess(self, indent=TAB): def indent_less(self, indent=TAB):
self.indent = self.indent[:-len(indent)] self.indent = self.indent[:-len(indent)]
def traverse(self, node, indent=None, isLambda=False): def traverse(self, node, indent=None, isLambda=False):
@@ -823,9 +875,9 @@ class SourceWalker(GenericASTTraversal, object):
self.write(self.indent, 'if ') self.write(self.indent, 'if ')
self.preorder(node[0]) self.preorder(node[0])
self.println(':') self.println(':')
self.indentMore() self.indent_more()
self.preorder(node[1]) self.preorder(node[1])
self.indentLess() self.indent_less()
if_ret_at_end = False if_ret_at_end = False
if len(return_stmts_node[0]) >= 3: if len(return_stmts_node[0]) >= 3:
@@ -844,14 +896,14 @@ class SourceWalker(GenericASTTraversal, object):
prev_stmt_is_if_ret = False prev_stmt_is_if_ret = False
if not past_else and not if_ret_at_end: if not past_else and not if_ret_at_end:
self.println(self.indent, 'else:') self.println(self.indent, 'else:')
self.indentMore() self.indent_more()
past_else = True past_else = True
self.preorder(n) self.preorder(n)
if not past_else or if_ret_at_end: if not past_else or if_ret_at_end:
self.println(self.indent, 'else:') self.println(self.indent, 'else:')
self.indentMore() self.indent_more()
self.preorder(return_stmts_node[1]) self.preorder(return_stmts_node[1])
self.indentLess() self.indent_less()
self.prune() self.prune()
n_ifelsestmtr2 = n_ifelsestmtr n_ifelsestmtr2 = n_ifelsestmtr
@@ -873,17 +925,17 @@ class SourceWalker(GenericASTTraversal, object):
self.write(self.indent, 'elif ') self.write(self.indent, 'elif ')
self.preorder(node[0]) self.preorder(node[0])
self.println(':') self.println(':')
self.indentMore() self.indent_more()
self.preorder(node[1]) self.preorder(node[1])
self.indentLess() self.indent_less()
for n in return_stmts_node[0]: for n in return_stmts_node[0]:
n[0].type = 'elifstmt' n[0].type = 'elifstmt'
self.preorder(n) self.preorder(n)
self.println(self.indent, 'else:') self.println(self.indent, 'else:')
self.indentMore() self.indent_more()
self.preorder(return_stmts_node[1]) self.preorder(return_stmts_node[1])
self.indentLess() self.indent_less()
self.prune() self.prune()
def n_import_as(self, node): def n_import_as(self, node):
@@ -924,14 +976,14 @@ class SourceWalker(GenericASTTraversal, object):
func_name = code_node.attr.co_name func_name = code_node.attr.co_name
self.write(func_name) self.write(func_name)
self.indentMore() self.indent_more()
self.make_function(node, isLambda=False, codeNode=code_node) self.make_function(node, isLambda=False, codeNode=code_node)
if len(self.param_stack) > 1: if len(self.param_stack) > 1:
self.write('\n\n') self.write('\n\n')
else: else:
self.write('\n\n\n') self.write('\n\n\n')
self.indentLess() self.indent_less()
self.prune() # stop recursing self.prune() # stop recursing
def make_function(self, node, isLambda, nested=1, def make_function(self, node, isLambda, nested=1,
@@ -1402,9 +1454,9 @@ class SourceWalker(GenericASTTraversal, object):
self.println(':') self.println(':')
# class body # class body
self.indentMore() self.indent_more()
self.build_class(subclass_code) self.build_class(subclass_code)
self.indentLess() self.indent_less()
self.currentclass = cclass self.currentclass = cclass
if len(self.param_stack) > 1: if len(self.param_stack) > 1:
@@ -1475,7 +1527,7 @@ class SourceWalker(GenericASTTraversal, object):
p = self.prec p = self.prec
self.prec = 100 self.prec = 100
self.indentMore(INDENT_PER_LEVEL) self.indent_more(INDENT_PER_LEVEL)
sep = INDENT_PER_LEVEL[:-1] sep = INDENT_PER_LEVEL[:-1]
self.write('{') self.write('{')
line_number = self.line_number line_number = self.line_number
@@ -1613,7 +1665,7 @@ class SourceWalker(GenericASTTraversal, object):
if sep.startswith(",\n"): if sep.startswith(",\n"):
self.write(sep[1:]) self.write(sep[1:])
self.write('}') self.write('}')
self.indentLess(INDENT_PER_LEVEL) self.indent_less(INDENT_PER_LEVEL)
self.prec = p self.prec = p
self.prune() self.prune()
@@ -1664,7 +1716,7 @@ class SourceWalker(GenericASTTraversal, object):
else: else:
flat_elems.append(elem) flat_elems.append(elem)
self.indentMore(INDENT_PER_LEVEL) self.indent_more(INDENT_PER_LEVEL)
sep = '' sep = ''
for elem in flat_elems: for elem in flat_elems:
@@ -1689,7 +1741,7 @@ class SourceWalker(GenericASTTraversal, object):
if lastnode.attr == 1 and lastnodetype.startswith('BUILD_TUPLE'): if lastnode.attr == 1 and lastnodetype.startswith('BUILD_TUPLE'):
self.write(',') self.write(',')
self.write(endchar) self.write(endchar)
self.indentLess(INDENT_PER_LEVEL) self.indent_less(INDENT_PER_LEVEL)
self.prec = p self.prec = p
self.prune() self.prune()
@@ -1740,11 +1792,12 @@ class SourceWalker(GenericASTTraversal, object):
node[-2][0].type = 'unpack_w_parens' node[-2][0].type = 'unpack_w_parens'
self.default(node) self.default(node)
def engine(self, entry, startnode): def template_engine(self, entry, startnode):
"""The format template interpetation engine. See the comment at the """The format template interpetation engine. See the comment at the
beginning of this module for the how we interpret format specifications such as beginning of this module for the how we interpret format
%c, %C, and so on. specifications such as %c, %C, and so on.
""" """
# self.println("----> ", startnode.type, ', ', entry[0]) # self.println("----> ", startnode.type, ', ', entry[0])
fmt = entry[0] fmt = entry[0]
arg = 1 arg = 1
@@ -1763,10 +1816,10 @@ class SourceWalker(GenericASTTraversal, object):
if typ == '%': self.write('%') if typ == '%': self.write('%')
elif typ == '+': elif typ == '+':
self.line_number += 1 self.line_number += 1
self.indentMore() self.indent_more()
elif typ == '-': elif typ == '-':
self.line_number += 1 self.line_number += 1
self.indentLess() self.indent_less()
elif typ == '|': elif typ == '|':
self.line_number += 1 self.line_number += 1
self.write(self.indent) self.write(self.indent)
@@ -1777,7 +1830,6 @@ class SourceWalker(GenericASTTraversal, object):
node[0].attr == 1): node[0].attr == 1):
self.write(',') self.write(',')
elif typ == 'c': elif typ == 'c':
if isinstance(entry[arg], int):
entry_node = node[entry[arg]] entry_node = node[entry[arg]]
self.preorder(entry_node) self.preorder(entry_node)
arg += 1 arg += 1
@@ -1847,7 +1899,7 @@ class SourceWalker(GenericASTTraversal, object):
pass pass
if key.type in table: if key.type in table:
self.engine(table[key.type], node) self.template_engine(table[key.type], node)
self.prune() self.prune()
def customize(self, customize): def customize(self, customize):
@@ -1871,7 +1923,7 @@ class SourceWalker(GenericASTTraversal, object):
'CALL_FUNCTION_VAR_KW', 'CALL_FUNCTION_KW'): 'CALL_FUNCTION_VAR_KW', 'CALL_FUNCTION_KW'):
if v == 0: if v == 0:
str = '%c(%C' # '%C' is a dummy here ... str = '%c(%C' # '%C' is a dummy here ...
p2 = (0, 0, None) # .. because of this p2 = (0, 0, None) # .. because of the None in this
else: else:
str = '%c(%C, ' str = '%c(%C, '
p2 = (1, -2, ', ') p2 = (1, -2, ', ')
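As a rough illustration of what a generated entry like the one above means at format time (a simplified sketch, not the real template_engine; the helper name is invented): a '%C' argument such as (1, -2, ', ') selects a slice of the node's already-rendered children and joins them with the separator.

    def expand_C(rendered_children, spec):
        """Sketch of a '%C' expansion: spec is (low, high, sep); the
        low/high pair is used as a Python slice over the children."""
        low, high, sep = spec
        return sep.join(rendered_children[low:high])

    # e.g. for a call node rendered as ['f', "'a'", 'x', 'CALL_FUNCTION_2']:
    #   expand_C(['f', "'a'", 'x', 'CALL_FUNCTION_2'], (1, -1, ', '))
    #   -> "'a', x"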

View File

@@ -45,7 +45,7 @@ BIN_OP_FUNCS = {
'BINARY_OR': operator.or_, 'BINARY_OR': operator.or_,
} }
JUMP_OPs = None JUMP_OPS = None
# --- exceptions --- # --- exceptions ---
@@ -227,8 +227,8 @@ def cmp_code_objects(version, is_pypy, code_obj1, code_obj2,
import uncompyle6.scanners.scanner36 as scan import uncompyle6.scanners.scanner36 as scan
scanner = scan.Scanner36() scanner = scan.Scanner36()
global JUMP_OPs global JUMP_OPS
JUMP_OPs = list(scan.JUMP_OPs) + ['JUMP_BACK'] JUMP_OPS = list(scan.JUMP_OPS) + ['JUMP_BACK']
# use changed Token class # use changed Token class
# We (re)set this here to save exception handling, # We (re)set this here to save exception handling,
@@ -333,7 +333,7 @@ def cmp_code_objects(version, is_pypy, code_obj1, code_obj2,
else: else:
raise CmpErrorCode(name, tokens1[i1].offset, tokens1[i1], raise CmpErrorCode(name, tokens1[i1].offset, tokens1[i1],
tokens2[i2], tokens1, tokens2) tokens2[i2], tokens1, tokens2)
elif tokens1[i1].type in JUMP_OPs and tokens1[i1].pattr != tokens2[i2].pattr: elif tokens1[i1].type in JUMP_OPS and tokens1[i1].pattr != tokens2[i2].pattr:
if tokens1[i1].type == 'JUMP_BACK': if tokens1[i1].type == 'JUMP_BACK':
dest1 = int(tokens1[i1].pattr) dest1 = int(tokens1[i1].pattr)
dest2 = int(tokens2[i2].pattr) dest2 = int(tokens2[i2].pattr)
@@ -396,7 +396,7 @@ class Token(scanner.Token):
return 0 return 0
if t == 'JUMP_IF_FALSE_OR_POP' and o.type == 'POP_JUMP_IF_FALSE': if t == 'JUMP_IF_FALSE_OR_POP' and o.type == 'POP_JUMP_IF_FALSE':
return 0 return 0
if JUMP_OPs and t in JUMP_OPs: if JUMP_OPS and t in JUMP_OPS:
# ignore offset # ignore offset
return t == o.type return t == o.type
return (t == o.type) or self.pattr == o.pattr return (t == o.type) or self.pattr == o.pattr
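The JUMP_OPS handling above boils down to this rule: after a decompile/recompile round trip, jump instructions of the same kind should verify as equal even when their raw target offsets differ. A minimal stand-alone sketch of that rule (names invented; the real logic lives in this module's Token class):

    JUMP_OPS_EXAMPLE = {'JUMP_FORWARD', 'JUMP_ABSOLUTE', 'JUMP_BACK'}

    def jumps_equivalent(name1, target1, name2, target2):
        """Compare two instructions, ignoring targets for jump opcodes."""
        if name1 in JUMP_OPS_EXAMPLE and name2 in JUMP_OPS_EXAMPLE:
            return name1 == name2          # targets may legitimately differ
        return name1 == name2 and target1 == target2

    # jumps_equivalent('JUMP_FORWARD', 12, 'JUMP_FORWARD', 20)  -> True
    # jumps_equivalent('LOAD_CONST', 1, 'LOAD_CONST', 2)        -> False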

View File

@@ -1,3 +1,3 @@
# This file is suitable for sourcing inside bash as # This file is suitable for sourcing inside bash as
# well as importing into Python # well as importing into Python
VERSION='2.11.3' VERSION='2.12.0'