python-uncompyle6 (mirror of https://github.com/rocky/python-uncompyle6.git)

Compare commits: 25 commits, release-3.... to release-3....

Commit SHA1s:
9415ad34ff, 476db9ad1d, 9ba0b1bad2, b7942bc5f2, 3ac4f1ee61,
55d2df04f7, 6d529d3cee, 201e5b18b1, ac2bbfc65a, 3b8e6635e2,
7e172b63d1, c01eb554ed, d32e67891b, 78b8d1cd06, 600e56b1d7,
170504c518, e3d918df3d, e7b62a722f, 92d63ac598, 79bed3419f,
0353b74a7a, 67910e7d8e, 61fa4fe391, a8cdcc4d85, f0176add7a
NEWS (9 changed lines)
@@ -1,3 +1,12 @@
+uncompyle6 3.2.3 2018-06-04 Michael Cohen flips and Fleetwood Redux
+
+- Python 1.3 support 3.0 bug and
+- fix botched parameter ordering of 3.x in last release
+
+uncompyle6 3.2.2 2018-06-04 When I'm 64
+
+- Python 3.0 support and bug fixes
+
 uncompyle6 3.2.1 2018-06-04 MF
 
 - Python 1.4 and 1.5 bug fixes
README.rst (65 changed lines)
@@ -1,4 +1,4 @@
-|buildstatus|
+|buildstatus| |Latest Version| |Supported Python Versions|
 
 uncompyle6
 ==========
@@ -11,8 +11,9 @@ Introduction
 ------------
 
 *uncompyle6* translates Python bytecode back into equivalent Python
-source code. It accepts bytecodes from Python version 1.5, and 2.1 to
-3.7 or so, including PyPy bytecode and Dropbox's Python 2.5 bytecode.
+source code. It accepts bytecodes from Python version 1.3 to version
+3.7, spanning over 22 years of Python releases. We include Dropbox's
+Python 2.5 bytecode and some PyPy bytecode.
 
 Why this?
 ---------
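For a concrete sense of the interface this README describes, here is a minimal
round-trip sketch. It assumes only the `deparse_code` entry point and the
`PYTHON_VERSION` constant, both of which the test files later in this changeset
import verbatim; the snippet and variable names are illustrative::

    # Compile a small snippet, then ask uncompyle6 to turn the code object
    # back into source text; mirrors the single-mode test in this changeset.
    from uncompyle6 import PYTHON_VERSION, deparse_code

    source = "i = 1\n"
    code = compile(source, '<string>', 'single')
    deparsed = deparse_code(PYTHON_VERSION, code, compile_mode='single')
    print(deparsed.text)   # expected to round-trip to: i = 1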
@@ -29,11 +30,11 @@ CPython bytecode decompilers is the ability to deparse just
 *fragments* of source code and give source-code information around a
 given bytecode offset.
 
-I use the tree fragments to deparse fragments of code inside my
-trepan_ debuggers_. For that, bytecode offsets are recorded and
-associated with fragments of the source code. This purpose, although
-compatible with the original intention, is yet a little bit different.
-See this_ for more information.
+I use the tree fragments to deparse fragments of code *at run time*
+inside my trepan_ debuggers_. For that, bytecode offsets are recorded
+and associated with fragments of the source code. This purpose,
+although compatible with the original intention, is yet a little bit
+different. See this_ for more information.
 
 Python fragment deparsing given an instruction offset is useful in
 showing stack traces and can be incorporated into any program that
@@ -58,7 +59,7 @@ provides decompilation for subset of Python versions, we generally do
 demonstrably better for those as well.
 
 How can we tell? By taking Python bytecode that comes distributed with
-that version of Python and decompiling these. Among htose that
+that version of Python and decompiling these. Among those that
 successfully decompile, we can then make sure the resulting programs
 are syntactically correct by running the Python interpreter for that
 bytecode version. Finally, in cases where the program has a test for
@@ -75,7 +76,7 @@ Requirements
 The code here can be run on Python versions 2.6 or later, PyPy 3-2.4,
 or PyPy-5.0.1. Python versions 2.4-2.7 are supported in the
 python-2.4 branch. The bytecode files it can read have been tested on
-Python bytecodes from versions 1.5, 2.1-2.7, and 3.0-3.6 and the
+Python bytecodes from versions 1.4, 2.1-2.7, and 3.0-3.6 and the
 above-mentioned PyPy versions.
 
 Installation
@@ -151,12 +152,12 @@ for that bytecode version. Having done this the bytecode produced
 could be compared with the original bytecode. However as Python's code
 generation got better, this is no longer feasible.
 
-There is a kind of *weak verification* that we use that doesn't check
-bytecode for equivalence but does check to see if the resulting
-decompiled source is a valid Python program by running the Python
-interpreter. Because the Python language has changed so much, for best
-results you should use the same Python version in checking as was used
-in creating the bytecode.
+There is *weak verification* that we use that doesn't check bytecode for
+equivalence but does check to see if the resulting decompiled source
+is a valid Python program by running the Python interpreter. Because
+the Python language has changed so much, for best results you should
+use the same Python version in checking as was used in creating the
+bytecode.
 
 There are however an interesting class of these programs that is
 readily available give stronger verification: those programs that
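The *weak verification* described above amounts to recompiling the decompiled
source with a matching interpreter and accepting it if no syntax error is
raised. A minimal sketch using only the built-in compile(); the helper name is
illustrative, not part of the project::

    # Weak verification: do not compare against the original bytecode,
    # only check that the decompiled text is a syntactically valid program
    # for the interpreter running this check.
    def weakly_verify(decompiled_source, filename='<decompiled>'):
        try:
            compile(decompiled_source, filename, 'exec')
            return True
        except SyntaxError:
            return False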
@@ -174,19 +175,21 @@ that era was minimal)
 
 There is some work to do on the lower end Python versions which is
 more difficult for us to handle since we don't have a Python
-interpreter for versions 1.5, 1.6, and 2.0.
+interpreter for versions 1.6, and 2.0.
 
 In the Python 3 series, Python support is strongest around 3.4 or
 3.3 and drops off as you move further away from those versions. Python
-3.6 changes things drastically by using word codes rather than byte
-codes. As a result, the jump offset field in a jump instruction
-argument has been reduced. This makes the `EXTENDED_ARG` instructions
-are now more prevalent in jump instruction; previously they had been
-rare. Perhaps to compensate for the additional `EXTENDED_ARG`
-instructions, additional jump optimization has been added. So in sum
-handling control flow by ad hoc means as is currently done is worse.
+3.0 is weird in that it in some ways resembles 2.6 more than it does
+3.1 or 2.7. Python 3.6 changes things drastically by using word codes
+rather than byte codes. As a result, the jump offset field in a jump
+instruction argument has been reduced. This means the `EXTENDED_ARG`
+instructions are now more prevalent in jump instructions; previously
+they had been rare. Perhaps to compensate for the additional
+`EXTENDED_ARG` instructions, additional jump optimization has been
+added. So in sum handling control flow by ad hoc means as is currently
+done is worse.
 
-Also, between Python 3.5, 3.6 and 3.7 there have been major changes to the
+Between Python 3.5, 3.6 and 3.7 there have been major changes to the
 `MAKE_FUNCTION` and `CALL_FUNCTION` instructions.
 
 Currently not all Python magic numbers are supported. Specifically in
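The wordcode effect described above can be seen with the standard-library dis
module alone. The sketch below is illustrative, not part of the project: it
builds a function whose if-body is long enough that the jump around it no
longer fits in a one-byte wordcode argument, so CPython 3.6 and later will
typically emit EXTENDED_ARG before the jump, while 3.5 and earlier, with
two-byte jump arguments, usually do not::

    import dis

    # An `if` whose body spans well over 255 bytes of bytecode forces the
    # jump around it to carry an argument larger than one byte.
    src = ("def f(x):\n"
           "    if x:\n"
           + "".join("        x += %d\n" % i for i in range(200))
           + "    return x\n")
    ns = {}
    exec(compile(src, "<example>", "exec"), ns)
    print([i.opname for i in dis.get_instructions(ns["f"])
           if i.opname == "EXTENDED_ARG"])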
@@ -212,11 +215,12 @@ There is lots to do, so please dig in and help.
 See Also
 --------
 
-* https://github.com/zrax/pycdc : supports all versions of Python and is written in C++. Support for Python 3 is a bit lacking though.
-* https://code.google.com/archive/p/unpyc3/ : supports Python 3.2 only. The above projects use a different decompiling technique than what is used here.
-* https://github.com/figment/unpyc3/ : fork of above, but supports Python 3.3 only. Includes some fixes like supporting function annotations
-* The HISTORY_ file.
+* https://github.com/zrax/pycdc : purports to support all versions of Python. It is written in C++ and is most accurate for Python versions around 2.7 and 3.3 when the code was more actively developed. Accuracy for more recent versions of Python 3 and early versions of Python are especially lacking. See its `issue tracker <https://github.com/zrax/pycdc/issues>`_ for details. Currently lightly maintained.
+* https://code.google.com/archive/p/unpyc3/ : supports Python 3.2 only. The above projects use a different decompiling technique than what is used here. Currently unmaintained.
+* https://github.com/figment/unpyc3/ : fork of above, but supports Python 3.3 only. Includes some fixes like supporting function annotations. Currently unmaintained.
+* https://github.com/wibiti/uncompyle2 : supports Python 2.7 only, but does that fairly well. There are situations where `uncompyle6` results are incorrect while `uncompyle2` results are not, but more often uncompyle6 is correct when uncompyle2 is not. Because `uncompyle6` adheres to accuracy over idiomatic Python, `uncompyle2` can produce more natural-looking code when it is correct. Currently `uncompyle2` is lightly maintained. See its issue `tracker <https://github.com/wibiti/uncompyle2/issues>`_ for more details.
+* `How to report a bug <https://github.com/rocky/python-uncompyle6/blob/master/HOW-TO-REPORT-A-BUG.md>`_
+* The HISTORY_ file.
 * https://github.com/rocky/python-xdis : Cross Python version disassembler
 * https://github.com/rocky/python-xasm : Cross Python version assembler
 * https://github.com/rocky/python-uncompyle6/wiki : Wiki Documents which describe the code and aspects of it in more detail
@@ -234,3 +238,6 @@ See Also
 .. _PJOrion: http://www.koreanrandom.com/forum/topic/15280-pjorion-%D1%80%D0%B5%D0%B4%D0%B0%D0%BA%D1%82%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5-%D0%BA%D0%BE%D0%BC%D0%BF%D0%B8%D0%BB%D1%8F%D1%86%D0%B8%D1%8F-%D0%B4%D0%B5%D0%BA%D0%BE%D0%BC%D0%BF%D0%B8%D0%BB%D1%8F%D1%86%D0%B8%D1%8F-%D0%BE%D0%B1%D1%84
 .. _Deobfuscator: https://github.com/extremecoders-re/PjOrion-Deobfuscator
 .. _Py2EXE: https://en.wikipedia.org/wiki/Py2exe
+.. |Supported Python Versions| image:: https://img.shields.io/pypi/pyversions/uncompyle6.svg
+.. |Latest Version| image:: https://badge.fury.io/py/uncompyle6.svg
+   :target: https://badge.fury.io/py/uncompyle6
@@ -35,6 +35,7 @@ classifiers = ['Development Status :: 5 - Production/Stable',
                'Programming Language :: Python :: 2.5',
                'Programming Language :: Python :: 2.6',
                'Programming Language :: Python :: 2.7',
+               'Programming Language :: Python :: 3.0',
                'Programming Language :: Python :: 3.1',
                'Programming Language :: Python :: 3.2',
                'Programming Language :: Python :: 3.3',
@@ -56,7 +57,7 @@ entry_points = {
     ]}
 ftp_url = None
 install_requires = ['spark-parser >= 1.8.5, < 1.9.0',
-                    'xdis >= 3.8.2, < 3.9.0', 'six']
+                    'xdis >= 3.8.4, < 3.9.0']
 
 license = 'GPL3'
 mailing_list = 'python-debugger@googlegroups.com'
|
||||
fi
|
||||
cd ..
|
||||
for version in $PYVERSIONS; do
|
||||
echo --- $version ---
|
||||
if ! pyenv local $version ; then
|
||||
exit $?
|
||||
fi
|
||||
@@ -23,4 +24,5 @@ for version in $PYVERSIONS; do
|
||||
if ! make check; then
|
||||
exit $?
|
||||
fi
|
||||
echo === $version ===
|
||||
done
|
||||
|
@@ -15,6 +15,7 @@ fi
 
 cd ..
 for version in $PYVERSIONS; do
+    echo --- $version ---
     if ! pyenv local $version ; then
         exit $?
     fi
@@ -22,4 +23,5 @@ for version in $PYVERSIONS; do
     if ! make check ; then
         exit $?
     fi
+    echo === $version ===
 done
@@ -1,150 +1,154 @@
# std
import os
# test
import pytest
import hypothesis
from hypothesis import strategies as st
# uncompyle6
from uncompyle6 import PYTHON_VERSION, deparse_code
import pytest

pytestmark = pytest.mark.skipif(PYTHON_VERSION <= 2.6,
                                reason='hypothesis needs 2.7 or later')

if PYTHON_VERSION > 2.6:

    import hypothesis
    from hypothesis import strategies as st

    # uncompyle6


@st.composite
def expressions(draw):
    # todo : would be nice to generate expressions using hypothesis however
    # this is pretty involved so for now just use a corpus of expressions
    # from which to select.
    return draw(st.sampled_from((
        'abc',
        'len(items)',
        'x + 1',
        'lineno',
        'container',
        'self.attribute',
        'self.method()',
        # These expressions are failing, I think these are control
        # flow problems rather than problems with FORMAT_VALUE,
        # however I need to confirm this...
        #'sorted(items, key=lambda x: x.name)',
        #'func(*args, **kwargs)',
        #'text or default',
        #'43 if life_the_universe and everything else None'
    )))


@st.composite
def format_specifiers(draw):
    """
    Generate a valid format specifier using the rules:

    format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]
    fill ::= <any character>
    align ::= "<" | ">" | "=" | "^"
    sign ::= "+" | "-" | " "
    width ::= integer
    precision ::= integer
    type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"

    See https://docs.python.org/2/library/string.html

    :param draw: Let hypothesis draw from other strategies.

    :return: An example format_specifier.
    """
    alphabet_strategy = st.characters(min_codepoint=ord('a'), max_codepoint=ord('z'))
    fill = draw(st.one_of(alphabet_strategy, st.none()))
    align = draw(st.sampled_from(list('<>=^')))
    fill_align = (fill + align or '') if fill else ''

    type_ = draw(st.sampled_from('bcdeEfFgGnosxX%'))
    can_have_sign = type_ in 'deEfFgGnoxX%'
    can_have_comma = type_ in 'deEfFgG%'
    can_have_precision = type_ in 'fFgG'
    can_have_pound = type_ in 'boxX%'
    can_have_zero = type_ in 'oxX'

    sign = draw(st.sampled_from(list('+- ') + [''])) if can_have_sign else ''
    pound = draw(st.sampled_from(('#', '',))) if can_have_pound else ''
    zero = draw(st.sampled_from(('0', '',))) if can_have_zero else ''

    int_strategy = st.integers(min_value=1, max_value=1000)

    width = draw(st.one_of(int_strategy, st.none()))
    width = str(width) if width is not None else ''

    comma = draw(st.sampled_from((',', '',))) if can_have_comma else ''
    if can_have_precision:
        precision = draw(st.one_of(int_strategy, st.none()))
        precision = '.' + str(precision) if precision else ''
    else:
        precision = ''

    return ''.join((fill_align, sign, pound, zero, width, comma, precision, type_,))


@st.composite
def fstrings(draw):
    """
    Generate a valid f-string.
    See https://www.python.org/dev/peps/pep-0498/#specification

    :param draw: Let hypothesis draw from other strategies.

    :return: A valid f-string.
    """
    character_strategy = st.characters(
        blacklist_characters='\r\n\'\\s{}',
        min_codepoint=1,
        max_codepoint=1000,
    )
    is_raw = draw(st.booleans())
    integer_strategy = st.integers(min_value=0, max_value=3)
    expression_count = draw(integer_strategy)
    content = []
    for _ in range(expression_count):
        expression = draw(expressions())
        conversion = draw(st.sampled_from(('', '!s', '!r', '!a',)))
        has_specifier = draw(st.booleans())
        specifier = ':' + draw(format_specifiers()) if has_specifier else ''
        content.append('{{{}{}}}'.format(expression, conversion, specifier))
    content.append(draw(st.text(character_strategy)))
    content = ''.join(content)
    return "f{}'{}'".format('r' if is_raw else '', content)


@pytest.mark.skipif(PYTHON_VERSION < 3.6, reason='need at least python 3.6')
@hypothesis.given(format_specifiers())
def test_format_specifiers(format_specifier):
    """Verify that format_specifiers generates valid specifiers"""
    try:
        exec('"{:' + format_specifier + '}".format(0)')
    except ValueError as e:
        if 'Unknown format code' not in str(e):
            raise


def run_test(text):
    hypothesis.assume(len(text))
    hypothesis.assume("f'{" in text)
    expr = text + '\n'
    code = compile(expr, '<string>', 'single')
    deparsed = deparse_code(PYTHON_VERSION, code, compile_mode='single')
    recompiled = compile(deparsed.text, '<string>', 'single')
    if recompiled != code:
        assert 'dis(' + deparsed.text.strip('\n') + ')' == 'dis(' + expr.strip('\n') + ')'


@pytest.mark.skipif(PYTHON_VERSION < 3.6, reason='need at least python 3.6')
@hypothesis.given(fstrings())
def test_uncompyle_fstring(fstring):
    """Verify uncompyling fstring bytecode"""
    run_test(fstring)


@pytest.mark.skipif(PYTHON_VERSION < 3.6, reason='need at least python 3.6')
@pytest.mark.parametrize('fstring', [
    "f'{abc}{abc!s}'",
    "f'{abc}0'",
])
def test_uncompyle_direct(fstring):
    """useful for debugging"""
    run_test(fstring)
@@ -1,184 +1,185 @@
# std
import string
# 3rd party
from hypothesis import given, assume, example, settings, strategies as st
import pytest
# uncompyle
from validate import validate_uncompyle
from test_fstring import expressions
from uncompyle6 import PYTHON_VERSION
import pytest

pytestmark = pytest.mark.skip(PYTHON_VERSION < 2.7,
                              reason="need at least Python 2.7")

alpha = st.sampled_from(string.ascii_lowercase)
numbers = st.sampled_from(string.digits)
alphanum = st.sampled_from(string.ascii_lowercase + string.digits)

if PYTHON_VERSION > 2.6:
    from hypothesis import given, assume, example, settings, strategies as st
    from validate import validate_uncompyle
    from test_fstring import expressions

    alpha = st.sampled_from(string.ascii_lowercase)
    numbers = st.sampled_from(string.digits)
    alphanum = st.sampled_from(string.ascii_lowercase + string.digits)


@st.composite
def function_calls(draw,
                   min_keyword_args=0, max_keyword_args=5,
                   min_positional_args=0, max_positional_args=5,
                   min_star_args=0, max_star_args=1,
                   min_double_star_args=0, max_double_star_args=1):
    """
    Strategy factory for generating function calls.

    :param draw: Callable which draws examples from other strategies.

    :return: The function call text.
    """
    st_positional_args = st.lists(
        alpha,
        min_size=min_positional_args,
        max_size=max_positional_args
    )
    st_keyword_args = st.lists(
        alpha,
        min_size=min_keyword_args,
        max_size=max_keyword_args
    )
    st_star_args = st.lists(
        alpha,
        min_size=min_star_args,
        max_size=max_star_args
    )
    st_double_star_args = st.lists(
        alpha,
        min_size=min_double_star_args,
        max_size=max_double_star_args
    )

    positional_args = draw(st_positional_args)
    keyword_args = draw(st_keyword_args)
    st_values = st.lists(
        expressions(),
        min_size=len(keyword_args),
        max_size=len(keyword_args)
    )
    keyword_args = [
        x + '=' + e
        for x, e in
        zip(keyword_args, draw(st_values))
    ]
    star_args = ['*' + x for x in draw(st_star_args)]
    double_star_args = ['**' + x for x in draw(st_double_star_args)]

    arguments = positional_args + keyword_args + star_args + double_star_args
    draw(st.randoms()).shuffle(arguments)
    arguments = ','.join(arguments)

    function_call = 'fn({arguments})'.format(arguments=arguments)
    try:
        # TODO: Figure out the exact rules for ordering of positional, keyword,
        # star args, double star args and in which versions the various
        # types of arguments are supported so we don't need to check that the
        # expression compiles like this.
        compile(function_call, '<string>', 'single')
    except:
        assume(False)
    return function_call


def isolated_function_calls(which):
    """
    Returns a strategy for generating function calls, but isolated to
    particular types of arguments, for example only positional arguments.

    This can help reason about debugging errors in specific types of function
    calls.

    :param which: One of 'keyword', 'positional', 'star', 'double_star'

    :return: Strategy for generating a function call isolated to specific
             argument types.
    """
    kwargs = dict(
        max_keyword_args=0,
        max_positional_args=0,
        max_star_args=0,
        max_double_star_args=0,
    )
    kwargs['_'.join(('min', which, 'args'))] = 1
    kwargs['_'.join(('max', which, 'args'))] = 5 if 'star' not in which else 1
    return function_calls(**kwargs)


with settings(max_examples=25):

    def test_function_no_args():
        validate_uncompyle("fn()")

    @pytest.mark.skipif(PYTHON_VERSION < 2.7,
                        reason="need at least Python 2.7")
    @given(isolated_function_calls('positional'))
    @example("fn(0)")
    def test_function_positional_only(expr):
        validate_uncompyle(expr)

    @pytest.mark.skipif(PYTHON_VERSION < 2.7,
                        reason="need at least Python 2.7")
    @given(isolated_function_calls('keyword'))
    @example("fn(a=0)")
    def test_function_call_keyword_only(expr):
        validate_uncompyle(expr)

    @pytest.mark.skipif(PYTHON_VERSION < 2.7,
                        reason="need at least Python 2.7")
    @given(isolated_function_calls('star'))
    @example("fn(*items)")
    def test_function_call_star_only(expr):
        validate_uncompyle(expr)

    @pytest.mark.skipif(PYTHON_VERSION < 2.7,
                        reason="need at least Python 2.7")
    @given(isolated_function_calls('double_star'))
    @example("fn(**{})")
    def test_function_call_double_star_only(expr):
        validate_uncompyle(expr)


@pytest.mark.xfail()
def test_BUILD_CONST_KEY_MAP_BUILD_MAP_UNPACK_WITH_CALL_BUILD_TUPLE_CALL_FUNCTION_EX():
    validate_uncompyle("fn(w=0,m=0,**v)")


@pytest.mark.xfail()
def test_BUILD_MAP_BUILD_MAP_UNPACK_WITH_CALL_BUILD_TUPLE_CALL_FUNCTION_EX():
    validate_uncompyle("fn(a=0,**g)")


@pytest.mark.xfail()
def test_CALL_FUNCTION_EX():
    validate_uncompyle("fn(*g,**j)")


@pytest.mark.xfail()
def test_BUILD_MAP_CALL_FUNCTION_EX():
    validate_uncompyle("fn(*z,u=0)")


@pytest.mark.xfail()
def test_BUILD_TUPLE_CALL_FUNCTION_EX():
    validate_uncompyle("fn(**a)")


@pytest.mark.xfail()
def test_BUILD_MAP_BUILD_TUPLE_BUILD_TUPLE_UNPACK_WITH_CALL_CALL_FUNCTION_EX():
    validate_uncompyle("fn(b,b,b=0,*a)")


@pytest.mark.xfail()
def test_BUILD_TUPLE_BUILD_TUPLE_UNPACK_WITH_CALL_CALL_FUNCTION_EX():
    validate_uncompyle("fn(*c,v)")


@pytest.mark.xfail()
def test_BUILD_CONST_KEY_MAP_CALL_FUNCTION_EX():
    validate_uncompyle("fn(i=0,y=0,*p)")


@pytest.mark.skip(reason='skipping property based test until all individual tests are passing')
@given(function_calls())
def test_function_call(function_call):
    validate_uncompyle(function_call)
@@ -1,21 +1,22 @@
import pytest
from uncompyle6 import PYTHON_VERSION, deparse_code

pytestmark = pytest.mark.skip(PYTHON_VERSION < 2.7,
                              reason="need at least Python 2.7")

@pytest.mark.skip(PYTHON_VERSION < 2.7,
                  reason="need at least Python 2.7")
if PYTHON_VERSION > 2.6:
    def test_single_mode():
        single_expressions = (
            'i = 1',
            'i and (j or k)',
            'i += 1',
            'i = j % 4',
            'i = {}',
            'i = []',
            'for i in range(10):\n i\n',
            'for i in range(10):\n for j in range(10):\n i + j\n',
            'try:\n i\nexcept Exception:\n j\nelse:\n k\n'
        )

        for expr in single_expressions:
            code = compile(expr + '\n', '<string>', 'single')
            assert deparse_code(PYTHON_VERSION, code, compile_mode='single').text == expr + '\n'
@@ -6,16 +6,18 @@ import difflib
import subprocess
import tempfile
import functools
# compatability
import six
# uncompyle6 / xdis
from uncompyle6 import PYTHON_VERSION, IS_PYPY, deparse_code
from uncompyle6 import PYTHON_VERSION, PYTHON3, IS_PYPY, deparse_code
# TODO : I think we can get xdis to support the dis api (python 3 version) by doing something like this there
from xdis.bytecode import Bytecode
from xdis.main import get_opcode
opc = get_opcode(PYTHON_VERSION, IS_PYPY)
Bytecode = functools.partial(Bytecode, opc=opc)

if PYTHON3:
    from io import StringIO
else:
    from StringIO import StringIO

def _dis_to_text(co):
    return Bytecode(co).dis()
@@ -1,2 +1,4 @@
# Pick up stuff from setup.py
hypothesis==2.0.0
pytest
-e .
setup.py (6 changed lines)

@@ -4,12 +4,12 @@ import sys
 """Setup script for the 'uncompyle6' distribution."""
 
 SYS_VERSION = sys.version_info[0:2]
-if not ((2, 6) <= SYS_VERSION <= (3, 7)) or ((3, 0) <= SYS_VERSION <= (3, 0)):
-    mess = "Python Release 2.6 .. 3.7 excluding 3.0 are supported in this code branch."
+if not ((2, 6) <= SYS_VERSION <= (3, 7)):
+    mess = "Python Release 2.6 .. 3.7 are supported in this code branch."
     if ((2, 4) <= SYS_VERSION <= (2, 7)):
         mess += ("\nFor your Python, version %s, use the python-2.4 code/branch." %
                  sys.version[0:3])
-    elif SYS_VERSION < (2, 4) or (3, 0) <= SYS_VERSION:
+    elif SYS_VERSION < (2, 4):
         mess += ("\nThis package is not supported for Python version %s."
                  % sys.version[0:3])
     print(mess)
@@ -22,7 +22,7 @@ COVER_DIR=../tmp/grammar-cover
 # Run short tests
 check-short:
 	@$(PYTHON) -V && PYTHON_VERSION=`$(PYTHON) -V 2>&1 | cut -d ' ' -f 2 | cut -d'.' -f1,2`; \
-	$(MAKE) check-bytecode
+	$(MAKE) check-bytecode-short
 
 # Run all tests
 check:
@@ -91,16 +91,31 @@ check-bytecode-3:
 	  --bytecode-3.1 --bytecode-3.2 --bytecode-3.3 \
 	  --bytecode-3.4 --bytecode-3.5 --bytecode-3.6 --bytecode-pypy3.2
 
-#: Check deparsing bytecode that works running Python 2 and Python 3
+#: Check deparsing on selected bytecode 3.x
+check-bytecode-3-short:
+	$(PYTHON) test_pythonlib.py \
+	  --bytecode-3.4 --bytecode-3.5 --bytecode-3.6
+
+#: Check deparsing bytecode on all Python 2 and Python 3 versions
 check-bytecode: check-bytecode-3
 	$(PYTHON) test_pythonlib.py \
-	  --bytecode-1.4 --bytecode-1.5 \
-	  --bytecode-2.2 --bytecode-2.3 --bytecode-2.4 \
+	  --bytecode-1.3 --bytecode-1.4 --bytecode-1.5 \
+	  --bytecode-2.1 --bytecode-2.2 --bytecode-2.3 --bytecode-2.4 \
 	  --bytecode-2.5 --bytecode-2.6 --bytecode-2.7 \
 	  --bytecode-pypy2.7
 
+
+#: Check deparsing bytecode on selected Python 2 and Python 3 versions
+check-bytecode-short: check-bytecode-3-short
+	$(PYTHON) test_pythonlib.py \
+	  --bytecode-2.6 --bytecode-2.7 --bytecode-pypy2.7
+
+
+#: Check deparsing bytecode 1.3 only
+check-bytecode-1.3:
+	$(PYTHON) test_pythonlib.py --bytecode-1.3
+
 #: Check deparsing bytecode 1.4 only
 check-bytecode-1.4:
 	$(PYTHON) test_pythonlib.py --bytecode-1.4
@@ -210,6 +225,11 @@ check-bytecode-2.7:
 check-bytecode-3.0:
 	$(PYTHON) test_pythonlib.py --bytecode-3.0 --weak-verify
 
+#: Check deparsing Python 3.0
+check-bytecode-3.0:
+	$(PYTHON) test_pythonlib.py --bytecode-3.0 --weak-verify
+#	$(PYTHON) test_pythonlib.py --bytecode-3.0-run --verify-run
+
 #: Check deparsing Python 3.1
 check-bytecode-3.1:
 	$(PYTHON) test_pythonlib.py --bytecode-3.1 --weak-verify
New binary files (not shown):
  test/bytecode_1.3/test_builtin.pyc
  test/bytecode_1.3/test_exceptions.pyc
  test/bytecode_1.3/test_grammar.pyc
  test/bytecode_1.3/test_operations.pyc
  test/bytecode_1.3/testall.pyc
  test/bytecode_1.4/addpack.pyc
  test/bytecode_1.4/anydbm.pyc
  test/bytecode_3.0/01_comprehension.pyc
  test/bytecode_3.0/02_ifelse_lambda.pyc
  test/bytecode_3.0/03_pop_top.pyc
test/simple_source/bug30/03_pop_top.py (new file, 20 lines)

@@ -0,0 +1,20 @@
# From 3.0.1 __dummy_thread.py
# bug was handling else:
def interrupt_main():
    """Set _interrupt flag to True to have start_new_thread raise
    KeyboardInterrupt upon exiting."""
    if _main:
        raise KeyboardInterrupt
    else:
        global _interrupt
        _interrupt = True

# From 3.0.1 ast.py bug was mangling prototype
# def parse(expr, filename='<unknown>', mode='exec'):

# From 3.0.1 bisect
def bisect_left(a, x, lo=0, hi=None):
    while lo:
        if a[mid] < x: lo = mid+1
        else: hi = mid
    return lo
|
||||
[test_curses.py]=1 # Possibly fails on its own but not detected
|
||||
[test_dis.py]=1 # We change line numbers - duh!
|
||||
[test_doctest.py]=1 # Fails on its own
|
||||
[test_format.py]=1 # control flow. uncompyle2 does not have problems here
|
||||
[test_generators.py]=1 # control flow. uncompyle2 has problem here too
|
||||
[test_grammar.py]=1 # Too many stmts. Handle large stmts
|
||||
[test_io.py]=1 # Test takes too long to run
|
||||
|
@@ -78,7 +78,7 @@ for vers in (2.7, 3.4, 3.5, 3.6):
     test_options[key] = (os.path.join(src_dir, pythonlib), PYOC, key, vers)
     pass
 
-for vers in (1.4, 1.5,
+for vers in (1.3, 1.4, 1.5,
              2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7,
              3.0, 3.1, 3.2, 3.3,
              3.4, 3.5, 3.6, 3.7, 'pypy3.2', 'pypy2.7'):
@@ -69,12 +69,12 @@ def usage():
 
 
 def main_bin():
-    if not (sys.version_info[0:2] in ((2, 6), (2, 7),
+    if not (sys.version_info[0:2] in ((2, 6), (2, 7), (3, 0),
                                       (3, 1), (3, 2), (3, 3),
                                       (3, 4), (3, 5), (3, 6),
                                       (3, 7)
                                       )):
-        print('Error: %s requires Python 2.6-2.7, or 3.1-3.7' % program,
+        print('Error: %s requires Python 2.6-3.7' % program,
               file=sys.stderr)
         sys.exit(-1)
@@ -77,6 +77,19 @@ def disco_loop(disasm, queue, real_out):
             pass
         pass
 
+def disassemble_fp(fp, outstream=None):
+    """
+    disassemble Python byte-code from an open file
+    """
+    (version, timestamp, magic_int, co, is_pypy,
+     source_size) = load_from_fp(fp)
+    if type(co) == list:
+        for con in co:
+            disco(version, con, outstream)
+    else:
+        disco(version, co, outstream, is_pypy=is_pypy)
+    co = None
+
 def disassemble_file(filename, outstream=None):
     """
     disassemble Python byte-code file (.pyc)
@@ -621,7 +621,13 @@ def get_python_parser(
 
     if version < 3.0:
         if version < 2.2:
-            if version == 1.4:
+            if version == 1.3:
+                import uncompyle6.parsers.parse13 as parse13
+                if compile_mode == 'exec':
+                    p = parse13.Python14Parser(debug_parser)
+                else:
+                    p = parse13.Python14ParserSingle(debug_parser)
+            elif version == 1.4:
                 import uncompyle6.parsers.parse14 as parse14
                 if compile_mode == 'exec':
                     p = parse14.Python14Parser(debug_parser)
uncompyle6/parsers/parse13.py (new file, 49 lines)

@@ -0,0 +1,49 @@
# Copyright (c) 2018 Rocky Bernstein

from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
from uncompyle6.parser import PythonParserSingle
from uncompyle6.parsers.parse14 import Python14Parser

class Python13Parser(Python14Parser):

    def p_misc13(self, args):
        """
        # Nothing here yet, but will need to add LOAD_GLOBALS
        """

    def __init__(self, debug_parser=PARSER_DEFAULT_DEBUG):
        super(Python13Parser, self).__init__(debug_parser)
        self.customized = {}

    # def customize_grammar_rules(self, tokens, customize):
    #     super(Python13Parser, self).customize_grammar_rules(tokens, customize)
    #     self.remove_rules("""
    #        whileelsestmt ::= SETUP_LOOP testexpr l_stmts_opt
    #                          jb_pop
    #                          POP_BLOCK else_suitel COME_FROM
    #     """)
    #     self.check_reduce['doc_junk'] = 'tokens'


    # def reduce_is_invalid(self, rule, ast, tokens, first, last):
    #     invalid = super(Python14Parser,
    #                     self).reduce_is_invalid(rule, ast,
    #                                             tokens, first, last)
    #     if invalid or tokens is None:
    #         return invalid
    #     if rule[0] == 'doc_junk':
    #         return not isinstance(tokens[first].pattr, str)



class Python13ParserSingle(Python13Parser, PythonParserSingle):
    pass

if __name__ == '__main__':
    # Check grammar
    p = Python13Parser()
    p.check_grammar()
    p.dump_grammar()

# local variables:
# tab-width: 4
@@ -8,12 +8,11 @@ class Python14Parser(Python15Parser):
 
     def p_misc14(self, args):
         """
-        # Nothing here yet, but will need to add UNARY_CALL, BINARY_CALL,
+        # Not much here yet, but will probably need to add UNARY_CALL, BINARY_CALL,
         # RAISE_EXCEPTION, BUILD_FUNCTION, UNPACK_ARG, UNPACK_VARARG, LOAD_LOCAL,
         # SET_FUNC_ARGS, and RESERVE_FAST
 
         # FIXME: should check that this indeed around __doc__
-        # Possibly not strictly needed
+        # Not strictly needed, but tidies up output
         stmt ::= doc_junk
         doc_junk ::= LOAD_CONST POP_TOP
 
@@ -52,6 +52,8 @@ class Python27Parser(Python2Parser):
    def p_try27(self, args):
        """
        # If the last except is a "raise" we might not have a final COME_FROM
        # FIXME: need a check on this rule since this accepts try_except when
        # we shouldn't
        try_except ::= SETUP_EXCEPT suite_stmts_opt POP_BLOCK
                       except_handler

@@ -12,6 +12,12 @@ class Python30Parser(Python31Parser):
    def p_30(self, args):
        """

        assert ::= assert_expr jmp_true LOAD_ASSERT RAISE_VARARGS_1 POP_TOP
        return_if_lambda ::= RETURN_END_IF_LAMBDA POP_TOP
        compare_chained1 ::= expr DUP_TOP ROT_THREE COMPARE_OP
                             JUMP_IF_FALSE POP_TOP compare_chained2
        compare_chained2 ::= expr COMPARE_OP RETURN_END_IF_LAMBDA

        # FIXME: combine with parse3.2
        whileTruestmt ::= SETUP_LOOP l_stmts_opt JUMP_BACK
                          COME_FROM_LOOP

@@ -20,10 +26,10 @@ class Python30Parser(Python31Parser):

        # In many ways Python 3.0 code generation is more like Python 2.6 than
        # it is 2.7 or 3.1. So we have a number of 2.6ish (and before) rules below
        # Specifically POP_TOP is more prevelant since there is no POP_JUMP_IF_...
        # instructions

        _ifstmts_jump ::= c_stmts_opt JUMP_FORWARD _come_froms POP_TOP COME_FROM
        jmp_true ::= JUMP_IF_TRUE POP_TOP
        jmp_false ::= JUMP_IF_FALSE POP_TOP

        # Used to keep index order the same in semantic actions
        jb_pop_top ::= JUMP_BACK POP_TOP

@@ -33,16 +39,67 @@ class Python30Parser(Python31Parser):
        else_suitel ::= l_stmts COME_FROM_LOOP JUMP_BACK

        ifelsestmtl ::= testexpr c_stmts_opt jb_pop_top else_suitel
        iflaststmtl ::= testexpr c_stmts_opt jb_pop_top
        iflaststmt ::= testexpr c_stmts_opt JUMP_ABSOLUTE POP_TOP

        withasstmt ::= expr setupwithas store suite_stmts_opt
                       POP_BLOCK LOAD_CONST COME_FROM_FINALLY
                       LOAD_FAST DELETE_FAST WITH_CLEANUP END_FINALLY
        setupwithas ::= DUP_TOP LOAD_ATTR STORE_FAST LOAD_ATTR CALL_FUNCTION_0 setup_finally
        setup_finally ::= STORE_FAST SETUP_FINALLY LOAD_FAST DELETE_FAST

        # Need to keep LOAD_FAST as index 1
        set_comp_func_header ::= BUILD_SET_0 DUP_TOP STORE_FAST
        set_comp_func ::= set_comp_func_header
                          LOAD_FAST FOR_ITER store comp_iter
                          JUMP_BACK POP_TOP JUMP_BACK RETURN_VALUE RETURN_LAST

        list_comp_header ::= BUILD_LIST_0 DUP_TOP STORE_FAST
        list_comp ::= list_comp_header
                      LOAD_FAST FOR_ITER store comp_iter
                      JUMP_BACK

        comp_if ::= expr jmp_false comp_iter
        comp_iter ::= expr expr SET_ADD
        comp_iter ::= expr expr LIST_APPEND

        jump_forward_else ::= JUMP_FORWARD POP_TOP
        jump_absolute_else ::= JUMP_ABSOLUTE POP_TOP

        # In many ways 3.0 is like 2.6. The below rules in fact are the same or similar.

        jmp_true ::= JUMP_IF_TRUE POP_TOP
        jmp_false ::= JUMP_IF_FALSE POP_TOP
        for_block ::= l_stmts_opt _come_froms POP_TOP JUMP_BACK
        except_handler ::= JUMP_FORWARD COME_FROM_EXCEPT except_stmts
                           POP_TOP END_FINALLY come_froms
        except_handler ::= jmp_abs COME_FROM_EXCEPT except_stmts
                           POP_TOP END_FINALLY
        return_if_stmt ::= ret_expr RETURN_END_IF POP_TOP
        and ::= expr JUMP_IF_FALSE POP_TOP expr COME_FROM
        whilestmt ::= SETUP_LOOP testexpr l_stmts_opt
                      JUMP_BACK POP_TOP POP_BLOCK COME_FROM_LOOP
        """

    def customize_grammar_rules(self, tokens, customize):
        super(Python30Parser, self).customize_grammar_rules(tokens, customize)
        self.remove_rules("""
        iflaststmtl ::= testexpr c_stmts_opt JUMP_BACK COME_FROM_LOOP
        ifelsestmtl ::= testexpr c_stmts_opt JUMP_BACK else_suitel
        iflaststmt ::= testexpr c_stmts_opt JUMP_ABSOLUTE
        _ifstmts_jump ::= c_stmts_opt JUMP_FORWARD _come_froms
        jump_forward_else ::= JUMP_FORWARD ELSE
        jump_absolute_else ::= JUMP_ABSOLUTE ELSE
        whilestmt ::= SETUP_LOOP testexpr l_stmts_opt COME_FROM JUMP_BACK POP_BLOCK
                      COME_FROM_LOOP
        assert ::= assert_expr jmp_true LOAD_ASSERT RAISE_VARARGS_1
        return_if_lambda ::= RETURN_END_IF_LAMBDA
        compare_chained1 ::= expr DUP_TOP ROT_THREE COMPARE_OP JUMP_IF_FALSE_OR_POP compare_chained2 COME_FROM
        except_handler ::= jmp_abs COME_FROM_EXCEPT except_stmts END_FINALLY
        """)

        return
    pass
@@ -37,7 +37,7 @@ from xdis.util import code2num
 
 # The byte code versions we support.
 # Note: these all have to be floats
-PYTHON_VERSIONS = frozenset((1.4, 1.5,
+PYTHON_VERSIONS = frozenset((1.3, 1.4, 1.5,
                              2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7,
                              3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7))
@@ -503,22 +503,26 @@ def get_scanner(version, is_pypy=False, show_asm=None):
     # Pick up appropriate scanner
     if version in PYTHON_VERSIONS:
         v_str = "%s" % (int(version * 10))
-        if PYTHON3:
+        try:
             import importlib
             if is_pypy:
                 scan = importlib.import_module("uncompyle6.scanners.pypy%s" % v_str)
             else:
                 scan = importlib.import_module("uncompyle6.scanners.scanner%s" % v_str)
             if False: print(scan) # Avoid unused scan
-        else:
+        except ImportError:
             if is_pypy:
-                exec("import uncompyle6.scanners.pypy%s as scan" % v_str)
+                exec("import uncompyle6.scanners.pypy%s as scan" % v_str,
+                     locals(), globals())
             else:
-                exec("import uncompyle6.scanners.scanner%s as scan" % v_str)
+                exec("import uncompyle6.scanners.scanner%s as scan" % v_str,
+                     locals(), globals())
         if is_pypy:
-            scanner = eval("scan.ScannerPyPy%s(show_asm=show_asm)" % v_str)
+            scanner = eval("scan.ScannerPyPy%s(show_asm=show_asm)" % v_str,
+                           locals(), globals())
         else:
-            scanner = eval("scan.Scanner%s(show_asm=show_asm)" % v_str)
+            scanner = eval("scan.Scanner%s(show_asm=show_asm)" % v_str,
+                           locals(), globals())
     else:
         raise RuntimeError("Unsupported Python version %s" % version)
     return scanner
uncompyle6/scanners/scanner13.py (new file, 35 lines)

@@ -0,0 +1,35 @@
# Copyright (c) 2018 by Rocky Bernstein
"""
Python 1.3 bytecode decompiler massaging.

This massages tokenized 1.3 bytecode to make it more amenable for
grammar parsing.
"""

import uncompyle6.scanners.scanner14 as scan
# from uncompyle6.scanners.scanner26 import ingest as ingest26

# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_13
JUMP_OPS = opcode_13.JUMP_OPS

# We base this off of 1.4 instead of the other way around
# because we cleaned things up this way.
# The history is that 2.7 support is the cleanest,
# then from that we got 2.6 and so on.
class Scanner13(scan.Scanner14):
    def __init__(self, show_asm=False):
        scan.Scanner14.__init__(self, show_asm)
        self.opc = opcode_13
        self.opname = opcode_13.opname
        self.version = 1.3
        return

    # def ingest22(self, co, classname=None, code_objects={}, show_asm=None):
    #     tokens, customize = self.parent_ingest(co, classname, code_objects, show_asm)
    #     tokens = [t for t in tokens if t.kind != 'SET_LINENO']

    #     # for t in tokens:
    #     #     print(t)
    #
    #     return tokens, customize
@@ -1081,7 +1081,7 @@ class Scanner2(Scanner):
             # FIXME FIXME FIXME
             # All the conditions are horrible, and I am not sure I
             # undestand fully what's going l
-            # WeR REALLY REALLY need a better way to handle control flow
+            # We REALLY REALLY need a better way to handle control flow
             # Expecially for < 2.7
             if label is not None and label != -1:
                 if self.version == 2.7:
@@ -73,8 +73,10 @@ class Scanner3(Scanner):

        if self.version == 3.0:
            self.pop_jump_tf = frozenset([self.opc.JUMP_IF_FALSE, self.opc.JUMP_IF_TRUE])
            self.not_continue_follow = ('END_FINALLY', 'POP_BLOCK', 'POP_TOP')
        else:
            self.pop_jump_tf = frozenset([self.opc.PJIF, self.opc.PJIT])
            self.not_continue_follow = ('END_FINALLY', 'POP_BLOCK')

        self.setup_ops_no_loop = frozenset(setup_ops) - frozenset([self.opc.SETUP_LOOP])

@@ -209,7 +211,12 @@ class Scanner3(Scanner):
                 # If we have a JUMP_FORWARD after the
                 # RAISE_VARARGS then we have a "raise" statement
                 # else we have an "assert" statement.
-                if inst.opname == 'POP_JUMP_IF_TRUE' and i+1 < n:
+                if self.version == 3.0:
+                    # There is an implied JUMP_IF_TRUE that we are not testing for (yet?) here
+                    assert_can_follow = inst.opname == 'POP_TOP' and i+1 < n
+                else:
+                    assert_can_follow = inst.opname == 'POP_JUMP_IF_TRUE' and i+1 < n
+                if assert_can_follow:
                     next_inst = self.insts[i+1]
                     if (next_inst.opname == 'LOAD_GLOBAL' and
                         next_inst.argval == 'AssertionError'):
@@ -387,13 +394,9 @@ class Scanner3(Scanner):
                                    .opname == 'FOR_ITER'
                                and self.insts[i+1].opname == 'JUMP_FORWARD')
 
-
                     if (is_continue or
-                        (inst.offset in self.stmts and
-                         (self.version != 3.0 or (hasattr(inst, 'linestart'))) and
-                         (next_opname not in ('END_FINALLY', 'POP_BLOCK',
-                                              # Python 3.0 only uses POP_TOP
-                                              'POP_TOP')))):
+                        (inst.offset in self.stmts and (inst.starts_line and
+                         next_opname not in self.not_continue_follow))):
                         opname = 'CONTINUE'
                     else:
                         opname = 'JUMP_BACK'
@@ -523,7 +523,7 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
         lc_index = -3
         pass
 
-    if (3.1 <= self.version <= 3.3 and len(node) > 2 and
+    if (3.0 <= self.version <= 3.3 and len(node) > 2 and
         node[lambda_index] != 'LOAD_LAMBDA' and
         (have_kwargs or node[lc_index].kind != 'load_closure')):
 
@@ -591,8 +591,8 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
     paramnames = list(scanner_code.co_varnames[:argc])
 
     # defaults are for last n parameters, thus reverse
-    if not 3.0 == self.version or self.version >= 3.6:
-        paramnames.reverse(); defparams.reverse()
+    paramnames.reverse();
+    defparams.reverse()
 
     try:
         ast = self.build_ast(scanner_code._tokens,
@@ -621,8 +621,7 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
     else:
         params = paramnames
 
-    if not 3.0 == self.version or self.version >= 3.6:
-        params.reverse() # back to correct order
+    params.reverse() # back to correct order
 
     if code_has_star_arg(code):
         if self.version > 3.0:
@@ -940,7 +940,7 @@ class SourceWalker(GenericASTTraversal, object):
             self.prec = 27
 
         # FIXME: clean this up
-        if self.version > 3.0 and node == 'dict_comp':
+        if self.version >= 3.0 and node == 'dict_comp':
             cn = node[1]
         elif self.version < 2.7 and node == 'generator_exp':
             if node[0] == 'LOAD_GENEXPR':
@@ -948,7 +948,7 @@ class SourceWalker(GenericASTTraversal, object):
             elif node[0] == 'load_closure':
                 cn = node[1]
 
-        elif self.version > 3.0 and node == 'generator_exp':
+        elif self.version >= 3.0 and node == 'generator_exp':
             if node[0] == 'load_genexpr':
                 load_genexpr = node[0]
             elif node[1] == 'load_genexpr':
@@ -1052,7 +1052,9 @@ class SourceWalker(GenericASTTraversal, object):
             ast = ast[0]
 
         store = None
-        if ast in ['set_comp_func', 'dict_comp_func']:
-            n = ast[iter_index]
+        if ast in ['set_comp_func', 'dict_comp_func',
+                   'list_comp', 'set_comp_func_header']:
+            for k in ast:
+                if k == 'comp_iter':
+                    n = k
@@ -1062,7 +1064,6 @@ class SourceWalker(GenericASTTraversal, object):
                 pass
             pass
         else:
             n = ast[iter_index]
             assert n == 'list_iter', n
 
         # FIXME: I'm not totally sure this is right.
@@ -1070,15 +1071,19 @@ class SourceWalker(GenericASTTraversal, object):
         # Find the list comprehension body. It is the inner-most
         # node that is not list_.. .
         if_node = None
         comp_for = None
         comp_store = None
         if n == 'comp_iter':
             comp_for = n
             comp_store = ast[3]
 
         have_not = False
         while n in ('list_iter', 'comp_iter'):
-            n = n[0] # iterate one nesting deeper
+            # iterate one nesting deeper
+            if self.version == 3.0 and len(n) == 3:
+                assert n[0] == 'expr' and n[1] == 'expr'
+                n = n[1]
+            else:
+                n = n[0]
 
             if n in ('list_for', 'comp_for'):
                 if n[2] == 'store':
                     store = n[2]
@@ -1094,7 +1099,9 @@ class SourceWalker(GenericASTTraversal, object):
 
         # Python 2.7+ starts including set_comp_body
        # Python 3.5+ starts including set_comp_func
-        assert n.kind in ('lc_body', 'comp_body', 'set_comp_func', 'set_comp_body'), ast
+        # Python 3.0 is yet another snowflake
+        if self.version != 3.0:
+            assert n.kind in ('lc_body', 'comp_body', 'set_comp_func', 'set_comp_body'), ast
         assert store, "Couldn't find store in list/set comprehension"
 
         # A problem created with later Python code generation is that there
@@ -1114,11 +1121,10 @@ class SourceWalker(GenericASTTraversal, object):
         self.preorder(store)
 
         # FIXME this is all merely approximate
-        # from trepan.api import debug; debug()
         self.write(' in ')
         self.preorder(node[-3])
 
-        if ast == 'list_comp':
+        if ast == 'list_comp' and self.version != 3.0:
             list_iter = ast[1]
             assert list_iter == 'list_iter'
             if list_iter == 'list_for':
@@ -1127,9 +1133,7 @@ class SourceWalker(GenericASTTraversal, object):
                 return
             pass
 
-        if comp_store:
-            self.preorder(comp_for)
-        elif if_node:
+        if if_node:
             self.write(' if ')
             if have_not:
                 self.write('not ')
@@ -12,4 +12,4 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 # This file is suitable for sourcing inside bash as
 # well as importing into Python
-VERSION='3.2.1'
+VERSION='3.2.3'