You've already forked python-uncompyle6
mirror of
https://github.com/rocky/python-uncompyle6.git
synced 2025-08-04 09:22:40 +08:00
Compare commits
137 Commits
release-py
...
release-py
Author | SHA1 | Date | |
---|---|---|---|
|
f3b72884c6 | ||
|
504164fcea | ||
|
3c06b82931 | ||
|
c680416f92 | ||
|
58c8fe5a66 | ||
|
aea1adeb85 | ||
|
c871a4ecc5 | ||
|
cd9eca7bff | ||
|
002720988c | ||
|
08f23567a6 | ||
|
43348d7d24 | ||
|
164e9d4b5c | ||
|
37e4754268 | ||
|
c3257a9b79 | ||
|
70b0704967 | ||
|
76dcaf9bf0 | ||
|
21fd506fbb | ||
|
efe0914814 | ||
|
5981c7eae9 | ||
|
36ef1607af | ||
|
b2d97f9847 | ||
|
24ba5d7f40 | ||
|
eae3f0d77b | ||
|
a54fba7993 | ||
|
719d2d7232 | ||
|
e82cabc278 | ||
|
9ab086b207 | ||
|
4022e80d6d | ||
|
9811c5bc42 | ||
|
354796fffd | ||
|
ab696b316a | ||
|
2f99da8199 | ||
|
fd5f4fa5b8 | ||
|
8e4168674d | ||
|
c8fc6a704c | ||
|
622d6f849c | ||
|
aa21fe0b31 | ||
|
2995acb8d9 | ||
|
10d8aed4c0 | ||
|
86fd5dbf7a | ||
|
9fe1752359 | ||
|
48ae7a6964 | ||
|
117b4ff4f1 | ||
|
e9002038f8 | ||
|
9d47b99932 | ||
|
59b012df6f | ||
|
44d7cbcf6f | ||
|
9bae73679f | ||
|
ceebe9ab60 | ||
|
b7e22b4530 | ||
|
c7b20edba0 | ||
|
64e35b09db | ||
|
3436a3a256 | ||
|
a0d4daf5ff | ||
|
afa6a00db8 | ||
|
d634c5c17a | ||
|
d8f0d31475 | ||
|
dd76a6f253 | ||
|
cb40caa73c | ||
|
fd59879510 | ||
|
c9cae2d09e | ||
|
af209dc142 | ||
|
f9fd63d5f5 | ||
|
ad419e0ed9 | ||
|
ee5c7da790 | ||
|
39c12704a8 | ||
|
3b3fc09b60 | ||
|
f7697ccd7b | ||
|
e364499bb9 | ||
|
9db59f1b80 | ||
|
a5cdb50154 | ||
|
792ef5b5b8 | ||
|
123be56e5d | ||
|
7f46d8bb2a | ||
|
47ed0795b2 | ||
|
cccf33573b | ||
|
3c3e5c82fc | ||
|
436260dc9a | ||
|
8f0674706b | ||
|
01cc184716 | ||
|
2771cb46ab | ||
|
9ed4326f7e | ||
|
e3b10b62d7 | ||
|
59b8f18486 | ||
|
bcf6939312 | ||
|
3b7f49c01d | ||
|
ae976e991a | ||
|
60d96b6a5a | ||
|
8fe6309650 | ||
|
4c4aa393df | ||
|
a8b8c2908c | ||
|
cb406e2581 | ||
|
20b16c44ff | ||
|
3abe8d11d3 | ||
|
26140934da | ||
|
b62752eca1 | ||
|
9db446d928 | ||
|
46acb74745 | ||
|
44e1288e2f | ||
|
ce9270dda0 | ||
|
3d732db3cc | ||
|
009a74da7d | ||
|
251eb6da1b | ||
|
8b5e0f49f8 | ||
|
1cc08d9598 | ||
|
d99e78d46d | ||
|
b94cce7b12 | ||
|
fe786b2b95 | ||
|
bf56fbeeec | ||
|
6d8d9fd83b | ||
|
78ca6a0c1f | ||
|
86dd321256 | ||
|
4db364f701 | ||
|
c03b039714 | ||
|
d97509495e | ||
|
4d793ba1b2 | ||
|
590d2f44f1 | ||
|
e875b79a75 | ||
|
b57ca392a2 | ||
|
a132e2ace6 | ||
|
f9bb0b0a46 | ||
|
b05500dd49 | ||
|
325bba5be5 | ||
|
65307f257c | ||
|
715bf9cbab | ||
|
8909fe8d37 | ||
|
733a44e22f | ||
|
8187fdf4a6 | ||
|
f2f17740ee | ||
|
393e5c9303 | ||
|
8c611476fe | ||
|
6df65a87bc | ||
|
bb94c7f5bc | ||
|
8e9ce0be31 | ||
|
bc49469704 | ||
|
5905cce1de | ||
|
af816c9e60 |
@@ -115,7 +115,7 @@ mechanisms and addressed problems and extensions by some other means.
|
|||||||
Specifically, in `uncompyle`, decompilation of python bytecode 2.5 &
|
Specifically, in `uncompyle`, decompilation of python bytecode 2.5 &
|
||||||
2.6 is done by transforming the byte code into a pseudo-2.7 Python
|
2.6 is done by transforming the byte code into a pseudo-2.7 Python
|
||||||
bytecode and is based on code from Eloi Vanderbeken. A bit of this
|
bytecode and is based on code from Eloi Vanderbeken. A bit of this
|
||||||
could have bene easily added by modifying grammar rules.
|
could have been easily added by modifying grammar rules.
|
||||||
|
|
||||||
This project, `uncompyle6`, abandons that approach for various
|
This project, `uncompyle6`, abandons that approach for various
|
||||||
reasons. Having a grammar per Python version is much cleaner and it
|
reasons. Having a grammar per Python version is much cleaner and it
|
||||||
|
68
NEWS.md
68
NEWS.md
@@ -1,3 +1,46 @@
|
|||||||
|
3.3.4 2019-05-19 Fleetwood at 65
|
||||||
|
================================
|
||||||
|
|
||||||
|
Most of the work in this is release is thanks to x0ret.
|
||||||
|
|
||||||
|
- Major work was done by x0ret to correct function signatures and include annotation types
|
||||||
|
- Handle Python 3.6 STORE_ANNOTATION [#58](https://github.com/rocky/python-uncompyle6/issues/58)
|
||||||
|
- Friendlier assembly output
|
||||||
|
- `LOAD_CONST` replaced by `LOAD_STR` where appropriate to simplify parsing and improve clarity
|
||||||
|
- remove unneeded parenthesis in a generator expression when it is the single argument to the function [#247](https://github.com/rocky/python-uncompyle6/issues/246)
|
||||||
|
- Bug in noting an async function [#246](https://github.com/rocky/python-uncompyle6/issues/246)
|
||||||
|
- Handle unicode docstrings and fix docstring bugs [#241](https://github.com/rocky/python-uncompyle6/issues/241)
|
||||||
|
- Add short option -T as an alternate for --tree+
|
||||||
|
- Some grammar cleanup
|
||||||
|
|
||||||
|
3.3.3 2019-05-19 Henry and Lewis
|
||||||
|
================================
|
||||||
|
|
||||||
|
As before, decomplation bugs fixed. The focus has primarily been on
|
||||||
|
Python 3.7. But with this release, releases will be put on hold,as a
|
||||||
|
better control-flow detection is worked on . This has been needed for a
|
||||||
|
while, and is long overdue. It will probably also take a while to get
|
||||||
|
done as good as what we have now.
|
||||||
|
|
||||||
|
However this work will be done in a new project
|
||||||
|
[decompyle3](https://github.com/rocky/python-decompile3). In contrast
|
||||||
|
to _uncompyle6_ the code will be written assuming a modern Python 3,
|
||||||
|
e.g. 3.7. It is originally intended to decompile Python version 3.7
|
||||||
|
and greater.
|
||||||
|
|
||||||
|
* A number of Python 3.7+ chained comparisons were fixed
|
||||||
|
* Revise Python 3.6ish format string handling
|
||||||
|
* Go over operator precedence, e.g. for AST `IfExp`
|
||||||
|
|
||||||
|
Reported Bug Fixes
|
||||||
|
------------------
|
||||||
|
|
||||||
|
* [#239: 3.7 handling of 4-level attribute import](https://github.com/rocky/python-uncompyle6/issues/239),
|
||||||
|
* [#229: Inconsistent if block in python3.6](https://github.com/rocky/python-uncompyle6/issues/229),
|
||||||
|
* [#227: Args not appearing in decompiled src when kwargs is specified explicitly (call_ex_kw)](https://github.com/rocky/python-uncompyle6/issues/227)
|
||||||
|
2.7 confusion around "and" versus comprehension "if"
|
||||||
|
* [#225: 2.7 confusion around "and" vs comprehension "if"](https://github.com/rocky/python-uncompyle6/issues/225)
|
||||||
|
|
||||||
3.3.2 2019-05-03 Better Friday
|
3.3.2 2019-05-03 Better Friday
|
||||||
==============================
|
==============================
|
||||||
|
|
||||||
@@ -12,7 +55,6 @@ get addressed in future releases
|
|||||||
|
|
||||||
Pypy 3.6 support was started. Pypy 3.x detection fixed (via xdis)
|
Pypy 3.6 support was started. Pypy 3.x detection fixed (via xdis)
|
||||||
|
|
||||||
|
|
||||||
3.3.1 2019-04-19 Good Friday
|
3.3.1 2019-04-19 Good Friday
|
||||||
==========================
|
==========================
|
||||||
|
|
||||||
@@ -20,14 +62,14 @@ Lots of decomplation bugs, especially in the 3.x series fixed. Don't worry thoug
|
|||||||
|
|
||||||
* Add annotation return values in 3.6+
|
* Add annotation return values in 3.6+
|
||||||
* Fix 3.6+ lambda parameter handling decompilation
|
* Fix 3.6+ lambda parameter handling decompilation
|
||||||
* Fix 3.7+ chained comparision decompilation
|
* Fix 3.7+ chained comparison decompilation
|
||||||
* split out semantic-action customization into more separate files
|
* split out semantic-action customization into more separate files
|
||||||
* Add 3.8 try/else
|
* Add 3.8 try/else
|
||||||
* Fix 2.7 generator decompilation
|
* Fix 2.7 generator decompilation
|
||||||
* Fix some parser failures fixes in 3.4+ using test_pyenvlib
|
* Fix some parser failures fixes in 3.4+ using test_pyenvlib
|
||||||
* Add more run tests
|
* Add more run tests
|
||||||
|
|
||||||
3.3.0 2019-43-14 Holy Week
|
3.3.0 2019-04-14 Holy Week
|
||||||
==========================
|
==========================
|
||||||
|
|
||||||
* First cut at Python 3.8 (many bug remain)
|
* First cut at Python 3.8 (many bug remain)
|
||||||
@@ -41,23 +83,25 @@ Mostly more of the same: bug fixes and pull requests.
|
|||||||
Bug Fixes
|
Bug Fixes
|
||||||
-----------
|
-----------
|
||||||
|
|
||||||
* [#155: Python 3.x bytecode confusing "try/else" with "try" in a loop](https://github.com/rocky/python-uncompyle6/issues/155),
|
* [#221: Wrong grammar for nested ifelsestmt (in Python 3.7 at least)](https://github.com/rocky/python-uncompyle6/issues/221)
|
||||||
* [#200: Python 3 bug in not detecting end bounds of an "if" ... "elif"](https://github.com/rocky/python-uncompyle6/issues/200),
|
|
||||||
* [#208: Comma placement in 3.6 and 3.7 **kwargs](https://github.com/rocky/python-uncompyle6/issues/208),
|
|
||||||
* [#209: Fix "if" return boundary in 3.6+](https://github.com/rocky/python-uncompyle6/issues/209),
|
|
||||||
* [#215: 2.7 can have two JUMP_BACKs at the end of a while loop](https://github.com/rocky/python-uncompyle6/issues/215)
|
* [#215: 2.7 can have two JUMP_BACKs at the end of a while loop](https://github.com/rocky/python-uncompyle6/issues/215)
|
||||||
|
* [#209: Fix "if" return boundary in 3.6+](https://github.com/rocky/python-uncompyle6/issues/209),
|
||||||
|
* [#208: Comma placement in 3.6 and 3.7 **kwargs](https://github.com/rocky/python-uncompyle6/issues/208),
|
||||||
|
* [#200: Python 3 bug in not detecting end bounds of an "if" ... "elif"](https://github.com/rocky/python-uncompyle6/issues/200),
|
||||||
|
* [#155: Python 3.x bytecode confusing "try/else" with "try" in a loop](https://github.com/rocky/python-uncompyle6/issues/155),
|
||||||
|
|
||||||
|
|
||||||
Pull Requests
|
Pull Requests
|
||||||
----------------
|
----------------
|
||||||
|
|
||||||
* [#202: Better "assert" statement detemination in Python 2.7](https://github.com/rocky/python-uncompyle6/pull/211)
|
* [#202: Better "assert" statement determination in Python 2.7](https://github.com/rocky/python-uncompyle6/pull/211)
|
||||||
* [#204: Python 3.7 testing](https://github.com/rocky/python-uncompyle6/pull/204)
|
* [#204: Python 3.7 testing](https://github.com/rocky/python-uncompyle6/pull/204)
|
||||||
* [#205: Run more f-string tests on Python 3.7](https://github.com/rocky/python-uncompyle6/pull/205)
|
* [#205: Run more f-string tests on Python 3.7](https://github.com/rocky/python-uncompyle6/pull/205)
|
||||||
* [#211: support utf-8 chars in Python 3 sourcecode](https://github.com/rocky/python-uncompyle6/pull/202)
|
* [#211: support utf-8 chars in Python 3 sourcecode](https://github.com/rocky/python-uncompyle6/pull/202)
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
3.2.5 2018-12-30 Clearout sale
|
3.2.5 2018-12-30 Clear-out sale
|
||||||
======================================
|
======================================
|
||||||
|
|
||||||
- 3.7.2 Remove deprecation warning on regexp string that isn't raw
|
- 3.7.2 Remove deprecation warning on regexp string that isn't raw
|
||||||
@@ -122,14 +166,14 @@ Jesus on Friday's New York Times puzzle: "I'm stuck on 2A"
|
|||||||
- reduce 3.5, 3.6 control-flow bugs
|
- reduce 3.5, 3.6 control-flow bugs
|
||||||
- reduce ambiguity in rules that lead to long (exponential?) parses
|
- reduce ambiguity in rules that lead to long (exponential?) parses
|
||||||
- limit/isolate some 2.6/2.7,3.x grammar rules
|
- limit/isolate some 2.6/2.7,3.x grammar rules
|
||||||
- more runtime testing of decompiled code
|
- more run-time testing of decompiled code
|
||||||
- more removal of parenthesis around calls via setting precidence
|
- more removal of parenthesis around calls via setting precedence
|
||||||
|
|
||||||
3.1.0 2018-03-21 Equinox
|
3.1.0 2018-03-21 Equinox
|
||||||
==============================
|
==============================
|
||||||
|
|
||||||
- Add code_deparse_with_offset() fragment function.
|
- Add code_deparse_with_offset() fragment function.
|
||||||
- Correct paramenter call fragment deparse_code()
|
- Correct parameter call fragment deparse_code()
|
||||||
- Lots of 3.6, 3.x, and 2.7 bug fixes
|
- Lots of 3.6, 3.x, and 2.7 bug fixes
|
||||||
About 5% of 3.6 fail parsing now. But
|
About 5% of 3.6 fail parsing now. But
|
||||||
semantics still needs much to be desired.
|
semantics still needs much to be desired.
|
||||||
|
45
README.rst
45
README.rst
@@ -93,8 +93,8 @@ This uses setup.py, so it follows the standard Python routine:
|
|||||||
A GNU makefile is also provided so :code:`make install` (possibly as root or
|
A GNU makefile is also provided so :code:`make install` (possibly as root or
|
||||||
sudo) will do the steps above.
|
sudo) will do the steps above.
|
||||||
|
|
||||||
Testing
|
Running Tests
|
||||||
-------
|
-------------
|
||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
@@ -133,18 +133,8 @@ You can also cross compare the results with pycdc_ . Since they work
|
|||||||
differently, bugs here often aren't in that, and vice versa.
|
differently, bugs here often aren't in that, and vice versa.
|
||||||
|
|
||||||
|
|
||||||
Known Bugs/Restrictions
|
Verification
|
||||||
-----------------------
|
------------
|
||||||
|
|
||||||
The biggest known and possibly fixable (but hard) problem has to do
|
|
||||||
with handling control flow. (Python has probably the most diverse and
|
|
||||||
screwy set of compound statements I've ever seen; there
|
|
||||||
are "else" clauses on loops and try blocks that I suspect many
|
|
||||||
programmers don't know about.)
|
|
||||||
|
|
||||||
All of the Python decompilers that I have looked at have problems
|
|
||||||
decompiling Python's control flow. In some cases we can detect an
|
|
||||||
erroneous decompilation and report that.
|
|
||||||
|
|
||||||
In older versions of Python it was possible to verify bytecode by
|
In older versions of Python it was possible to verify bytecode by
|
||||||
decompiling bytecode, and then compiling using the Python interpreter
|
decompiling bytecode, and then compiling using the Python interpreter
|
||||||
@@ -152,7 +142,7 @@ for that bytecode version. Having done this the bytecode produced
|
|||||||
could be compared with the original bytecode. However as Python's code
|
could be compared with the original bytecode. However as Python's code
|
||||||
generation got better, this is no longer feasible.
|
generation got better, this is no longer feasible.
|
||||||
|
|
||||||
There verification that we use that doesn't check bytecode for
|
The verification that we use that doesn't check bytecode for
|
||||||
equivalence but does check to see if the resulting decompiled source
|
equivalence but does check to see if the resulting decompiled source
|
||||||
is a valid Python program by running the Python interpreter. Because
|
is a valid Python program by running the Python interpreter. Because
|
||||||
the Python language has changed so much, for best results you should
|
the Python language has changed so much, for best results you should
|
||||||
@@ -167,6 +157,19 @@ And already Python has a set of programs like this: the test suite
|
|||||||
for the standard library that comes with Python. We have some
|
for the standard library that comes with Python. We have some
|
||||||
code in `test/stdlib` to facilitate this kind of checking.
|
code in `test/stdlib` to facilitate this kind of checking.
|
||||||
|
|
||||||
|
Known Bugs/Restrictions
|
||||||
|
-----------------------
|
||||||
|
|
||||||
|
The biggest known and possibly fixable (but hard) problem has to do
|
||||||
|
with handling control flow. (Python has probably the most diverse and
|
||||||
|
screwy set of compound statements I've ever seen; there
|
||||||
|
are "else" clauses on loops and try blocks that I suspect many
|
||||||
|
programmers don't know about.)
|
||||||
|
|
||||||
|
All of the Python decompilers that I have looked at have problems
|
||||||
|
decompiling Python's control flow. In some cases we can detect an
|
||||||
|
erroneous decompilation and report that.
|
||||||
|
|
||||||
Python support is strongest in Python 2 for 2.7 and drops off as you
|
Python support is strongest in Python 2 for 2.7 and drops off as you
|
||||||
get further away from that. Support is also probably pretty good for
|
get further away from that. Support is also probably pretty good for
|
||||||
python 2.3-2.4 since a lot of the goodness of early the version of the
|
python 2.3-2.4 since a lot of the goodness of early the version of the
|
||||||
@@ -194,8 +197,12 @@ Between Python 3.5, 3.6 and 3.7 there have been major changes to the
|
|||||||
|
|
||||||
Currently not all Python magic numbers are supported. Specifically in
|
Currently not all Python magic numbers are supported. Specifically in
|
||||||
some versions of Python, notably Python 3.6, the magic number has
|
some versions of Python, notably Python 3.6, the magic number has
|
||||||
changes several times within a version. We support only the released
|
changes several times within a version.
|
||||||
magic. There are also customized Python interpreters, notably Dropbox,
|
|
||||||
|
**We support only released versions, not candidate versions.** Note however
|
||||||
|
that the magic of a released version is usually the same as the *last* candidate version prior to release.
|
||||||
|
|
||||||
|
There are also customized Python interpreters, notably Dropbox,
|
||||||
which use their own magic and encrypt bytcode. With the exception of
|
which use their own magic and encrypt bytcode. With the exception of
|
||||||
the Dropbox's old Python 2.5 interpreter this kind of thing is not
|
the Dropbox's old Python 2.5 interpreter this kind of thing is not
|
||||||
handled.
|
handled.
|
||||||
@@ -218,7 +225,7 @@ See Also
|
|||||||
* https://github.com/zrax/pycdc : purports to support all versions of Python. It is written in C++ and is most accurate for Python versions around 2.7 and 3.3 when the code was more actively developed. Accuracy for more recent versions of Python 3 and early versions of Python are especially lacking. See its `issue tracker <https://github.com/zrax/pycdc/issues>`_ for details. Currently lightly maintained.
|
* https://github.com/zrax/pycdc : purports to support all versions of Python. It is written in C++ and is most accurate for Python versions around 2.7 and 3.3 when the code was more actively developed. Accuracy for more recent versions of Python 3 and early versions of Python are especially lacking. See its `issue tracker <https://github.com/zrax/pycdc/issues>`_ for details. Currently lightly maintained.
|
||||||
* https://code.google.com/archive/p/unpyc3/ : supports Python 3.2 only. The above projects use a different decompiling technique than what is used here. Currently unmaintained.
|
* https://code.google.com/archive/p/unpyc3/ : supports Python 3.2 only. The above projects use a different decompiling technique than what is used here. Currently unmaintained.
|
||||||
* https://github.com/figment/unpyc3/ : fork of above, but supports Python 3.3 only. Includes some fixes like supporting function annotations. Currently unmaintained.
|
* https://github.com/figment/unpyc3/ : fork of above, but supports Python 3.3 only. Includes some fixes like supporting function annotations. Currently unmaintained.
|
||||||
* https://github.com/wibiti/uncompyle2 : supports Python 2.7 only, but does that fairly well. There are situtations where `uncompyle6` results are incorrect while `uncompyle2` results are not, but more often uncompyle6 is correct when uncompyle2 is not. Because `uncompyle6` adheres to accuracy over idiomatic Python, `uncompyle2` can produce more natural-looking code when it is correct. Currently `uncompyle2` is lightly maintained. See its issue `tracker <https://github.com/wibiti/uncompyle2/issues>`_ for more details
|
* https://github.com/wibiti/uncompyle2 : supports Python 2.7 only, but does that fairly well. There are situations where `uncompyle6` results are incorrect while `uncompyle2` results are not, but more often uncompyle6 is correct when uncompyle2 is not. Because `uncompyle6` adheres to accuracy over idiomatic Python, `uncompyle2` can produce more natural-looking code when it is correct. Currently `uncompyle2` is lightly maintained. See its issue `tracker <https://github.com/wibiti/uncompyle2/issues>`_ for more details
|
||||||
* `How to report a bug <https://github.com/rocky/python-uncompyle6/blob/master/HOW-TO-REPORT-A-BUG.md>`_
|
* `How to report a bug <https://github.com/rocky/python-uncompyle6/blob/master/HOW-TO-REPORT-A-BUG.md>`_
|
||||||
* The HISTORY_ file.
|
* The HISTORY_ file.
|
||||||
* https://github.com/rocky/python-xdis : Cross Python version disassembler
|
* https://github.com/rocky/python-xdis : Cross Python version disassembler
|
||||||
@@ -226,7 +233,7 @@ See Also
|
|||||||
* https://github.com/rocky/python-uncompyle6/wiki : Wiki Documents which describe the code and aspects of it in more detail
|
* https://github.com/rocky/python-uncompyle6/wiki : Wiki Documents which describe the code and aspects of it in more detail
|
||||||
|
|
||||||
|
|
||||||
.. _trepan: https://pypi.python.org/pypi/trepan2
|
.. _trepan: https://pypi.python.org/pypi/trepan2g
|
||||||
.. _compiler: https://pypi.python.org/pypi/spark_parser
|
.. _compiler: https://pypi.python.org/pypi/spark_parser
|
||||||
.. _HISTORY: https://github.com/rocky/python-uncompyle6/blob/master/HISTORY.md
|
.. _HISTORY: https://github.com/rocky/python-uncompyle6/blob/master/HISTORY.md
|
||||||
.. _debuggers: https://pypi.python.org/pypi/trepan3k
|
.. _debuggers: https://pypi.python.org/pypi/trepan3k
|
||||||
|
@@ -58,7 +58,7 @@ entry_points = {
|
|||||||
]}
|
]}
|
||||||
ftp_url = None
|
ftp_url = None
|
||||||
install_requires = ['spark-parser >= 1.8.7, < 1.9.0',
|
install_requires = ['spark-parser >= 1.8.7, < 1.9.0',
|
||||||
'xdis >= 4.0.1, < 4.1.0']
|
'xdis >= 4.0.2, < 4.1.0']
|
||||||
|
|
||||||
license = 'GPL3'
|
license = 'GPL3'
|
||||||
mailing_list = 'python-debugger@googlegroups.com'
|
mailing_list = 'python-debugger@googlegroups.com'
|
||||||
|
@@ -1,5 +1,5 @@
|
|||||||
#!/bin/bash
|
#!/bin/bash
|
||||||
PYTHON_VERSION=3.6.5
|
PYTHON_VERSION=3.6.8
|
||||||
|
|
||||||
# FIXME put some of the below in a common routine
|
# FIXME put some of the below in a common routine
|
||||||
function finish {
|
function finish {
|
||||||
|
@@ -1,78 +0,0 @@
|
|||||||
import sys
|
|
||||||
from uncompyle6 import PYTHON3
|
|
||||||
if PYTHON3:
|
|
||||||
from io import StringIO
|
|
||||||
minint = -sys.maxsize-1
|
|
||||||
maxint = sys.maxsize
|
|
||||||
else:
|
|
||||||
from StringIO import StringIO
|
|
||||||
minint = -sys.maxint-1
|
|
||||||
maxint = sys.maxint
|
|
||||||
from uncompyle6.semantics.helper import print_docstring
|
|
||||||
|
|
||||||
class PrintFake:
|
|
||||||
def __init__(self):
|
|
||||||
self.pending_newlines = 0
|
|
||||||
self.f = StringIO()
|
|
||||||
|
|
||||||
def write(self, *data):
|
|
||||||
if (len(data) == 0) or (len(data) == 1 and data[0] == ''):
|
|
||||||
return
|
|
||||||
out = ''.join((str(j) for j in data))
|
|
||||||
n = 0
|
|
||||||
for i in out:
|
|
||||||
if i == '\n':
|
|
||||||
n += 1
|
|
||||||
if n == len(out):
|
|
||||||
self.pending_newlines = max(self.pending_newlines, n)
|
|
||||||
return
|
|
||||||
elif n:
|
|
||||||
self.pending_newlines = max(self.pending_newlines, n)
|
|
||||||
out = out[n:]
|
|
||||||
break
|
|
||||||
else:
|
|
||||||
break
|
|
||||||
|
|
||||||
if self.pending_newlines > 0:
|
|
||||||
self.f.write('\n'*self.pending_newlines)
|
|
||||||
self.pending_newlines = 0
|
|
||||||
|
|
||||||
for i in out[::-1]:
|
|
||||||
if i == '\n':
|
|
||||||
self.pending_newlines += 1
|
|
||||||
else:
|
|
||||||
break
|
|
||||||
|
|
||||||
if self.pending_newlines:
|
|
||||||
out = out[:-self.pending_newlines]
|
|
||||||
self.f.write(out)
|
|
||||||
def println(self, *data):
|
|
||||||
if data and not(len(data) == 1 and data[0] == ''):
|
|
||||||
self.write(*data)
|
|
||||||
self.pending_newlines = max(self.pending_newlines, 1)
|
|
||||||
return
|
|
||||||
pass
|
|
||||||
|
|
||||||
def test_docstring():
|
|
||||||
|
|
||||||
for doc, expect in (
|
|
||||||
("Now is the time",
|
|
||||||
' """Now is the time"""'),
|
|
||||||
("""
|
|
||||||
Now is the time
|
|
||||||
""",
|
|
||||||
''' """
|
|
||||||
Now is the time
|
|
||||||
"""''')
|
|
||||||
|
|
||||||
# (r'''func placeholder - ' and with ("""\nstring\n """)''',
|
|
||||||
# """ r'''func placeholder - ' and with (\"\"\"\nstring\n\"\"\")'''"""),
|
|
||||||
# (r"""func placeholder - ' and with ('''\nstring\n''') and \"\"\"\nstring\n\"\"\" """,
|
|
||||||
# """ r\"\"\"func placeholder - ' and with ('''\nstring\n''') and \"\"\"\nstring\n\"\"\" \"\"\"""")
|
|
||||||
):
|
|
||||||
|
|
||||||
o = PrintFake()
|
|
||||||
# print(doc)
|
|
||||||
# print(expect)
|
|
||||||
print_docstring(o, ' ', doc)
|
|
||||||
assert expect == o.f.getvalue()
|
|
@@ -88,7 +88,7 @@ def test_grammar():
|
|||||||
COME_FROM_EXCEPT_CLAUSE
|
COME_FROM_EXCEPT_CLAUSE
|
||||||
COME_FROM_LOOP COME_FROM_WITH
|
COME_FROM_LOOP COME_FROM_WITH
|
||||||
COME_FROM_FINALLY ELSE
|
COME_FROM_FINALLY ELSE
|
||||||
LOAD_GENEXPR LOAD_ASSERT LOAD_SETCOMP LOAD_DICTCOMP
|
LOAD_GENEXPR LOAD_ASSERT LOAD_SETCOMP LOAD_DICTCOMP LOAD_STR
|
||||||
LAMBDA_MARKER
|
LAMBDA_MARKER
|
||||||
RETURN_END_IF RETURN_END_IF_LAMBDA RETURN_VALUE_LAMBDA RETURN_LAST
|
RETURN_END_IF RETURN_END_IF_LAMBDA RETURN_VALUE_LAMBDA RETURN_LAST
|
||||||
""".split())
|
""".split())
|
||||||
|
@@ -1,3 +1,4 @@
|
|||||||
|
from uncompyle6 import PYTHON_VERSION
|
||||||
from uncompyle6.scanners.tok import Token
|
from uncompyle6.scanners.tok import Token
|
||||||
|
|
||||||
def test_token():
|
def test_token():
|
||||||
@@ -16,7 +17,7 @@ def test_token():
|
|||||||
# Make sure formatting of: LOAD_CONST False. We assume False is the 0th index
|
# Make sure formatting of: LOAD_CONST False. We assume False is the 0th index
|
||||||
# of co_consts.
|
# of co_consts.
|
||||||
t = Token('LOAD_CONST', offset=1, attr=False, pattr=False, has_arg=True)
|
t = Token('LOAD_CONST', offset=1, attr=False, pattr=False, has_arg=True)
|
||||||
expect = ' 1 LOAD_CONST 0 False'
|
expect = ' 1 LOAD_CONST False'
|
||||||
assert t.format() == expect
|
assert t.format() == expect
|
||||||
|
|
||||||
if __name__ == '__main__':
|
if __name__ == '__main__':
|
||||||
|
2
pytest/testdata/if-2.7.right
vendored
2
pytest/testdata/if-2.7.right
vendored
@@ -8,5 +8,5 @@
|
|||||||
9 STORE_NAME 2 'b'
|
9 STORE_NAME 2 'b'
|
||||||
12 JUMP_FORWARD 0 'to 15'
|
12 JUMP_FORWARD 0 'to 15'
|
||||||
15_0 COME_FROM 12 '12'
|
15_0 COME_FROM 12 '12'
|
||||||
15 LOAD_CONST 0 None
|
15 LOAD_CONST None
|
||||||
18 RETURN_VALUE
|
18 RETURN_VALUE
|
||||||
|
6
pytest/testdata/ifelse-2.7.right
vendored
6
pytest/testdata/ifelse-2.7.right
vendored
@@ -4,12 +4,12 @@
|
|||||||
3 0 LOAD_NAME 0 'True'
|
3 0 LOAD_NAME 0 'True'
|
||||||
3 POP_JUMP_IF_FALSE 15 'to 15'
|
3 POP_JUMP_IF_FALSE 15 'to 15'
|
||||||
|
|
||||||
4 6 LOAD_CONST 0 1
|
4 6 LOAD_CONST 1
|
||||||
9 STORE_NAME 1 'b'
|
9 STORE_NAME 1 'b'
|
||||||
12 JUMP_FORWARD 6 'to 21'
|
12 JUMP_FORWARD 6 'to 21'
|
||||||
|
|
||||||
6 15 LOAD_CONST 1 2
|
6 15 LOAD_CONST 2
|
||||||
18 STORE_NAME 2 'd'
|
18 STORE_NAME 2 'd'
|
||||||
21_0 COME_FROM 12 '12'
|
21_0 COME_FROM 12 '12'
|
||||||
21 LOAD_CONST 2 None
|
21 LOAD_CONST None
|
||||||
24 RETURN_VALUE
|
24 RETURN_VALUE
|
||||||
|
@@ -71,10 +71,10 @@ check-3.7: check-bytecode
|
|||||||
$(PYTHON) test_pythonlib.py --bytecode-3.7-run --verify-run
|
$(PYTHON) test_pythonlib.py --bytecode-3.7-run --verify-run
|
||||||
$(PYTHON) test_pythonlib.py --bytecode-3.7 --weak-verify $(COMPILE)
|
$(PYTHON) test_pythonlib.py --bytecode-3.7 --weak-verify $(COMPILE)
|
||||||
|
|
||||||
#: Run working tests from Python 3.8
|
# #: Run working tests from Python 3.8
|
||||||
check-3.8: check-bytecode
|
# check-3.8: check-bytecode
|
||||||
$(PYTHON) test_pythonlib.py --bytecode-3.8-run --verify-run
|
# $(PYTHON) test_pythonlib.py --bytecode-3.8-run --verify-run
|
||||||
$(PYTHON) test_pythonlib.py --bytecode-3.8 --weak-verify $(COMPILE)
|
# $(PYTHON) test_pythonlib.py --bytecode-3.8 --weak-verify $(COMPILE)
|
||||||
|
|
||||||
# FIXME
|
# FIXME
|
||||||
#: this is called when running under pypy3.5-5.8.0 or pypy2-5.6.0
|
#: this is called when running under pypy3.5-5.8.0 or pypy2-5.6.0
|
||||||
@@ -98,7 +98,7 @@ check-bytecode-3:
|
|||||||
$(PYTHON) test_pythonlib.py --bytecode-3.0 \
|
$(PYTHON) test_pythonlib.py --bytecode-3.0 \
|
||||||
--bytecode-3.1 --bytecode-3.2 --bytecode-3.3 \
|
--bytecode-3.1 --bytecode-3.2 --bytecode-3.3 \
|
||||||
--bytecode-3.4 --bytecode-3.5 --bytecode-3.6 \
|
--bytecode-3.4 --bytecode-3.5 --bytecode-3.6 \
|
||||||
--bytecode-3.7 --bytecode-3.8 \
|
--bytecode-3.7 \
|
||||||
--bytecode-pypy3.2
|
--bytecode-pypy3.2
|
||||||
|
|
||||||
#: Check deparsing on selected bytecode 3.x
|
#: Check deparsing on selected bytecode 3.x
|
||||||
@@ -177,7 +177,7 @@ grammar-coverage-2.6:
|
|||||||
grammar-coverage-2.7:
|
grammar-coverage-2.7:
|
||||||
-rm $(COVER_DIR)/spark-grammar-2.7.cover || true
|
-rm $(COVER_DIR)/spark-grammar-2.7.cover || true
|
||||||
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-2.7.cover $(PYTHON) test_pythonlib.py --bytecode-2.7
|
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-2.7.cover $(PYTHON) test_pythonlib.py --bytecode-2.7
|
||||||
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-2.7.cover $(PYTHON) test_pyenvlib.py --2.7.14 --max=600
|
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-2.7.cover $(PYTHON) test_pyenvlib.py --2.7.16 --max=600
|
||||||
|
|
||||||
#: Get grammar coverage for Python 3.0
|
#: Get grammar coverage for Python 3.0
|
||||||
grammar-coverage-3.0:
|
grammar-coverage-3.0:
|
||||||
@@ -220,7 +220,12 @@ grammar-coverage-3.5:
|
|||||||
grammar-coverage-3.6:
|
grammar-coverage-3.6:
|
||||||
rm $(COVER_DIR)/spark-grammar-3.6.cover || /bin/true
|
rm $(COVER_DIR)/spark-grammar-3.6.cover || /bin/true
|
||||||
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.6.cover $(PYTHON) test_pythonlib.py --bytecode-3.6
|
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.6.cover $(PYTHON) test_pythonlib.py --bytecode-3.6
|
||||||
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.6.cover $(PYTHON) test_pyenvlib.py --3.6.4 --max=280
|
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.6.cover $(PYTHON) test_pyenvlib.py --3.6.8 --max=280
|
||||||
|
|
||||||
|
#: Get grammar coverage for Python 3.7
|
||||||
|
grammar-coverage-3.7:
|
||||||
|
rm $(COVER_DIR)/spark-grammar-3.7.cover || /bin/true
|
||||||
|
SPARK_PARSER_COVERAGE=$(COVER_DIR)/spark-grammar-3.7.cover $(PYTHON) test_pyenvlib.py --3.7.3 --max=500
|
||||||
|
|
||||||
#: Check deparsing Python 2.6
|
#: Check deparsing Python 2.6
|
||||||
check-bytecode-2.6:
|
check-bytecode-2.6:
|
||||||
@@ -269,12 +274,13 @@ check-bytecode-3.6:
|
|||||||
|
|
||||||
#: Check deparsing Python 3.7
|
#: Check deparsing Python 3.7
|
||||||
check-bytecode-3.7:
|
check-bytecode-3.7:
|
||||||
|
$(PYTHON) test_pythonlib.py --bytecode-3.7-run --verify-run
|
||||||
$(PYTHON) test_pythonlib.py --bytecode-3.7 --weak-verify
|
$(PYTHON) test_pythonlib.py --bytecode-3.7 --weak-verify
|
||||||
|
|
||||||
#: Check deparsing Python 3.8
|
# #: Check deparsing Python 3.8
|
||||||
check-bytecode-3.8:
|
# check-bytecode-3.8:
|
||||||
$(PYTHON) test_pythonlib.py --bytecode-3.8-run --verify-run
|
# $(PYTHON) test_pythonlib.py --bytecode-3.8-run --verify-run
|
||||||
$(PYTHON) test_pythonlib.py --bytecode-3.8 --weak-verify
|
# $(PYTHON) test_pythonlib.py --bytecode-3.8 --weak-verify
|
||||||
|
|
||||||
#: short tests for bytecodes only for this version of Python
|
#: short tests for bytecodes only for this version of Python
|
||||||
check-native-short:
|
check-native-short:
|
||||||
|
BIN
test/bytecode_2.4_run/00_docstring.pyc
Normal file
BIN
test/bytecode_2.4_run/00_docstring.pyc
Normal file
Binary file not shown.
Binary file not shown.
BIN
test/bytecode_2.6_run/04_ifelse_parens.pyc
Normal file
BIN
test/bytecode_2.6_run/04_ifelse_parens.pyc
Normal file
Binary file not shown.
Binary file not shown.
Binary file not shown.
BIN
test/bytecode_2.7_run/00_docstring.pyc
Normal file
BIN
test/bytecode_2.7_run/00_docstring.pyc
Normal file
Binary file not shown.
Binary file not shown.
BIN
test/bytecode_2.7_run/04_ifelse_parens.pyc
Normal file
BIN
test/bytecode_2.7_run/04_ifelse_parens.pyc
Normal file
Binary file not shown.
Binary file not shown.
BIN
test/bytecode_3.3_run/04_def_annotate.pyc
Normal file
BIN
test/bytecode_3.3_run/04_def_annotate.pyc
Normal file
Binary file not shown.
Binary file not shown.
Binary file not shown.
BIN
test/bytecode_3.5/02_async.pyc
Normal file
BIN
test/bytecode_3.5/02_async.pyc
Normal file
Binary file not shown.
Binary file not shown.
BIN
test/bytecode_3.5_run/04_def_annotate.pyc
Normal file
BIN
test/bytecode_3.5_run/04_def_annotate.pyc
Normal file
Binary file not shown.
Binary file not shown.
BIN
test/bytecode_3.6/05_ann_mopdule2.pyc
Normal file
BIN
test/bytecode_3.6/05_ann_mopdule2.pyc
Normal file
Binary file not shown.
BIN
test/bytecode_3.6_run/00_docstring.pyc
Normal file
BIN
test/bytecode_3.6_run/00_docstring.pyc
Normal file
Binary file not shown.
Binary file not shown.
BIN
test/bytecode_3.6_run/02_call_ex_kw.pyc
Normal file
BIN
test/bytecode_3.6_run/02_call_ex_kw.pyc
Normal file
Binary file not shown.
BIN
test/bytecode_3.6_run/04_def_annotate.pyc
Normal file
BIN
test/bytecode_3.6_run/04_def_annotate.pyc
Normal file
Binary file not shown.
Binary file not shown.
Binary file not shown.
BIN
test/bytecode_3.7/01_chained_compare.pyc
Normal file
BIN
test/bytecode_3.7/01_chained_compare.pyc
Normal file
Binary file not shown.
BIN
test/bytecode_3.7/02_async.pyc
Normal file
BIN
test/bytecode_3.7/02_async.pyc
Normal file
Binary file not shown.
Binary file not shown.
BIN
test/bytecode_3.7_run/00_docstring.pyc
Normal file
BIN
test/bytecode_3.7_run/00_docstring.pyc
Normal file
Binary file not shown.
BIN
test/bytecode_3.7_run/01_and_not_else.pyc
Normal file
BIN
test/bytecode_3.7_run/01_and_not_else.pyc
Normal file
Binary file not shown.
Binary file not shown.
BIN
test/bytecode_3.7_run/02_call_ex_kw.pyc
Normal file
BIN
test/bytecode_3.7_run/02_call_ex_kw.pyc
Normal file
Binary file not shown.
BIN
test/bytecode_3.7_run/04_def_annotate.pyc
Normal file
BIN
test/bytecode_3.7_run/04_def_annotate.pyc
Normal file
Binary file not shown.
@@ -1,7 +1,7 @@
|
|||||||
#!/bin/bash
|
#!/bin/bash
|
||||||
# Remake Python grammar statistics
|
# Remake Python grammar statistics
|
||||||
|
|
||||||
typeset -A ALL_VERS=([2.4]=2.4.6 [2.5]=2.5.6 [2.6]=2.6.9 [2.7]=2.7.14 [3.2]=3.2.6 [3.3]=3.3.6 [3.4]=3.4.8 [3.5]=3.5.5 [3.6]=3.6.4)
|
typeset -A ALL_VERS=([2.4]=2.4.6 [2.5]=2.5.6 [2.6]=2.6.9 [2.7]=2.7.16 [3.2]=3.2.6 [3.3]=3.3.6 [3.4]=3.4.8 [3.5]=3.5.6 [3.6]=3.6.8, [3.7]=3.7.3)
|
||||||
|
|
||||||
if (( $# == 0 )); then
|
if (( $# == 0 )); then
|
||||||
echo 1>&2 "usage: $0 two-digit-version"
|
echo 1>&2 "usage: $0 two-digit-version"
|
||||||
|
@@ -22,7 +22,7 @@ assert i[0]('a') == True
|
|||||||
assert i[0]('A') == False
|
assert i[0]('A') == False
|
||||||
|
|
||||||
# Issue #170. Bug is needing an "conditional_not_lambda" grammar rule
|
# Issue #170. Bug is needing an "conditional_not_lambda" grammar rule
|
||||||
# in addition the the "conditional_lambda" rule
|
# in addition the the "if_expr_lambda" rule
|
||||||
j = lambda a: False if not a else True
|
j = lambda a: False if not a else True
|
||||||
assert j(True) == True
|
assert j(True) == True
|
||||||
assert j(False) == False
|
assert j(False) == False
|
||||||
|
@@ -2,3 +2,10 @@
|
|||||||
# This is RUNNABLE!
|
# This is RUNNABLE!
|
||||||
assert [False, True, True, True, True] == [False if not a else True for a in range(5)]
|
assert [False, True, True, True, True] == [False if not a else True for a in range(5)]
|
||||||
assert [True, False, False, False, False] == [False if a else True for a in range(5)]
|
assert [True, False, False, False, False] == [False if a else True for a in range(5)]
|
||||||
|
|
||||||
|
# From bug #225
|
||||||
|
m = ['hi', 'he', 'ih', 'who', 'ho']
|
||||||
|
ms = {}
|
||||||
|
for f in (f for f in m if f.startswith('h')):
|
||||||
|
ms[f] = 5
|
||||||
|
assert ms == {'hi': 5, 'he': 5, 'ho': 5}
|
||||||
|
@@ -8,7 +8,7 @@ list(x for x in range(10) if x % 2 if x % 3)
|
|||||||
|
|
||||||
# expresion which evaluates True unconditionally,
|
# expresion which evaluates True unconditionally,
|
||||||
# but leave dead code or junk around that we have to match on.
|
# but leave dead code or junk around that we have to match on.
|
||||||
# Tests "conditional_true" rule
|
# Tests "if_expr_true" rule
|
||||||
5 if 1 else 2
|
5 if 1 else 2
|
||||||
|
|
||||||
0 or max(5, 3) if 0 else 3
|
0 or max(5, 3) if 0 else 3
|
||||||
|
9
test/simple_source/bug26/04_ifelse_parens.py
Normal file
9
test/simple_source/bug26/04_ifelse_parens.py
Normal file
@@ -0,0 +1,9 @@
|
|||||||
|
# From 3.7.3 dataclasses.py
|
||||||
|
# Bug was handling precedence. Need parenthesis before IfExp.
|
||||||
|
#
|
||||||
|
# RUNNABLE!
|
||||||
|
def _hash_add(fields):
|
||||||
|
flds = [f for f in fields if (4 if f is None else f)]
|
||||||
|
return flds
|
||||||
|
|
||||||
|
assert _hash_add([None, True, False, 3]) == [None, True, 3]
|
@@ -1,6 +1,6 @@
|
|||||||
# Bug found in 2.7 test_itertools.py
|
# Bug found in 2.7 test_itertools.py
|
||||||
# Bug was erroneously using reduction to unconditional_true
|
# Bug was erroneously using reduction to if_expr_true
|
||||||
# A proper fix would be to use unconditional_true only when we
|
# A proper fix would be to use if_expr_true only when we
|
||||||
# can determine there is or was dead code.
|
# can determine there is or was dead code.
|
||||||
from itertools import izip_longest
|
from itertools import izip_longest
|
||||||
for args in [['abc', range(6)]]:
|
for args in [['abc', range(6)]]:
|
||||||
|
@@ -1,13 +1,42 @@
|
|||||||
# Python 3 annotations
|
# Python 3 positional, kwonly, varargs, and annotations. Ick.
|
||||||
|
|
||||||
def foo(a, b: 'annotating b', c: int) -> float:
|
# RUNNABLE!
|
||||||
print(a + b + c)
|
def test1(args_1, c: int, w=4, *varargs: int, **kwargs: 'annotating kwargs') -> tuple:
|
||||||
|
return (args_1, c, w, kwargs)
|
||||||
|
|
||||||
|
def test2(args_1, args_2, c: int, w=4, *varargs: int, **kwargs: 'annotating kwargs'):
|
||||||
|
return (args_1, args_2, c, w, varargs, kwargs)
|
||||||
|
|
||||||
|
def test3(c: int, w=4, *varargs: int, **kwargs: 'annotating kwargs') -> float:
|
||||||
|
return 5.4
|
||||||
|
|
||||||
|
def test4(a: float, c: int, *varargs: int, **kwargs: 'annotating kwargs') -> float:
|
||||||
|
return 5.4
|
||||||
|
|
||||||
|
def test5(a: float, c: int = 5, *varargs: int, **kwargs: 'annotating kwargs') -> float:
|
||||||
|
return 5.4
|
||||||
|
|
||||||
|
def test6(a: float, c: int, test=None):
|
||||||
|
return (a, c, test)
|
||||||
|
|
||||||
|
def test7(*varargs: int, **kwargs):
|
||||||
|
return (varargs, kwargs)
|
||||||
|
|
||||||
|
def test8(x=55, *varargs: int, **kwargs) -> list:
|
||||||
|
return (x, varargs, kwargs)
|
||||||
|
|
||||||
|
def test9(arg_1=55, *varargs: int, y=5, **kwargs):
|
||||||
|
return x, varargs, int, y, kwargs
|
||||||
|
|
||||||
|
def test10(args_1, b: 'annotating b', c: int) -> float:
|
||||||
|
return 5.4
|
||||||
|
|
||||||
|
class IOBase:
|
||||||
|
pass
|
||||||
|
|
||||||
# Python 3.1 _pyio.py uses the -> "IOBase" annotation
|
# Python 3.1 _pyio.py uses the -> "IOBase" annotation
|
||||||
def open(file, mode = "r", buffering = None,
|
def o(f, mode = "r", buffering = None) -> "IOBase":
|
||||||
encoding = None, errors = None,
|
return (f, mode, buffering)
|
||||||
newline = None, closefd = True) -> "IOBase":
|
|
||||||
return text
|
|
||||||
|
|
||||||
def foo1(x: 'an argument that defaults to 5' = 5):
|
def foo1(x: 'an argument that defaults to 5' = 5):
|
||||||
print(x)
|
print(x)
|
||||||
@@ -18,13 +47,77 @@ def div(a: dict(type=float, help='the dividend'),
|
|||||||
"""Divide a by b"""
|
"""Divide a by b"""
|
||||||
return a / b
|
return a / b
|
||||||
|
|
||||||
class TestSignatureObject(unittest.TestCase):
|
class TestSignatureObject1():
|
||||||
def test_signature_on_wkwonly(self):
|
def test_signature_on_wkwonly(self):
|
||||||
def test(*, a:float, b:str) -> int:
|
def test(*, a:float, b:str, c:str = 'test', **kwargs: int) -> int:
|
||||||
pass
|
pass
|
||||||
|
|
||||||
class SupportsInt(_Protocol):
|
class TestSignatureObject2():
|
||||||
|
def test_signature_on_wkwonly(self):
|
||||||
|
def test(*, c='test', a:float, b:str="S", **kwargs: int) -> int:
|
||||||
|
pass
|
||||||
|
|
||||||
|
class TestSignatureObject3():
|
||||||
|
def test_signature_on_wkwonly(self):
|
||||||
|
def test(*, c='test', a:float, kwargs:str="S", **b: int) -> int:
|
||||||
|
pass
|
||||||
|
|
||||||
|
class TestSignatureObject4():
|
||||||
|
def test_signature_on_wkwonly(self):
|
||||||
|
def test(x=55, *args, c:str='test', a:float, kwargs:str="S", **b: int) -> int:
|
||||||
|
pass
|
||||||
|
|
||||||
|
class TestSignatureObject5():
|
||||||
|
def test_signature_on_wkwonly(self):
|
||||||
|
def test(x=55, *args: int, c='test', a:float, kwargs:str="S", **b: int) -> int:
|
||||||
|
pass
|
||||||
|
|
||||||
|
class TestSignatureObject5():
|
||||||
|
def test_signature_on_wkwonly(self):
|
||||||
|
def test(x:int=55, *args: (int, str), c='test', a:float, kwargs:str="S", **b: int) -> int:
|
||||||
|
pass
|
||||||
|
|
||||||
|
class TestSignatureObject7():
|
||||||
|
def test_signature_on_wkwonly(self):
|
||||||
|
def test(c='test', kwargs:str="S", **b: int) -> int:
|
||||||
|
pass
|
||||||
|
|
||||||
|
class TestSignatureObject8():
|
||||||
|
def test_signature_on_wkwonly(self):
|
||||||
|
def test(**b: int) -> int:
|
||||||
|
pass
|
||||||
|
|
||||||
|
class TestSignatureObject9():
|
||||||
|
def test_signature_on_wkwonly(self):
|
||||||
|
def test(a, **b: int) -> int:
|
||||||
|
pass
|
||||||
|
|
||||||
|
class SupportsInt():
|
||||||
|
|
||||||
@abstractmethod
|
|
||||||
def __int__(self) -> int:
|
def __int__(self) -> int:
|
||||||
pass
|
pass
|
||||||
|
|
||||||
|
def ann1(args_1, b: 'annotating b', c: int, *varargs: str) -> float:
|
||||||
|
assert ann1.__annotations__['b'] == 'annotating b'
|
||||||
|
assert ann1.__annotations__['c'] == int
|
||||||
|
assert ann1.__annotations__['varargs'] == str
|
||||||
|
assert ann1.__annotations__['return'] == float
|
||||||
|
|
||||||
|
def ann2(args_1, b: int = 5, **kwargs: float) -> float:
|
||||||
|
assert ann2.__annotations__['b'] == int
|
||||||
|
assert ann2.__annotations__['kwargs'] == float
|
||||||
|
assert ann2.__annotations__['return'] == float
|
||||||
|
assert b == 5
|
||||||
|
|
||||||
|
|
||||||
|
assert test1(1, 5) == (1, 5, 4, {})
|
||||||
|
assert test1(1, 5, 6, foo='bar') == (1, 5, 6, {'foo': 'bar'})
|
||||||
|
assert test2(2, 3, 4) == (2, 3, 4, 4, (), {})
|
||||||
|
assert test3(10, foo='bar') == 5.4
|
||||||
|
assert test4(9.5, 7, 6, 4, bar='baz') == 5.4
|
||||||
|
### FIXME: fill in...
|
||||||
|
assert test6(1.2, 3) == (1.2, 3, None)
|
||||||
|
assert test6(2.3, 4, 5) == (2.3, 4, 5)
|
||||||
|
|
||||||
|
ann1(1, 'test', 5)
|
||||||
|
ann2(1)
|
||||||
|
17
test/simple_source/bug35/02_async.py
Normal file
17
test/simple_source/bug35/02_async.py
Normal file
@@ -0,0 +1,17 @@
|
|||||||
|
# From 3.7.3 asyncio/base_events.py
|
||||||
|
# We had (still have) screwy logic. Python 3.5 code node detection was off too.
|
||||||
|
|
||||||
|
async def create_connection(self):
|
||||||
|
infos = await self._ensure_resolved()
|
||||||
|
|
||||||
|
laddr_infos = await self._ensure_resolved()
|
||||||
|
for family in infos:
|
||||||
|
for laddr in laddr_infos:
|
||||||
|
family = 1
|
||||||
|
else:
|
||||||
|
continue
|
||||||
|
await self.sock_connect()
|
||||||
|
else:
|
||||||
|
raise OSError('Multiple exceptions: {}' for exc in family)
|
||||||
|
|
||||||
|
return
|
@@ -4,8 +4,8 @@
|
|||||||
var1 = 'x'
|
var1 = 'x'
|
||||||
var2 = 'y'
|
var2 = 'y'
|
||||||
abc = 'def'
|
abc = 'def'
|
||||||
assert (f'interpolate {var1} strings {var2!r} {var2!s} py36' ==
|
assert (f"interpolate {var1} strings {var2!r} {var2!s} 'py36" ==
|
||||||
"interpolate x strings 'y' y py36")
|
"interpolate x strings 'y' y 'py36")
|
||||||
assert 'def0' == f'{abc}0'
|
assert 'def0' == f'{abc}0'
|
||||||
assert 'defdef' == f'{abc}{abc!s}'
|
assert 'defdef' == f'{abc}{abc!s}'
|
||||||
|
|
||||||
@@ -38,4 +38,31 @@ filename = '.'
|
|||||||
source = 'foo'
|
source = 'foo'
|
||||||
source = (f"__file__ = r'''{os.path.abspath(filename)}'''\n"
|
source = (f"__file__ = r'''{os.path.abspath(filename)}'''\n"
|
||||||
+ source + "\ndel __file__")
|
+ source + "\ndel __file__")
|
||||||
print(source)
|
|
||||||
|
# Note how { and } are *not* escaped here
|
||||||
|
f = 'one'
|
||||||
|
name = 'two'
|
||||||
|
assert(f"{f}{'{{name}}'} {f}{'{name}'}") == 'one{{name}} one{name}'
|
||||||
|
|
||||||
|
# From 3.7.3 dataclasses.py
|
||||||
|
log_rounds = 5
|
||||||
|
assert "05$" == f'{log_rounds:02d}$'
|
||||||
|
|
||||||
|
|
||||||
|
def testit(a, b, l):
|
||||||
|
# print(l)
|
||||||
|
return l
|
||||||
|
|
||||||
|
# The call below shows the need for BUILD_STRING to count expr arguments.
|
||||||
|
# Also note that we use {{ }} to escape braces in contrast to the example
|
||||||
|
# above.
|
||||||
|
def _repr_fn(fields):
|
||||||
|
return testit('__repr__',
|
||||||
|
('self',),
|
||||||
|
['return xx + f"(' +
|
||||||
|
', '.join([f"{f}={{self.{f}!r}}"
|
||||||
|
for f in fields]) +
|
||||||
|
')"'])
|
||||||
|
|
||||||
|
fields = ['a', 'b', 'c']
|
||||||
|
assert _repr_fn(fields) == ['return xx + f"(a={self.a!r}, b={self.b!r}, c={self.c!r})"']
|
||||||
|
47
test/simple_source/bug36/02_call_ex_kw.py
Normal file
47
test/simple_source/bug36/02_call_ex_kw.py
Normal file
@@ -0,0 +1,47 @@
|
|||||||
|
# From #227
|
||||||
|
# Bug was not handling call_ex_kw correctly
|
||||||
|
# This appears in
|
||||||
|
# showparams(c, test="A", **extra_args)
|
||||||
|
# below
|
||||||
|
|
||||||
|
def showparams(c, test, **extra_args):
|
||||||
|
return {'c': c, **extra_args, 'test': test}
|
||||||
|
|
||||||
|
def f(c, **extra_args):
|
||||||
|
return showparams(c, test="A", **extra_args)
|
||||||
|
|
||||||
|
def f1(c, d, **extra_args):
|
||||||
|
return showparams(c, test="B", **extra_args)
|
||||||
|
|
||||||
|
def f2(**extra_args):
|
||||||
|
return showparams(1, test="C", **extra_args)
|
||||||
|
|
||||||
|
def f3(c, *args, **extra_args):
|
||||||
|
return showparams(c, *args, **extra_args)
|
||||||
|
|
||||||
|
assert f(1, a=2, b=3) == {'c': 1, 'a': 2, 'b': 3, 'test': 'A'}
|
||||||
|
|
||||||
|
a = {'param1': 2}
|
||||||
|
assert f1('2', '{\'test\': "4"}', test2='a', **a) \
|
||||||
|
== {'c': '2', 'test2': 'a', 'param1': 2, 'test': 'B'}
|
||||||
|
assert f1(2, '"3"', test2='a', **a) \
|
||||||
|
== {'c': 2, 'test2': 'a', 'param1': 2, 'test': 'B'}
|
||||||
|
assert f1(False, '"3"', test2='a', **a) \
|
||||||
|
== {'c': False, 'test2': 'a', 'param1': 2, 'test': 'B'}
|
||||||
|
assert f(2, test2='A', **a) \
|
||||||
|
== {'c': 2, 'test2': 'A', 'param1': 2, 'test': 'A'}
|
||||||
|
assert f(str(2) + str(1), test2='a', **a) \
|
||||||
|
== {'c': '21', 'test2': 'a', 'param1': 2, 'test': 'A'}
|
||||||
|
assert f1((a.get('a'), a.get('b')), a, test3='A', **a) \
|
||||||
|
== {'c': (None, None), 'test3': 'A', 'param1': 2, 'test': 'B'}
|
||||||
|
|
||||||
|
b = {'b1': 1, 'b2': 2}
|
||||||
|
assert f2(**a, **b) == \
|
||||||
|
{'c': 1, 'param1': 2, 'b1': 1, 'b2': 2, 'test': 'C'}
|
||||||
|
|
||||||
|
c = (2,)
|
||||||
|
d = (2, 3)
|
||||||
|
assert f(2, **a) == {'c': 2, 'param1': 2, 'test': 'A'}
|
||||||
|
assert f3(2, *c, **a) == {'c': 2, 'param1': 2, 'test': 2}
|
||||||
|
assert f3(*d, **a) == {'c': 2, 'param1': 2, 'test': 3}
|
||||||
|
|
37
test/simple_source/bug36/05_ann_mopdule2.py
Normal file
37
test/simple_source/bug36/05_ann_mopdule2.py
Normal file
@@ -0,0 +1,37 @@
|
|||||||
|
# This is from Python 3.6's test directory.
|
||||||
|
"""
|
||||||
|
Some correct syntax for variable annotation here.
|
||||||
|
More examples are in test_grammar and test_parser.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from typing import no_type_check, ClassVar
|
||||||
|
|
||||||
|
i: int = 1
|
||||||
|
j: int
|
||||||
|
x: float = i/10
|
||||||
|
|
||||||
|
def f():
|
||||||
|
class C: ...
|
||||||
|
return C()
|
||||||
|
|
||||||
|
f().new_attr: object = object()
|
||||||
|
|
||||||
|
class C:
|
||||||
|
def __init__(self, x: int) -> None:
|
||||||
|
self.x = x
|
||||||
|
|
||||||
|
c = C(5)
|
||||||
|
c.new_attr: int = 10
|
||||||
|
|
||||||
|
__annotations__ = {}
|
||||||
|
|
||||||
|
|
||||||
|
@no_type_check
|
||||||
|
class NTC:
|
||||||
|
def meth(self, param: complex) -> None:
|
||||||
|
...
|
||||||
|
|
||||||
|
class CV:
|
||||||
|
var: ClassVar['CV']
|
||||||
|
|
||||||
|
CV.var = CV()
|
16
test/simple_source/bug37/01_and_not_else.py
Normal file
16
test/simple_source/bug37/01_and_not_else.py
Normal file
@@ -0,0 +1,16 @@
|
|||||||
|
# From 3.7.3 base64.py
|
||||||
|
# Bug was handling "and not" in an
|
||||||
|
# if/else in the presence of better Python bytecode generatation
|
||||||
|
|
||||||
|
# RUNNABLE!
|
||||||
|
def foo(foldnuls, word):
|
||||||
|
x = 5 if foldnuls and not word else 6
|
||||||
|
return x
|
||||||
|
|
||||||
|
for expect, foldnuls, word in (
|
||||||
|
(6, True, True),
|
||||||
|
(5, True, False),
|
||||||
|
(6, False, True),
|
||||||
|
(6, False, False)
|
||||||
|
):
|
||||||
|
assert foo(foldnuls, word) == expect
|
@@ -11,9 +11,16 @@ def chained_compare_b(a, obj):
|
|||||||
if -0x80000000 <= obj <= 0x7fffffff:
|
if -0x80000000 <= obj <= 0x7fffffff:
|
||||||
return 5
|
return 5
|
||||||
|
|
||||||
|
def chained_compare_c(a, d):
|
||||||
|
for i in len(d):
|
||||||
|
if a == d[i] != 2:
|
||||||
|
return 5
|
||||||
|
|
||||||
chained_compare_a(3)
|
chained_compare_a(3)
|
||||||
try:
|
try:
|
||||||
chained_compare_a(8)
|
chained_compare_a(8)
|
||||||
except ValueError:
|
except ValueError:
|
||||||
pass
|
pass
|
||||||
chained_compare_b(True, 0x0)
|
chained_compare_b(True, 0x0)
|
||||||
|
|
||||||
|
chained_compare_c(3, [3])
|
||||||
|
@@ -1,10 +1,55 @@
|
|||||||
|
# -*- coding: utf-8 -*-
|
||||||
# uncompyle2 bug was not escaping """ properly
|
# uncompyle2 bug was not escaping """ properly
|
||||||
r'''func placeholder - with ("""\nstring\n""")'''
|
|
||||||
def foo():
|
|
||||||
r'''func placeholder - ' and with ("""\nstring\n""")'''
|
|
||||||
|
|
||||||
def bar():
|
# RUNNABLE!
|
||||||
|
r'''func placeholder - with ("""\nstring\n""")'''
|
||||||
|
|
||||||
|
def dq0():
|
||||||
|
assert __doc__ == r'''func placeholder - with ("""\nstring\n""")'''
|
||||||
|
|
||||||
|
def dq1():
|
||||||
|
"""assert that dedent() has no effect on 'text'"""
|
||||||
|
assert dq1.__doc__ == """assert that dedent() has no effect on 'text'"""
|
||||||
|
|
||||||
|
def dq2():
|
||||||
|
'''assert that dedent() has no effect on 'text\''''
|
||||||
|
assert dq1.__doc__ == '''assert that dedent() has no effect on 'text\''''
|
||||||
|
|
||||||
|
def dq3():
|
||||||
|
"""assert that dedent() has no effect on 'text\""""
|
||||||
|
assert dq3.__doc__ == """assert that dedent() has no effect on 'text\""""
|
||||||
|
|
||||||
|
def dq4():
|
||||||
|
"""assert that dedent() has no effect on 'text'"""
|
||||||
|
assert dq4.__doc__ == """assert that dedent() has no effect on 'text'"""
|
||||||
|
|
||||||
|
def dq5():
|
||||||
|
r'''func placeholder - ' and with ("""\nstring\n""")'''
|
||||||
|
assert dq5.__doc__ == r'''func placeholder - ' and with ("""\nstring\n""")'''
|
||||||
|
|
||||||
|
def dq6():
|
||||||
r"""func placeholder - ' and with ('''\nstring\n''') and \"\"\"\nstring\n\"\"\" """
|
r"""func placeholder - ' and with ('''\nstring\n''') and \"\"\"\nstring\n\"\"\" """
|
||||||
|
assert dq6.__doc__ == r"""func placeholder - ' and with ('''\nstring\n''') and \"\"\"\nstring\n\"\"\" """
|
||||||
|
|
||||||
|
def dq7():
|
||||||
|
u""" <----- SEE 'u' HERE
|
||||||
|
>>> mylen(u"áéíóú")
|
||||||
|
5
|
||||||
|
"""
|
||||||
|
assert dq7.__doc__ == u""" <----- SEE 'u' HERE
|
||||||
|
>>> mylen(u"áéíóú")
|
||||||
|
5
|
||||||
|
"""
|
||||||
|
|
||||||
|
def dq8():
|
||||||
|
u""" <----- SEE 'u' HERE
|
||||||
|
>>> mylen(u"تست")
|
||||||
|
5
|
||||||
|
"""
|
||||||
|
assert dq8.__doc__ == u""" <----- SEE 'u' HERE
|
||||||
|
>>> mylen(u"تست")
|
||||||
|
5
|
||||||
|
"""
|
||||||
|
|
||||||
def baz():
|
def baz():
|
||||||
"""
|
"""
|
||||||
@@ -20,3 +65,28 @@ def baz():
|
|||||||
>>> t.rundict(m1.__dict__, 'rundict_test_pvt') # None are skipped.
|
>>> t.rundict(m1.__dict__, 'rundict_test_pvt') # None are skipped.
|
||||||
TestResults(failed=0, attempted=8)
|
TestResults(failed=0, attempted=8)
|
||||||
"""
|
"""
|
||||||
|
assert baz.__doc__ == \
|
||||||
|
"""
|
||||||
|
... '''>>> assert 1 == 1
|
||||||
|
... '''
|
||||||
|
... \"""
|
||||||
|
>>> exec test_data in m1.__dict__
|
||||||
|
>>> exec test_data in m2.__dict__
|
||||||
|
>>> m1.__dict__.update({"f2": m2._f, "g2": m2.g, "h2": m2.H})
|
||||||
|
|
||||||
|
Tests that objects outside m1 are excluded:
|
||||||
|
\"""
|
||||||
|
>>> t.rundict(m1.__dict__, 'rundict_test_pvt') # None are skipped.
|
||||||
|
TestResults(failed=0, attempted=8)
|
||||||
|
"""
|
||||||
|
|
||||||
|
dq0()
|
||||||
|
dq1()
|
||||||
|
dq2()
|
||||||
|
dq3()
|
||||||
|
dq4()
|
||||||
|
dq5()
|
||||||
|
dq6()
|
||||||
|
dq7()
|
||||||
|
dq8()
|
||||||
|
baz()
|
||||||
|
@@ -7,3 +7,4 @@ import http.client as httpclient
|
|||||||
if len(__file__) == 0:
|
if len(__file__) == 0:
|
||||||
# a.b.c should force consecutive LOAD_ATTRs
|
# a.b.c should force consecutive LOAD_ATTRs
|
||||||
import a.b.c as d
|
import a.b.c as d
|
||||||
|
import stuff0.stuff1.stuff2.stuff3 as stuff3
|
||||||
|
@@ -40,6 +40,8 @@ Options:
|
|||||||
--weak-verify compile generated source
|
--weak-verify compile generated source
|
||||||
--linemaps generated line number correspondencies between byte-code
|
--linemaps generated line number correspondencies between byte-code
|
||||||
and generated source output
|
and generated source output
|
||||||
|
--encoding <encoding>
|
||||||
|
use <encoding> in generated source according to pep-0263
|
||||||
--help show this message
|
--help show this message
|
||||||
|
|
||||||
Debugging Options:
|
Debugging Options:
|
||||||
@@ -80,14 +82,15 @@ def main_bin():
|
|||||||
timestampfmt = "# %Y.%m.%d %H:%M:%S %Z"
|
timestampfmt = "# %Y.%m.%d %H:%M:%S %Z"
|
||||||
|
|
||||||
try:
|
try:
|
||||||
opts, pyc_paths = getopt.getopt(sys.argv[1:], 'hac:gtdrVo:p:',
|
opts, pyc_paths = getopt.getopt(sys.argv[1:], 'hac:gtTdrVo:p:',
|
||||||
'help asm compile= grammar linemaps recurse '
|
'help asm compile= grammar linemaps recurse '
|
||||||
'timestamp tree tree+ '
|
'timestamp tree tree+ '
|
||||||
'fragments verify verify-run version '
|
'fragments verify verify-run version '
|
||||||
'weak-verify '
|
'weak-verify '
|
||||||
'showgrammar'.split(' '))
|
'showgrammar encoding='.split(' '))
|
||||||
except getopt.GetoptError(e):
|
except getopt.GetoptError(e):
|
||||||
sys.stderr.write('%s: %s\n' % (os.path.basename(sys.argv[0]), e))
|
sys.stderr.write('%s: %s\n' %
|
||||||
|
(os.path.basename(sys.argv[0]), e))
|
||||||
sys.exit(-1)
|
sys.exit(-1)
|
||||||
|
|
||||||
options = {}
|
options = {}
|
||||||
@@ -114,7 +117,7 @@ def main_bin():
|
|||||||
elif opt in ('--tree', '-t'):
|
elif opt in ('--tree', '-t'):
|
||||||
options['showast'] = True
|
options['showast'] = True
|
||||||
options['do_verify'] = None
|
options['do_verify'] = None
|
||||||
elif opt in ('--tree+',):
|
elif opt in ('--tree+', '-T'):
|
||||||
options['showast'] = 'Full'
|
options['showast'] = 'Full'
|
||||||
options['do_verify'] = None
|
options['do_verify'] = None
|
||||||
elif opt in ('--grammar', '-g'):
|
elif opt in ('--grammar', '-g'):
|
||||||
@@ -129,6 +132,8 @@ def main_bin():
|
|||||||
numproc = int(val)
|
numproc = int(val)
|
||||||
elif opt in ('--recurse', '-r'):
|
elif opt in ('--recurse', '-r'):
|
||||||
recurse_dirs = True
|
recurse_dirs = True
|
||||||
|
elif opt == '--encoding':
|
||||||
|
options['source_encoding'] = val
|
||||||
else:
|
else:
|
||||||
sys.stderr.write(opt)
|
sys.stderr.write(opt)
|
||||||
usage()
|
usage()
|
||||||
|
@@ -42,7 +42,7 @@ def _get_outstream(outfile):

 def decompile(
 bytecode_version, co, out=None, showasm=None, showast=False,
-timestamp=None, showgrammar=False, code_objects={},
+timestamp=None, showgrammar=False, source_encoding=None, code_objects={},
 source_size=None, is_pypy=None, magic_int=None,
 mapstream=None, do_fragments=False):
 """

@@ -81,6 +81,8 @@ def decompile(
 m = ""

 sys_version_lines = sys.version.split('\n')
+if source_encoding:
+write('# -*- coding: %s -*-' % source_encoding)
 write('# uncompyle6 version %s\n'
 '# %sPython bytecode %s%s\n# Decompiled from: %sPython %s' %
 (VERSION, co_pypy_str, bytecode_version,
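With source_encoding set, the generated source gains a PEP 263 coding declaration ahead of the usual banner, so the output would start roughly like this (version numbers are placeholders, not actual output):

    # -*- coding: utf-8 -*-
    # uncompyle6 version N.N.N
    # Python bytecode 2.7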
@@ -147,7 +149,7 @@ def compile_file(source_path):


 def decompile_file(filename, outstream=None, showasm=None, showast=False,
-showgrammar=False, mapstream=None, do_fragments=False):
+showgrammar=False, source_encoding=None, mapstream=None, do_fragments=False):
 """
 decompile Python byte-code file (.pyc). Return objects to
 all of the deparsed objects found in `filename`.

@@ -163,12 +165,12 @@ def decompile_file(filename, outstream=None, showasm=None, showast=False,
 for con in co:
 deparsed.append(
 decompile(version, con, outstream, showasm, showast,
-timestamp, showgrammar, code_objects=code_objects,
+timestamp, showgrammar, source_encoding, code_objects=code_objects,
 is_pypy=is_pypy, magic_int=magic_int),
 mapstream=mapstream)
 else:
 deparsed = [decompile(version, co, outstream, showasm, showast,
-timestamp, showgrammar,
+timestamp, showgrammar, source_encoding,
 code_objects=code_objects, source_size=source_size,
 is_pypy=is_pypy, magic_int=magic_int,
 mapstream=mapstream, do_fragments=do_fragments)]

@@ -179,7 +181,7 @@ def decompile_file(filename, outstream=None, showasm=None, showast=False,
 # FIXME: combine into an options parameter
 def main(in_base, out_base, compiled_files, source_files, outfile=None,
 showasm=None, showast=False, do_verify=False,
-showgrammar=False, raise_on_error=False,
+showgrammar=False, source_encoding=None, raise_on_error=False,
 do_linemaps=False, do_fragments=False):
 """
 in_base base directory for input files

@@ -250,7 +252,7 @@ def main(in_base, out_base, compiled_files, source_files, outfile=None,
 # Try to uncompile the input file
 try:
 deparsed = decompile_file(infile, outstream, showasm, showast, showgrammar,
-linemap_stream, do_fragments)
+source_encoding, linemap_stream, do_fragments)
 if do_fragments:
 for d in deparsed:
 last_mod = None

@@ -280,6 +282,19 @@ def main(in_base, out_base, compiled_files, source_files, outfile=None,
 sys.stdout.write("\n")
 sys.stderr.write("\nLast file: %s " % (infile))
 raise
+except RuntimeError(e):
+sys.stdout.write("\n%s\n" % str(e))
+if str(e).startswith('Unsupported Python'):
+sys.stdout.write("\n")
+sys.stderr.write("\n# Unsupported bytecode in file %s\n# %s\n" % (infile, e))
+else:
+if outfile:
+outstream.close()
+os.remove(outfile)
+sys.stdout.write("\n")
+sys.stderr.write("\nLast file: %s " % (infile))
+raise

 # except:
 # failed_files += 1
 # if current_outfile:

@@ -337,9 +352,9 @@ def main(in_base, out_base, compiled_files, source_files, outfile=None,
 # mem_usage = __memUsage()
 print mess, infile
 if current_outfile:
-sys.stdout.write("%s\r" %
-status_msg(do_verify, tot_files, okay_files, failed_files,
-verify_failed_files, do_verify))
+sys.stdout.write("%s -- %s\r" %
+(infile, status_msg(do_verify, tot_files, okay_files, failed_files,
+verify_failed_files, do_verify)))
 try:
 # FIXME: Something is weird with Pypy here
 sys.stdout.flush()
@@ -59,7 +59,6 @@ class PythonParser(GenericASTBuilder):
 'imports_cont',
 'kvlist_n',
 # Python 3.6+
-'joined_str',
 'come_from_loops',
 ]
 self.collect = frozenset(nt_list)

@@ -81,7 +80,7 @@ class PythonParser(GenericASTBuilder):
 # FIXME: would love to do expr, sstmts, stmts and
 # so on but that would require major changes to the
 # semantic actions
-self.singleton = frozenset(('str', 'joined_str', 'store', '_stmts', 'suite_stmts_opt',
+self.singleton = frozenset(('str', 'store', '_stmts', 'suite_stmts_opt',
 'inplace_op'))
 # Instructions filled in from scanner
 self.insts = []

@@ -802,7 +801,6 @@ def python_parser(version, co, out=sys.stdout, showasm=False,
 if __name__ == '__main__':
 def parse_test(co):
 from uncompyle6 import PYTHON_VERSION, IS_PYPY
-ast = python_parser('2.7.13', co, showasm=True, is_pypy=True)
 ast = python_parser(PYTHON_VERSION, co, showasm=True, is_pypy=IS_PYPY)
 print(ast)
 return
@@ -96,9 +96,9 @@ class Python2Parser(PythonParser):
 for ::= SETUP_LOOP expr for_iter store
 for_block POP_BLOCK _come_froms

-del_stmt ::= delete_subscr
-delete_subscr ::= expr expr DELETE_SUBSCR
+del_stmt ::= delete_subscript
+delete_subscript ::= expr expr DELETE_SUBSCR
 del_stmt ::= expr DELETE_ATTR

 _mklambda ::= load_closure mklambda
 kwarg ::= LOAD_CONST expr

@@ -388,10 +388,10 @@ class Python2Parser(PythonParser):
 continue
 elif opname == 'DELETE_SUBSCR':
 self.addRule("""
-del_stmt ::= delete_subscr
-delete_subscr ::= expr expr DELETE_SUBSCR
+del_stmt ::= delete_subscript
+delete_subscript ::= expr expr DELETE_SUBSCR
 """, nop_func)
-self.check_reduce['delete_subscr'] = 'AST'
+self.check_reduce['delete_subscript'] = 'AST'
 custom_seen_ops.add(opname)
 continue
 elif opname == 'GET_ITER':

@@ -547,7 +547,7 @@ class Python2Parser(PythonParser):
 elif rule == ('or', ('expr', 'jmp_true', 'expr', '\\e_come_from_opt')):
 expr2 = ast[2]
 return expr2 == 'expr' and expr2[0] == 'LOAD_ASSERT'
-elif lhs in ('delete_subscr', 'del_expr'):
+elif lhs in ('delete_subscript', 'del_expr'):
 op = ast[0][0]
 return op.kind in ('and', 'or')
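For orientation, the delete_subscript nonterminal being renamed here covers plain subscript deletion; a statement like the one below pushes the container and the index and then issues DELETE_SUBSCR (a sketch, not verbatim disassembly):

    d = {"a": 1}
    del d["a"]     # LOAD d, LOAD "a", DELETE_SUBSCR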
@@ -82,9 +82,9 @@ class Python25Parser(Python26Parser):
 return_stmt_lambda ::= ret_expr RETURN_VALUE_LAMBDA
 setupwithas ::= DUP_TOP LOAD_ATTR ROT_TWO LOAD_ATTR CALL_FUNCTION_0 setup_finally
 stmt ::= classdefdeco
-stmt ::= conditional_lambda
+stmt ::= if_expr_lambda
 stmt ::= conditional_not_lambda
-conditional_lambda ::= expr jmp_false_then expr return_if_lambda
+if_expr_lambda ::= expr jmp_false_then expr return_if_lambda
 return_stmt_lambda LAMBDA_MARKER
 conditional_not_lambda
 ::= expr jmp_true_then expr return_if_lambda

@@ -1,4 +1,4 @@
-# Copyright (c) 2017-2018 Rocky Bernstein
+# Copyright (c) 2017-2019 Rocky Bernstein
 """
 spark grammar differences over Python2 for Python 2.6.
 """

@@ -293,19 +293,19 @@ class Python26Parser(Python2Parser):
 compare_chained2 ::= expr COMPARE_OP return_lambda

 return_if_lambda ::= RETURN_END_IF_LAMBDA POP_TOP
-stmt ::= conditional_lambda
+stmt ::= if_expr_lambda
 stmt ::= conditional_not_lambda
-conditional_lambda ::= expr jmp_false_then expr return_if_lambda
+if_expr_lambda ::= expr jmp_false_then expr return_if_lambda
 return_stmt_lambda LAMBDA_MARKER
 conditional_not_lambda ::=
 expr jmp_true_then expr return_if_lambda
 return_stmt_lambda LAMBDA_MARKER

-# conditional_true are for conditions which always evaluate true
+# if_expr_true are for conditions which always evaluate true
 # There is dead or non-optional remnants of the condition code though,
 # and we use that to match on to reconstruct the source more accurately
-expr ::= conditional_true
-conditional_true ::= expr jf_pop expr COME_FROM
+expr ::= if_expr_true
+if_expr_true ::= expr jf_pop expr COME_FROM

 # This comes from
 # 0 or max(5, 3) if 0 else 3
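The renames above (conditional_lambda to if_expr_lambda, conditional_true to if_expr_true) track the kind of source each rule reconstructs. Roughly, and as an illustration only:

    # a lambda whose body is a conditional expression; that body is what
    # if_expr_lambda is meant to cover
    f = lambda x: 1 if x else 2

    # the diff's own example for if_expr_true: a test the compiler can fold,
    # leaving dead remnants of the untaken branch in the bytecode
    y = 0 or max(5, 3) if 0 else 3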
@@ -112,14 +112,14 @@ class Python27Parser(Python2Parser):
 compare_chained2 ::= expr COMPARE_OP return_lambda
 compare_chained2 ::= expr COMPARE_OP return_lambda

-# conditional_true are for conditions which always evaluate true
+# if_expr_true are for conditions which always evaluate true
 # There is dead or non-optional remnants of the condition code though,
 # and we use that to match on to reconstruct the source more accurately.
 # FIXME: we should do analysis and reduce *only* if there is dead code?
 # right now we check that expr is "or". Any other nodes types?

-expr ::= conditional_true
-conditional_true ::= expr JUMP_FORWARD expr COME_FROM
+expr ::= if_expr_true
+if_expr_true ::= expr JUMP_FORWARD expr COME_FROM

 conditional ::= expr jmp_false expr JUMP_FORWARD expr COME_FROM
 conditional ::= expr jmp_false expr JUMP_ABSOLUTE expr

@@ -181,9 +181,9 @@ class Python27Parser(Python2Parser):

 # Common with 2.6
 return_if_lambda ::= RETURN_END_IF_LAMBDA COME_FROM
-stmt ::= conditional_lambda
+stmt ::= if_expr_lambda
 stmt ::= conditional_not_lambda
-conditional_lambda ::= expr jmp_false expr return_if_lambda
+if_expr_lambda ::= expr jmp_false expr return_if_lambda
 return_stmt_lambda LAMBDA_MARKER
 conditional_not_lambda
 ::= expr jmp_true expr return_if_lambda

@@ -216,7 +216,7 @@ class Python27Parser(Python2Parser):
 self.check_reduce['raise_stmt1'] = 'AST'
 self.check_reduce['list_if_not'] = 'AST'
 self.check_reduce['list_if'] = 'AST'
-self.check_reduce['conditional_true'] = 'AST'
+self.check_reduce['if_expr_true'] = 'tokens'
 self.check_reduce['whilestmt'] = 'tokens'
 return

@@ -229,6 +229,12 @@ class Python27Parser(Python2Parser):
 return invalid

 if rule == ('and', ('expr', 'jmp_false', 'expr', '\\e_come_from_opt')):
+# If the instruction after the instructions formin "and" is an "YIELD_VALUE"
+# then this is probably an "if" inside a comprehension.
+if tokens[last] == 'YIELD_VALUE':
+# Note: We might also consider testing last+1 being "POP_TOP"
+return True
+
 # Test that jmp_false jumps to the end of "and"
 # or that it jumps to the same place as the end of "and"
 jmp_false = ast[1][0]

@@ -268,11 +274,8 @@ class Python27Parser(Python2Parser):
 while (tokens[i] != 'JUMP_BACK'):
 i -= 1
 return tokens[i].attr != tokens[i-1].attr
-# elif rule[0] == ('conditional_true'):
-# # FIXME: the below is a hack: we check expr for
-# # nodes that could have possibly been a been a Boolean.
-# # We should also look for the presence of dead code.
-# return ast[0] == 'expr' and ast[0] == 'or'
+elif rule[0] == 'if_expr_true':
+return (first) > 0 and tokens[first-1] == 'POP_JUMP_IF_FALSE'

 return False
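The new YIELD_VALUE guard above is aimed at token runs like the one a comprehension filter produces. As an illustration only, with placeholder names:

    # `p(x) and x` and the filter-plus-yield of this generator expression
    # produce a very similar run of tokens (expr, jmp_false, expr); the
    # YIELD_VALUE that follows is what signals a comprehension filter rather
    # than a boolean "and".
    gen = (x for x in items if p(x))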
@@ -26,12 +26,12 @@ If we succeed in creating a parse tree, then we have a Python program
 that a later phase can turn into a sequence of ASCII text.
 """

+import re
 from uncompyle6.scanners.tok import Token
 from uncompyle6.parser import PythonParser, PythonParserSingle, nop_func
 from uncompyle6.parsers.treenode import SyntaxTree
 from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
 from xdis import PYTHON3
-from itertools import islice,chain,repeat

 class Python3Parser(PythonParser):

@@ -113,10 +113,9 @@ class Python3Parser(PythonParser):
 continues ::= continue


-kwarg ::= LOAD_CONST expr
+kwarg ::= LOAD_STR expr
 kwargs ::= kwarg+


 classdef ::= build_class store

 # FIXME: we need to add these because don't detect this properly

@@ -325,9 +324,9 @@ class Python3Parser(PythonParser):

 def p_stmt3(self, args):
 """
-stmt ::= conditional_lambda
+stmt ::= if_expr_lambda
 stmt ::= conditional_not_lambda
-conditional_lambda ::= expr jmp_false expr return_if_lambda
+if_expr_lambda ::= expr jmp_false expr return_if_lambda
 return_stmt_lambda LAMBDA_MARKER
 conditional_not_lambda
 ::= expr jmp_true expr return_if_lambda

@@ -395,11 +394,12 @@ class Python3Parser(PythonParser):
 def p_generator_exp3(self, args):
 '''
 load_genexpr ::= LOAD_GENEXPR
-load_genexpr ::= BUILD_TUPLE_1 LOAD_GENEXPR LOAD_CONST
+load_genexpr ::= BUILD_TUPLE_1 LOAD_GENEXPR LOAD_STR
 '''

 def p_expr3(self, args):
 """
+expr ::= LOAD_STR
 expr ::= conditionalnot
 conditionalnot ::= expr jmp_true expr jump_forward_else expr COME_FROM

@@ -407,11 +407,11 @@ class Python3Parser(PythonParser):
 # a JUMP_ABSOLUTE with no COME_FROM
 conditional ::= expr jmp_false expr jump_absolute_else expr

-# conditional_true are for conditions which always evaluate true
+# if_expr_true are for conditions which always evaluate true
 # There is dead or non-optional remnants of the condition code though,
 # and we use that to match on to reconstruct the source more accurately
-expr ::= conditional_true
-conditional_true ::= expr JUMP_FORWARD expr COME_FROM
+expr ::= if_expr_true
+if_expr_true ::= expr JUMP_FORWARD expr COME_FROM
 """

 @staticmethod

@@ -442,7 +442,7 @@ class Python3Parser(PythonParser):
 break
 pass
 assert i < len(tokens), "build_class needs to find MAKE_FUNCTION or MAKE_CLOSURE"
-assert tokens[i+1].kind == 'LOAD_CONST', \
+assert tokens[i+1].kind == 'LOAD_STR', \
 "build_class expecting CONST after MAKE_FUNCTION/MAKE_CLOSURE"
 call_fn_tok = None
 for i in range(i, len(tokens)):

@@ -516,13 +516,13 @@ class Python3Parser(PythonParser):
 self.add_unique_rule(rule, token.kind, uniq_param, customize)

 def add_make_function_rule(self, rule, opname, attr, customize):
-"""Python 3.3 added a an addtional LOAD_CONST before MAKE_FUNCTION and
+"""Python 3.3 added a an addtional LOAD_STR before MAKE_FUNCTION and
 this has an effect on many rules.
 """
 if self.version >= 3.3:
-new_rule = rule % (('LOAD_CONST ') * 1)
+new_rule = rule % (('LOAD_STR ') * 1)
 else:
-new_rule = rule % (('LOAD_CONST ') * 0)
+new_rule = rule % (('LOAD_STR ') * 0)
 self.add_unique_rule(new_rule, opname, attr, customize)

 def customize_grammar_rules(self, tokens, customize):

@@ -585,9 +585,9 @@ class Python3Parser(PythonParser):
 stmt ::= assign2_pypy
 assign3_pypy ::= expr expr expr store store store
 assign2_pypy ::= expr expr store store
-stmt ::= conditional_lambda
+stmt ::= if_expr_lambda
 stmt ::= conditional_not_lambda
-conditional_lambda ::= expr jmp_false expr return_if_lambda
+if_expr_lambda ::= expr jmp_false expr return_if_lambda
 return_lambda LAMBDA_MARKER
 conditional_not_lambda
 ::= expr jmp_true expr return_if_lambda

@@ -598,16 +598,9 @@ class Python3Parser(PythonParser):

 # Determine if we have an iteration CALL_FUNCTION_1.
 has_get_iter_call_function1 = False
-max_branches = 0
 for i, token in enumerate(tokens):
 if token == 'GET_ITER' and i < n-2 and self.call_fn_name(tokens[i+1]) == 'CALL_FUNCTION_1':
 has_get_iter_call_function1 = True
-max_branches += 1
-elif (token == 'GET_AWAITABLE' and i < n-3
-and tokens[i+1] == 'LOAD_CONST' and tokens[i+2] == 'YIELD_FROM'):
-max_branches += 1
-if max_branches > 2:
-break

 for i, token in enumerate(tokens):
 opname = token.kind
@@ -650,10 +643,6 @@ class Python3Parser(PythonParser):
 # FIXME: Use the attr
 # so this doesn't run into exponential parsing time.
 if opname.startswith('BUILD_MAP_UNPACK'):
-self.add_unique_rule(rule, opname, token.attr, customize)
-rule = 'dict_entry ::= ' + 'expr ' * (token.attr*2)
-self.add_unique_rule(rule, opname, token.attr, customize)
-
 # FIXME: start here. The LHS should be unmap_dict, not dict.
 # FIXME: really we need a combination of dict_entry-like things.
 # It just so happens the most common case is not to mix

@@ -742,7 +731,7 @@ class Python3Parser(PythonParser):

 if opname == 'CALL_FUNCTION' and token.attr == 1:
 rule = """
-dict_comp ::= LOAD_DICTCOMP LOAD_CONST MAKE_FUNCTION_0 expr
+dict_comp ::= LOAD_DICTCOMP LOAD_STR MAKE_FUNCTION_0 expr
 GET_ITER CALL_FUNCTION_1
 classdefdeco1 ::= expr classdefdeco2 CALL_FUNCTION_1
 """

@@ -784,8 +773,8 @@ class Python3Parser(PythonParser):
 custom_ops_processed.add(opname)
 elif opname == 'DELETE_SUBSCR':
 self.addRule("""
-del_stmt ::= delete_subscr
-delete_subscr ::= expr expr DELETE_SUBSCR
+del_stmt ::= delete_subscript
+delete_subscript ::= expr expr DELETE_SUBSCR
 """, nop_func)
 custom_ops_processed.add(opname)
 elif opname == 'GET_ITER':

@@ -861,7 +850,7 @@ class Python3Parser(PythonParser):
 # Note that 3.6+ doesn't do this, but we'll remove
 # this rule in parse36.py
 rule = """
-dict_comp ::= load_closure LOAD_DICTCOMP LOAD_CONST
+dict_comp ::= load_closure LOAD_DICTCOMP LOAD_STR
 MAKE_CLOSURE_0 expr
 GET_ITER CALL_FUNCTION_1
 """

@@ -914,10 +903,10 @@ class Python3Parser(PythonParser):
 rule = ('mkfunc ::= %s%sload_closure LOAD_CONST %s'
 % (kwargs_str, 'expr ' * args_pos, opname))
 elif self.version == 3.3:
-rule = ('mkfunc ::= %s%sload_closure LOAD_CONST LOAD_CONST %s'
+rule = ('mkfunc ::= %s%sload_closure LOAD_CONST LOAD_STR %s'
 % (kwargs_str, 'expr ' * args_pos, opname))
 elif self.version >= 3.4:
-rule = ('mkfunc ::= %s%s load_closure LOAD_CONST LOAD_CONST %s'
+rule = ('mkfunc ::= %s%s load_closure LOAD_CONST LOAD_STR %s'
 % ('expr ' * args_pos, kwargs_str, opname))

 self.add_unique_rule(rule, opname, token.attr, customize)

@@ -945,17 +934,17 @@ class Python3Parser(PythonParser):
 rule = ('mklambda ::= %s%s%s%s' %
 ('expr ' * stack_count,
 'load_closure ' * closure,
-'BUILD_TUPLE_1 LOAD_LAMBDA LOAD_CONST ',
+'BUILD_TUPLE_1 LOAD_LAMBDA LOAD_STR ',
 opname))
 else:
 rule = ('mklambda ::= %s%s%s' %
 ('load_closure ' * closure,
-'LOAD_LAMBDA LOAD_CONST ',
+'LOAD_LAMBDA LOAD_STR ',
 opname))
 self.add_unique_rule(rule, opname, token.attr, customize)

 else:
-rule = ('mklambda ::= %sLOAD_LAMBDA LOAD_CONST %s' %
+rule = ('mklambda ::= %sLOAD_LAMBDA LOAD_STR %s' %
 (('expr ' * stack_count), opname))
 self.add_unique_rule(rule, opname, token.attr, customize)

@@ -963,7 +952,7 @@ class Python3Parser(PythonParser):
 rule = ('mkfunc ::= %s%s%s%s' %
 ('expr ' * stack_count,
 'load_closure ' * closure,
-'LOAD_CONST ' * 2,
+'LOAD_CONST LOAD_STR ',
 opname))
 self.add_unique_rule(rule, opname, token.attr, customize)
@@ -1045,51 +1034,57 @@ class Python3Parser(PythonParser):
 elif self.version == 3.3:
 # positional args after keyword args
 rule = ('mkfunc ::= %s %s%s%s' %
-(kwargs, 'pos_arg ' * args_pos, 'LOAD_CONST '*2,
+(kwargs, 'pos_arg ' * args_pos, 'LOAD_CONST LOAD_STR ',
 opname))
 elif self.version > 3.5:
 # positional args before keyword args
 rule = ('mkfunc ::= %s%s %s%s' %
-('pos_arg ' * args_pos, kwargs, 'LOAD_CONST '*2,
+('pos_arg ' * args_pos, kwargs, 'LOAD_CONST LOAD_STR ',
 opname))
 elif self.version > 3.3:
 # positional args before keyword args
 rule = ('mkfunc ::= %s%s %s%s' %
-('pos_arg ' * args_pos, kwargs, 'LOAD_CONST '*2,
+('pos_arg ' * args_pos, kwargs, 'LOAD_CONST LOAD_STR ',
 opname))
 else:
 rule = ('mkfunc ::= %s%sexpr %s' %
 (kwargs, 'pos_arg ' * args_pos, opname))
 self.add_unique_rule(rule, opname, token.attr, customize)

-if opname.startswith('MAKE_FUNCTION_A'):
+if re.search('^MAKE_FUNCTION.*_A', opname):
 if self.version >= 3.6:
-rule = ('mkfunc_annotate ::= %s%sannotate_tuple LOAD_CONST LOAD_CONST %s' %
+rule = ('mkfunc_annotate ::= %s%sannotate_tuple LOAD_CONST LOAD_STR %s' %
 (('pos_arg ' * (args_pos)),
 ('call ' * (annotate_args-1)), opname))
 self.add_unique_rule(rule, opname, token.attr, customize)
-rule = ('mkfunc_annotate ::= %s%sannotate_tuple LOAD_CONST LOAD_CONST %s' %
+rule = ('mkfunc_annotate ::= %s%sannotate_tuple LOAD_CONST LOAD_STR %s' %
 (('pos_arg ' * (args_pos)),
 ('annotate_arg ' * (annotate_args-1)), opname))
 if self.version >= 3.3:
 # Normally we remove EXTENDED_ARG from the opcodes, but in the case of
 # annotated functions can use the EXTENDED_ARG tuple to signal we have an annotated function.
 # Yes this is a little hacky
-rule = ('mkfunc_annotate ::= %s%sannotate_tuple LOAD_CONST LOAD_CONST EXTENDED_ARG %s' %
-(('pos_arg ' * (args_pos)),
+if self.version == 3.3:
+# 3.3 puts kwargs before pos_arg
+pos_kw_tuple = (('kwargs ' * args_kw), ('pos_arg ' * (args_pos)))
+else:
+# 3.4 and 3.5puts pos_arg before kwargs
+pos_kw_tuple = (('pos_arg ' * (args_pos), ('kwargs ' * args_kw)))
+rule = ('mkfunc_annotate ::= %s%s%sannotate_tuple LOAD_CONST LOAD_STR EXTENDED_ARG %s' %
+( pos_kw_tuple[0], pos_kw_tuple[1],
 ('call ' * (annotate_args-1)), opname))
 self.add_unique_rule(rule, opname, token.attr, customize)
-rule = ('mkfunc_annotate ::= %s%sannotate_tuple LOAD_CONST LOAD_CONST EXTENDED_ARG %s' %
-(('pos_arg ' * (args_pos)),
+rule = ('mkfunc_annotate ::= %s%s%sannotate_tuple LOAD_CONST LOAD_STR EXTENDED_ARG %s' %
+( pos_kw_tuple[0], pos_kw_tuple[1],
 ('annotate_arg ' * (annotate_args-1)), opname))
 else:
 # See above comment about use of EXTENDED_ARG
-rule = ('mkfunc_annotate ::= %s%sannotate_tuple LOAD_CONST EXTENDED_ARG %s' %
-(('pos_arg ' * (args_pos)),
+rule = ('mkfunc_annotate ::= %s%s%sannotate_tuple LOAD_CONST EXTENDED_ARG %s' %
+(('pos_arg ' * (args_pos)), ('kwargs ' * args_kw),
 ('annotate_arg ' * (annotate_args-1)), opname))
 self.add_unique_rule(rule, opname, token.attr, customize)
-rule = ('mkfunc_annotate ::= %s%sannotate_tuple LOAD_CONST EXTENDED_ARG %s' %
-(('pos_arg ' * (args_pos)),
+rule = ('mkfunc_annotate ::= %s%s%sannotate_tuple LOAD_CONST EXTENDED_ARG %s' %
+(('pos_arg ' * (args_pos)), ('kwargs ' * args_kw),
 ('call ' * (annotate_args-1)), opname))
 self.addRule(rule, nop_func)
 elif opname == 'RETURN_VALUE_LAMBDA':
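The mkfunc_annotate rules above are for functions compiled with parameter and return annotations, which use the annotated MAKE_FUNCTION variants these rules match. A minimal example of the source form involved (names are placeholders):

    def scale(x: float, factor: float = 2.0) -> float:
        return x * factor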
@@ -1155,7 +1150,8 @@ class Python3Parser(PythonParser):
 self.check_reduce['while1elsestmt'] = 'noAST'
 self.check_reduce['ifelsestmt'] = 'AST'
 self.check_reduce['annotate_tuple'] = 'noAST'
-self.check_reduce['kwarg'] = 'noAST'
+if not PYTHON3:
+self.check_reduce['kwarg'] = 'noAST'
 if self.version < 3.6:
 # 3.6+ can remove a JUMP_FORWARD which messes up our testing here
 self.check_reduce['try_except'] = 'AST'

@@ -1172,10 +1168,7 @@ class Python3Parser(PythonParser):
 return not isinstance(tokens[first].attr, tuple)
 elif lhs == 'kwarg':
 arg = tokens[first].attr
-if PYTHON3:
-return not isinstance(arg, str)
-else:
-return not (isinstance(arg, str) or isinstance(arg, unicode))
+return not (isinstance(arg, str) or isinstance(arg, unicode))
 elif lhs == 'while1elsestmt':

 n = len(tokens)

@@ -1226,10 +1219,11 @@ class Python3Parser(PythonParser):
 cfl = last
 assert tokens[cfl] == 'COME_FROM_LOOP'

-if tokens[cfl-1] != 'JUMP_BACK':
-cfl_offset = tokens[cfl-1].offset
-insn = chain((i for i in self.insts if cfl_offset == i.offset), repeat(None)).next()
-if insn and insn.is_jump_target:
+for i in range(cfl-1, first, -1):
+if tokens[i] != 'POP_BLOCK':
+break
+if tokens[i].kind not in ('JUMP_BACK', 'RETURN_VALUE'):
+if not tokens[i].kind.startswith('COME_FROM'):
 return True

 # Check that the SETUP_LOOP jumps to the offset after the

@@ -47,7 +47,7 @@ class Python34Parser(Python33Parser):

 # Python 3.4+ optimizes the trailing two JUMPS away

-# Is this 3.4 only?
+# This is 3.4 only
 yield_from ::= expr GET_ITER LOAD_CONST YIELD_FROM

 _ifstmts_jump ::= c_stmts_opt JUMP_ABSOLUTE JUMP_FORWARD COME_FROM

@@ -55,6 +55,7 @@ class Python34Parser(Python33Parser):

 def customize_grammar_rules(self, tokens, customize):
 self.remove_rules("""
+yield_from ::= expr expr YIELD_FROM
 # 3.4.2 has this. 3.4.4 may now
 # while1stmt ::= SETUP_LOOP l_stmts COME_FROM JUMP_BACK COME_FROM_LOOP
 """)

@@ -29,8 +29,7 @@ class Python36Parser(Python35Parser):


 def p_36misc(self, args):
-"""
-sstmt ::= sstmt RETURN_LAST
+"""sstmt ::= sstmt RETURN_LAST

 # 3.6 redoes how return_closure works. FIXME: Isolate to LOAD_CLOSURE
 return_closure ::= LOAD_CLOSURE DUP_TOP STORE_NAME RETURN_VALUE RETURN_LAST

@@ -142,6 +141,7 @@ class Python36Parser(Python35Parser):
 COME_FROM_FINALLY

 compare_chained2 ::= expr COMPARE_OP come_froms JUMP_FORWARD
+
 """

 def customize_grammar_rules(self, tokens, customize):
@@ -187,35 +187,28 @@ class Python36Parser(Python35Parser):
 self.add_unique_doc_rules(rules_str, customize)
 elif opname == 'FORMAT_VALUE':
 rules_str = """
-expr ::= fstring_single
-fstring_single ::= expr FORMAT_VALUE
-expr ::= fstring_expr
-fstring_expr ::= expr FORMAT_VALUE
-
-str ::= LOAD_CONST
-formatted_value ::= fstring_expr
-formatted_value ::= str
-
+expr ::= formatted_value1
+formatted_value1 ::= expr FORMAT_VALUE
 """
 self.add_unique_doc_rules(rules_str, customize)
 elif opname == 'FORMAT_VALUE_ATTR':
 rules_str = """
-expr ::= fstring_single
-fstring_single ::= expr expr FORMAT_VALUE_ATTR
+expr ::= formatted_value2
+formatted_value2 ::= expr expr FORMAT_VALUE_ATTR
 """
 self.add_unique_doc_rules(rules_str, customize)
 elif opname == 'MAKE_FUNCTION_8':
 if 'LOAD_DICTCOMP' in self.seen_ops:
 # Is there something general going on here?
 rule = """
-dict_comp ::= load_closure LOAD_DICTCOMP LOAD_CONST
+dict_comp ::= load_closure LOAD_DICTCOMP LOAD_STR
 MAKE_FUNCTION_8 expr
 GET_ITER CALL_FUNCTION_1
 """
 self.addRule(rule, nop_func)
 elif 'LOAD_SETCOMP' in self.seen_ops:
 rule = """
-set_comp ::= load_closure LOAD_SETCOMP LOAD_CONST
+set_comp ::= load_closure LOAD_SETCOMP LOAD_STR
 MAKE_FUNCTION_8 expr
 GET_ITER CALL_FUNCTION_1
 """

@@ -245,16 +238,12 @@ class Python36Parser(Python35Parser):
 """
 self.addRule(rules_str, nop_func)

-elif opname == 'BUILD_STRING':
+elif opname.startswith('BUILD_STRING'):
 v = token.attr
-joined_str_n = "formatted_value_%s" % v
 rules_str = """
-expr ::= fstring_multi
-fstring_multi ::= joined_str BUILD_STRING
-joined_str ::= formatted_value+
-fstring_multi ::= %s BUILD_STRING
-%s ::= %sBUILD_STRING
-""" % (joined_str_n, joined_str_n, "formatted_value " * v)
+expr ::= joined_str
+joined_str ::= %sBUILD_STRING_%d
+""" % ("expr " * v, v)
 self.add_unique_doc_rules(rules_str, customize)
 if 'FORMAT_VALUE_ATTR' in self.seen_ops:
 rules_str = """
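The renamed nonterminals follow the ast module's names for f-string pieces (FormattedValue, JoinedStr). As a rough sketch of which 3.6 source forms exercise which opcodes:

    name = "world"
    s1 = f"{name}"          # a single replacement field: FORMAT_VALUE
    s2 = f"{name!r:>10}"    # a format spec adds an extra operand (FORMAT_VALUE_ATTR here)
    s3 = f"hello {name}!"   # several pieces joined with BUILD_STRING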
@@ -274,6 +263,23 @@ class Python36Parser(Python35Parser):
 self.addRule(rule, nop_func)
 rule = ('starred ::= %s %s' % ('expr ' * v, opname))
 self.addRule(rule, nop_func)
+elif opname == 'SETUP_ANNOTATIONS':
+# 3.6 Variable Annotations PEP 526
+# This seems to come before STORE_ANNOTATION, and doesn't
+# correspond to direct Python source code.
+rule = """
+stmt ::= SETUP_ANNOTATIONS
+stmt ::= ann_assign_init_value
+stmt ::= ann_assign_no_init
+
+ann_assign_init_value ::= expr store store_annotation
+ann_assign_no_init ::= store_annotation
+store_annotation ::= LOAD_NAME STORE_ANNOTATION
+store_annotation ::= subscript STORE_ANNOTATION
+"""
+self.addRule(rule, nop_func)
+# Check to combine assignment + annotation into one statement
+self.check_reduce['assign'] = 'token'
 elif opname == 'SETUP_WITH':
 rules_str = """
 withstmt ::= expr SETUP_WITH POP_TOP suite_stmts_opt COME_FROM_WITH
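The PEP 526 grammar above distinguishes an annotation that comes with a value from a bare annotation. Roughly, in source terms (illustrative snippet only):

    x: int = 3   # annotation plus assignment: ann_assign_init_value
    y: int       # bare annotation, nothing stored for y: ann_assign_no_init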
@@ -299,6 +305,7 @@ class Python36Parser(Python35Parser):
 self.addRule(rules_str, nop_func)
 pass
 pass
+return

 def custom_classfunc_rule(self, opname, token, customize, next_token):

@@ -398,6 +405,15 @@ class Python36Parser(Python35Parser):
 tokens, first, last)
 if invalid:
 return invalid
+if rule[0] == 'assign':
+# Try to combine assignment + annotation into one statement
+if (len(tokens) >= last + 1 and
+tokens[last] == 'LOAD_NAME' and
+tokens[last+1] == 'STORE_ANNOTATION' and
+tokens[last-1].pattr == tokens[last+1].pattr):
+# Will handle as ann_assign_init_value
+return True
+pass
 if rule[0] == 'call_kw':
 # Make sure we don't derive call_kw
 nt = ast[0]

@@ -72,8 +72,8 @@ class Python37Parser(Python36Parser):
 POP_TOP POP_TOP POP_TOP POP_EXCEPT POP_TOP POP_BLOCK
 else_suite COME_FROM_LOOP

-# Is there a pattern here?
 attributes ::= IMPORT_FROM ROT_TWO POP_TOP IMPORT_FROM
+attributes ::= attributes ROT_TWO POP_TOP IMPORT_FROM

 attribute37 ::= expr LOAD_METHOD
 expr ::= attribute37

@@ -87,26 +87,50 @@ class Python37Parser(Python36Parser):

 compare_chained37 ::= expr compare_chained1a_37
 compare_chained37 ::= expr compare_chained1b_37
+compare_chained37 ::= expr compare_chained1c_37

 compare_chained37_false ::= expr compare_chained1_false_37
+compare_chained37_false ::= expr compare_chained2_false_37

 compare_chained1a_37 ::= expr DUP_TOP ROT_THREE COMPARE_OP POP_JUMP_IF_FALSE
 compare_chained1a_37 ::= expr DUP_TOP ROT_THREE COMPARE_OP POP_JUMP_IF_FALSE
 compare_chained2a_37 ELSE POP_TOP COME_FROM
 compare_chained1b_37 ::= expr DUP_TOP ROT_THREE COMPARE_OP POP_JUMP_IF_FALSE
 compare_chained2b_37 POP_TOP JUMP_FORWARD COME_FROM
+compare_chained1c_37 ::= expr DUP_TOP ROT_THREE COMPARE_OP POP_JUMP_IF_FALSE
+compare_chained2a_37 POP_TOP

 compare_chained1_false_37 ::= expr DUP_TOP ROT_THREE COMPARE_OP POP_JUMP_IF_FALSE
 compare_chained2c_37 POP_TOP JUMP_FORWARD COME_FROM
+compare_chained2_false_37 ::= expr DUP_TOP ROT_THREE COMPARE_OP POP_JUMP_IF_FALSE
+compare_chained2a_false_37 ELSE POP_TOP JUMP_BACK COME_FROM

 compare_chained2a_37 ::= expr COMPARE_OP POP_JUMP_IF_TRUE JUMP_FORWARD
-compare_chained2a_false_37 ::= expr COMPARE_OP POP_JUMP_IF_FALSE JUMP_FORWARD
+compare_chained2a_37 ::= expr COMPARE_OP POP_JUMP_IF_TRUE JUMP_BACK
+compare_chained2a_false_37 ::= expr COMPARE_OP POP_JUMP_IF_FALSE jf_cfs

 compare_chained2b_37 ::= expr COMPARE_OP come_from_opt POP_JUMP_IF_FALSE JUMP_FORWARD ELSE
+compare_chained2b_37 ::= expr COMPARE_OP come_from_opt POP_JUMP_IF_FALSE JUMP_FORWARD

 compare_chained2c_37 ::= expr DUP_TOP ROT_THREE COMPARE_OP come_from_opt POP_JUMP_IF_FALSE
 compare_chained2a_false_37 ELSE
+compare_chained2c_37 ::= expr DUP_TOP ROT_THREE COMPARE_OP come_from_opt POP_JUMP_IF_FALSE
+compare_chained2a_false_37

-_ifstmts_jump ::= c_stmts_opt come_froms
+jf_cfs ::= JUMP_FORWARD _come_froms
+ifelsestmt ::= testexpr c_stmts_opt jf_cfs else_suite opt_come_from_except
+
+jmp_false37 ::= POP_JUMP_IF_FALSE COME_FROM
+list_if ::= expr jmp_false37 list_iter
+
+_ifstmts_jump ::= c_stmts_opt come_froms
+
+and_not ::= expr jmp_false expr POP_JUMP_IF_TRUE
+
+expr ::= if_exp_37a
+expr ::= if_exp_37b
+if_exp_37a ::= and_not expr JUMP_FORWARD COME_FROM expr COME_FROM
+if_exp_37b ::= expr jmp_false expr POP_JUMP_IF_FALSE jump_forward_else expr
 """

 def customize_grammar_rules(self, tokens, customize):
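The compare_chained*_37 rules above cover Python 3.7's code for chained comparisons, where the middle operand is duplicated and rotated (DUP_TOP / ROT_THREE) so it can be compared against both neighbours while only being evaluated once. For reference, the kind of source being reconstructed (placeholder names):

    ok = a < b <= c   # one chained comparison; b is evaluated once, compared twice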
@@ -318,6 +318,8 @@ class Scanner3(Scanner):
 # pattr = 'code_object @ 0x%x %s->%s' %\
 # (id(const), const.co_filename, const.co_name)
 pattr = '<code_object ' + const.co_name + '>'
+elif isinstance(const, str):
+opname = 'LOAD_STR'
 else:
 if isinstance(inst.arg, int) and inst.arg < len(co.co_consts):
 argval, _ = _get_const_info(inst.arg, co.co_consts)
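This is the scanner-side half of the LOAD_CONST to LOAD_STR renaming used throughout the grammar changes above: when the constant being loaded is a plain string, the token is renamed so that rules such as kwarg ::= LOAD_STR expr can require a string where only a string makes sense. For example:

    s = "hello"   # the constant is a str, so the scanner emits a LOAD_STR token
    n = 42        # a non-string constant keeps the plain LOAD_CONST token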
@@ -339,9 +341,10 @@ class Scanner3(Scanner):
 attr = attr[:4] # remove last value: attr[5] == False
 else:
 pos_args, name_pair_args, annotate_args = parse_fn_counts(inst.argval)
-pattr = ("%d positional, %d keyword pair, %d annotated" %
+pattr = ("%d positional, %d keyword only, %d annotated" %
 (pos_args, name_pair_args, annotate_args))
 if name_pair_args > 0:
+# FIXME: this should probably be K_
 opname = '%s_N%d' % (opname, name_pair_args)
 pass
 if annotate_args > 0:

@@ -819,7 +822,14 @@ class Scanner3(Scanner):
 self.fixed_jumps[offset] = fix or match[-1]
 return
 else:
-self.fixed_jumps[offset] = match[-1]
+if self.version < 3.6:
+# FIXME: this is putting in COME_FROMs in the wrong place.
+# Fix up grammar so we don't need to do this.
+# See cf_for_iter use in parser36.py
+self.fixed_jumps[offset] = match[-1]
+elif target > offset:
+# Right now we only add COME_FROMs in forward (not loop) jumps
+self.fixed_jumps[offset] = target
 return
 # op == POP_JUMP_IF_TRUE
 else:

@@ -924,7 +934,7 @@ class Scanner3(Scanner):
 # Python 3.5 may remove as dead code a JUMP
 # instruction after a RETURN_VALUE. So we check
 # based on seeing SETUP_EXCEPT various places.
-if self.version < 3.8 and code[rtarget] == self.opc.SETUP_EXCEPT:
+if self.version < 3.6 and code[rtarget] == self.opc.SETUP_EXCEPT:
 return
 # Check that next instruction after pops and jump is
 # not from SETUP_EXCEPT

@@ -31,6 +31,8 @@ class Scanner36(Scanner3):
 t.op == self.opc.CALL_FUNCTION_EX and t.attr & 1):
 t.kind = 'CALL_FUNCTION_EX_KW'
 pass
+elif t.op == self.opc.BUILD_STRING:
+t.kind = 'BUILD_STRING_%s' % t.attr
 elif t.op == self.opc.CALL_FUNCTION_KW:
 t.kind = 'CALL_FUNCTION_KW_%s' % t.attr
 elif t.op == self.opc.FORMAT_VALUE:
@@ -1,4 +1,4 @@
-# Copyright (c) 2016-2018 by Rocky Bernstein
+# Copyright (c) 2016-2019 by Rocky Bernstein
 # Copyright (c) 2000-2002 by hartmut Goebel <h.goebel@crazy-compilers.com>
 # Copyright (c) 1999 John Aycock
 #

@@ -58,7 +58,10 @@ class Token: # Python 2.4 can't have empty ()
 """ '==' on kind and "pattr" attributes.
 It is okay if offsets and linestarts are different"""
 if isinstance(o, Token):
-return (self.kind == o.kind) and (self.pattr == o.pattr)
+return (
+(self.kind == o.kind)
+and ((self.pattr == o.pattr) or self.attr == o.attr)
+)
 else:
 # ?? do we need this?
 return self.kind == o

@@ -85,13 +88,15 @@ class Token: # Python 2.4 can't have empty ()
 else:
 prefix = ' ' * (6 + len(line_prefix))
 offset_opname = '%6s %-17s' % (self.offset, self.kind)

 if not self.has_arg:
 return "%s%s" % (prefix, offset_opname)

 if isinstance(self.attr, int):
 argstr = "%6d " % self.attr
 else:
 argstr = ' '*7
+name = self.kind
+
 if self.has_arg:
 pattr = self.pattr
 if self.opc:
@@ -104,13 +109,25 @@ class Token: # Python 2.4 can't have empty ()
 pattr = "to " + str(self.pattr)
 pass
 elif self.op in self.opc.CONST_OPS:
-# Compare with pysource n_LOAD_CONST
-attr = self.attr
-if attr is None:
-pattr = None
+if name == 'LOAD_STR':
+pattr = self.attr
+elif name == 'LOAD_CODE':
+return "%s%s%s %s" % (prefix, offset_opname, argstr, pattr)
+else:
+return "%s%s %r" % (prefix, offset_opname, pattr)
+
 elif self.op in self.opc.hascompare:
 if isinstance(self.attr, int):
 pattr = self.opc.cmp_op[self.attr]
+return "%s%s%s %s" % (prefix, offset_opname, argstr, pattr)
+elif self.op in self.opc.hasvargs:
+return "%s%s%s" % (prefix, offset_opname, argstr)
+elif self.op in self.opc.NAME_OPS:
+if self.opc.version >= 3.0:
+return "%s%s%s %s" % (prefix, offset_opname, argstr, self.attr)
+elif name == 'EXTENDED_ARG':
+return "%s%s%s 0x%x << %s = %s" % (prefix, offset_opname, argstr, self.attr,
+self.opc.EXTENDED_ARG_SHIFT, pattr)
 # And so on. See xdis/bytecode.py get_instructions_bytes
 pass
 elif re.search(r'_\d+$', self.kind):
@@ -36,7 +36,6 @@ class AligningWalker(SourceWalker, object):
 self.pending_newlines = max(self.pending_newlines, 1)

 def write(self, *data):
-from trepan.api import debug; debug()
 if (len(data) == 1) and data[0] == self.indent:
 diff = max(self.pending_newlines,
 self.desired_line_number - self.current_line_number)
@@ -27,75 +27,85 @@ else:
maxint = sys.maxint


- # Operator precidence
+ # Operator precidence See
- # See https://docs.python.org/2/reference/expressions.html
+ # https://docs.python.org/2/reference/expressions.html#operator-precedence
- # or https://docs.python.org/3/reference/expressions.html
+ # or
- # for a list.
+ # https://docs.python.org/3/reference/expressions.html#operator-precedence
+ # for a list. We keep the same top-to-botom order here as in the above links,
+ # so we start with low precedence (high values) and go down in value.

- # Things at the top of this list below with low-value precidence will
+ # Things at the bottom of this list below with high precedence (low value) will
- # tend to have parenthesis around them. Things at the bottom
+ # tend to have parenthesis around them. Things at the top
# of the list will tend not to have parenthesis around them.
- PRECEDENCE = {
- 'list': 0,
- 'dict': 0,
- 'unary_convert': 0,
- 'dict_comp': 0,
- 'set_comp': 0,
- 'set_comp_expr': 0,
- 'list_comp': 0,
- 'generator_exp': 0,

- 'attribute': 2,
+ # Note: The values in this table are even numbers. Inside
- 'subscript': 2,
+ # various templates we use odd values. Avoiding equal-precedent comparisons
- 'subscript2': 2,
+ # avoids ambiguity what to do when the precedence is equal.
- 'store_subscript': 2,
- 'delete_subscr': 2,
+ PRECEDENCE = {
+ 'yield': 102,
+ 'yield_from': 102,

+ '_mklambda': 30,

+ 'conditional': 28, # Conditional expression
+ 'conditional_lamdba': 28, # Lambda expression
+ 'conditional_not_lamdba': 28, # Lambda expression
+ 'conditionalnot': 28,
+ 'if_expr_true': 28,
+ 'ret_cond': 28,

+ 'or': 26, # Boolean OR
+ 'ret_or': 26,

+ 'and': 24, # Boolean AND
+ 'compare': 20, # in, not in, is, is not, <, <=, >, >=, !=, ==
+ 'ret_and': 24,
+ 'unary_not': 22, # Boolean NOT

+ 'BINARY_AND': 14, # Bitwise AND
+ 'BINARY_OR': 18, # Bitwise OR
+ 'BINARY_XOR': 16, # Bitwise XOR

+ 'BINARY_LSHIFT': 12, # Shifts <<
+ 'BINARY_RSHIFT': 12, # Shifts >>

+ 'BINARY_ADD': 10, # -
+ 'BINARY_SUBTRACT': 10, # +

+ 'BINARY_DIVIDE': 8, # /
+ 'BINARY_FLOOR_DIVIDE': 8, # //
+ 'BINARY_MATRIX_MULTIPLY': 8, # @
+ 'BINARY_MODULO': 8, # Remainder, %
+ 'BINARY_MULTIPLY': 8, # *
+ 'BINARY_TRUE_DIVIDE': 8, # Division /

+ 'unary_expr': 6, # +x, -x, ~x

+ 'BINARY_POWER': 4, # Exponentiation, *

+ 'attribute': 2, # x.attribute
+ 'buildslice2': 2, # x[index]
+ 'buildslice3': 2, # x[index:index]
+ 'call': 2, # x(arguments...)
+ 'delete_subscript': 2,
'slice0': 2,
'slice1': 2,
'slice2': 2,
'slice3': 2,
- 'buildslice2': 2,
+ 'store_subscript': 2,
- 'buildslice3': 2,
+ 'subscript': 2,
- 'call': 2,
+ 'subscript2': 2,

- 'BINARY_POWER': 4,
+ 'dict': 0, # {expressions...}
+ 'dict_comp': 0,
- 'unary_expr': 6,
+ 'generator_exp': 0, # (expressions...)
+ 'list': 0, # [expressions...]
- 'BINARY_MULTIPLY': 8,
+ 'list_comp': 0,
- 'BINARY_DIVIDE': 8,
+ 'set_comp': 0,
- 'BINARY_TRUE_DIVIDE': 8,
+ 'set_comp_expr': 0,
- 'BINARY_FLOOR_DIVIDE': 8,
+ 'unary_convert': 0,
- 'BINARY_MODULO': 8,

- 'BINARY_ADD': 10,
- 'BINARY_SUBTRACT': 10,

- 'BINARY_LSHIFT': 12,
- 'BINARY_RSHIFT': 12,

- 'BINARY_AND': 14,
- 'BINARY_XOR': 16,
- 'BINARY_OR': 18,

- 'compare': 20,
- 'unary_not': 22,
- 'and': 24,
- 'ret_and': 24,

- 'or': 26,
- 'ret_or': 26,

- 'conditional': 28,
- 'conditional_lamdba': 28,
- 'conditional_not_lamdba': 28,
- 'conditionalnot': 28,
- 'ret_cond': 28,

- '_mklambda': 30,

- 'yield': 101,
- 'yield_from': 101
}

LINE_LENGTH = 80
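
The reordered table keeps even values on purpose: templates can then ask for an odd precedence and never hit the ambiguous equal-precedence case, as the new comment explains. Below is a minimal sketch of how such a table drives parenthesization when regenerating source text; the subset of entries and the helper are illustrative, not the project's actual engine.

# Minimal sketch: decide whether an already-rendered sub-expression needs
# parentheses, given the precedence the surrounding template asks for.
PREC = {'or': 26, 'and': 24, 'BINARY_ADD': 10, 'BINARY_MULTIPLY': 8}

def emit(kind, text, surrounding_prec):
    """Wrap `text` in parentheses when its own precedence is looser
    (numerically higher) than what the surrounding template allows."""
    own = PREC.get(kind, 100)
    return "(%s)" % text if own > surrounding_prec else text

# A template asking for 27 (an odd value, so never equal to a table entry)
# accepts `a or b` unparenthesized; a multiplication template asking for 7
# forces parentheses around an addition operand.
print(emit('or', 'a or b', 27))           # a or b
print(emit('BINARY_ADD', 'a + b', 7))     # (a + b)
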
@@ -118,10 +128,10 @@ PASS = SyntaxTree('stmts',
[ SyntaxTree('stmt',
[ SyntaxTree('pass', [])])])])

- ASSIGN_DOC_STRING = lambda doc_string: \
+ ASSIGN_DOC_STRING = lambda doc_string, doc_load: \
SyntaxTree('stmt',
[ SyntaxTree('assign',
- [ SyntaxTree('expr', [ Token('LOAD_CONST', pattr=doc_string) ]),
+ [ SyntaxTree('expr', [ Token(doc_load, pattr=doc_string, attr=doc_string) ]),
SyntaxTree('store', [ Token('STORE_NAME', pattr='__doc__')])
])])
@@ -210,9 +220,10 @@ TABLE_DIRECT = {

'IMPORT_FROM': ( '%{pattr}', ),
'attribute': ( '%c.%[1]{pattr}',
(0, 'expr')),
- 'LOAD_FAST': ( '%{pattr}', ),
+ 'LOAD_STR': ( '%{pattr}', ),
- 'LOAD_NAME': ( '%{pattr}', ),
+ 'LOAD_FAST': ( '%{pattr}', ),
+ 'LOAD_NAME': ( '%{pattr}', ),
'LOAD_CLASSNAME': ( '%{pattr}', ),
'LOAD_GLOBAL': ( '%{pattr}', ),
'LOAD_DEREF': ( '%{pattr}', ),
@@ -221,7 +232,7 @@ TABLE_DIRECT = {
'DELETE_FAST': ( '%|del %{pattr}\n', ),
'DELETE_NAME': ( '%|del %{pattr}\n', ),
'DELETE_GLOBAL': ( '%|del %{pattr}\n', ),
- 'delete_subscr': ( '%|del %p[%c]\n',
+ 'delete_subscript': ( '%|del %p[%c]\n',
(0, 'expr', PRECEDENCE['subscript']), (1, 'expr') ),
'subscript': ( '%p[%c]',
(0, 'expr', PRECEDENCE['subscript']),
@@ -252,10 +263,11 @@ TABLE_DIRECT = {

'list_iter': ( '%c', 0 ),
'list_for': ( ' for %c in %c%c', 2, 0, 3 ),
- 'list_if': ( ' if %c%c', 0, 2 ),
+ 'list_if': ( ' if %p%c',
- 'list_if_not': ( ' if not %p%c',
+ (0, 'expr', 27), 2 ),
- (0, 'expr', PRECEDENCE['unary_not']),
+ 'list_if_not': ( ' if not %p%c',
- 2 ),
+ (0, 'expr', PRECEDENCE['unary_not']),
+ 2 ),
'lc_body': ( '', ), # ignore when recursing

'comp_iter': ( '%c', 0 ),
@@ -281,19 +293,19 @@ TABLE_DIRECT = {
'and2': ( '%c', 3 ),
'or': ( '%c or %c', 0, 2 ),
'ret_or': ( '%c or %c', 0, 2 ),
- 'conditional': ( '%p if %p else %p', (2, 27), (0, 27), (4, 27) ),
+ 'conditional': ( '%p if %c else %c',
- 'conditional_true': ( '%p if 1 else %p', (0, 27), (2, 27) ),
+ (2, 'expr', 27), 0, 4 ),
+ 'if_expr_lambda': ( '%p if %c else %c',
+ (2, 'expr', 27), (0, 'expr'), 4 ),
+ 'if_expr_true': ( '%p if 1 else %c', (0, 'expr', 27), 2 ),
'ret_cond': ( '%p if %p else %p', (2, 27), (0, 27), (-1, 27) ),
'conditional_not': ( '%p if not %p else %p',
(2, 27),
(0, "expr", PRECEDENCE['unary_not']),
(4, 27) ),
- 'conditional_lambda':
- ( '%c if %c else %c',
- (2, 'expr'), 0, 4 ),
'conditional_not_lambda':
- ( '%c if not %c else %c',
+ ( '%p if not %c else %c',
- (2, 'expr'), 0, 4 ),
+ (2, 'expr', 27), 0, 4 ),

'compare_single': ( '%p %[-1]{pattr.replace("-", " ")} %p', (0, 19), (1, 19) ),
'compare_chained': ( '%p %p', (0, 29), (1, 30)),
@@ -306,7 +318,7 @@ TABLE_DIRECT = {
'mkfuncdeco0': ( '%|def %c\n', 0),
'classdefdeco': ( '\n\n%c', 0),
'classdefdeco1': ( '%|@%c\n%c', 0, 1),
- 'kwarg': ( '%[0]{pattr}=%c', 1),
+ 'kwarg': ( '%[0]{pattr}=%c', 1), # Change when Python 2 does LOAD_STR
'kwargs': ( '%D', (0, maxint, ', ') ),
'kwargs1': ( '%D', (0, maxint, ', ') ),
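
The TABLE_DIRECT entries above pair a grammar nonterminal with a format template; per the project's table-driven semantic-actions notes, `%c` recurses into a numbered child and `%{...}` interpolates a node attribute such as pattr. Below is a deliberately tiny model of that idea, assuming only those two escapes and leaf children that render their pattr; it is an illustration, not the real template engine.

# Tiny model of the table-driven idea: handle only "%c" (recurse into the
# child named by the next argument) and "%{attr}" (interpolate an attribute).
# The 'attribute' entry above, ( '%c.%[1]{pattr}', (0, 'expr') ), is
# simplified here to '%c.%{pattr}'.
import re

class Node:
    def __init__(self, pattr=None, children=()):
        self.pattr = pattr
        self.children = list(children)

def render(template, args, node):
    out, arg_i = [], 0
    for piece in re.split(r'(%c|%\{\w+\})', template):
        if piece == '%c':
            # recurse into the child at index args[arg_i]; children here are
            # leaves that simply render their pattr
            out.append(render('%{pattr}', (), node.children[args[arg_i]]))
            arg_i += 1
        elif piece.startswith('%{'):
            out.append(str(getattr(node, piece[2:-1])))
        else:
            out.append(piece)
    return ''.join(out)

obj = Node(pattr='self')
attr = Node(pattr='kind', children=[obj])
print(render('%c.%{pattr}', (0,), attr))   # self.kind
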
@@ -49,11 +49,6 @@ def customize_for_version(self, is_pypy, version):
5, 6, 7, 0, 1, 2 ),
})
if version >= 3.0:
- TABLE_DIRECT.update({
- # Gotta love Python for its futzing around with syntax like this
- 'raise_stmt2': ( '%|raise %c from %c\n', 0, 1),
- })

if version >= 3.2:
TABLE_DIRECT.update({
'del_deref_stmt': ( '%|del %c\n', 0),
@@ -31,9 +31,31 @@ def customize_for_version26_27(self, version):
if version > 2.6:
TABLE_DIRECT.update({
'except_cond2': ( '%|except %c as %c:\n', 1, 5 ),
+ # When a generator is a single parameter of a function,
+ # it doesn't need the surrounding parenethesis.
+ 'call_generator': ('%c%P', 0, (1, -1, ', ', 100)),
})
else:
TABLE_DIRECT.update({
'testtrue_then': ( 'not %p', (0, 22) ),

})

+ def n_call(node):
+ mapping = self._get_mapping(node)
+ key = node
+ for i in mapping[1:]:
+ key = key[i]
+ pass
+ if key.kind == 'CALL_FUNCTION_1':
+ # A function with one argument. If this is a generator,
+ # no parenthesis is needed.
+ args_node = node[-2]
+ if args_node == 'expr':
+ n = args_node[0]
+ if n == 'generator_exp':
+ node.kind = 'call_generator'
+ pass
+ pass

+ self.default(node)
+ self.n_call = n_call
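
The `call_generator` entry and `n_call` hook above drop the redundant parentheses when a generator expression is the only argument of a call. That mirrors a Python syntax rule, shown directly below (plain Python, not project code).

# Both calls compile to a CALL_FUNCTION_1 whose single argument is a
# generator expression; when the generator is the sole argument the extra
# parentheses may be omitted, which is what 'call_generator' reproduces.
print(sum(x * x for x in range(4)))        # 14 -- no extra parentheses
print(sum((x * x for x in range(4))))      # 14 -- equivalent, parenthesized

# With more than one argument the parentheses become mandatory:
print(sorted((x % 3 for x in range(5)), reverse=True))   # [2, 1, 1, 0, 0]
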
@@ -19,6 +19,7 @@
from uncompyle6.semantics.consts import TABLE_DIRECT

from xdis.code import iscode
+ from uncompyle6.semantics.helper import gen_function_parens_adjust
from uncompyle6.semantics.make_function import make_function3_annotate
from uncompyle6.semantics.customize35 import customize_for_version35
from uncompyle6.semantics.customize36 import customize_for_version36
@@ -33,8 +34,15 @@ def customize_for_version3(self, version):
(2, 'expr') , (0, 'expr'), (4, 'expr') ),
'except_cond2' : ( '%|except %c as %c:\n', 1, 5 ),
'function_def_annotate': ( '\n\n%|def %c%c\n', -1, 0),

+ # When a generator is a single parameter of a function,
+ # it doesn't need the surrounding parenethesis.
+ 'call_generator' : ('%c%P', 0, (1, -1, ', ', 100)),

'importmultiple' : ( '%|import %c%c\n', 2, 3 ),
'import_cont' : ( ', %c', 2 ),
+ 'kwarg' : ( '%[0]{attr}=%c', 1),
+ 'raise_stmt2' : ( '%|raise %c from %c\n', 0, 1),
'store_locals' : ( '%|# inspect.currentframe().f_locals = __locals__\n', ),
'withstmt' : ( '%|with %c:\n%+%c%-', 0, 3),
'withasstmt' : ( '%|with %c as (%c):\n%+%c%-', 0, 2, 3),
@@ -55,11 +63,11 @@ def customize_for_version3(self, version):
subclass_info = None
if node == 'classdefdeco2':
if self.version >= 3.6:
- class_name = node[1][1].pattr
+ class_name = node[1][1].attr
elif self.version <= 3.3:
- class_name = node[2][0].pattr
+ class_name = node[2][0].attr
else:
- class_name = node[1][2].pattr
+ class_name = node[1][2].attr
build_class = node
else:
build_class = node[0]
@@ -80,7 +88,7 @@ def customize_for_version3(self, version):
code_node = build_class[1][0]
class_name = code_node.attr.co_name
else:
- class_name = node[1][0].pattr
+ class_name = node[1][0].attr
build_class = node[0]

assert 'mkfunc' == build_class[1]
@@ -195,7 +203,9 @@ def customize_for_version3(self, version):
self.n_yield_from = n_yield_from

if 3.2 <= version <= 3.4:

def n_call(node):

mapping = self._get_mapping(node)
key = node
for i in mapping[1:]:
@@ -227,11 +237,22 @@ def customize_for_version3(self, version):
-2, (-2-kwargs, -2, ', '))
self.template_engine(template, node)
self.prune()
+ else:
+ gen_function_parens_adjust(key, node)

+ self.default(node)
+ self.n_call = n_call
+ elif version < 3.2:
+ def n_call(node):
+ mapping = self._get_mapping(node)
+ key = node
+ for i in mapping[1:]:
+ key = key[i]
+ pass
+ gen_function_parens_adjust(key, node)
self.default(node)
self.n_call = n_call


def n_mkfunc_annotate(node):

if self.version >= 3.3 or node[-2] == 'kwargs':
@@ -19,7 +19,8 @@ from xdis.code import iscode
from xdis.util import COMPILER_FLAG_BIT
from uncompyle6.semantics.consts import (
INDENT_PER_LEVEL, TABLE_DIRECT)
- from uncompyle6.semantics.helper import flatten_list
+ from uncompyle6.semantics.helper import (
+ flatten_list, gen_function_parens_adjust)

#######################
# Python 3.5+ Changes #
@@ -112,17 +113,21 @@ def customize_for_version35(self, version):
template = ('*%c)', nargs+1)
self.template_engine(template, node)
self.prune()
+ else:
+ gen_function_parens_adjust(key, node)

self.default(node)
self.n_call = n_call

def n_function_def(node):
- if self.version >= 3.6:
+ n0 = node[0]
- code_node = node[0][0]
+ is_code = False
- else:
+ for i in list(range(len(n0)-2, -1, -1)):
- code_node = node[0][1]
+ code_node = n0[i]
+ if hasattr(code_node, 'attr') and iscode(code_node.attr):
+ is_code = True
+ break

- is_code = hasattr(code_node, 'attr') and iscode(code_node.attr)
if (is_code and
(code_node.attr.co_flags & COMPILER_FLAG_BIT['COROUTINE'])):
self.template_engine(('\n\n%|async def %c\n',
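
The reworked `n_function_def` above walks the node's children for a code object and tests COMPILER_FLAG_BIT['COROUTINE'] to decide between `def` and `async def`. The same bit can be checked with nothing but the standard library, as this standalone sketch shows (uncompyle6 itself goes through xdis).

# A function compiled from "async def" carries the COROUTINE compiler flag
# on its code object; inspect.CO_COROUTINE is the stdlib name for that bit.
import inspect

async def fetch():
    return 42

def plain():
    return 42

def is_coroutine_code(code):
    return bool(code.co_flags & inspect.CO_COROUTINE)

print(is_coroutine_code(fetch.__code__))   # True  -> emit "async def"
print(is_coroutine_code(plain.__code__))   # False -> emit "def"
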
@@ -17,7 +17,9 @@

from spark_parser.ast import GenericASTTraversalPruningException
from uncompyle6.scanners.tok import Token
- from uncompyle6.semantics.helper import flatten_list
+ from uncompyle6.semantics.helper import (
+ flatten_list, escape_string, strip_quotes
+ )
from uncompyle6.semantics.consts import (
INDENT_PER_LEVEL, PRECEDENCE, TABLE_DIRECT, TABLE_R)

@@ -31,28 +33,19 @@ def escape_format(s):
#######################

def customize_for_version36(self, version):
- # Value 100 is important; it is exactly
+ PRECEDENCE['call_kw'] = 0
- # module/function precidence.
+ PRECEDENCE['call_kw36'] = 1
- PRECEDENCE['call_kw'] = 100
+ PRECEDENCE['call_ex'] = 1
- PRECEDENCE['call_kw36'] = 100
+ PRECEDENCE['call_ex_kw'] = 1
- PRECEDENCE['call_ex'] = 100
+ PRECEDENCE['call_ex_kw2'] = 1
- PRECEDENCE['call_ex_kw'] = 100
+ PRECEDENCE['call_ex_kw3'] = 1
- PRECEDENCE['call_ex_kw2'] = 100
+ PRECEDENCE['call_ex_kw4'] = 1
- PRECEDENCE['call_ex_kw3'] = 100
- PRECEDENCE['call_ex_kw4'] = 100
PRECEDENCE['unmap_dict'] = 0
+ PRECEDENCE['formatted_value1'] = 100

TABLE_DIRECT.update({
'tryfinally36': ( '%|try:\n%+%c%-%|finally:\n%+%c%-\n\n',
(1, 'returns'), 3 ),
- 'fstring_expr': ( "{%c%{conversion}}",
- (0, 'expr') ),
- # FIXME: the below assumes the format strings
- # don't have ''' in them. Fix this properly
- 'fstring_single': ( "f'''{%c%{conversion}}'''", 0),
- 'formatted_value_attr': ( "f'''{%c%{conversion}}%{string}'''",
- (0, 'expr')),
- 'fstring_multi': ( "f'''%c'''", 0),
'func_args36': ( "%c(**", 0),
'try_except36': ( '%|try:\n%+%c%-%c\n\n', 1, -2 ),
'except_return': ( '%|except:\n%+%c%-', 3 ),
@@ -67,9 +60,15 @@ def customize_for_version36(self, version):
'call_ex' : (
'%c(%p)',
(0, 'expr'), (1, 100)),
- 'call_ex_kw' : (
+ 'store_annotation': (
- '%c(%p)',
+ '%[1]{pattr}: %c',
- (0, 'expr'), (2, 100)),
+ 0
+ ),
+ 'ann_assign_init_value': (
+ '%|%c = %p\n',
+ (-1, 'store_annotation'), (0, 'expr', 200)),
+ 'ann_assign_no_init': (
+ '%|%c\n', (0, 'store_annotation')),

})

@@ -80,20 +79,28 @@ def customize_for_version36(self, version):
})

def build_unpack_tuple_with_call(node):
+ n = node[0]
- if node[0] == 'expr':
+ if n == 'expr':
- tup = node[0][0]
+ n = n[0]
+ if n == 'tuple':
+ self.call36_tuple(n)
+ first = 1
+ sep = ', *'
+ elif n == 'LOAD_STR':
+ value = self.format_pos_args(n)
+ self.f.write(value)
+ first = 1
+ sep = ', *'
else:
- tup = node[0]
+ first = 0
- pass
+ sep = '*'
- assert tup == 'tuple'
- self.call36_tuple(tup)

buwc = node[-1]
assert buwc.kind.startswith('BUILD_TUPLE_UNPACK_WITH_CALL')
- for n in node[1:-1]:
+ for n in node[first:-1]:
- self.f.write(', *')
+ self.f.write(sep)
self.preorder(n)
+ sep = ', *'
pass
self.prune()
return
@@ -119,45 +126,41 @@ def customize_for_version36(self, version):
return
self.n_build_map_unpack_with_call = build_unpack_map_with_call

+ def call_ex_kw(node):
+ """Handle CALL_FUNCTION_EX 1 (have KW) but with
+ BUILD_MAP_UNPACK_WITH_CALL"""

+ expr = node[1]
+ assert expr == 'expr'

+ value = self.format_pos_args(expr)
+ if value == '':
+ fmt = "%c(%p)"
+ else:
+ fmt = "%%c(%s, %%p)" % value

+ self.template_engine(
+ (fmt,
+ (0, 'expr'), (2, 'build_map_unpack_with_call', 100)), node)

+ self.prune()
+ self.n_call_ex_kw = call_ex_kw

def call_ex_kw2(node):
"""Handle CALL_FUNCTION_EX 2 (have KW) but with
BUILD_{MAP,TUPLE}_UNPACK_WITH_CALL"""

- # This is weird shit. Thanks Python!
- self.preorder(node[0])
- self.write('(')

assert node[1] == 'build_tuple_unpack_with_call'
- btuwc = node[1]
+ value = self.format_pos_args(node[1])
- tup = btuwc[0]
+ if value == '':
- if tup == 'expr':
+ fmt = "%c(%p)"
- tup = tup[0]

- if tup == 'LOAD_CONST':
- self.write(', '.join(['"%s"' % t.replace('"','\\"') for t in tup.attr]))
else:
- assert tup == 'tuple'
+ fmt = "%%c(%s, %%p)" % value
- self.call36_tuple(tup)

- assert node[2] == 'build_map_unpack_with_call'
+ self.template_engine(
+ (fmt,
+ (0, 'expr'), (2, 'build_map_unpack_with_call', 100)), node)

- self.write(', ')
- d = node[2][0]
- if d == 'expr':
- d = d[0]
- assert d == 'dict'
- self.call36_dict(d)

- args = btuwc[1]
- self.write(', *')
- self.preorder(args)

- self.write(', **')
- star_star_args = node[2][1]
- if star_star_args == 'expr':
- star_star_args = star_star_args[0]
- self.preorder(star_star_args)
- self.write(')')
self.prune()
self.n_call_ex_kw2 = call_ex_kw2

@@ -166,14 +169,13 @@ def customize_for_version36(self, version):
BUILD_MAP_UNPACK_WITH_CALL"""
self.preorder(node[0])
self.write('(')
- args = node[1][0]
- if args == 'expr':
+ value = self.format_pos_args(node[1][0])
- args = args[0]
+ if value == '':
- if args == 'tuple':
- if self.call36_tuple(args) > 0:
- self.write(', ')
- pass
pass
+ else:
+ self.write(value)
+ self.write(', ')

self.write('*')
self.preorder(node[1][1])
@@ -226,6 +228,25 @@ def customize_for_version36(self, version):
self.prune()
self.n_call_ex_kw4 = call_ex_kw4

+ def format_pos_args(node):
+ """
+ Positional args should format to:
+ (*(2, ), ...) -> (2, ...)
+ We remove starting and trailing parenthesis and ', ' if
+ tuple has only one element.
+ """
+ value = self.traverse(node, indent='')
+ if value.startswith('('):
+ assert value.endswith(')')
+ value = value[1:-1].rstrip(" ") # Remove starting '(' and trailing ')' and additional spaces
+ if value == '':
+ pass # args is empty
+ else:
+ if value.endswith(','): # if args has only one item
+ value = value[:-1]
+ return value
+ self.format_pos_args = format_pos_args

def call36_tuple(node):
"""
A tuple used in a call, these are like normal tuples but they
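
`format_pos_args` above renders the positional-argument subtree to text and then trims the outer parentheses plus a lone trailing comma, so `*(2, )` comes back as `2`. A standalone rehearsal of just that string clean-up (the helper name here is made up for the example):

# "(2, )"      -> "2"      (single-element tuple: drop parens and comma)
# "(1, 2, 3)"  -> "1, 2, 3"
# "()"         -> ""       (empty argument tuple)
def trim_pos_args(value):
    if value.startswith('('):
        assert value.endswith(')')
        value = value[1:-1].rstrip(" ")
        if value.endswith(','):
            value = value[:-1]
    return value

for text in ("(2, )", "(1, 2, 3)", "()"):
    print(repr(trim_pos_args(text)))
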
@@ -331,81 +352,8 @@ def customize_for_version36(self, version):
return
self.call36_dict = call36_dict


- FSTRING_CONVERSION_MAP = {1: '!s', 2: '!r', 3: '!a', 'X':':X'}

- def n_except_suite_finalize(node):
- if node[1] == 'returns' and self.hide_internal:
- # Process node[1] only.
- # The code after "returns", e.g. node[3], is dead code.
- # Adding it is wrong as it dedents and another
- # exception handler "except_stmt" afterwards.
- # Note it is also possible that the grammar is wrong here.
- # and this should not be "except_stmt".
- self.indent_more()
- self.preorder(node[1])
- self.indent_less()
- else:
- self.default(node)
- self.prune()
- self.n_except_suite_finalize = n_except_suite_finalize

- def n_formatted_value(node):
- if node[0] == 'LOAD_CONST':
- value = node[0].attr
- if isinstance(value, tuple):
- self.write(node[0].attr)
- else:
- self.write(escape_format(node[0].attr))
- self.prune()
- else:
- self.default(node)
- self.n_formatted_value = n_formatted_value

- def f_conversion(node):
- fmt_node = node.data[1]
- if fmt_node == 'expr' and fmt_node[0] == 'LOAD_CONST':
- data = fmt_node[0].attr
- else:
- data = fmt_node.attr
- node.conversion = FSTRING_CONVERSION_MAP.get(data, '')

- def fstring_expr(node):
- f_conversion(node)
- self.default(node)
- self.n_fstring_expr = fstring_expr

- def fstring_single(node):
- f_conversion(node)
- self.default(node)
- self.n_fstring_single = fstring_single

- def formatted_value_attr(node):
- f_conversion(node)
- fmt_node = node.data[3]
- if fmt_node == 'expr' and fmt_node[0] == 'LOAD_CONST':
- node.string = escape_format(fmt_node[0].attr)
- else:
- node.string = fmt_node

- self.default(node)
- self.n_formatted_value_attr = formatted_value_attr

- # def kwargs_only_36(node):
- # keys = node[-1].attr
- # num_kwargs = len(keys)
- # values = node[:num_kwargs]
- # for i, (key, value) in enumerate(zip(keys, values)):
- # self.write(key + '=')
- # self.preorder(value)
- # if i < num_kwargs:
- # self.write(',')
- # self.prune()
- # return
- # self.n_kwargs_only_36 = kwargs_only_36

def n_call_kw36(node):
- self.template_engine(("%c(", 0), node)
+ self.template_engine(("%p(", (0, 100)), node)
keys = node[-2].attr
num_kwargs = len(keys)
num_posargs = len(node) - (num_kwargs + 2)
@@ -442,6 +390,137 @@ def customize_for_version36(self, version):
return
self.n_call_kw36 = n_call_kw36


+ FSTRING_CONVERSION_MAP = {1: '!s', 2: '!r', 3: '!a', 'X':':X'}

+ def n_except_suite_finalize(node):
+ if node[1] == 'returns' and self.hide_internal:
+ # Process node[1] only.
+ # The code after "returns", e.g. node[3], is dead code.
+ # Adding it is wrong as it dedents and another
+ # exception handler "except_stmt" afterwards.
+ # Note it is also possible that the grammar is wrong here.
+ # and this should not be "except_stmt".
+ self.indent_more()
+ self.preorder(node[1])
+ self.indent_less()
+ else:
+ self.default(node)
+ self.prune()
+ self.n_except_suite_finalize = n_except_suite_finalize

+ def n_formatted_value(node):
+ if node[0] in ('LOAD_STR', 'LOAD_CONST'):
+ value = node[0].attr
+ if isinstance(value, tuple):
+ self.write(node[0].attr)
+ else:
+ self.write(escape_string(node[0].attr))
+ self.prune()
+ else:
+ self.default(node)
+ self.n_formatted_value = n_formatted_value

+ def n_formatted_value_attr(node):
+ f_conversion(node)
+ fmt_node = node.data[3]
+ if fmt_node == 'expr' and fmt_node[0] == 'LOAD_STR':
+ node.string = escape_format(fmt_node[0].attr)
+ else:
+ node.string = fmt_node
+ self.default(node)
+ self.n_formatted_value_attr = n_formatted_value_attr

+ def f_conversion(node):
+ fmt_node = node.data[1]
+ if fmt_node == 'expr' and fmt_node[0] == 'LOAD_STR':
+ data = fmt_node[0].attr
+ else:
+ data = fmt_node.attr
+ node.conversion = FSTRING_CONVERSION_MAP.get(data, '')
+ return node.conversion

+ def n_formatted_value1(node):
+ expr = node[0]
+ assert expr == 'expr'
+ value = self.traverse(expr, indent='')
+ conversion = f_conversion(node)
+ f_str = "f%s" % escape_string("{%s%s}" % (value, conversion))
+ self.write(f_str)
+ self.prune()

+ self.n_formatted_value1 = n_formatted_value1

+ def n_formatted_value2(node):
+ p = self.prec
+ self.prec = 100

+ expr = node[0]
+ assert expr == 'expr'
+ value = self.traverse(expr, indent='')
+ format_value_attr = node[-1]
+ assert format_value_attr == 'FORMAT_VALUE_ATTR'
+ attr = format_value_attr.attr
+ if attr == 4:
+ assert node[1] == 'expr'
+ fmt = strip_quotes(self.traverse(node[1], indent=''))
+ conversion = ":%s" % fmt
+ else:
+ conversion = FSTRING_CONVERSION_MAP.get(attr, '')

+ f_str = "f%s" % escape_string("{%s%s}" % (value, conversion))
+ self.write(f_str)

+ self.prec = p
+ self.prune()
+ self.n_formatted_value2 = n_formatted_value2

+ def n_joined_str(node):
+ p = self.prec
+ self.prec = 100

+ result = ''
+ for expr in node[:-1]:
+ assert expr == 'expr'
+ value = self.traverse(expr, indent='')
+ if expr[0].kind.startswith('formatted_value'):
+ # remove leading 'f'
+ assert value.startswith('f')
+ value = value[1:]
+ pass
+ else:
+ # {{ and }} in Python source-code format strings mean
+ # { and } respectively. But only when *not* part of a
+ # formatted value. However in the LOAD_STR
+ # bytecode, the escaping of the braces has been
+ # removed. So we need to put back the braces escaping in
+ # reconstructing the source.
+ assert expr[0] == 'LOAD_STR'
+ value = value.replace("{", "{{").replace("}", "}}")

+ # Remove leading quotes
+ result += strip_quotes(value)
+ pass
+ self.write('f%s' % escape_string(result))

+ self.prec = p
+ self.prune()
+ self.n_joined_str = n_joined_str


+ # def kwargs_only_36(node):
+ # keys = node[-1].attr
+ # num_kwargs = len(keys)
+ # values = node[:num_kwargs]
+ # for i, (key, value) in enumerate(zip(keys, values)):
+ # self.write(key + '=')
+ # self.preorder(value)
+ # if i < num_kwargs:
+ # self.write(',')
+ # self.prune()
+ # return
+ # self.n_kwargs_only_36 = kwargs_only_36

def starred(node):
l = len(node)
assert l > 0
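
`n_joined_str` and FSTRING_CONVERSION_MAP above undo CPython's FORMAT_VALUE encoding: conversion codes 1, 2 and 3 stand for `!s`, `!r` and `!a`, and literal braces that arrive unescaped in the LOAD_STR chunks have to be written back doubled. The underlying f-string behaviour can be checked directly:

# The facts the reconstruction relies on, demonstrated with plain f-strings.
value = "café"
print(f"{value!s}")    # café          (conversion code 1 -> !s)
print(f"{value!r}")    # 'café'        (conversion code 2 -> !r)
print(f"{value!a}")    # 'caf\xe9'     (conversion code 3 -> !a)

# Doubled braces are literal braces, so when the decompiler sees a plain
# string chunk containing '{' it must write it back as '{{'.
print(f"{{not a field}} but {value} is")   # {not a field} but café is
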
@@ -22,8 +22,13 @@ def customize_for_version37(self, version):
# Python 3.7+ changes
#######################

PRECEDENCE['attribute37'] = 2
+ PRECEDENCE['if_exp_37a'] = 28
+ PRECEDENCE['if_exp_37b'] = 28

TABLE_DIRECT.update({
+ 'and_not': ( '%c and not %c',
+ (0, 'expr'), (2, 'expr') ),
'async_forelse_stmt': (
'%|async for %c in %c:\n%+%c%-%|else:\n%+%c%-\n\n',
(7, 'store'), (1, 'expr'), (17, 'for_block'), (25, 'else_suite') ),
@@ -40,9 +45,15 @@ def customize_for_version37(self, version):
'compare_chained1_false_37': (
' %[3]{pattr.replace("-", " ")} %p %p',
(0, 19), (-4, 19)),
+ 'compare_chained2_false_37': (
+ ' %[3]{pattr.replace("-", " ")} %p %p',
+ (0, 19), (-5, 19)),
'compare_chained1b_37': (
' %[3]{pattr.replace("-", " ")} %p %p',
(0, 19), (-4, 19)),
+ 'compare_chained1c_37': (
+ ' %[3]{pattr.replace("-", " ")} %p %p',
+ (0, 19), (-2, 19)),
'compare_chained2a_37': (
'%[1]{pattr.replace("-", " ")} %p',
(0, 19) ),
@@ -54,5 +65,7 @@ def customize_for_version37(self, version):
(0, 19 ) ),
'compare_chained2c_37': (
'%[3]{pattr.replace("-", " ")} %p %p', (0, 19), (6, 19) ),
+ 'if_exp_37a': ( '%p if %p else %p', (1, 'expr', 27), (0, 27), (4, 'expr', 27) ),
+ 'if_exp_37b': ( '%p if %p else %p', (2, 'expr', 27), (0, 'expr', 27), (5, 'expr', 27) ),

})
@@ -424,6 +424,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
pass
self.set_pos_info(node, start, len(self.f.getvalue()))
self.prune()
+ n_LOAD_STR = n_LOAD_CONST

def n_exec_stmt(self, node):
"""
@@ -50,7 +50,7 @@ def find_globals_and_nonlocals(node, globs, nonlocals, code, version):
# # print("XXX", n.kind, global_ops)
# if isinstance(n, SyntaxTree):
# # FIXME: do I need a caser for n.kind="mkfunc"?
- # if n.kind in ("conditional_lambda", "return_lambda"):
+ # if n.kind in ("if_expr_lambda", "return_lambda"):
# globs = find_globals(n, globs, mklambda_globals)
# else:
# globs = find_globals(n, globs, global_ops)
@@ -68,20 +68,53 @@ def find_none(node):
|
|||||||
return True
|
return True
|
||||||
return False
|
return False
|
||||||
|
|
||||||
|
def escape_string(str, quotes=('"', "'", '"""', "'''")):
|
||||||
|
quote = None
|
||||||
|
for q in quotes:
|
||||||
|
if str.find(q) == -1:
|
||||||
|
quote = q
|
||||||
|
break
|
||||||
|
pass
|
||||||
|
if quote is None:
|
||||||
|
quote = '"""'
|
||||||
|
str = str.replace('"""', '\\"""')
|
||||||
|
|
||||||
|
for (orig, replace) in (('\t', '\\t'),
|
||||||
|
('\n', '\\n'),
|
||||||
|
('\r', '\\r')):
|
||||||
|
str = str.replace(orig, replace)
|
||||||
|
return "%s%s%s" % (quote, str, quote)
|
||||||
|
|
||||||
|
def strip_quotes(str):
|
||||||
|
if str.startswith("'''") and str.endswith("'''"):
|
||||||
|
str = str[3:-3]
|
||||||
|
elif str.startswith('"""') and str.endswith('"""'):
|
||||||
|
str = str[3:-3]
|
||||||
|
elif str.startswith("'") and str.endswith("'"):
|
||||||
|
str = str[1:-1]
|
||||||
|
elif str.startswith('"') and str.endswith('"'):
|
||||||
|
str = str[1:-1]
|
||||||
|
pass
|
||||||
|
return str
|
||||||
|
|
||||||
|
|
||||||
def print_docstring(self, indent, docstring):
|
def print_docstring(self, indent, docstring):
|
||||||
try:
|
quote = '"""'
|
||||||
if docstring.find('"""') == -1:
|
if docstring.find(quote) >= 0:
|
||||||
quote = '"""'
|
if docstring.find("'''") == -1:
|
||||||
else:
|
|
||||||
quote = "'''"
|
quote = "'''"
|
||||||
docstring = docstring.replace("'''", "\\'''")
|
|
||||||
except:
|
|
||||||
return False
|
|
||||||
self.write(indent)
|
self.write(indent)
|
||||||
if not PYTHON3 and not isinstance(docstring, str):
|
if not PYTHON3 and not isinstance(docstring, str):
|
||||||
# Must be unicode in Python2
|
# Must be unicode in Python2
|
||||||
self.write('u')
|
self.write('u')
|
||||||
docstring = repr(docstring.expandtabs())[2:-1]
|
docstring = repr(docstring.expandtabs())[2:-1]
|
||||||
|
elif PYTHON3 and 2.4 <= self.version <= 2.7:
|
||||||
|
try:
|
||||||
|
repr(docstring.expandtabs())[1:-1].encode("ascii")
|
||||||
|
except UnicodeEncodeError:
|
||||||
|
self.write('u')
|
||||||
|
docstring = repr(docstring.expandtabs())[1:-1]
|
||||||
else:
|
else:
|
||||||
docstring = repr(docstring.expandtabs())[1:-1]
|
docstring = repr(docstring.expandtabs())[1:-1]
|
||||||
|
|
||||||
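
The two helpers introduced above choose a quote style that does not occur in the text and, conversely, peel the quotes off an already-rendered literal. A quick usage rehearsal follows (the bodies are condensed from the diff so the snippet runs on its own; in the project they live in uncompyle6.semantics.helper):

# Condensed re-statement of the two helpers, plus example calls.
def escape_string(s, quotes=('"', "'", '"""', "'''")):
    quote = None
    for q in quotes:
        if s.find(q) == -1:          # first quote style not used in the text
            quote = q
            break
    if quote is None:                # text uses every style: fall back
        quote = '"""'
        s = s.replace('"""', '\\"""')
    for orig, replace in (('\t', '\\t'), ('\n', '\\n'), ('\r', '\\r')):
        s = s.replace(orig, replace)
    return "%s%s%s" % (quote, s, quote)

def strip_quotes(s):
    for q in ("'''", '"""', "'", '"'):
        if s.startswith(q) and s.endswith(q):
            return s[len(q):-len(q)]
    return s

print(escape_string('say "hi"\n'))        # 'say "hi"\n'  (single quotes chosen)
print(strip_quotes("'''docstring'''"))    # docstring
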
@@ -102,40 +135,42 @@ def print_docstring(self, indent, docstring):
and (docstring[-1] != '"'
or docstring[-2] == '\t')):
self.write('r') # raw string
- # restore backslashes unescaped since raw
+ # Restore backslashes unescaped since raw
docstring = docstring.replace('\t', '\\')
else:
- # Escape '"' if it's the last character, so it doesn't
+ # Escape the last character if it is the same as the
- # ruin the ending triple quote
+ # triple quote character.
- if len(docstring) and docstring[-1] == '"':
+ quote1 = quote[-1]
- docstring = docstring[:-1] + '\\"'
+ if len(docstring) and docstring[-1] == quote1:
- # Restore escaped backslashes
+ docstring = docstring[:-1] + '\\' + quote1

+ # Escape triple quote when needed
+ if quote == '"""':
+ replace_str = '\\"""'
+ else:
+ assert quote == "'''"
+ replace_str = "\\'''"

+ docstring = docstring.replace(quote, replace_str)
docstring = docstring.replace('\t', '\\\\')
- # Escape triple quote when needed
- if quote == '""""':
- docstring = docstring.replace('"""', '\\"\\"\\"')
lines = docstring.split('\n')
- calculate_indent = maxint
- for line in lines[1:]:
- stripped = line.lstrip()
- if len(stripped) > 0:
- calculate_indent = min(calculate_indent, len(line) - len(stripped))
- calculate_indent = min(calculate_indent, len(lines[-1]) - len(lines[-1].lstrip()))
- # Remove indentation (first line is special):
- trimmed = [lines[0]]
- if calculate_indent < maxint:
- trimmed += [line[calculate_indent:] for line in lines[1:]]

self.write(quote)
- if len(trimmed) == 0:
+ if len(lines) == 0:
self.println(quote)
- elif len(trimmed) == 1:
+ elif len(lines) == 1:
- self.println(trimmed[0], quote)
+ self.println(lines[0], quote)
else:
- self.println(trimmed[0])
+ self.println(lines[0])
- for line in trimmed[1:-1]:
+ for line in lines[1:-1]:
- self.println( indent, line )
+ if line:
- self.println(indent, trimmed[-1], quote)
+ self.println( line )
+ else:
+ self.println( "\n\n" )
+ pass
+ pass
+ self.println(lines[-1], quote)
return True

@@ -161,6 +196,26 @@ def flatten_list(node):
pass
return flat_elems

+ # Note: this is only used in Python > 3.0
+ # Should move this somewhere more specific?
+ def gen_function_parens_adjust(mapping_key, node):
+ """If we can avoid the outer parenthesis
+ of a generator function, set the node key to
+ 'call_generator' and the caller will do the default
+ action on that. Otherwise we do nothing.
+ """
+ if mapping_key.kind != 'CALL_FUNCTION_1':
+ return

+ args_node = node[-2]
+ if args_node == 'pos_arg':
+ assert args_node[0] == 'expr'
+ n = args_node[0][0]
+ if n == 'generator_exp':
+ node.kind = 'call_generator'
+ pass
+ return


# if __name__ == '__main__':
# if PYTHON3:
@@ -85,6 +85,12 @@ def make_function3_annotate(self, node, is_lambda, nested=1,
annotate_argc = 0
pass

+ annotate_dict = {}

+ for name in annotate_args.keys():
+ n = self.traverse(annotate_args[name], indent='')
+ annotate_dict[name] = n

if 3.0 <= self.version <= 3.2:
lambda_index = -2
elif 3.03 <= self.version:
@@ -103,7 +109,11 @@ def make_function3_annotate(self, node, is_lambda, nested=1,

# add defaults values to parameter names
argc = code.co_argcount
+ kwonlyargcount = code.co_kwonlyargcount

paramnames = list(code.co_varnames[:argc])
+ if kwonlyargcount > 0:
+ kwargs = list(code.co_varnames[argc:argc+kwonlyargcount])

try:
ast = self.build_ast(code._tokens,
@@ -129,10 +139,6 @@ def make_function3_annotate(self, node, is_lambda, nested=1,
indent = ' ' * l
line_number = self.line_number

- if code_has_star_arg(code):
- self.write('*%s' % code.co_varnames[argc + kw_pairs])
- argc += 1

i = len(paramnames) - len(defparams)
suffix = ''

@@ -141,10 +147,8 @@ def make_function3_annotate(self, node, is_lambda, nested=1,
for param in paramnames[:i]:
self.write(suffix, param)
suffix = ', '
- if param in annotate_tuple[0].attr:
+ if param in annotate_dict:
- p = [x for x in annotate_tuple[0].attr].index(param)
+ self.write(': %s' % annotate_dict[param])
- self.write(': ')
- self.preorder(node[p])
if (line_number != self.line_number):
suffix = ",\n" + indent
line_number = self.line_number
@@ -183,8 +187,17 @@ def make_function3_annotate(self, node, is_lambda, nested=1,
suffix = ', '


+ if code_has_star_arg(code):
+ star_arg = code.co_varnames[argc + kwonlyargcount]
+ if annotate_dict and star_arg in annotate_dict:
+ self.write(suffix, '*%s: %s' % (star_arg, annotate_dict[star_arg]))
+ else:
+ self.write(suffix, '*%s' % star_arg)
+ argc += 1

# self.println(indent, '#flags:\t', int(code.co_flags))
- if kw_args + annotate_argc > 0:
+ ends_in_comma = False
+ if kwonlyargcount > 0:
if no_paramnames:
if not code_has_star_arg(code):
if argc > 0:
@@ -194,49 +207,52 @@ def make_function3_annotate(self, node, is_lambda, nested=1,
pass
else:
self.write(", ")
+ ends_in_comma = True
+ else:
+ if argc > 0:
+ self.write(', ')
+ ends_in_comma = True

- kwargs = node[0]
+ kw_args = [None] * kwonlyargcount
- last = len(kwargs)-1
- i = 0
+ for n in node:
- for n in node[0]:
+ if n == 'kwargs':
- if n == 'kwarg':
+ n = n[0]
- if (line_number != self.line_number):
+ if n == 'kwarg':
- self.write("\n" + indent)
+ name = eval(n[0].pattr)
- line_number = self.line_number
+ idx = kwargs.index(name)
- self.write('%s=' % n[0].pattr)
+ default = self.traverse(n[1], indent='')
- self.preorder(n[1])
+ if annotate_dict and name in annotate_dict:
- if i < last:
+ kw_args[idx] = '%s: %s=%s' % (name, annotate_dict[name], default)
- self.write(', ')
+ else:
- i += 1
+ kw_args[idx] = '%s=%s' % (name, default)
- pass
- pass
- annotate_args = []
- for n in node:
- if n == 'annotate_arg':
- annotate_args.append(n[0])
- elif n == 'annotate_tuple':
- t = n[0].attr
- if t[-1] == 'return':
- t = t[0:-1]
- annotate_args = annotate_args[:-1]
- pass
- last = len(annotate_args) - 1
- for i in range(len(annotate_args)):
- self.write("%s: " % (t[i]))
- self.preorder(annotate_args[i])
- if i < last:
- self.write(', ')
- pass
- pass
- break
pass
pass

+ # handling other args
+ ann_other_kw = [c == None for c in kw_args]
+ for i, flag in enumerate(ann_other_kw):
+ if flag:
+ n = kwargs[i]
+ if n in annotate_dict:
+ kw_args[i] = "%s: %s" %(n, annotate_dict[n])
+ else:
+ kw_args[i] = "%s" % n

- if code_has_star_star_arg(code):
+ self.write(', '.join(kw_args), ', ')
- if argc > 0:
- self.write(', ')
+ else:
- self.write('**%s' % code.co_varnames[argc + kw_pairs])
+ if argc == 0:
+ ends_in_comma = True

+ if code_has_star_star_arg(code):
+ if not ends_in_comma:
+ self.write(', ')
+ star_star_arg = code.co_varnames[argc + kwonlyargcount]
+ if annotate_dict and star_star_arg in annotate_dict:
+ self.write('**%s: %s' % (star_star_arg, annotate_dict[star_star_arg]))
+ else:
+ self.write('**%s' % star_star_arg)

if is_lambda:
self.write(": ")
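
The annotate-aware writer above collects annotations into `annotate_dict` and weaves them back into the printed signature together with keyword-only arguments and their defaults. The target shapes it must reproduce, and the code-object counts it reads, can be seen with the standard inspect module (illustration only):

# What an annotated signature looks like once reconstructed, checked
# against what the compiler records on the code object.
import inspect

def f(a: int, b=2, *args: str, c: float = 1.5, d, **kw: int) -> None:
    pass

print(inspect.signature(f))
# (a: int, b=2, *args: str, c: float = 1.5, d, **kw: int) -> None

code = f.__code__
print(code.co_argcount, code.co_kwonlyargcount)   # 2 2
print(f.__annotations__['c'])                     # <class 'float'>
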
@@ -473,7 +489,7 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):

# Thank you, Python.

- def build_param(ast, name, default):
+ def build_param(ast, name, default, annotation=None):
"""build parameters:
- handle defaults
- handle format tuple parameters
@@ -483,7 +499,10 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
else:
value = self.traverse(default, indent='')
maybe_show_tree_param_default(self.showast, name, value)
- result = '%s=%s' % (name, value)
+ if annotation:
+ result = '%s: %s=%s' % (name, annotation, value)
+ else:
+ result = '%s=%s' % (name, value)

# The below can probably be removed. This is probably
# a holdover from days when LOAD_CONST erroneously
@@ -658,7 +677,11 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):

# add defaults values to parameter names
argc = code.co_argcount
+ kwonlyargcount = code.co_kwonlyargcount

paramnames = list(scanner_code.co_varnames[:argc])
+ if kwonlyargcount > 0:
+ kwargs = list(scanner_code.co_varnames[argc:argc+kwonlyargcount])

# defaults are for last n parameters, thus reverse
paramnames.reverse();
@@ -681,21 +704,37 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
else:
kw_pairs = 0

+ i = len(paramnames) - len(defparams)
+ no_paramnames = len(paramnames[:i]) == 0

# build parameters
params = []
if defparams:
for i, defparam in enumerate(defparams):
- params.append(build_param(ast, paramnames[i], defparam))
+ params.append(build_param(ast, paramnames[i], defparam,
+ annotate_dict.get(paramnames[i])))

- params += paramnames[i+1:]
+ for param in paramnames[i+1:]:
+ if param in annotate_dict:
+ params.append("%s: %s" % (param, annotate_dict[param]))
+ else:
+ params.append(param)
else:
- params = paramnames
+ for param in paramnames:
+ if param in annotate_dict:
+ params.append("%s: %s" % (param, annotate_dict[param]))
+ else:
+ params.append(param)

params.reverse() # back to correct order

if code_has_star_arg(code):
if self.version > 3.0:
- params.append('*%s' % code.co_varnames[argc + kw_pairs])
+ star_arg = code.co_varnames[argc + kwonlyargcount]
+ if annotate_dict and star_arg in annotate_dict:
+ params.append('*%s: %s' % (star_arg, annotate_dict[star_arg]))
+ else:
+ params.append('*%s' % star_arg)
else:
params.append('*%s' % code.co_varnames[argc])
argc += 1
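
The parameter rebuilding above indexes `co_varnames` with `argc + kwonlyargcount` to find the `*args` name. That relies on CPython's layout of the varnames tuple, shown here on a real code object (illustration, not project code):

# co_varnames layout: positional parameters first, then keyword-only ones,
# then *args, then **kwargs, then ordinary locals.
def g(a, b, *args, c=1, **kw):
    x = 0
    return x

code = g.__code__
argc, kwonly = code.co_argcount, code.co_kwonlyargcount
print(code.co_varnames[:argc])                # ('a', 'b')
print(code.co_varnames[argc:argc + kwonly])   # ('c',)
print(code.co_varnames[argc + kwonly])        # 'args'
print(code.co_varnames[argc + kwonly + 1])    # 'kw'
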
@@ -724,17 +763,25 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
|
|||||||
self.write("(", ", ".join(params))
|
self.write("(", ", ".join(params))
|
||||||
# self.println(indent, '#flags:\t', int(code.co_flags))
|
# self.println(indent, '#flags:\t', int(code.co_flags))
|
||||||
|
|
||||||
|
# FIXME: Could we remove ends_in_comma and its tests if we just
|
||||||
|
# created a parameter list and at the very end did a join on that?
|
||||||
|
# Unless careful, We might lose line breaks though.
|
||||||
ends_in_comma = False
|
ends_in_comma = False
|
||||||
if kw_args > 0:
|
if kwonlyargcount > 0:
|
||||||
if not (4 & code.co_flags):
|
if no_paramnames:
|
||||||
if argc > 0:
|
if not (4 & code.co_flags):
|
||||||
self.write(", *, ")
|
if argc > 0:
|
||||||
|
self.write(", *, ")
|
||||||
|
else:
|
||||||
|
self.write("*, ")
|
||||||
|
pass
|
||||||
else:
|
else:
|
||||||
self.write("*, ")
|
self.write(", ")
|
||||||
pass
|
ends_in_comma = True
|
||||||
else:
|
else:
|
||||||
self.write(", ")
|
if argc > 0:
|
||||||
ends_in_comma = True
|
self.write(', ')
|
||||||
|
ends_in_comma = True
|
||||||
|
|
||||||
# FIXME: this is not correct for 3.5. or 3.6 (which works different)
|
# FIXME: this is not correct for 3.5. or 3.6 (which works different)
|
||||||
# and 3.7?
|
# and 3.7?
|
||||||
@@ -744,7 +791,7 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
|
|||||||
i = 0
|
i = 0
|
||||||
for n in node[0]:
|
for n in node[0]:
|
||||||
if n == 'kwarg':
|
if n == 'kwarg':
|
||||||
self.write('%s=' % n[0].pattr)
|
self.write('%s=' % n[0].attr)
|
||||||
self.preorder(n[1])
|
self.preorder(n[1])
|
||||||
if i < last:
|
if i < last:
|
||||||
self.write(', ')
|
self.write(', ')
|
||||||
@@ -773,7 +820,7 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
|
|||||||
# argcount = co.co_argcount
|
# argcount = co.co_argcount
|
||||||
# kwonlyargcount = co.co_kwonlyargcount
|
# kwonlyargcount = co.co_kwonlyargcount
|
||||||
|
|
||||||
free_tup = annotate_dict = kw_dict = default_tup = None
|
free_tup = ann_dict = kw_dict = default_tup = None
|
||||||
fn_bits = node[-1].attr
|
fn_bits = node[-1].attr
|
||||||
index = -4 # Skip over:
|
index = -4 # Skip over:
|
||||||
# MAKE_FUNCTION,
|
# MAKE_FUNCTION,
|
||||||
@@ -783,7 +830,7 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
|
|||||||
free_tup = node[index]
|
free_tup = node[index]
|
||||||
index -= 1
|
index -= 1
|
||||||
if fn_bits[-2]:
|
if fn_bits[-2]:
|
||||||
annotate_dict = node[index]
|
ann_dict = node[index]
|
||||||
index -= 1
|
index -= 1
|
||||||
if fn_bits[-3]:
|
if fn_bits[-3]:
|
||||||
kw_dict = node[index]
|
kw_dict = node[index]
|
||||||
@@ -795,6 +842,8 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
|
|||||||
kw_dict = kw_dict[0]
|
kw_dict = kw_dict[0]
|
||||||
|
|
||||||
# FIXME: handle free_tup, annotate_dict, and default_tup
|
# FIXME: handle free_tup, annotate_dict, and default_tup
|
||||||
|
kw_args = [None] * kwonlyargcount
|
||||||
|
|
||||||
if kw_dict:
|
if kw_dict:
|
||||||
assert kw_dict == 'dict'
|
assert kw_dict == 'dict'
|
||||||
defaults = [self.traverse(n, indent='') for n in kw_dict[:-2]]
|
defaults = [self.traverse(n, indent='') for n in kw_dict[:-2]]
|
||||||
@@ -803,18 +852,42 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
|
|||||||
sep = ''
|
sep = ''
|
||||||
# FIXME: possibly handle line breaks
|
# FIXME: possibly handle line breaks
|
||||||
for i, n in enumerate(names):
|
for i, n in enumerate(names):
|
||||||
self.write(sep)
|
idx = kwargs.index(n)
|
||||||
self.write("%s=%s" % (n, defaults[i]))
|
if annotate_dict and n in annotate_dict:
|
||||||
sep = ', '
|
t = "%s: %s=%s" % (n, annotate_dict[n], defaults[i])
|
||||||
ends_in_comma = False
|
else:
|
||||||
|
t = "%s=%s" % (n, defaults[i])
|
||||||
|
kw_args[idx] = t
|
||||||
pass
|
pass
|
||||||
pass
|
pass
|
||||||
|
|
||||||
|
# handle others
|
||||||
|
if ann_dict:
|
||||||
|
ann_other_kw = [c == None for c in kw_args]
|
||||||
|
|
||||||
|
for i, flag in enumerate(ann_other_kw):
|
||||||
|
if flag:
|
||||||
|
n = kwargs[i]
|
||||||
|
if n in annotate_dict:
|
||||||
|
kw_args[i] = "%s: %s" %(n, annotate_dict[n])
|
||||||
|
else:
|
||||||
|
kw_args[i] = "%s" % n
|
||||||
|
self.write(', '.join(kw_args))
|
||||||
|
ends_in_comma = False
|
||||||
|
|
||||||
pass
|
pass
|
||||||
|
else:
|
||||||
|
if argc == 0:
|
||||||
|
ends_in_comma = True
|
||||||
|
|
||||||
if code_has_star_star_arg(code):
|
if code_has_star_star_arg(code):
|
||||||
if argc > 0 and not ends_in_comma:
|
if not ends_in_comma:
|
||||||
self.write(', ')
|
self.write(', ')
|
||||||
self.write('**%s' % code.co_varnames[argc + kw_pairs])
|
star_star_arg = code.co_varnames[argc + kwonlyargcount]
|
||||||
|
if annotate_dict and star_star_arg in annotate_dict:
|
||||||
|
self.write('**%s: %s' % (star_star_arg, annotate_dict[star_star_arg]))
|
||||||
|
else:
|
||||||
|
self.write('**%s' % star_star_arg)
|
||||||
|
|
||||||
if is_lambda:
|
if is_lambda:
|
||||||
self.write(": ")
|
self.write(": ")
|
||||||
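The hunks above move make_function3 toward filling one string per keyword-only parameter (annotation and default attached where known) and joining the list once at the end, instead of writing separators piecemeal. A standalone sketch of that strategy, using made-up names and data rather than the project's API:

def format_kwonly_args(kwonly_names, annotations=None, defaults=None):
    # One slot per keyword-only argument; fill each with the richest form we know.
    annotations = annotations or {}
    defaults = defaults or {}
    slots = []
    for name in kwonly_names:
        if name in defaults and name in annotations:
            slots.append("%s: %s=%s" % (name, annotations[name], defaults[name]))
        elif name in defaults:
            slots.append("%s=%s" % (name, defaults[name]))
        elif name in annotations:
            slots.append("%s: %s" % (name, annotations[name]))
        else:
            slots.append(name)
    return ", ".join(slots)

# prints "*, a: int=0, b" for a signature like def f(*, a: int = 0, b): ...
print("*, " + format_kwonly_args(["a", "b"], {"a": "int"}, {"a": "0"}))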
@@ -26,7 +26,7 @@ Upper levels of the grammar is a more-or-less conventional grammar for
 Python.
 """

-# The below is a bit long, but still it is somehwat abbreviated.
+# The below is a bit long, but still it is somewhat abbreviated.
 # See https://github.com/rocky/python-uncompyle6/wiki/Table-driven-semantic-actions.
 # for a more complete explanation, nicely marked up and with examples.
 #
@@ -363,7 +363,10 @@ class SourceWalker(GenericASTTraversal, object):
 def write(self, *data):
 if (len(data) == 0) or (len(data) == 1 and data[0] == ''):
 return
-out = ''.join((str(j) for j in data))
+if not PYTHON3:
+out = ''.join((unicode(j) for j in data))
+else:
+out = ''.join((str(j) for j in data))
 n = 0
 for i in out:
 if i == '\n':
@@ -607,17 +610,22 @@ class SourceWalker(GenericASTTraversal, object):
 else:
 self.write(repr(data))
 else:
+if not PYTHON3:
+try:
+repr(data).encode("ascii")
+except UnicodeEncodeError:
+self.write('u')
 self.write(repr(data))
 # LOAD_CONST is a terminal, so stop processing/recursing early
 self.prune()

-def n_delete_subscr(self, node):
+def n_delete_subscript(self, node):
 if node[-2][0] == 'build_list' and node[-2][0][-1].kind.startswith('BUILD_TUPLE'):
 if node[-2][0][-1] != 'BUILD_TUPLE_0':
 node[-2][0].kind = 'build_tuple2'
 self.default(node)

-n_store_subscript = n_subscript = n_delete_subscr
+n_store_subscript = n_subscript = n_delete_subscript

 # Note: this node is only in Python 2.x
 # FIXME: figure out how to get this into customization
@@ -1100,6 +1108,9 @@ class SourceWalker(GenericASTTraversal, object):
 comp_store = ast[3]

 have_not = False

+# Iterate to find the innermost store
+# We'll come back to the list iteration below.
 while n in ('list_iter', 'comp_iter'):
 # iterate one nesting deeper
 if self.version == 3.0 and len(n) == 3:
@@ -1109,7 +1120,7 @@ class SourceWalker(GenericASTTraversal, object):
 n = n[0]

 if n in ('list_for', 'comp_for'):
-if n[2] == 'store':
+if n[2] == 'store' and not store:
 store = n[2]
 n = n[3]
 elif n in ('list_if', 'list_if_not', 'comp_if', 'comp_if_not'):
@@ -1153,11 +1164,12 @@ class SourceWalker(GenericASTTraversal, object):
 self.write(' in ')
 self.preorder(node[-3])

+# Here is where we handle nested list iterations.
 if ast == 'list_comp' and self.version != 3.0:
 list_iter = ast[1]
 assert list_iter == 'list_iter'
-if list_iter == 'list_for':
-self.preorder(list_iter[3])
+if list_iter[0] == 'list_for':
+self.preorder(list_iter[0][3])
 self.prec = p
 return
 pass
@@ -1168,6 +1180,7 @@ class SourceWalker(GenericASTTraversal, object):
 self.write(' if ')
 if have_not:
 self.write('not ')
+self.prec = 27
 self.preorder(if_node)
 pass
 self.prec = p
@@ -1423,22 +1436,20 @@ class SourceWalker(GenericASTTraversal, object):
 n = len(node) - 1
 if node.kind != 'expr':
 if node == 'kwarg':
-self.write('(')
-self.template_engine(('%[0]{pattr}=%c', 1), node)
-self.write(')')
+self.template_engine(('(%[0]{attr}=%c)', 1), node)
 return

 kwargs = None
 assert node[n].kind.startswith('CALL_FUNCTION')

 if node[n].kind.startswith('CALL_FUNCTION_KW'):
-# 3.6+ starts does this
+# 3.6+ starts doing this
 kwargs = node[n-1].attr
 assert isinstance(kwargs, tuple)
 i = n - (len(kwargs)+1)
 j = 1 + n - node[n].attr
 else:
-start = n-2
+i = start = n-2
 for i in range(start, 0, -1):
 if not node[i].kind in ['expr', 'call', 'LOAD_CLASSNAME']:
 break
@@ -1836,11 +1847,7 @@ class SourceWalker(GenericASTTraversal, object):
 typ = m.group('type') or '{'
 node = startnode
 if m.group('child'):
-try:
-node = node[int(m.group('child'))]
-except:
-from trepan.api import debug; debug()
-pass
+node = node[int(m.group('child'))]

 if typ == '%': self.write('%')
 elif typ == '+':
@@ -2100,6 +2107,7 @@ class SourceWalker(GenericASTTraversal, object):
 except:
 pass


 have_qualname = False
 if self.version < 3.0:
 # Should we ditch this in favor of the "else" case?
@@ -2115,7 +2123,7 @@ class SourceWalker(GenericASTTraversal, object):
 # which are not simple classes like the < 3 case.
 try:
 if (first_stmt[0] == 'assign' and
-first_stmt[0][0][0] == 'LOAD_CONST' and
+first_stmt[0][0][0] == 'LOAD_STR' and
 first_stmt[0][1] == 'store' and
 first_stmt[0][1][0] == Token('STORE_NAME', pattr='__qualname__')):
 have_qualname = True
@@ -2326,13 +2334,28 @@ def code_deparse(co, out=sys.stdout, version=None, debug_opts=DEFAULT_DEBUG_OPTS

 assert not nonlocals

+if version >= 3.0:
+load_op = 'LOAD_STR'
+else:
+load_op = 'LOAD_CONST'

 # convert leading '__doc__ = "..." into doc string
 try:
-if deparsed.ast[0][0] == ASSIGN_DOC_STRING(co.co_consts[0]):
+stmts = deparsed.ast
+first_stmt = stmts[0][0]
+if version >= 3.6:
+if first_stmt[0] == 'SETUP_ANNOTATIONS':
+del stmts[0]
+assert stmts[0] == 'sstmt'
+# Nuke sstmt
+first_stmt = stmts[0][0]
+pass
+pass
+if first_stmt == ASSIGN_DOC_STRING(co.co_consts[0], load_op):
 print_docstring(deparsed, '', co.co_consts[0])
-del deparsed.ast[0]
-if deparsed.ast[-1] == RETURN_NONE:
-deparsed.ast.pop() # remove last node
+del stmts[0]
+if stmts[-1] == RETURN_NONE:
+stmts.pop() # remove last node
 # todo: if empty, add 'pass'
 except:
 pass
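The code_deparse hunk above picks LOAD_STR or LOAD_CONST by version, skips a leading SETUP_ANNOTATIONS statement, turns a leading __doc__ assignment back into a docstring, and drops a trailing return None. A self-contained sketch of that cleanup on plain statement strings (clean_module_stmts is a hypothetical helper, not uncompyle6's API, and the SETUP_ANNOTATIONS check is simplified to a string match):

def clean_module_stmts(stmts, doc_const):
    # stmts: deparsed module statements as strings; doc_const: co_consts[0] or None.
    if stmts and stmts[0].strip() == "SETUP_ANNOTATIONS":
        del stmts[0]                # annotation bookkeeping, not real source
    if stmts and doc_const is not None and stmts[0].strip() == "__doc__ = %r" % (doc_const,):
        stmts[0] = repr(doc_const)  # re-emit the assignment as a bare docstring
    if stmts and stmts[-1].strip() == "return None":
        stmts.pop()                 # implicit at module end
    return stmts

print(clean_module_stmts(["__doc__ = 'hi'", "x = 1", "return None"], "hi"))
# -> ["'hi'", 'x = 1']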
@@ -12,4 +12,4 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 # This file is suitable for sourcing inside bash as
 # well as importing into Python
-VERSION='3.3.2' # noqa
+VERSION='3.3.4' # noqa