Compare commits

...

69 Commits

Author SHA1 Message Date
rocky
b51039ac1e Get ready for release 2.12.0 2017-09-26 09:59:55 -04:00
rocky
f73f0ba41c No unicode in Python3.
but we need it in Python2. The bug was probably introduced
as a result of the recent Python code type interoperability canonicalization
2017-09-26 09:43:01 -04:00
rocky
114f979555 Python 3.1 Annotation args can be unicode? 2017-09-26 09:31:04 -04:00
rocky
7b38d2f1f8 Adjust for xdis opcode JUMP_OPS. release 2.12.0 2017-09-25 20:01:31 -04:00
rocky
dfbd60231b Get ready for release 2.12.0 2017-09-25 19:11:25 -04:00
rocky
8b67f2ccd0 Python 3 compatibility 2017-09-21 11:47:42 -04:00
rocky
aadea7224d Unit test for format-specifiers
And in the process we catch some small bugs
2017-09-21 11:25:51 -04:00
rocky
da7421da1c Tidy pysource and fragments a little more 2017-09-20 19:02:56 -04:00
rocky
96ca68a6fe Tidy/regularize table entry formatting 2017-09-20 17:47:56 -04:00
rocky
147b6e1cfe Small fixes
test_pyenvlib.py: it is sys.exit(), not exit()
pysource.py: reinstate node type of async_func_call
2017-09-20 11:32:42 -04:00
rocky
d7b12f4da1 More small doc changes 2017-09-20 02:49:14 -04:00
rocky
c7b9e54e59 Update Table-driven info...
Start a pysource unit test.
2017-09-20 00:06:50 -04:00
rocky
3003070acb engine -> template_engine 2017-09-17 11:56:51 -04:00
rocky
19d6dedcf5 Need weak-verification on 3.4 for now 2017-09-13 01:09:04 -04:00
rocky
51ad3fb36e Revert one of the changes pending a better fix 2017-09-10 03:01:19 -04:00
rocky
f017acce21 More semantic action cleanup 2017-09-10 02:56:47 -04:00
rocky
5bef5683e4 Match Python 3.4's names a little better 2017-09-10 00:48:54 -04:00
rocky
4e1467adc8 Revert last revert 2017-09-09 08:08:40 -04:00
rocky
7cdf0abb43 Revert last change 2017-09-09 08:03:04 -04:00
rocky
9b336251a7 New-style Python classes only, please. 2017-09-09 07:47:21 -04:00
rocky
7844456e1e Skeletal support for Python 3.7
Largely failing though.
2017-08-31 10:12:09 -04:00
rocky
356ea6c770 Remove python versions tag
I think it's messing up PyPI's very fussy formatting
2017-08-31 09:50:48 -04:00
rocky
4d58438515 Get ready for release 2.11.5 2017-08-31 09:42:14 -04:00
rocky
f7bfe3f7b2 3.7 support 2017-08-15 21:52:43 -04:00
rocky
c54a47b15f Get ready for release 2.11.4 2017-08-15 10:57:14 -04:00
rocky
d1e02afb4b Misc cleanups...
remove code now in xdis
require at least xdis 3.5.4
PyPy tolerance in validate testing
2017-08-15 09:41:39 -04:00
rocky
f4ceb6304d Allow 3-part version string lookups, e.g. 2.7.1
We allow a float here, but if passed a string like
'2.7' or '2.7.13', accept that when looking up
either a scanner or a parser. (A usage sketch follows the commit list.)
2017-08-13 09:17:07 -04:00
rocky
503039ab51 Link typo
Name is trepan2 now not trepan
2017-08-10 09:41:48 -04:00
rocky
8393064136 Get ready for release 2.11.3
need xdis 3.5.1 for now. Adjust for xdis "is-not" which we need as "is not"
2017-08-09 22:09:31 -04:00
rocky
bb9b3ac9cf Revert commit to wrong branch 2017-08-02 08:25:39 -04:00
rocky
05ac60ea74 Remove six from Python-2.4/2.5 package 2017-08-02 08:18:54 -04:00
rocky
d138a01bf1 xdis's "exception match" is now "exception-match" 2017-07-17 22:42:57 -04:00
rocky
9e8e4f54c7 xdis 3.5.1 is botched? 2017-07-15 00:24:40 -04:00
rocky
a06a5e1cd8 Use newer xdis 2017-07-14 23:45:56 -04:00
R. Bernstein
1048f6a964 Fixes issue #124 2017-07-14 23:43:40 -04:00
rocky
7fed237077 History updates 2017-07-14 08:03:06 -04:00
rocky
8b816ead0d RsT doc formatting 2017-07-09 02:06:39 -04:00
rocky
300d387349 Get ready for release 2.11.2 2017-07-09 01:44:55 -04:00
rocky
27ab6fe2f5 Use xdis 3.5.0's opcode sets 2017-07-08 20:41:46 -04:00
rocky
2e164763eb Start supporting Pypy 3.5 (5.7.1-beta) 2017-07-08 17:47:32 -04:00
rocky
d332bde104 Loops in Python 2.4-2.6 loop come_from
Looks like Python 2.4-2.6 may have a COME_FROM(_LOOP)
before the jump_back.

Fixes Issue #123
2017-07-05 06:12:14 -04:00
rocky
0893652943 Work around not having real flow-control analysis 2017-06-29 20:49:01 -04:00
rocky
6efd7afda3 continue non-detection in Python 2.7
fixes issue 122
2017-06-29 20:27:07 -04:00
rocky
ee3202779a A guard against badly formatted bytecode 2017-06-28 18:39:05 -04:00
rocky
9c072a6a42 3.x function and annotation bug fixes 2017-06-25 18:46:03 -04:00
rocky
277ad36566 Get ready for release 2.11.1 2017-06-25 13:50:46 -04:00
rocky
af3d46b35c Use xdis' instruction offset calculation fns..
next_offset, op_size, has_argument
2017-06-24 06:43:04 -04:00
rocky
e1bc0c5cd6 Python 2 sometimes needs str->unicode in writing? 2017-06-19 08:02:59 -04:00
rocky
5a519ed36a Allow deparsed out to be str as well as unicode 2017-06-19 07:55:09 -04:00
rocky
af10f99776 Get ready for release 2.11.0 2017-06-18 15:31:44 -04:00
rocky
0cbafa6e3a Adjust nodeInfo if it is a Token 2017-06-13 04:41:32 -04:00
rocky
4afaee2a36 Add nonterminal node in extractInfo 2017-06-13 04:17:23 -04:00
rocky
daea3c348c Fragment tag more expressions
Revise make_function3 comment wrt args and kwargs
2017-06-10 16:31:56 -04:00
rocky
bf45260588 Fragment tag array subscripts 2017-06-10 08:05:18 -04:00
R. Bernstein
34a356d237 Create README.rst 2017-06-10 06:21:36 -04:00
R. Bernstein
d9c1374a59 Create README.rst 2017-06-10 06:14:06 -04:00
rocky
2e05137f2b Set YIELD_VALUE offset in a <yield> expr 2017-06-10 02:09:58 -04:00
rocky
267ecda070 Python 3.2 MAKE_FUNCTION again..
Was handling bug32/01_named_and_kwargs.py wrong again
2017-06-10 01:42:50 -04:00
R. Bernstein
7e89839777 Merge pull request #119 from rocky/scan-longconstant
Simplify access to L65536 ...
2017-06-09 18:57:28 -04:00
rocky
c7f8edd5ef Simplify access to L65536 ...
and fix use in scanner26.py. Thanks to AnythingTechPro
2017-06-09 18:22:02 -04:00
rocky
6a991833a3 Attempt to document the MAKE_FUNCTION/MAKE_LAMBDA mess...
in Python 3.0+
2017-06-09 06:52:14 -04:00
rocky
28ee3f1257 Correct make_function3 for Python 3.2 2017-06-08 21:49:13 -04:00
rocky
e9588e56e2 Disable "continue" removal in pysource.py
"continue" could be the only statement and then removing it
might lead to a dangling "else".
2017-06-08 04:35:06 -04:00
rocky
7b2217fda4 Mark "pass" offsets.
Start routine to find previous node.
2017-06-07 22:14:38 -04:00
rocky
5ca219f3d3 Remove hacky fragments try fixup...
hacky call_function code is also not needed or will be reinstated
properly. Better grammar structure for Python 3.6 call_function.
2017-06-06 21:58:47 -04:00
rocky
b733a1b036 BUILD_{MAP,TUPLE}_UNPACK & CALL_FUNCTION_EX_KW...
Bang on these in 3.6. Not totally successful right now.
In fact a regression on one of the test cases
2017-06-05 23:51:51 -04:00
rocky
4615cda03f Important fragments bug fix...
start and finish values that had been adjusted weren't getting reflected in the final
returned deparsed.offsets dictionary. Redo keeping API compatibility,
i.e. we still use namedtuple NodeInfo.
2017-06-05 21:17:17 -04:00
rocky
eb92418224 Python 3.5 *args with kwargs handling.
3.5 is a snowflake here. Thank you, Python.

Fully fixes Issue 95.

3.6 is broken on this source, but for a *different* reason. Sigh.
2017-06-04 17:53:51 -04:00
rocky
844221cd43 Small changes.
fragment tag EXEC_STMT
2017-06-03 23:29:46 -04:00
53 changed files with 1447 additions and 578 deletions
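Commit f4ceb6304d above adds 3-part version string lookups for scanners and parsers. A minimal usage sketch, mirroring the new pytest/test_basic.py later in this diff (per the uncompyle6/parser.py and uncompyle6/scanner.py hunks below, the string is normalized to a float with xdis's py_str2float):

    # Sketch: full version strings such as '2.7.13' are now accepted
    # in addition to floats such as 2.7.
    from uncompyle6.scanner import get_scanner
    from uncompyle6.parser import get_python_parser

    scanner = get_scanner('2.7.13')        # resolved to the 2.7 scanner
    parser = get_python_parser('2.7.13')   # resolved to the 2.7 parser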

345
ChangeLog

@@ -1,6 +1,349 @@
2017-09-26 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py: No unicode in Python3, but we need it in Python2. The bug was probably introduced as a
result of the recent Python code type interoperability canonicalization
2017-09-26 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py: Python 3.1 Annotation args can be
unicode?
2017-09-25 rocky <rb@dustyfeet.com>
* : Adjust for xdis opcode JUMP_OPS. release 2.12.0
2017-09-21 rocky <rb@dustyfeet.com>
* pytest/test_pysource.py: Python 3 compatibility
2017-09-21 rocky <rb@dustyfeet.com>
* pytest/test_pysource.py, uncompyle6/semantics/consts.py,
uncompyle6/semantics/fragments.py, uncompyle6/semantics/pysource.py:
Unit test for format-specifiers. And in the process we catch some small bugs
2017-09-20 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py,
uncompyle6/semantics/pysource.py: Tidy pysource and fragments a
little more
2017-09-20 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/consts.py: Tidy/regularize table entry
formatting
2017-09-20 rocky <rb@dustyfeet.com>
* test/test_pythonlib.py, uncompyle6/semantics/pysource.py: Small
fixes. test_pyenvlib.py: it is sys.exit(), not exit(). pysource.py:
reinstate node type of async_func_call
2017-09-20 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/consts.py, uncompyle6/semantics/pysource.py:
More small doc changes
2017-09-20 rocky <rb@dustyfeet.com>
* pytest/test_pysource.py, uncompyle6/semantics/pysource.py: Update
Table-driven info... Start a pysource unit test.
2017-09-17 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py,
uncompyle6/semantics/pysource.py: engine -> template_engine
2017-09-13 rocky <rb@dustyfeet.com>
* test/Makefile: Need weak-verification on 3.4 for now
2017-09-10 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Revert one of the changes
pending a better fix
2017-09-10 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py,
uncompyle6/semantics/pysource.py: More semantic action cleanup
2017-09-10 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/scanner3.py, uncompyle6/scanners/tok.py: Match
Python 3.4's names a little better
2017-09-09 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/tok.py: Revert last revert
2017-09-09 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/tok.py: Revert last change
2017-09-09 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/tok.py: New-style Python classes only, please.
2017-08-31 rocky <rb@dustyfeet.com>
* uncompyle6/scanner.py, uncompyle6/scanners/scanner37.py: Skeletal
support for Python 3.7. Largely failing though.
2017-08-31 rocky <rb@dustyfeet.com>
* README.rst: Remove python versions tag. I think it's messing up PyPI's very fussy formatting
2017-08-31 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, README.rst, __pkginfo__.py,
uncompyle6/parsers/parse37.py,
uncompyle6/semantics/make_function.py, uncompyle6/version.py: Get
ready for release 2.11.5
2017-08-15 rocky <rb@dustyfeet.com>
* Makefile: 3.7 support
2017-08-15 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, uncompyle6/version.py: Get ready for release
2.11.4
2017-08-15 rocky <rb@dustyfeet.com>
* __pkginfo__.py, pytest/validate.py, uncompyle6/parser.py,
uncompyle6/scanner.py: Misc cleanups... remove code now in xdis; require at least xdis 3.5.4; PyPy tolerance
in validate testing
2017-08-13 rocky <rb@dustyfeet.com>
* pytest/test_basic.py, uncompyle6/parser.py, uncompyle6/scanner.py:
Allow 3-part version string lookups, e.g. 2.7.1. We allow a float here, but if passed a string like '2.7' or
'2.7.13', accept that when looking up either a scanner or a parser.
2017-08-10 rocky <rb@dustyfeet.com>
* README.rst: Link typo. Name is trepan2 now, not trepan
2017-08-09 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, README.rst, __pkginfo__.py,
uncompyle6/semantics/consts.py, uncompyle6/version.py: Get ready for
release 2.11.3 need xdis 3.5.1 for now. Adjust for xdis "is-not" which we need as
"is not"
2017-08-02 rocky <rb@dustyfeet.com>
* __pkginfo__.py: Revert commit to wrong branch
2017-08-02 rocky <rb@dustyfeet.com>
* __pkginfo__.py: Remove six from Python-2.4/2.5 package
2017-07-17 rocky <rb@dustyfeet.com>
* __pkginfo__.py, uncompyle6/scanners/scanner2.py,
uncompyle6/scanners/scanner3.py, uncompyle6/scanners/scanner30.py:
xdis's "exception match" is now "exception-match"
2017-07-15 rocky <rb@dustyfeet.com>
* __pkginfo__.py: xdis 3.5.1 is botched?
2017-07-14 rocky <rb@dustyfeet.com>
* __pkginfo__.py: Use newer xdis
2017-07-14 R. Bernstein <rocky@users.noreply.github.com>
* README.rst: Fixes issue #124
2017-07-14 rocky <rb@dustyfeet.com>
* HISTORY.md: History updates
2017-07-09 rocky <rb@dustyfeet.com>
* README.rst: RsT doc formatting
2017-07-09 rocky <rb@dustyfeet.com>
* ChangeLog, HOW-TO-REPORT-A-BUG.md, NEWS, uncompyle6/version.py:
Get ready for release 2.11.2
2017-07-08 rocky <rb@dustyfeet.com>
* __pkginfo__.py, uncompyle6/scanner.py,
uncompyle6/scanners/scanner2.py, uncompyle6/scanners/scanner26.py,
uncompyle6/scanners/scanner3.py, uncompyle6/scanners/scanner30.py,
uncompyle6/scanners/tok.py: Use xdis 3.5.0's opcode sets
2017-07-08 rocky <rb@dustyfeet.com>
* test/test_pyenvlib.py, uncompyle6/scanners/pypy32.py,
uncompyle6/scanners/pypy35.py, uncompyle6/scanners/scanner15.py,
uncompyle6/scanners/scanner32.py, uncompyle6/scanners/scanner34.py,
uncompyle6/scanners/scanner35.py, uncompyle6/scanners/scanner36.py:
Start supporting Pypy 3.5 (5.7.1-beta)
2017-07-05 rocky <rb@dustyfeet.com>
* test/simple_source/bug26/03_loop_if_cf.py,
uncompyle6/parsers/parse26.py: Loops in Python 2.4-2.6 loop
come_from Looks like Python 2.4-2.6 may have a COME_FROM(_LOOP) before the
jump_back. Fixes Issue #123
2017-06-29 rocky <rb@dustyfeet.com>
* : Work around not having real flow-control analysis
2017-06-28 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/make_function.py: A guard against badly
formatted bytecode
2017-06-25 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, test/simple_source/bug31/04_def_annotate.py,
uncompyle6/semantics/make_function.py,
uncompyle6/semantics/pysource.py: 3.x function and annotation bug
fixes
2017-06-25 rocky <rb@dustyfeet.com>
* uncompyle6/version.py: Get ready for release 2.11.1
2017-06-24 rocky <rb@dustyfeet.com>
* __pkginfo__.py, uncompyle6/scanner.py,
uncompyle6/scanners/scanner2.py, uncompyle6/scanners/scanner3.py,
uncompyle6/scanners/scanner30.py, uncompyle6/semantics/pysource.py:
Use xdis' instruction offset calculation fns.. next_offset, op_size, has_argument
2017-06-19 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: Python 2 sometimes needs
str->unicode in writing?
2017-06-19 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: Allow deparsed out to be str as
well as unicode
2017-06-18 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, uncompyle6/version.py: Get ready for release
2.11.0
2017-06-13 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Adjust nodeInfo if it is a
Token
2017-06-13 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Add nonterminal node in
extractInfo
2017-06-10 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py,
uncompyle6/semantics/make_function.py: Fragment tag more expressions Revise make_function3 comment wrt args and kwargs
2017-06-10 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Fragment tag array subscripts
2017-06-10 R. Bernstein <rocky@users.noreply.github.com>
* README.rst: Create README.rst
2017-06-10 R. Bernstein <rocky@users.noreply.github.com>
* README.rst: Create README.rst
2017-06-10 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Set YIELD_VALUE offset in a
<yield> expr
2017-06-10 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/make_function.py: Python 3.2 MAKE_FUNCTION
again.. Was handling bug32/01_named_and_kwargs.py wrong again
2017-06-09 R. Bernstein <rocky@users.noreply.github.com>
* : Merge pull request #119 from rocky/scan-longconstant Simplify access to L65536 ...
2017-06-09 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/make_function.py: Attempt to document the
MAKE_FUNCTION/MAKE_LAMBDA mess... in Python 3.0+
2017-06-08 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/make_function.py: Correct make_function3 for
Python 3.2
2017-06-08 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: Disable "continue" removal in
pysource.py "continue" could be the only statement and then removing it might
lead to a dangling "else".
2017-06-07 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Mark "pass" offsets. Start routine to find previous node.
2017-06-06 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py, uncompyle6/semantics/fragments.py:
Remove hacky fragments try fixup... hacky call_function code is also not needed or will be reinstated
properly. Better grammar structure for Python 3.6 call_function.
2017-06-05 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py, uncompyle6/parsers/parse36.py,
uncompyle6/scanners/scanner36.py: BUILD_{MAP,TUPLE}_UNPACK &
CALL_FUNCTION_EX_KW... Bang on these in 3.6. Not totally successful right now. In fact a
regression on one of the test cases
2017-06-05 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Important fragments bug fix... start and finish values that had been adjusted weren't getting reflected in the
final returned deparsed.offsets dictionary. Redo keeping API
compatibility, i.e. we still use namedtuple NodeInfo.
2017-06-04 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py, uncompyle6/semantics/pysource.py:
Python 3.5 *args with kwargs handling. 3.5 is a snowflake here. Thank you, Python. Fully fixes Issue 95. 3.6 is broken on this source, but for a *different* reason. Sigh.
2017-06-03 rocky <rb@dustyfeet.com>
* uncompyle6/version.py: Get ready for release 2.10.1
* README.rst, __pkginfo__.py,
test/simple_source/bug35/04_CALL_FUNCTION_VAR_KW.py,
uncompyle6/semantics/fragments.py: Small changes. fragment tag EXEC_STMT
2017-06-03 rocky <rb@dustyfeet.com>
* .travis.yml: Streamline .travis.yml a little bit
2017-06-03 rocky <rb@dustyfeet.com>
* __pkginfo__.py: We need six
2017-06-03 rocky <rb@dustyfeet.com>
* README.rst, circle.yml, requirements-dev.txt: Go over
administrivia
2017-06-03 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, uncompyle6/version.py: Get ready for release
2.10.1
2017-06-03 rocky <rb@dustyfeet.com>


@@ -44,8 +44,8 @@ it appears that Hartmut did most of the work to get this code to
accept the full Python language. He added precedence to the table
specifiers, support for multiple versions of Python, the
pretty-printing of docstrings, lists, and hashes. He also wrote test and verification routines of
deparsed bytecode, and used this in an extensive set of tests that he also wrote. He says he could verify against the
entire Python library. However I have subsequently found small and relatively obscure bugs in the decompilation code.
decompyle2.2 was packaged for Debian (sarge) by
[Ben Burton around 2002](https://packages.qa.debian.org/d/decompyle.html). As
@@ -66,7 +66,7 @@ code to handle first Python 2.3 and then 2.4 bytecodes. Because of
jump optimization introduced in the CPython bytecode compiler at that
time, various JUMP instructions were classifed as going backwards, and
COME FROM instructions were reintroduced. See
[RELEASE-2.4-CHANGELOG.txt](https://github.com/rocky/python-uncompyle6/blob/master/DECOMPYLE-2.4-CHANGELOG.txt)
for more details here. There wasn't a public
release of RELEASE-2.4 and bytecodes other than Python 2.4 weren't
supported. Dan says the Python 2.3 version could verify the entire
@@ -99,7 +99,7 @@ made a few commits later on. But mostly wibiti, and Guenther
Starnberger got the code to where uncompyle2 was around 2012.
In `uncompyle`, decompilation of python bytecode 2.5 & 2.6 is done by
transforming the byte code into a a pseudo 2.7 python bytecode and is
transforming the byte code into a pseudo-2.7 Python bytecode and is
based on code from Eloi Vanderbeken.
This project, `uncompyle6`, abandons that approach for various
@@ -120,10 +120,10 @@ while, handling Python bytecodes from Python versions 2.5+ and
3.2+. In doing so, it has been expedient to separate this into three
projects:
* bytecode loading and disassembly ([xdis](https://pypi.python.org/pypi/xdis)),
* marshaling/unmarshaling, bytecode loading and disassembly ([xdis](https://pypi.python.org/pypi/xdis)),
* parsing and tree building ([spark_parser](https://pypi.python.org/pypi/spark_parser)),
* this project - grammar and semantic actions for decompiling
([uncompyle6](https://pypi.python.org/pypi/spark_parser)).
([uncompyle6](https://pypi.python.org/pypi/uncompyle6)).
Over the many years, code styles and Python features have
@@ -162,5 +162,8 @@ support has been lagging.
Tests for the project have been, or are being, culled from all of the
projects mentioned.
For a little bit of the history of changes to the Early-algorithm parser,
see the file [NEW-FEATURES.rst](https://github.com/rocky/python-spark/blob/master/NEW-FEATURES.rst) in the [python-spark github repository](https://github.com/rocky/python-spark).
NB. If you find mistakes, want corrections, or want your name added
(or removed), please contact me.


@@ -19,7 +19,7 @@ So it is likely you'll find a mistranslation in decompiling.
The basic requirement is pretty simple:
* Python bytecode
* Source text
* Python source text
## What to send (additional helpful information)
@@ -50,7 +50,7 @@ one fool can learn, so can another."
## Narrowing the problem
I don't need the entire source code base for which one file or module
I don't need or want the entire source code base for which one file or module
can't be decompiled. I just need that one file or module only. If
there are several files, file a bug report for each file.


@@ -36,6 +36,8 @@ check-2.7 check-3.3 check-3.4: pytest
check-3.0 check-3.1 check-3.2 check-3.5 check-3.6:
$(MAKE) -C test $@
check-3.7: pytest
#:Tests for Python 2.6 (doesn't have pytest)
check-2.6:
$(MAKE) -C test $@

89
NEWS

@@ -1,14 +1,69 @@
uncompyle6 2.10.1 2016-06-3 Marylin Frankel
uncompyle6 2.12.0 2017-09-26
- Use xdis 3.6.0 or greater now
- Small semantic table cleanups
- Match Python 3.4's names a little better
- Slightly more Python 3.7, but still failing a lot
uncompyle6 2.11.5 2017-08-31
- Skeletal support for Python 3.7
uncompyle6 2.11.4 2017-08-15
* scanner and parser now allow 3-part version string lookups,
e.g. 2.7.1. We allow a float here, but a string like '2.7' or '2.7.13' is also accepted
* unpin 3.5.1. xdis 3.5.4 has been released and fixes the problems we had. Use that.
* some routines here moved to xdis. Use the xdis version
* README.rst: Link typo. Name is trepan2 now, not trepan
* xdis-forced change adjust for COMPARE_OP "is-not" in
semantic routines. We need "is not".
* Some PyPy tolerance in validate testing.
* Some pyston tolerance
uncompyle6 2.11.3 2017-08-09
Very minor changes
- RsT doc fixes and updates
- use newer xdis, but not too new; 3.5.2 breaks uncompyle6
- use xdis opcode sets
- xdis "exception match" is now "exception-match"
uncompyle6 2.11.2 2017-07-09
- Start supporting Pypy 3.5 (5.7.1-beta)
- use xdis 3.5.0's opcode sets and require xdis 3.5.0
- Correct some Python 2.4-2.6 loop detection
- guard against badly formatted bytecode
uncompyle6 2.11.1 2017-06-25
- Python 3.x annotation and function signature fixes
- Bump xdis version
- Small pysource bug fixes
uncompyle6 2.11.0 2017-06-18 Fleetwood
- Major improvements in fragment tracking
* Add nonterminal node in extractInfo
* tag more offsets in expressions
* tag array subscripts
* set YIELD value offset in a <yield> expr
* fix a long-standing bug in not adjusting final AST when melding other deparse ASTs
- Fixes yet again for make_function node handling; document what's up here
- Fix bug in snowflake Python 3.5 *args kwargs
uncompyle6 2.10.1 2017-06-3 Marylin Frankel
- fix some fragments parsing bugs
- was returning the wrong type sometimes in deparse_code_around_offset()
- capture function name in offsets
- track changes to ifelstrmtr node from pysource into fragments
uncompyle6 2.10.0 2016-05-30 Elaine Gordon
uncompyle6 2.10.0 2017-05-30 Elaine Gordon
- Add fuzzy offset deparse lookup
- 3.6 bugfixes
- Add fuzzy offset deparse look up
- 3.6 bug fixes
- fix EXTENDED_ARGS handling (and in 2.6 and others)
- semantic routine make_function fragments.py
- MAKE_FUNCTION handling
@@ -19,19 +74,19 @@ uncompyle6 2.10.0 2016-05-30 Elaine Gordon
- 3.5 FUNCTION_VAR bug
- 3.x pass statement inside while True
- Improve 3.2 decompilation
- Fixed -o argument processing (Gregrory)
- Fixed -o argument processing (grkov90)
- Reduce scope of LOAD_ASSERT as expr to 3.4+
- "await" statement fixes
- 2.3, 2.4 "if 1 .." fixes
- 3.x annotation fixes
uncompyle6 2.9.11 2016-04-06
uncompyle6 2.9.11 2017-04-06
- Better support for Python 3.5+ BUILD_MAP_UNPACK
- Start 3.6 CALL_FUNCTION_EX support
- Many decompilation bug fixes. (Many more remain). See ChangeLog
uncompyle6 2.9.10 2016-02-25
uncompyle6 2.9.10 2017-02-25
- Python grammar rule fixes
- Add ability to get grammar coverage on runs
@@ -98,7 +153,7 @@ uncompyle6 2.9.6 2016-11-20
uncompyle6 2.9.5 2016-11-13
- Fix Python 3 bugs:
* improprer while 1 else
* improper while 1 else
* docstring indent
* 3.3 default values in lambda expressions
* start 3.0 decompilation (needs newer xdis)
@@ -108,12 +163,12 @@ uncompyle6 2.9.5 2016-11-13
uncompyle6 2.9.4 2016-11-02
- Handle Python 3.x function annotations
- track def keywoard-parameter line-splitting in source code better
- track def keyword-parameter line-splitting in source code better
- bump min xdis version to mask previous xdis bug
uncompyle6 2.9.3 2016-10-26
Release forced by incompatiblity change in xdis 3.2.0.
Release forced by incompatibility change in xdis 3.2.0.
- Python 3.1 bugs:
* handle "with ... as"
@@ -145,7 +200,7 @@ uncompyle6 2.9.0 2016-10-09
this Forces change in requirements.txt and _pkg_info_.py
- Start Python 1.5 decompiling; another round of work is needed to
remove bugs
- Simpify python 2.1 grammar
- Simplify python 2.1 grammar
- Fix bug with -t ... Wasn't showing source text when -t option was given
- Fix 2.1-2.6 bug in list comprehension
@@ -168,7 +223,7 @@ control-flow structure detection is done.
. 3.0 .. 3.2 *args processing
. 3.0 .. 3.2 call name and kwargs bug
. 3.0 .. getting parameter of *
. 3.0 .. handling varible number of args
. 3.0 .. handling variable number of args
. 3.0 .. "if" structure bugs
* 3.5+ if/else bugs
* 2.2-2.6 bugs
@@ -219,7 +274,7 @@ uncompyle6 2.7.1 2016-07-26
uncompyle6 2.7.0 2016-07-15
- Many Syntax and verifification bugs removed
- Many Syntax and verification bugs removed
tested on standard libraries from 2.3.7 to 3.5.1
and they all decompile and verify fine.
I'm sure there are more bugs though.
@@ -246,9 +301,9 @@ uncompyle6 2.6.0 2016-07-07
- Better <2.6 vs. 2.7 grammar separation
- Fix some 2.7 deparsing bugs
- Fix bug in installing uncompyle6 script
- Doc improvments
- Doc improvements
uncompyle6 2.5.0 2016-06-22 Summer Solstace
uncompyle6 2.5.0 2016-06-22 Summer Solstice
- Much better Python 3.2-3.5 coverage.
3.4.6 is probably the best; 3.2 and 3.5 are weaker
@@ -260,7 +315,7 @@ uncompyle6 2.5.0 2016-06-22 Summer Solstace
uncompyle6 2.4.0 2016-05-18 (in memory of Lewis Bernstein)
- Many Python 3 bugs fixed:
* Python 3.2 to 3.5 libaries largely
* Python 3.2 to 3.5 libraries largely
uncompyle and most verify
- pydisassembler:
* disassembles all code objects in a file
@@ -318,7 +373,7 @@ uncompyle6 2.2.0 2016-04-30
uncompyle6 2.2.0 2016-04-02
- Support single-mode (in addtion to exec-mode) compilation
- Support single-mode (in addition to exec-mode) compilation
- Start to DRY Python 2 and Python 3 grammars
- Fix bug in if else ternary construct
- Fix bug in uncompyle6 -d and -r options (via lelicopter)


@@ -1,4 +1,4 @@
|buildstatus| |Supported Python Versions|
|buildstatus|
uncompyle6
==========
@@ -12,7 +12,7 @@ Introduction
*uncompyle6* translates Python bytecode back into equivalent Python
source code. It accepts bytecodes from Python version 1.5, and 2.1 to
3.6 or so, including PyPy bytecode and Dropbox's Python 2.5 bytecode.
3.7 or so, including PyPy bytecode and Dropbox's Python 2.5 bytecode.
Why this?
---------
@@ -56,7 +56,7 @@ This uses setup.py, so it follows the standard Python routine:
::
pip install -e setup.py
pip install -e .
pip install -r requirements-dev.txt
python setup.py install # may need sudo
# or if you have pyenv:
@@ -171,9 +171,12 @@ See Also
* https://code.google.com/archive/p/unpyc3/ : supports Python 3.2 only. The above projects use a different decompiling technique than what is used here.
* https://github.com/figment/unpyc3/ : fork of above, but supports Python 3.3 only. Include some fixes like supporting function annotations
* The HISTORY_ file.
* `How to report a bug <https://github.com/rocky/python-uncompyle6/blob/master/HOW-TO-REPORT-A-BUG.md>`_
* https://github.com/rocky/python-xdis : Cross Python version disassembler
* https://github.com/rocky/python-xasm : Cross Python version assembler
.. |downloads| image:: https://img.shields.io/pypi/dd/uncompyle6.svg
.. _trepan: https://pypi.python.org/pypi/trepan
.. _trepan: https://pypi.python.org/pypi/trepan2
.. _HISTORY: https://github.com/rocky/python-uncompyle6/blob/master/HISTORY.md
.. _debuggers: https://pypi.python.org/pypi/trepan3k
.. _remake: https://bashdb.sf.net/remake
@@ -181,7 +184,5 @@ See Also
.. _this: https://github.com/rocky/python-uncompyle6/wiki/Deparsing-technology-and-its-use-in-exact-location-reporting
.. |buildstatus| image:: https://travis-ci.org/rocky/python-uncompyle6.svg
:target: https://travis-ci.org/rocky/python-uncompyle6
.. |Supported Python Versions| image:: https://img.shields.io/pypi/pyversions/uncompyle6.svg
:target: https://pypi.python.org/pypi/uncompyle6/
.. _PJOrion: http://www.koreanrandom.com/forum/topic/15280-pjorion-%D1%80%D0%B5%D0%B4%D0%B0%D0%BA%D1%82%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5-%D0%BA%D0%BE%D0%BC%D0%BF%D0%B8%D0%BB%D1%8F%D1%86%D0%B8%D1%8F-%D0%B4%D0%B5%D0%BA%D0%BE%D0%BC%D0%BF%D0%B8%D0%BB%D1%8F%D1%86%D0%B8%D1%8F-%D0%BE%D0%B1%D1%84
.. _Deobfuscator: https://github.com/extremecoders-re/PjOrion-Deobfuscator


@@ -33,14 +33,14 @@ classifiers = ['Development Status :: 5 - Production/Stable',
# The rest in alphabetic order
author = "Rocky Bernstein, Hartmut Goebel, John Aycock, and others"
author_email = "rb@dustyfeet.com"
entry_points={
entry_points = {
'console_scripts': [
'uncompyle6=uncompyle6.bin.uncompile:main_bin',
'pydisassemble=uncompyle6.bin.pydisassemble:main',
]}
ftp_url = None
install_requires = ['spark-parser >= 1.6.1, < 1.7.0',
'xdis >= 3.3.1, < 3.4.0', 'six']
'xdis >= 3.6.0, < 3.7.0', 'six']
license = 'MIT'
mailing_list = 'python-debugger@googlegroups.com'
modname = 'uncompyle6'

11
pytest/test_basic.py Normal file

@@ -0,0 +1,11 @@
from uncompyle6.scanner import get_scanner
from uncompyle6.parser import get_python_parser
def test_get_scanner():
# See that we can retrieve a scanner using a full version number
assert get_scanner('2.7.13')
def test_get_parser():
# See that we can retrieve a parser using a full version number
assert get_python_parser('2.7.13')

168
pytest/test_pysource.py Normal file

@@ -0,0 +1,168 @@
from uncompyle6 import PYTHON3
from uncompyle6.semantics.consts import (
escape, NONE,
# RETURN_NONE, PASS, RETURN_LOCALS
)
if PYTHON3:
from io import StringIO
def iteritems(d):
return d.items()
else:
from StringIO import StringIO
def iteritems(d):
return d.iteritems()
from uncompyle6.semantics.pysource import SourceWalker as SourceWalker
def test_template_engine():
s = StringIO()
sw = SourceWalker(2.7, s, None)
sw.ast = NONE
sw.template_engine(('--%c--', 0), NONE)
print(sw.f.getvalue())
assert sw.f.getvalue() == '--None--'
# FIXME: and so on...
from uncompyle6.semantics.consts import (
TABLE_DIRECT, TABLE_R,
)
from uncompyle6.semantics.fragments import (
TABLE_DIRECT_FRAGMENT,
)
skip_for_now = "DELETE_DEREF".split()
def test_tables():
for t, name, fragment in (
(TABLE_DIRECT, 'TABLE_DIRECT', False),
(TABLE_R, 'TABLE_R', False),
(TABLE_DIRECT_FRAGMENT, 'TABLE_DIRECT_FRAGMENT', True)):
for k, entry in iteritems(t):
if k in skip_for_now:
continue
fmt = entry[0]
arg = 1
i = 0
m = escape.search(fmt)
print("%s[%s]" % (name, k))
while m:
i = m.end()
typ = m.group('type') or '{'
if typ in frozenset(['%', '+', '-', '|', ',', '{']):
# No args
pass
elif typ in frozenset(['c', 'p', 'P', 'C', 'D']):
# One arg - should be int or tuple of int
if typ == 'c':
assert isinstance(entry[arg], int), (
"%s[%s][%d] type %s is '%s' should be an int but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
elif typ in frozenset(['C', 'D']):
tup = entry[arg]
assert isinstance(tup, tuple), (
"%s[%s][%d] type %s is %s should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
assert len(tup) == 3
for j, x in enumerate(tup[:-1]):
assert isinstance(x, int), (
"%s[%s][%d][%d] type %s is %s should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, j, typ, x, type(x), entry)
)
assert isinstance(tup[-1], str) or tup[-1] is None, (
"%s[%s][%d][%d] sep type %s is %s should be an string but is %s. "
"Full entry: %s" %
(name, k, arg, j, typ, tup[-1], type(x), entry)
)
elif typ == 'P':
tup = entry[arg]
assert isinstance(tup, tuple), (
"%s[%s][%d] type %s is %s should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
assert len(tup) == 4
for j, x in enumerate(tup[:-2]):
assert isinstance(x, int), (
"%s[%s][%d][%d] type %s is '%s' should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, j, typ, x, type(x), entry)
)
assert isinstance(tup[-2], str), (
"%s[%s][%d][%d] sep type %s is '%s' should be an string but is %s. "
"Full entry: %s" %
(name, k, arg, j, typ, x, type(x), entry)
)
assert isinstance(tup[1], int), (
"%s[%s][%d][%d] prec type %s is '%s' should be an int but is %s. "
"Full entry: %s" %
(name, k, arg, j, typ, x, type(x), entry)
)
else:
# Should be a tuple which contains only ints
tup = entry[arg]
assert isinstance(tup, tuple), (
"%s[%s][%d] type %s is '%s' should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
assert len(tup) == 2
for j, x in enumerate(tup):
assert isinstance(x, int), (
"%s[%s][%d][%d] type '%s' is '%s should be an int but is %s. Full entry: %s" %
(name, k, arg, j, typ, x, type(x), entry)
)
pass
arg += 1
elif typ in frozenset(['r']) and fragment:
pass
elif typ == 'b' and fragment:
assert isinstance(entry[arg], int), (
"%s[%s][%d] type %s is '%s' should be an int but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
arg += 1
elif typ == 'x' and fragment:
tup = entry[arg]
assert isinstance(tup, tuple), (
"%s[%s][%d] type %s is '%s' should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
assert len(tup) == 2
assert isinstance(tup[0], int), (
"%s[%s][%d] source type %s is '%s' should be an int but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
assert isinstance(tup[1], tuple), (
"%s[%s][%d] dest type %s is '%s' should be an tuple but is %s. "
"Full entry: %s" %
(name, k, arg, typ, entry[arg], type(entry[arg]), entry)
)
for j, x in enumerate(tup[1]):
assert isinstance(x, int), (
"%s[%s][%d][%d] type %s is %s should be an int but is %s. Full entry: %s" %
(name, k, arg, j, typ, x, type(x), entry)
)
arg += 1
pass
else:
assert False, (
"%s[%s][%d] type %s is not known. Full entry: %s" %
(name, k, arg, typ, entry)
)
m = escape.search(fmt, i)
pass
assert arg == len(entry), (
"%s[%s] arg %d should be length of entry %d. Full entry: %s" %
(name, k, arg, len(entry), entry))


@@ -123,7 +123,9 @@ def validate_uncompyle(text, mode='exec'):
original_text = text
deparsed = deparse_code(PYTHON_VERSION, original_code,
compile_mode=mode, out=six.StringIO())
compile_mode=mode,
out=six.StringIO(),
is_pypy=IS_PYPY)
uncompyled_text = deparsed.text
uncompyled_code = compile(uncompyled_text, '<string>', 'exec')


@@ -39,7 +39,7 @@ check-3.3: check-bytecode
#: Run working tests from Python 3.4
check-3.4: check-bytecode check-3.4-ok check-2.7-ok
$(PYTHON) test_pythonlib.py --bytecode-3.4 --verify $(COMPILE)
$(PYTHON) test_pythonlib.py --bytecode-3.4 --weak-verify $(COMPILE)
#: Run working tests from Python 3.5
check-3.5: check-bytecode

Binary file not shown.

Binary file not shown.


@@ -0,0 +1,19 @@
# Bug in < 2.6 is having a COME_FROM_LOOP (but we
# don't tag that, so it is just COME_FROM) *before*
# a jump back to the loop.
def pickup(self, open_players, open_buf, wrap_buf):
for aplayer in self._game.active_players:
if aplayer in open_players:
aplayer.send(open_players)
if self == aplayer:
for awatcher in self._watchers:
if awatcher._can_see_detail:
awatcher.send(open_buf)
else:
awatcher.send(wrap_buf)
else:
self._game.send(aplayer.side)
else:
self._game.send(aplayer.side, wrap_buf)


@@ -9,7 +9,7 @@ def open(file, mode = "r", buffering = None,
newline = None, closefd = True) -> "IOBase":
return text
def foo(x: 'an argument that defaults to 5' = 5):
def foo1(x: 'an argument that defaults to 5' = 5):
print(x)
def div(a: dict(type=float, help='the dividend'),


@@ -1,4 +1,5 @@
# sql/schema.py
# Note that kwargs comes before "positional" args
def tometadata(self, metadata, schema, Table, args, name=None):
table = Table(
name, metadata, schema=schema,


@@ -29,7 +29,7 @@ from fnmatch import fnmatch
TEST_VERSIONS=('2.3.7', '2.4.6', '2.5.6', '2.6.9',
'pypy-2.4.0', 'pypy-2.6.1',
'pypy-5.0.1', 'pypy-5.3.1',
'pypy-5.0.1', 'pypy-5.3.1', 'pypy3.5-5.7.1-beta',
'2.7.10', '2.7.11', '2.7.12', '2.7.13',
'3.0.1', '3.1.5', '3.2.6',
'3.3.5', '3.3.6',


@@ -169,13 +169,13 @@ def do_tests(src_dir, obj_patterns, target_dir, opts):
main(src_dir, target_dir, files, [],
do_verify=opts['do_verify'])
if failed_files != 0:
exit(2)
sys.exit(2)
elif failed_verify != 0:
exit(3)
sys.exit(3)
except (KeyboardInterrupt, OSError):
print()
exit(1)
sys.exit(1)
if test_opts['rmtree']:
parent_dir = os.path.dirname(target_dir)
print("Everything good, removing %s" % parent_dir)


@@ -11,10 +11,10 @@ from __future__ import print_function
import sys
from xdis.code import iscode
from xdis.magics import py_str2float
from spark_parser import GenericASTBuilder, DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
from uncompyle6.show import maybe_show_asm
class ParserError(Exception):
def __init__(self, token, offset):
self.token = token
@@ -605,7 +605,15 @@ def get_python_parser(
explanation of the different modes.
"""
# If version is a string, turn that into the corresponding float.
if isinstance(version, str):
version = py_str2float(version)
# FIXME: there has to be a better way...
# We could do this as a table lookup, but that would force us
# to import all of the parsers all of the time. Perhaps there is
# a lazy way of doing the import?
if version < 3.0:
if version == 1.5:
import uncompyle6.parsers.parse15 as parse15
@@ -758,6 +766,7 @@ def python_parser(version, co, out=sys.stdout, showasm=False,
if __name__ == '__main__':
def parse_test(co):
from uncompyle6 import PYTHON_VERSION, IS_PYPY
ast = python_parser('2.7.13', co, showasm=True, is_pypy=True)
ast = python_parser(PYTHON_VERSION, co, showasm=True, is_pypy=IS_PYPY)
print(ast)
return


@@ -84,6 +84,12 @@ class Python26Parser(Python2Parser):
ja_cf_pop ::= JUMP_ABSOLUTE come_froms POP_TOP
jf_cf_pop ::= JUMP_FORWARD come_froms POP_TOP
# The first optional COME_FROM when it appears is really
# COME_FROM_LOOP, but in <= 2.6 we don't distinguish
# this
cf_jb_cf_pop ::= _come_from JUMP_BACK come_froms POP_TOP
bp_come_from ::= POP_BLOCK COME_FROM
jb_bp_come_from ::= JUMP_BACK bp_come_from
@@ -111,7 +117,8 @@ class Python26Parser(Python2Parser):
break_stmt ::= BREAK_LOOP JUMP_BACK
# Semantic actions want else_suitel to be at index 3
ifelsestmtl ::= testexpr c_stmts_opt jb_cf_pop else_suitel
ifelsestmtl ::= testexpr c_stmts_opt cf_jb_cf_pop else_suitel
ifelsestmtc ::= testexpr c_stmts_opt ja_cf_pop else_suitec
# Semantic actions want suite_stmts_opt to be at index 3


@@ -20,6 +20,7 @@ from __future__ import print_function
from uncompyle6.parser import PythonParser, PythonParserSingle, nop_func
from uncompyle6.parsers.astnode import AST
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
from xdis import PYTHON3
class Python3Parser(PythonParser):
@@ -496,8 +497,8 @@ class Python3Parser(PythonParser):
token.type = self.call_fn_name(token)
uniq_param = args_kw + args_pos
if self.version == 3.5 and opname.startswith('CALL_FUNCTION_VAR'):
# Python 3.5 changes the stack position of where * args, the
# first LOAD_FAST, below are located.
# Python 3.5 changes the stack position of *args. KW args come
# after *args.
# Python 3.6+ replaces CALL_FUNCTION_VAR_KW with CALL_FUNCTION_EX
if opname.endswith('KW'):
kw = 'expr '
@@ -507,11 +508,17 @@ class Python3Parser(PythonParser):
('pos_arg ' * args_pos) +
('kwarg ' * args_kw) + kw + token.type)
self.add_unique_rule(rule, token.type, uniq_param, customize)
rule = ('call_function ::= expr ' +
('pos_arg ' * args_pos) +
('kwarg ' * args_kw) +
'expr ' * nak + token.type)
if self.version >= 3.6 and opname == 'CALL_FUNCTION_EX_KW':
rule = ('call_function36 ::= '
'expr build_tuple_unpack_with_call build_map_unpack_with_call '
'CALL_FUNCTION_EX_KW_1')
self.add_unique_rule(rule, token.type, uniq_param, customize)
rule = 'call_function ::= call_function36'
else:
rule = ('call_function ::= expr ' +
('pos_arg ' * args_pos) +
('kwarg ' * args_kw) +
'expr ' * nak + token.type)
self.add_unique_rule(rule, token.type, uniq_param, customize)
if self.version >= 3.5:
@@ -610,9 +617,9 @@ class Python3Parser(PythonParser):
assign2_pypy ::= expr expr designator designator
""", nop_func)
continue
elif opname in ('CALL_FUNCTION', 'CALL_FUNCTION_VAR',
'CALL_FUNCTION_VAR_KW') \
or opname.startswith('CALL_FUNCTION_KW'):
elif (opname in ('CALL_FUNCTION', 'CALL_FUNCTION_VAR',
'CALL_FUNCTION_VAR_KW', 'CALL_FUNCTION_EX_KW')
or opname.startswith('CALL_FUNCTION_KW')):
self.custom_classfunc_rule(opname, token, customize)
elif opname == 'LOAD_DICTCOMP':
rule_pat = ("dictcomp ::= LOAD_DICTCOMP %sMAKE_FUNCTION_0 expr "
@@ -633,6 +640,18 @@ class Python3Parser(PythonParser):
self.add_unique_rule(rule, opname, token.attr, customize)
rule = 'expr ::= build_list_unpack'
self.add_unique_rule(rule, opname, token.attr, customize)
elif opname.startswith('BUILD_TUPLE_UNPACK_WITH_CALL'):
v = token.attr
rule = ('build_tuple_unpack_with_call ::= ' + 'expr1024 ' * int(v//1024) +
'expr32 ' * int((v//32) % 32) +
'expr ' * (v % 32) + opname)
self.add_unique_rule(rule, opname, token.attr, customize)
elif opname.startswith('BUILD_MAP_UNPACK_WITH_CALL'):
v = token.attr
rule = ('build_map_unpack_with_call ::= ' + 'expr1024 ' * int(v//1024) +
'expr32 ' * int((v//32) % 32) +
'expr ' * (v % 32) + opname)
self.add_unique_rule(rule, opname, token.attr, customize)
elif opname_base in ('BUILD_LIST', 'BUILD_TUPLE', 'BUILD_SET'):
v = token.attr
rule = ('build_list ::= ' + 'expr1024 ' * int(v//1024) +
@@ -642,7 +661,10 @@ class Python3Parser(PythonParser):
if opname_base == 'BUILD_TUPLE':
rule = ('load_closure ::= %s%s' % (('LOAD_CLOSURE ' * v), opname))
self.add_unique_rule(rule, opname, token.attr, customize)
rule = ('build_tuple ::= ' + 'expr1024 ' * int(v//1024) +
'expr32 ' * int((v//32) % 32) +
'expr ' * (v % 32) + opname)
self.add_unique_rule(rule, opname, token.attr, customize)
elif opname == 'LOOKUP_METHOD':
# A PyPy speciality - DRY with parse2
self.add_unique_rule("load_attr ::= expr LOOKUP_METHOD",
@@ -868,7 +890,11 @@ class Python3Parser(PythonParser):
elif lhs == 'annotate_tuple':
return not isinstance(tokens[first].attr, tuple)
elif lhs == 'kwarg':
return not isinstance(tokens[first].attr, str)
arg = tokens[first].attr
if PYTHON3:
return not isinstance(arg, str)
else:
return not (isinstance(arg, str) or isinstance(arg, unicode))
elif lhs == 'while1elsestmt':
# if SETUP_LOOP target spans the else part, then this is
# not while1else. Also do for whileTrue?
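The build_tuple_unpack_with_call and build_map_unpack_with_call rules added in the hunk above assemble their right-hand sides by splitting the operand count (token.attr) into groups of 1024, 32, and single expressions. A small worked example of that arithmetic, as a sketch rather than part of the diff:

    # For an operand count v, the rule body is built as:
    v = 35
    body = ('expr1024 ' * int(v // 1024)        # 0 groups of 1024
            + 'expr32 ' * int((v // 32) % 32)   # 1 group of 32
            + 'expr ' * (v % 32))               # 3 remaining expressions
    # body == 'expr32 expr expr expr '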


@@ -25,12 +25,13 @@ class Python36Parser(Python35Parser):
func_args36 ::= expr BUILD_TUPLE_0
call_function ::= func_args36 unmapexpr CALL_FUNCTION_EX
call_function ::= func_args36 build_map_unpack_with_call CALL_FUNCTION_EX_KW_1
withstmt ::= expr SETUP_WITH POP_TOP suite_stmts_opt POP_BLOCK LOAD_CONST
WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
call_function ::= expr expr CALL_FUNCTION_EX
call_function ::= expr expr expr CALL_FUNCTION_EX_KW
call_function ::= expr expr expr CALL_FUNCTION_EX_KW_1
"""
def add_custom_rules(self, tokens, customize):


@@ -0,0 +1,41 @@
# Copyright (c) 2017 Rocky Bernstein
"""
spark grammar differences over Python 3.6 for Python 3.7
"""
from __future__ import print_function
from uncompyle6.parser import PythonParserSingle
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
from uncompyle6.parsers.parse36 import Python36Parser
class Python37Parser(Python36Parser):
def __init__(self, debug_parser=PARSER_DEFAULT_DEBUG):
super(Python37Parser, self).__init__(debug_parser)
self.customized = {}
class Python37ParserSingle(Python37Parser, PythonParserSingle):
pass
if __name__ == '__main__':
# Check grammar
p = Python37Parser()
p.checkGrammar()
from uncompyle6 import PYTHON_VERSION, IS_PYPY
if PYTHON_VERSION == 3.7:
lhs, rhs, tokens, right_recursive = p.checkSets()
from uncompyle6.scanner import get_scanner
s = get_scanner(PYTHON_VERSION, IS_PYPY)
opcode_set = set(s.opc.opname).union(set(
"""JUMP_BACK CONTINUE RETURN_END_IF COME_FROM
LOAD_GENEXPR LOAD_ASSERT LOAD_SETCOMP LOAD_DICTCOMP LOAD_CLASSNAME
LAMBDA_MARKER RETURN_LAST
""".split()))
remain_tokens = set(tokens) - opcode_set
import re
remain_tokens = set([re.sub('_\d+$', '', t) for t in remain_tokens])
remain_tokens = set([re.sub('_CONT$', '', t) for t in remain_tokens])
remain_tokens = set(remain_tokens) - opcode_set
print(remain_tokens)
# print(sorted(p.rule2name.items()))


@@ -16,11 +16,13 @@ import sys
from uncompyle6 import PYTHON3, IS_PYPY
from uncompyle6.scanners.tok import Token
from xdis.bytecode import op_size
from xdis.magics import py_str2float
# The byte code versions we support
PYTHON_VERSIONS = (1.5,
2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7,
3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6)
3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7)
# FIXME: DRY
if PYTHON3:
@@ -54,7 +56,7 @@ class Scanner(object):
if version in PYTHON_VERSIONS:
if is_pypy:
v_str = "opcode_pypy%s" % (int(version * 10))
v_str = "opcode_%spypy" % (int(version * 10))
else:
v_str = "opcode_%s" % (int(version * 10))
exec("from xdis.opcodes import %s" % v_str)
@@ -63,6 +65,7 @@ class Scanner(object):
raise TypeError("%s is not a Python version I know about" % version)
self.opname = self.opc.opname
# FIXME: This weird Python2 behavior is not Python3
self.resetTokenClass()
@@ -88,7 +91,7 @@ class Scanner(object):
if op is None:
op = self.code[pos]
target = self.get_argument(pos)
if op in self.opc.hasjrel:
if op in self.opc.JREL_OPS:
target += pos + 3
return target
@@ -99,7 +102,7 @@ class Scanner(object):
def print_bytecode(self):
for i in self.op_range(0, len(self.code)):
op = self.code[i]
if op in self.opc.hasjabs+self.opc.hasjrel:
if op in self.JUMP_OPS:
dest = self.get_target(i, op)
print('%i\t%s\t%i' % (i, self.opname[op], dest))
else:
@@ -214,9 +217,6 @@ class Scanner(object):
result.append(offset)
return result
def op_hasArgument(self, op):
return self.op_size(op) > 1
def op_range(self, start, end):
"""
Iterate through positions of opcodes, skipping
@@ -224,20 +224,7 @@ class Scanner(object):
"""
while start < end:
yield start
start += self.op_size(self.code[start])
def next_offset(self, op, offset):
return offset + self.op_size(op)
def op_size(self, op):
"""
Return size of operator with its arguments
for given opcode <op>.
"""
if op < self.opc.HAVE_ARGUMENT:
return 2 if self.version >= 3.6 else 1
else:
return 2 if self.version >= 3.6 else 3
start += op_size(self.code[start], self.opc)
def remove_mid_line_ifs(self, ifs):
"""
@@ -269,13 +256,16 @@ class Scanner(object):
self.Token = tokenClass
return self.Token
def op_has_argument(op, opc):
return op >= opc.HAVE_ARGUMENT
def parse_fn_counts(argc):
return ((argc & 0xFF), (argc >> 8) & 0xFF, (argc >> 16) & 0x7FFF)
def get_scanner(version, is_pypy=False, show_asm=None):
# If version is a string, turn that into the corresponding float.
if isinstance(version, str):
version = py_str2float(version)
# Pick up appropriate scanner
if version in PYTHON_VERSIONS:
v_str = "%s" % (int(version * 10))
@@ -302,5 +292,6 @@ def get_scanner(version, is_pypy=False, show_asm=None):
if __name__ == "__main__":
import inspect, uncompyle6
co = inspect.currentframe().f_code
scanner = get_scanner('2.7.13', True)
scanner = get_scanner(uncompyle6.PYTHON_VERSION, IS_PYPY, True)
tokens, customize = scanner.ingest(co, {})
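The parse_fn_counts helper in the uncompyle6/scanner.py hunk above splits a MAKE_FUNCTION-style argc operand into three bit fields. A quick worked example; the field names in the comment are descriptive assumptions based on CPython 3.0-3.5's MAKE_FUNCTION operand layout, not names taken from the diff:

    def parse_fn_counts(argc):
        return ((argc & 0xFF), (argc >> 8) & 0xFF, (argc >> 16) & 0x7FFF)

    # 0x00020103 -> 3 positional defaults, 1 keyword-only default, 2 annotations
    assert parse_fn_counts(0x00020103) == (3, 1, 2)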


@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein
# Copyright (c) 2016-2017 by Rocky Bernstein
"""
Python PyPy 2.7 bytecode scanner/deparser
@@ -10,8 +10,8 @@ information for later use in deparsing.
import uncompyle6.scanners.scanner27 as scan
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_pypy27
JUMP_OPs = opcode_pypy27.JUMP_OPs
from xdis.opcodes import opcode_27pypy
JUMP_OPS = opcode_27pypy.JUMP_OPS
# We base this off of 2.6 instead of the other way around
# because we cleaned things up this way.


@@ -1,22 +1,18 @@
# Copyright (c) 2016 by Rocky Bernstein
# Copyright (c) 2017 by Rocky Bernstein
"""
Python PyPy 3.2 bytecode scanner/deparser
Python PyPy 3.2 decompiler scanner.
This overlaps Python's 3.2's dis module, but it can be run from
Python 3 and other versions of Python. Also, we save token
information for later use in deparsing.
Does some additional massaging of xdis-disassembled instructions to
make things easier for decompilation.
"""
import uncompyle6.scanners.scanner32 as scan
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_32 as opc # is this rgith?
from xdis.opcodes import opcode_32 as opc # is this right?
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs)
# We base this off of 2.6 instead of the other way around
# because we cleaned things up this way.
# The history is that 2.7 support is the cleanest,
# then from that we got 2.6 and so on.
# We base this off of 3.2
class ScannerPyPy32(scan.Scanner32):
def __init__(self, show_asm):
# There are no differences in initialization between


@@ -0,0 +1,22 @@
# Copyright (c) 2017 by Rocky Bernstein
"""
Python PyPy 3.2 decompiler scanner.
Does some additional massaging of xdis-disassembled instructions to
make things easier for decompilation.
"""
import uncompyle6.scanners.scanner35 as scan
# bytecode verification, verify(), uses JUMP_OPS from here
from xdis.opcodes import opcode_35 as opc # is this right?
JUMP_OPs = opc.JUMP_OPS
# We base this off of 3.5
class ScannerPyPy35(scan.Scanner35):
def __init__(self, show_asm):
# There are no differences in initialization between
# pypy 3.5 and 3.5
scan.Scanner35.__init__(self, show_asm, is_pypy=True)
self.version = 3.5
return


@@ -1,6 +1,6 @@
# Copyright (c) 2016 by Rocky Bernstein
# Copyright (c) 2016-2017 by Rocky Bernstein
"""
Python 1.5 bytecode scanner/deparser
Python 1.5 bytecode decompiler scanner.
This massages tokenized 1.5 bytecode to make it more amenable for
grammar parsing.
@@ -11,7 +11,7 @@ import uncompyle6.scanners.scanner21 as scan
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_15
JUMP_OPs = opcode_15.JUMP_OPs
JUMP_OPS = opcode_15.JUMP_OPS
# We base this off of 2.2 instead of the other way around
# because we cleaned things up this way.


@@ -1,4 +1,4 @@
# Copyright (c) 2015, 2016 by Rocky Bernstein
# Copyright (c) 2015-2017 by Rocky Bernstein
# Copyright (c) 2005 by Dan Pascu <dan@windowmaker.org>
# Copyright (c) 2000-2002 by hartmut Goebel <h.goebel@crazy-compilers.com>
"""
@@ -25,14 +25,15 @@ from __future__ import print_function
from collections import namedtuple
from array import array
from uncompyle6.scanner import op_has_argument
from uncompyle6.scanner import L65536
from xdis.code import iscode
from xdis.bytecode import op_has_argument, op_size
import uncompyle6.scanner as scan
from uncompyle6.scanner import Scanner
class Scanner2(scan.Scanner):
class Scanner2(Scanner):
def __init__(self, version, show_asm=None, is_pypy=False):
scan.Scanner.__init__(self, version, show_asm, is_pypy)
Scanner.__init__(self, version, show_asm, is_pypy)
self.pop_jump_if = frozenset([self.opc.PJIF, self.opc.PJIT])
self.jump_forward = frozenset([self.opc.JUMP_ABSOLUTE, self.opc.JUMP_FORWARD])
# This is the 2.5+ default
@@ -186,9 +187,9 @@ class Scanner2(scan.Scanner):
oparg = self.get_argument(offset) + extended_arg
extended_arg = 0
if op == self.opc.EXTENDED_ARG:
extended_arg = oparg * scan.L65536
extended_arg = oparg * L65536
continue
if op in self.opc.hasconst:
if op in self.opc.CONST_OPS:
const = co.co_consts[oparg]
if iscode(const):
oparg = const
@@ -209,23 +210,23 @@ class Scanner2(scan.Scanner):
pattr = '<code_object ' + const.co_name + '>'
else:
pattr = const
elif op in self.opc.hasname:
elif op in self.opc.NAME_OPS:
pattr = names[oparg]
elif op in self.opc.hasjrel:
elif op in self.opc.JREL_OPS:
# use instead: hasattr(self, 'patch_continue'): ?
if self.version == 2.7:
self.patch_continue(tokens, offset, op)
pattr = repr(offset + 3 + oparg)
elif op in self.opc.hasjabs:
elif op in self.opc.JABS_OPS:
# use instead: hasattr(self, 'patch_continue'): ?
if self.version == 2.7:
self.patch_continue(tokens, offset, op)
pattr = repr(oparg)
elif op in self.opc.haslocal:
elif op in self.opc.LOCAL_OPS:
pattr = varnames[oparg]
elif op in self.opc.hascompare:
elif op in self.opc.COMPARE_OPS:
pattr = self.opc.cmp_op[oparg]
elif op in self.opc.hasfree:
elif op in self.opc.FREE_OPS:
pattr = free[oparg]
if op in self.varargs_ops:
@@ -327,7 +328,7 @@ class Scanner2(scan.Scanner):
for i in self.op_range(0, n):
op = self.code[i]
self.prev.append(i)
if self.op_hasArgument(op):
if op_has_argument(op, self.opc):
self.prev.append(i)
self.prev.append(i)
pass
@@ -380,7 +381,7 @@ class Scanner2(scan.Scanner):
if elem != code[i]:
match = False
break
i += self.op_size(code[i])
i += op_size(code[i], self.opc)
if match:
i = self.prev[i]
@@ -451,7 +452,7 @@ class Scanner2(scan.Scanner):
self.not_continue.add(jmp)
jmp = self.get_target(jmp)
prev_offset = self.prev[except_match]
# COMPARE_OP argument should be "exception match" or 10
# COMPARE_OP argument should be "exception-match" or 10
if (self.code[prev_offset] == self.opc.COMPARE_OP and
self.code[prev_offset+1] != 10):
return None
@@ -602,7 +603,7 @@ class Scanner2(scan.Scanner):
if test == offset:
loop_type = 'while 1'
elif self.code[test] in self.opc.hasjabs + self.opc.hasjrel:
elif self.code[test] in self.opc.JUMP_OPs:
self.ignore_if.add(test)
test_target = self.get_target(test)
if test_target > (jump_back+3):
@@ -617,7 +618,7 @@ class Scanner2(scan.Scanner):
'start': jump_back+3,
'end': end})
elif op == self.opc.SETUP_EXCEPT:
start = offset + self.op_size(op)
start = offset + op_size(op, self.opc)
target = self.get_target(offset, op)
end = self.restrict_to_parent(target, parent)
if target != end:
@@ -641,7 +642,7 @@ class Scanner2(scan.Scanner):
setup_except_nest -= 1
elif self.code[end_finally_offset] == self.opc.SETUP_EXCEPT:
setup_except_nest += 1
end_finally_offset += self.op_size(code[end_finally_offset])
end_finally_offset += op_size(code[end_finally_offset], self.opc)
pass
# Add the except blocks
@@ -842,7 +843,7 @@ class Scanner2(scan.Scanner):
else:
# We still have the case in 2.7 that the next instruction
# is a jump to a SETUP_LOOP target.
next_offset = target + self.op_size(self.code[target])
next_offset = target + op_size(self.code[target], self.opc)
next_op = self.code[next_offset]
if self.op_name(next_op) == 'JUMP_FORWARD':
jump_target = self.get_target(next_offset, next_op)
@@ -904,7 +905,9 @@ class Scanner2(scan.Scanner):
'start': start-3,
'end': pre_rtarget})
self.not_continue.add(pre_rtarget)
# FIXME: this is yet another case where we need dominators.
if pre_rtarget not in self.linestartoffsets or self.version < 2.7:
self.not_continue.add(pre_rtarget)
if rtarget < end:
# We have an "else" block of some kind.
@@ -991,11 +994,11 @@ class Scanner2(scan.Scanner):
oparg = self.get_argument(offset)
if label is None:
if op in self.opc.hasjrel and self.op_name(op) != 'FOR_ITER':
# if (op in self.opc.hasjrel and
if op in self.opc.JREL_OPS and self.op_name(op) != 'FOR_ITER':
# if (op in self.opc.JREL_OPS and
# (self.version < 2.0 or op != self.opc.FOR_ITER)):
label = offset + 3 + oparg
elif self.version == 2.7 and op in self.opc.hasjabs:
elif self.version == 2.7 and op in self.opc.JABS_OPS:
if op in (self.opc.JUMP_IF_FALSE_OR_POP,
self.opc.JUMP_IF_TRUE_OR_POP):
if (oparg > offset):


@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein
# Copyright (c) 2016-2017 by Rocky Bernstein
"""
Python 2.1 bytecode scanner/deparser
@@ -11,7 +11,7 @@ import uncompyle6.scanners.scanner22 as scan
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_21
JUMP_OPs = opcode_21.JUMP_OPs
JUMP_OPS = opcode_21.JUMP_OPS
# We base this off of 2.2 instead of the other way around
# because we cleaned things up this way.

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein
# Copyright (c) 2016-2017 by Rocky Bernstein
"""
Python 2.2 bytecode ingester.
@@ -11,7 +11,7 @@ import uncompyle6.scanners.scanner23 as scan
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_22
JUMP_OPs = opcode_22.JUMP_OPs
JUMP_OPS = opcode_22.JUMP_OPS
# We base this off of 2.3 instead of the other way around
# because we cleaned things up this way.

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein
# Copyright (c) 2016-2017 by Rocky Bernstein
"""
Python 2.3 bytecode scanner/deparser
@@ -10,7 +10,7 @@ import uncompyle6.scanners.scanner24 as scan
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_23
JUMP_OPs = opcode_23.JUMP_OPs
JUMP_OPS = opcode_23.JUMP_OPS
# We base this off of 2.4 instead of the other way around
# because we cleaned things up this way.

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein
# Copyright (c) 2016-2017 by Rocky Bernstein
"""
Python 2.4 bytecode scanner/deparser
@@ -10,7 +10,7 @@ import uncompyle6.scanners.scanner25 as scan
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_24
JUMP_OPs = opcode_24.JUMP_OPs
JUMP_OPS = opcode_24.JUMP_OPS
# We base this off of 2.5 instead of the other way around
# because we cleaned things up this way.

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2015-2016 by Rocky Bernstein
# Copyright (c) 2015-2017 by Rocky Bernstein
"""
Python 2.5 bytecode scanner/deparser
@@ -11,7 +11,7 @@ import uncompyle6.scanners.scanner26 as scan
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_25
JUMP_OPs = opcode_25.JUMP_OPs
JUMP_OPS = opcode_25.JUMP_OPS
# We base this off of 2.6 instead of the other way around
# because we cleaned things up this way.

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2015, 2016 by Rocky Bernstein
# Copyright (c) 2015-2017 by Rocky Bernstein
# Copyright (c) 2005 by Dan Pascu <dan@windowmaker.org>
# Copyright (c) 2000-2002 by hartmut Goebel <h.goebel@crazy-compilers.com>
"""
@@ -15,10 +15,11 @@ if PYTHON3:
intern = sys.intern
import uncompyle6.scanners.scanner2 as scan
from uncompyle6.scanner import L65536
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_26
JUMP_OPs = opcode_26.JUMP_OPs
JUMP_OPS = opcode_26.JUMP_OPS
class Scanner26(scan.Scanner2):
def __init__(self, show_asm=False):
@@ -178,9 +179,9 @@ class Scanner26(scan.Scanner2):
oparg = self.get_argument(offset) + extended_arg
extended_arg = 0
if op == self.opc.EXTENDED_ARG:
extended_arg = oparg * scan.L65536
extended_arg = oparg * L65536
continue
if op in self.opc.hasconst:
if op in self.opc.CONST_OPS:
const = co.co_consts[oparg]
# We can't use inspect.iscode() because we may be
# using a different version of Python than the
@@ -205,9 +206,9 @@ class Scanner26(scan.Scanner2):
pattr = '<code_object ' + const.co_name + '>'
else:
pattr = const
elif op in self.opc.hasname:
elif op in self.opc.NAME_OPS:
pattr = names[oparg]
elif op in self.opc.hasjrel:
elif op in self.opc.JREL_OPS:
pattr = repr(offset + 3 + oparg)
if op == self.opc.JUMP_FORWARD:
target = self.get_target(offset)
@@ -217,13 +218,13 @@ class Scanner26(scan.Scanner2):
if len(tokens) and tokens[-1].type == 'JUMP_BACK':
tokens[-1].type = intern('CONTINUE')
elif op in self.opc.hasjabs:
elif op in self.opc.JABS_OPS:
pattr = repr(oparg)
elif op in self.opc.haslocal:
elif op in self.opc.LOCAL_OPS:
pattr = varnames[oparg]
elif op in self.opc.hascompare:
elif op in self.opc.COMPARE_OPS:
pattr = self.opc.cmp_op[oparg]
elif op in self.opc.hasfree:
elif op in self.opc.FREE_OPS:
pattr = free[oparg]
if op in self.varargs_ops:
# CE - Hack for >= 2.5

View File

@@ -18,7 +18,7 @@ if PYTHON3:
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_27
JUMP_OPs = opcode_27.JUMP_OPs
JUMP_OPS = opcode_27.JUMP_OPs
class Scanner27(Scanner2):
def __init__(self, show_asm=False, is_pypy=False):

View File

@@ -25,10 +25,11 @@ from __future__ import print_function
from collections import namedtuple
from array import array
from uncompyle6.scanner import Scanner, op_has_argument
from uncompyle6.scanner import Scanner
from xdis.code import iscode
from xdis.bytecode import Bytecode
from xdis.bytecode import Bytecode, op_has_argument, op_size
from uncompyle6.scanner import Token, parse_fn_counts
import xdis
# Get all the opcodes into globals
import xdis.opcodes.opcode_33 as op3
@@ -329,7 +330,7 @@ class Scanner3(Scanner):
attr = (pos_args, name_pair_args, annotate_args)
tokens.append(
Token(
type_ = opname,
opname = opname,
attr = attr,
pattr = pattr,
offset = inst.offset,
@@ -407,7 +408,7 @@ class Scanner3(Scanner):
last_op_was_break = opname == 'BREAK_LOOP'
tokens.append(
Token(
type_ = opname,
opname = opname,
attr = argval,
pattr = pattr,
offset = inst.offset,
@@ -468,7 +469,7 @@ class Scanner3(Scanner):
self.prev = self.prev_op = [0]
for offset in self.op_range(0, codelen):
op = code[offset]
for _ in range(self.op_size(op)):
for _ in range(op_size(op, self.opc)):
self.prev_op.append(offset)
def find_jump_targets(self, debug):
@@ -518,7 +519,7 @@ class Scanner3(Scanner):
oparg = code[offset+1]
else:
oparg = code[offset+1] + code[offset+2] * 256
next_offset = self.next_offset(op, offset)
next_offset = xdis.next_offset(op, self.opc, offset)
if label is None:
if op in op3.hasjrel and op != self.opc.FOR_ITER:
@@ -564,7 +565,7 @@ class Scanner3(Scanner):
if elem != code[i]:
match = False
break
i += self.op_size(code[i])
i += op_size(code[i], self.opc)
if match is True:
i = self.prev_op[i]
@@ -632,11 +633,11 @@ class Scanner3(Scanner):
rel_offset = 0
if self.version >= 3.6:
target = self.code[offset+1]
if op in self.opc.hasjrel:
if op in self.opc.JREL_OPS:
rel_offset = offset + 2
else:
target = self.code[offset+1] + self.code[offset+2] * 256
if op in self.opc.hasjrel:
if op in self.opc.JREL_OPS:
rel_offset = offset + 3
pass
pass
@@ -757,7 +758,7 @@ class Scanner3(Scanner):
'start': jump_back+3,
'end': end})
elif op in self.pop_jump_tf:
start = offset + self.op_size(op)
start = offset + op_size(op, self.opc)
target = self.get_target(offset)
rtarget = self.restrict_to_parent(target, parent)
prev_op = self.prev_op
@@ -920,7 +921,7 @@ class Scanner3(Scanner):
# except block return
jump_prev = prev_op[offset]
if self.is_pypy and code[jump_prev] == self.opc.COMPARE_OP:
if self.opc.cmp_op[code[jump_prev+1]] == 'exception match':
if self.opc.cmp_op[code[jump_prev+1]] == 'exception-match':
return
if self.version >= 3.5:
# Python 3.5 may remove as dead code a JUMP
@@ -932,9 +933,9 @@ class Scanner3(Scanner):
# not from SETUP_EXCEPT
next_op = rtarget
if code[next_op] == self.opc.POP_BLOCK:
next_op += self.op_size(self.code[next_op])
next_op += op_size(self.code[next_op], self.opc)
if code[next_op] == self.opc.JUMP_ABSOLUTE:
next_op += self.op_size(self.code[next_op])
next_op += op_size(self.code[next_op], self.opc)
if next_op in targets:
for try_op in targets[next_op]:
come_from_op = code[try_op]
@@ -957,12 +958,12 @@ class Scanner3(Scanner):
end = self.restrict_to_parent(target, parent)
self.fixed_jumps[offset] = end
elif op == self.opc.POP_EXCEPT:
next_offset = self.next_offset(op, offset)
next_offset = xdis.next_offset(op, self.opc, offset)
target = self.get_target(next_offset)
if target > next_offset:
next_op = code[next_offset]
if (self.opc.JUMP_ABSOLUTE == next_op and
END_FINALLY != code[self.next_offset(next_op, next_offset)]):
END_FINALLY != code[xdis.next_offset(next_op, self.opc, next_offset)]):
self.fixed_jumps[next_offset] = target
self.except_targets[target] = next_offset
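A small hedged sketch of the helpers that moved into xdis and are imported above; the walker below is illustrative only and assumes `code` is a sequence of opcode bytes:

    from xdis.bytecode import op_size

    def instruction_offsets(code, opc):
        # op_size() reports how many bytes an instruction occupies for the given
        # opcode table; xdis.next_offset() used above does the same bookkeeping
        # relative to a starting offset.
        offset = 0
        while offset < len(code):
            yield offset
            offset += op_size(code[offset], opc)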

View File

@@ -10,7 +10,9 @@ from __future__ import print_function
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_30 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs)
from xdis.bytecode import op_size
JUMP_OPS = opc.JUMP_OPS
JUMP_TF = frozenset([opc.JUMP_IF_FALSE, opc.JUMP_IF_TRUE])
@@ -118,7 +120,7 @@ class Scanner30(Scanner3):
if test == offset:
loop_type = 'while 1'
elif self.code[test] in opc.hasjabs+opc.hasjrel:
elif self.code[test] in opc.JUMP_OPs:
self.ignore_if.add(test)
test_target = self.get_target(test)
if test_target > (jump_back+3):
@@ -133,7 +135,7 @@ class Scanner30(Scanner3):
'start': jump_back+3,
'end': end})
elif op in JUMP_TF:
start = offset + self.op_size(op)
start = offset + op_size(op, self.opc)
target = self.get_target(offset)
rtarget = self.restrict_to_parent(target, parent)
prev_op = self.prev_op
@@ -293,7 +295,7 @@ class Scanner30(Scanner3):
# except block return
jump_prev = prev_op[offset]
if self.is_pypy and code[jump_prev] == self.opc.COMPARE_OP:
if self.opc.cmp_op[code[jump_prev+1]] == 'exception match':
if self.opc.cmp_op[code[jump_prev+1]] == 'exception-match':
return
if self.version >= 3.5:
# Python 3.5 may remove as dead code a JUMP
@@ -305,9 +307,9 @@ class Scanner30(Scanner3):
# not from SETUP_EXCEPT
next_op = rtarget
if code[next_op] == self.opc.POP_BLOCK:
next_op += self.op_size(self.code[next_op])
next_op += op_size(self.code[next_op], self.opc)
if code[next_op] == self.opc.JUMP_ABSOLUTE:
next_op += self.op_size(self.code[next_op])
next_op += op_size(self.code[next_op], self.opc)
if next_op in targets:
for try_op in targets[next_op]:
come_from_op = code[try_op]

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein
# Copyright (c) 2016-2017 by Rocky Bernstein
"""
Python 3.1 bytecode scanner/deparser
@@ -10,7 +10,7 @@ from __future__ import print_function
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_31 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs)
JUMP_OPS = opc.JUMP_OPS
from uncompyle6.scanners.scanner3 import Scanner3
class Scanner31(Scanner3):

View File

@@ -1,6 +1,9 @@
# Copyright (c) 2015-2016 by Rocky Bernstein
# Copyright (c) 2015-2017 by Rocky Bernstein
"""
Python 3.2 bytecode scanner/deparser
Python 3.2 bytecode decompiler scanner.
Does some additional massaging of xdis-disassembled instructions to
make things easier for decompilation.
This sets up Python 3.2's opcodes and calls a generalized
scanner routine for Python 3.
@@ -10,7 +13,7 @@ from __future__ import print_function
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_32 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs)
JUMP_OPS = opc.JUMP_OPS
from uncompyle6.scanners.scanner3 import Scanner3
class Scanner32(Scanner3):

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2015-2016 by Rocky Bernstein
# Copyright (c) 2015-2017 by Rocky Bernstein
"""
Python 3.3 bytecode scanner/deparser
@@ -10,7 +10,7 @@ from __future__ import print_function
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_33 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs)
JUMP_OPS = opc.JUMP_OPS
from uncompyle6.scanners.scanner3 import Scanner3
class Scanner33(Scanner3):

View File

@@ -1,6 +1,9 @@
# Copyright (c) 2015-2016 by Rocky Bernstein
# Copyright (c) 2015-2017 by Rocky Bernstein
"""
Python 3.4 bytecode scanner/deparser
Python 3.4 bytecode decompiler scanner
Does some additional massaging of xdis-disassembled instructions to
make things easier for decompilation.
This sets up Python 3.4's opcodes and calls a generalized
scanner routine for Python 3.
@@ -11,7 +14,7 @@ from __future__ import print_function
from xdis.opcodes import opcode_34 as opc
# bytecode verification, verify(), uses JUMP_OPs from here
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs)
JUMP_OPS = opc.JUMP_OPS
from uncompyle6.scanners.scanner3 import Scanner3

View File

@@ -1,6 +1,9 @@
# Copyright (c) 2016 by Rocky Bernstein
# Copyright (c) 2017 by Rocky Bernstein
"""
Python 3.5 bytecode scanner/deparser
Python 3.5 bytecode decompiler scanner
Does some additional massaging of xdis-disassembled instructions to
make things easier for decompilation.
This sets up Python 3.5's opcodes and calls a generalized
scanner routine for Python 3.
@@ -12,12 +15,12 @@ from uncompyle6.scanners.scanner3 import Scanner3
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_35 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs)
JUMP_OPS = opc.JUMP_OPS
class Scanner35(Scanner3):
def __init__(self, show_asm=None):
Scanner3.__init__(self, 3.5, show_asm)
def __init__(self, show_asm=None, is_pypy=False):
Scanner3.__init__(self, 3.5, show_asm, is_pypy)
return
pass

View File

@@ -1,6 +1,9 @@
# Copyright (c) 2016 by Rocky Bernstein
# Copyright (c) 2016-2017 by Rocky Bernstein
"""
Python 3.6 bytecode scanner/deparser
Python 3.6 bytecode decompiler scanner
Does some additional massaging of xdis-disassembled instructions to
make things easier for decompilation.
This sets up Python 3.6's opcodes and calls a generalized
scanner routine for Python 3.
@@ -10,9 +13,9 @@ from __future__ import print_function
from uncompyle6.scanners.scanner3 import Scanner3
# bytecode verification, verify(), uses JUMP_OPs from here
# bytecode verification, verify(), uses JUMP_OPS from here
from xdis.opcodes import opcode_36 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs)
JUMP_OPS = opc.JUMP_OPS
class Scanner36(Scanner3):
@@ -28,8 +31,12 @@ class Scanner36(Scanner3):
if t.op == self.opc.CALL_FUNCTION_EX and t.attr & 1:
t.type = 'CALL_FUNCTION_EX_KW'
pass
if t.op == self.opc.CALL_FUNCTION_KW:
elif t.op == self.opc.CALL_FUNCTION_KW:
t.type = 'CALL_FUNCTION_KW_{t.attr}'.format(**locals())
elif t.op == self.opc.BUILD_TUPLE_UNPACK_WITH_CALL:
t.type = 'BUILD_TUPLE_UNPACK_WITH_CALL_%d' % t.attr
elif t.op == self.opc.BUILD_MAP_UNPACK_WITH_CALL:
t.type = 'BUILD_MAP_UNPACK_WITH_CALL_%d' % t.attr
pass
return tokens, customize
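As a hedged gloss on the specialization above: the argument count is folded into the token type so that 3.6 call opcodes become distinct grammar symbols per arity (the helper below is a hypothetical mirror of that idea, not the scanner's code):

    def specialize_call_kw(opname, attr):
        # CALL_FUNCTION_KW with attr == 3 becomes the grammar symbol
        # 'CALL_FUNCTION_KW_3'; the BUILD_*_UNPACK_WITH_CALL cases work the same way.
        if opname == 'CALL_FUNCTION_KW':
            return 'CALL_FUNCTION_KW_%d' % attr
        return opname

    assert specialize_call_kw('CALL_FUNCTION_KW', 3) == 'CALL_FUNCTION_KW_3'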

View File

@@ -0,0 +1,38 @@
# Copyright (c) 2016-2017 by Rocky Bernstein
"""
Python 3.7 bytecode decompiler scanner
Does some additional massaging of xdis-disassembled instructions to
make things easier for decompilation.
This sets up Python 3.6's opcodes and calls a generalized
scanner routine for Python 3.
"""
from __future__ import print_function
from uncompyle6.scanners.scanner3 import Scanner3
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_36 as opc
JUMP_OPs = opc.JUMP_OPS
class Scanner37(Scanner3):
def __init__(self, show_asm=None):
Scanner3.__init__(self, 3.7, show_asm)
return
pass
if __name__ == "__main__":
from uncompyle6 import PYTHON_VERSION
if PYTHON_VERSION == 3.7:
import inspect
co = inspect.currentframe().f_code
tokens, customize = Scanner37().ingest(co)
for t in tokens:
print(t.format())
pass
else:
print("Need to be Python 3.7 to demo; I am %s." %
PYTHON_VERSION)

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016 by Rocky Bernstein
# Copyright (c) 2016-2017 by Rocky Bernstein
# Copyright (c) 2000-2002 by hartmut Goebel <h.goebel@crazy-compilers.com>
# Copyright (c) 1999 John Aycock
@@ -8,7 +8,7 @@ from uncompyle6 import PYTHON3
if PYTHON3:
intern = sys.intern
class Token:
class Token():
"""
Class representing a byte-code instruction.
@@ -16,13 +16,12 @@ class Token:
the contents of one line as output by dis.dis().
"""
# FIXME: match Python 3.4's terms:
# type_ should be opname
# linestart = starts_line
# attr = argval
# pattr = argrepr
def __init__(self, type_, attr=None, pattr=None, offset=-1,
def __init__(self, opname, attr=None, pattr=None, offset=-1,
linestart=None, op=None, has_arg=None, opc=None):
self.type = intern(type_)
self.type = intern(opname)
self.op = op
self.has_arg = has_arg
self.attr = attr
@@ -65,10 +64,10 @@ class Token:
if self.pattr:
pattr = self.pattr
if self.opc:
if self.op in self.opc.hasjrel:
if self.op in self.opc.JREL_OPS:
if not self.pattr.startswith('to '):
pattr = "to " + self.pattr
elif self.op in self.opc.hasjabs:
elif self.op in self.opc.JABS_OPS:
self.pattr= str(self.pattr)
if not self.pattr.startswith('to '):
pattr = "to " + str(self.pattr)

View File

@@ -1,5 +1,5 @@
# Copyright (c) 2017 by Rocky Bernstein
"""Constants used in pysource.py"""
"""Constants and initial table values used in pysource.py and fragments.py"""
import re, sys
from uncompyle6.parsers.astnode import AST
@@ -57,9 +57,7 @@ INDENT_PER_LEVEL = ' ' # additional indent per pretty-print level
TABLE_R = {
'STORE_ATTR': ( '%c.%[1]{pattr}', 0),
# 'STORE_SUBSCR': ( '%c[%c]', 0, 1 ),
'DELETE_ATTR': ( '%|del %c.%[-1]{pattr}\n', 0 ),
# 'EXEC_STMT': ( '%|exec %c in %[1]C\n', 0, (0,maxint,', ') ),
}
TABLE_R0 = {
@@ -67,8 +65,9 @@ TABLE_R0 = {
# 'BUILD_TUPLE': ( '(%C)', (0,-1,', ') ),
# 'CALL_FUNCTION': ( '%c(%P)', 0, (1,-1,', ') ),
}
TABLE_DIRECT = {
'BINARY_ADD': ( '+' ,),
'BINARY_ADD': ( '+' ,),
'BINARY_SUBTRACT': ( '-' ,),
'BINARY_MULTIPLY': ( '*' ,),
'BINARY_DIVIDE': ( '/' ,),
@@ -76,13 +75,13 @@ TABLE_DIRECT = {
'BINARY_TRUE_DIVIDE': ( '/' ,), # Not in <= 2.1
'BINARY_FLOOR_DIVIDE': ( '//' ,),
'BINARY_MODULO': ( '%%',),
'BINARY_POWER': ( '**',),
'BINARY_POWER': ( '**',),
'BINARY_LSHIFT': ( '<<',),
'BINARY_RSHIFT': ( '>>',),
'BINARY_AND': ( '&' ,),
'BINARY_OR': ( '|' ,),
'BINARY_XOR': ( '^' ,),
'INPLACE_ADD': ( '+=' ,),
'BINARY_AND': ( '&' ,),
'BINARY_OR': ( '|' ,),
'BINARY_XOR': ( '^' ,),
'INPLACE_ADD': ( '+=' ,),
'INPLACE_SUBTRACT': ( '-=' ,),
'INPLACE_MULTIPLY': ( '*=' ,),
'INPLACE_MATRIX_MULTIPLY': ( '@=' ,),
@@ -93,125 +92,125 @@ TABLE_DIRECT = {
'INPLACE_POWER': ( '**=',),
'INPLACE_LSHIFT': ( '<<=',),
'INPLACE_RSHIFT': ( '>>=',),
'INPLACE_AND': ( '&=' ,),
'INPLACE_OR': ( '|=' ,),
'INPLACE_XOR': ( '^=' ,),
'binary_expr': ( '%c %c %c', 0, -1, 1 ),
'INPLACE_AND': ( '&=' ,),
'INPLACE_OR': ( '|=' ,),
'INPLACE_XOR': ( '^=' ,),
'binary_expr': ( '%c %c %c', 0, -1, 1 ),
'UNARY_POSITIVE': ( '+',),
'UNARY_NEGATIVE': ( '-',),
'UNARY_INVERT': ( '~%c'),
'unary_expr': ( '%c%c', 1, 0),
'UNARY_INVERT': ( '~'),
'unary_expr': ( '%c%c', 1, 0),
'unary_not': ( 'not %c', 0 ),
'unary_not': ( 'not %c', 0 ),
'unary_convert': ( '`%c`', 0 ),
'get_iter': ( 'iter(%c)', 0 ),
'slice0': ( '%c[:]', 0 ),
'slice1': ( '%c[%p:]', 0, (1, 100) ),
'slice2': ( '%c[:%p]', 0, (1, 100) ),
'slice3': ( '%c[%p:%p]', 0, (1, 100), (2, 100) ),
'get_iter': ( 'iter(%c)', 0 ),
'slice0': ( '%c[:]', 0 ),
'slice1': ( '%c[%p:]', 0, (1, 100) ),
'slice2': ( '%c[:%p]', 0, (1, 100) ),
'slice3': ( '%c[%p:%p]', 0, (1, 100), (2, 100) ),
'IMPORT_FROM': ( '%{pattr}', ),
'load_attr': ( '%c.%[1]{pattr}', 0),
'LOAD_FAST': ( '%{pattr}', ),
'LOAD_NAME': ( '%{pattr}', ),
'IMPORT_FROM': ( '%{pattr}', ),
'load_attr': ( '%c.%[1]{pattr}', 0),
'LOAD_FAST': ( '%{pattr}', ),
'LOAD_NAME': ( '%{pattr}', ),
'LOAD_CLASSNAME': ( '%{pattr}', ),
'LOAD_GLOBAL': ( '%{pattr}', ),
'LOAD_DEREF': ( '%{pattr}', ),
'LOAD_LOCALS': ( 'locals()', ),
'LOAD_ASSERT': ( '%{pattr}', ),
'LOAD_GLOBAL': ( '%{pattr}', ),
'LOAD_DEREF': ( '%{pattr}', ),
'LOAD_LOCALS': ( 'locals()', ),
'LOAD_ASSERT': ( '%{pattr}', ),
# 'LOAD_CONST': ( '%{pattr}', ), # handled by n_LOAD_CONST
'DELETE_FAST': ( '%|del %{pattr}\n', ),
'DELETE_NAME': ( '%|del %{pattr}\n', ),
'DELETE_FAST': ( '%|del %{pattr}\n', ),
'DELETE_NAME': ( '%|del %{pattr}\n', ),
'DELETE_GLOBAL': ( '%|del %{pattr}\n', ),
'delete_subscr': ( '%|del %c[%c]\n', 0, 1,),
'binary_subscr': ( '%c[%p]', 0, (1, 100)),
'binary_subscr2': ( '%c[%p]', 0, (1, 100)),
'store_subscr': ( '%c[%c]', 0, 1),
'STORE_FAST': ( '%{pattr}', ),
'STORE_NAME': ( '%{pattr}', ),
'STORE_GLOBAL': ( '%{pattr}', ),
'STORE_DEREF': ( '%{pattr}', ),
'unpack': ( '%C%,', (1, maxint, ', ') ),
'store_subscr': ( '%c[%c]', 0, 1),
'STORE_FAST': ( '%{pattr}', ),
'STORE_NAME': ( '%{pattr}', ),
'STORE_GLOBAL': ( '%{pattr}', ),
'STORE_DEREF': ( '%{pattr}', ),
'unpack': ( '%C%,', (1, maxint, ', ') ),
# This nonterminal we create on the fly in semantic routines
'unpack_w_parens': ( '(%C%,)', (1, maxint, ', ') ),
'unpack_list': ( '[%C]', (1, maxint, ', ') ),
'build_tuple2': ( '%P', (0, -1, ', ', 100) ),
'unpack_list': ( '[%C]', (1, maxint, ', ') ),
'build_tuple2': ( '%P', (0, -1, ', ', 100) ),
# 'list_compr': ( '[ %c ]', -2), # handled by n_list_compr
'list_iter': ( '%c', 0),
'list_for': ( ' for %c in %c%c', 2, 0, 3 ),
'list_if': ( ' if %c%c', 0, 2 ),
'list_iter': ( '%c', 0 ),
'list_for': ( ' for %c in %c%c', 2, 0, 3 ),
'list_if': ( ' if %c%c', 0, 2 ),
'list_if_not': ( ' if not %p%c', (0, 22), 2 ),
'lc_body': ( '', ), # ignore when recursing
'lc_body': ( '', ), # ignore when recursing
'comp_iter': ( '%c', 0),
'comp_if': ( ' if %c%c', 0, 2 ),
'comp_ifnot': ( ' if not %p%c', (0, 22), 2 ),
'comp_body': ( '', ), # ignore when recursing
'comp_iter': ( '%c', 0 ),
'comp_if': ( ' if %c%c', 0, 2 ),
'comp_ifnot': ( ' if not %p%c', (0, 22), 2 ),
'comp_body': ( '', ), # ignore when recursing
'set_comp_body': ( '%c', 0 ),
'gen_comp_body': ( '%c', 0 ),
'dict_comp_body': ( '%c:%c', 1, 0 ),
'assign': ( '%|%c = %p\n', -1, (0, 200) ),
'assign': ( '%|%c = %p\n', -1, (0, 200) ),
# The 2nd parameter should have a = suffix.
# There is a rule with a 4th parameter "designator"
# which we don't use here.
'augassign1': ( '%|%c %c %c\n', 0, 2, 1),
'augassign1': ( '%|%c %c %c\n', 0, 2, 1),
'augassign2': ( '%|%c.%[2]{pattr} %c %c\n', 0, -3, -4),
'designList': ( '%c = %c', 0, -1 ),
'augassign2': ( '%|%c.%[2]{pattr} %c %c\n', 0, -3, -4 ),
'designList': ( '%c = %c', 0, -1 ),
'and': ( '%c and %c', 0, 2 ),
'ret_and': ( '%c and %c', 0, 2 ),
'and2': ( '%c', 3 ),
'or': ( '%c or %c', 0, 2 ),
'ret_or': ( '%c or %c', 0, 2 ),
'conditional': ( '%p if %p else %p', (2, 27), (0, 27), (4, 27)),
'conditionalTrue': ( '%p if 1 else %p', (0, 27), (2, 27)),
'ret_cond': ( '%p if %p else %p', (2, 27), (0, 27), (-1, 27)),
'conditionalnot': ( '%p if not %p else %p', (2, 27), (0, 22), (4, 27)),
'ret_cond_not': ( '%p if not %p else %p', (2, 27), (0, 22), (-1, 27)),
'ret_or': ( '%c or %c', 0, 2 ),
'conditional': ( '%p if %p else %p', (2, 27), (0, 27), (4, 27) ),
'conditionalTrue': ( '%p if 1 else %p', (0, 27), (2, 27) ),
'ret_cond': ( '%p if %p else %p', (2, 27), (0, 27), (-1, 27) ),
'conditionalnot': ( '%p if not %p else %p', (2, 27), (0, 22), (4, 27) ),
'ret_cond_not': ( '%p if not %p else %p', (2, 27), (0, 22), (-1, 27) ),
'conditional_lambda': ( '(%c if %c else %c)', 2, 0, 3),
'return_lambda': ('%c', 0),
'compare': ( '%p %[-1]{pattr} %p', (0, 19), (1, 19) ),
'cmp_list': ( '%p %p', (0, 29), (1, 30)),
'cmp_list1': ( '%[3]{pattr} %p %p', (0, 19), (-2, 19)),
'cmp_list2': ( '%[1]{pattr} %p', (0, 19)),
'compare': ( '%p %[-1]{pattr.replace("-", " ")} %p', (0, 19), (1, 19) ),
'cmp_list': ( '%p %p', (0, 29), (1, 30)),
'cmp_list1': ( '%[3]{pattr} %p %p', (0, 19), (-2, 19)),
'cmp_list2': ( '%[1]{pattr} %p', (0, 19)),
# 'classdef': (), # handled by n_classdef()
'funcdef': ( '\n\n%|def %c\n', -2), # -2 to handle closures
'funcdef': ( '\n\n%|def %c\n', -2), # -2 to handle closures
'funcdefdeco': ( '\n\n%c', 0),
'mkfuncdeco': ( '%|@%c\n%c', 0, 1),
'mkfuncdeco': ( '%|@%c\n%c', 0, 1),
'mkfuncdeco0': ( '%|def %c\n', 0),
'classdefdeco': ( '\n\n%c', 0),
'classdefdeco1': ( '%|@%c\n%c', 0, 1),
'kwarg': ( '%[0]{pattr}=%c', 1),
'kwargs': ( '%D', (0, maxint, ', ') ),
'kwarg': ( '%[0]{pattr}=%c', 1),
'kwargs': ( '%D', (0, maxint, ', ') ),
'assert_expr_or': ( '%c or %c', 0, 2 ),
'assert_expr_and': ( '%c and %c', 0, 2 ),
'print_items_stmt': ( '%|print %c%c,\n', 0, 2), # Python 2 only
'print_items_nl_stmt': ( '%|print %c%c\n', 0, 2),
'print_item': ( ', %c', 0),
'print_nl': ( '%|print\n', ),
'print_to': ( '%|print >> %c, %c,\n', 0, 1 ),
'print_to_nl': ( '%|print >> %c, %c\n', 0, 1 ),
'print_nl_to': ( '%|print >> %c\n', 0 ),
'assert_expr_or': ( '%c or %c', 0, 2 ),
'assert_expr_and': ( '%c and %c', 0, 2 ),
'print_items_stmt': ( '%|print %c%c,\n', 0, 2 ), # Python 2 only
'print_items_nl_stmt': ( '%|print %c%c\n', 0, 2 ),
'print_item': ( ', %c', 0),
'print_nl': ( '%|print\n', ),
'print_to': ( '%|print >> %c, %c,\n', 0, 1 ),
'print_to_nl': ( '%|print >> %c, %c\n', 0, 1 ),
'print_nl_to': ( '%|print >> %c\n', 0 ),
'print_to_items': ( '%C', (0, 2, ', ') ),
'call_stmt': ( '%|%p\n', (0, 200)),
'break_stmt': ( '%|break\n', ),
'call_stmt': ( '%|%p\n', (0, 200)),
'break_stmt': ( '%|break\n', ),
'continue_stmt': ( '%|continue\n', ),
'raise_stmt0': ( '%|raise\n', ),
'raise_stmt1': ( '%|raise %c\n', 0),
'raise_stmt3': ( '%|raise %c, %c, %c\n', 0, 1, 2),
'raise_stmt0': ( '%|raise\n', ),
'raise_stmt1': ( '%|raise %c\n', 0),
'raise_stmt3': ( '%|raise %c, %c, %c\n', 0, 1, 2),
# 'yield': ( 'yield %c', 0),
# 'return_stmt': ( '%|return %c\n', 0),
'ifstmt': ( '%|if %c:\n%+%c%-', 0, 1 ),
'ifstmt': ( '%|if %c:\n%+%c%-', 0, 1 ),
'iflaststmt': ( '%|if %c:\n%+%c%-', 0, 1 ),
'iflaststmtl': ( '%|if %c:\n%+%c%-', 0, 1 ),
'testtrue': ( 'not %p', (0, 22) ),
@@ -229,37 +228,37 @@ TABLE_DIRECT = {
'elifelsestmtr2': ( '%|elif %c:\n%+%c%-%|else:\n%+%c%-\n\n', 0, 1, 3 ), # has COME_FROM
'whileTruestmt': ( '%|while True:\n%+%c%-\n\n', 1 ),
'whilestmt': ( '%|while %c:\n%+%c%-\n\n', 1, 2 ),
'while1stmt': ( '%|while 1:\n%+%c%-\n\n', 1 ),
'while1elsestmt': ( '%|while 1:\n%+%c%-%|else:\n%+%c%-\n\n', 1, -2 ),
'whilestmt': ( '%|while %c:\n%+%c%-\n\n', 1, 2 ),
'while1stmt': ( '%|while 1:\n%+%c%-\n\n', 1 ),
'while1elsestmt': ( '%|while 1:\n%+%c%-%|else:\n%+%c%-\n\n', 1, -2 ),
'whileelsestmt': ( '%|while %c:\n%+%c%-%|else:\n%+%c%-\n\n', 1, 2, -2 ),
'whileelselaststmt': ( '%|while %c:\n%+%c%-%|else:\n%+%c%-', 1, 2, -2 ),
'forstmt': ( '%|for %c in %c:\n%+%c%-\n\n', 3, 1, 4 ),
'forelsestmt': (
'%|for %c in %c:\n%+%c%-%|else:\n%+%c%-\n\n', 3, 1, 4, -2),
'forstmt': ( '%|for %c in %c:\n%+%c%-\n\n', 3, 1, 4 ),
'forelsestmt': (
'%|for %c in %c:\n%+%c%-%|else:\n%+%c%-\n\n', 3, 1, 4, -2 ),
'forelselaststmt': (
'%|for %c in %c:\n%+%c%-%|else:\n%+%c%-', 3, 1, 4, -2),
'%|for %c in %c:\n%+%c%-%|else:\n%+%c%-', 3, 1, 4, -2 ),
'forelselaststmtl': (
'%|for %c in %c:\n%+%c%-%|else:\n%+%c%-\n\n', 3, 1, 4, -2),
'trystmt': ( '%|try:\n%+%c%-%c\n\n', 1, 3 ),
'tryelsestmt': ( '%|try:\n%+%c%-%c%|else:\n%+%c%-\n\n', 1, 3, 4 ),
'tryelsestmtc': ( '%|try:\n%+%c%-%c%|else:\n%+%c%-', 1, 3, 4 ),
'tryelsestmtl': ( '%|try:\n%+%c%-%c%|else:\n%+%c%-', 1, 3, 4 ),
'tf_trystmt': ( '%c%-%c%+', 1, 3 ),
'%|for %c in %c:\n%+%c%-%|else:\n%+%c%-\n\n', 3, 1, 4, -2 ),
'trystmt': ( '%|try:\n%+%c%-%c\n\n', 1, 3 ),
'tryelsestmt': ( '%|try:\n%+%c%-%c%|else:\n%+%c%-\n\n', 1, 3, 4 ),
'tryelsestmtc': ( '%|try:\n%+%c%-%c%|else:\n%+%c%-', 1, 3, 4 ),
'tryelsestmtl': ( '%|try:\n%+%c%-%c%|else:\n%+%c%-', 1, 3, 4 ),
'tf_trystmt': ( '%c%-%c%+', 1, 3 ),
'tf_tryelsestmt': ( '%c%-%c%|else:\n%+%c', 1, 3, 4 ),
'tryfinallystmt': ( '%|try:\n%+%c%-%|finally:\n%+%c%-\n\n', 1, 5 ),
'except': ( '%|except:\n%+%c%-', 3 ),
'except_cond1': ( '%|except %c:\n', 1 ),
'except_cond1': ( '%|except %c:\n', 1 ),
'except_suite': ( '%+%c%-%C', 0, (1, maxint, '') ),
'except_suite_finalize': ( '%+%c%-%C', 1, (3, maxint, '') ),
'passstmt': ( '%|pass\n', ),
'STORE_FAST': ( '%{pattr}', ),
'kv': ( '%c: %c', 3, 1 ),
'kv2': ( '%c: %c', 1, 2 ),
'mapexpr': ( '{%[1]C}', (0, maxint, ', ') ),
'importstmt': ( '%|import %c\n', 2),
'importfrom': ( '%|from %[2]{pattr} import %c\n', 3 ),
'importstar': ( '%|from %[2]{pattr} import *\n', ),
'passstmt': ( '%|pass\n', ),
'STORE_FAST': ( '%{pattr}', ),
'kv': ( '%c: %c', 3, 1 ),
'kv2': ( '%c: %c', 1, 2 ),
'mapexpr': ( '{%[1]C}', (0, maxint, ', ') ),
'importstmt': ( '%|import %c\n', 2),
'importfrom': ( '%|from %[2]{pattr} import %c\n', 3 ),
'importstar': ( '%|from %[2]{pattr} import *\n', ),
}
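One change in the table above is worth a gloss: the 'compare' template now pushes the operator name through pattr.replace("-", " "), mapping xdis's hyphenated comparison names back to Python source spelling. A tiny hedged illustration:

    # With newer xdis, COMPARE_OP's pattr can be 'is-not' or 'exception-match';
    # the template converts it back before the comparison is written out.
    pattr = 'is-not'
    assert pattr.replace('-', ' ') == 'is not'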
@@ -276,7 +275,7 @@ MAP = {
}
# Operator precedence
# See https://docs.python.org/3/reference/expressions.html
# See https://docs.python.org/2/reference/expressions.html
# or https://docs.python.org/3/reference/expressions.html
# for a list.
PRECEDENCE = {

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2015, 2016 by Rocky Bernstein
# Copyright (c) 2015-2017 by Rocky Bernstein
# Copyright (c) 2005 by Dan Pascu <dan@windowmaker.org>
# Copyright (c) 2000-2002 by hartmut Goebel <h.goebel@crazy-compilers.com>
# Copyright (c) 1999 John Aycock
@@ -8,8 +8,8 @@ Creates Python source code from an uncompyle6 abstract syntax tree,
and indexes fragments which can be accessed by instruction offset
address.
See the comments in pysource for information on the abstract syntax tree
and how semantic actions are written.
See https://github.com/rocky/python-uncompyle6/wiki/Table-driven-semantic-actions.
for a more complete explanation, nicely marked up and with examples.
We add some format specifiers here not used in pysource
@@ -40,11 +40,12 @@ do it recursively which is where offsets are probably located.
2. %b
-----
%b associates the text from the previous start node up to what we have now
%b associates the text from the specified index to what we have now.
it takes an integer argument.
For example in:
'importmultiple': ( '%|import%b %c%c\n', 0, 2, 3 ),
The node position 0 will be associated with "import".
"""
@@ -77,11 +78,12 @@ from uncompyle6.semantics.consts import (
)
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
from spark_parser.ast import GenericASTTraversalPruningException
from collections import namedtuple
NodeInfo = namedtuple("NodeInfo", "node start finish")
ExtractInfo = namedtuple("ExtractInfo",
"lineNo lineStartOffset markerLine selectedLine selectedText")
"lineNo lineStartOffset markerLine selectedLine selectedText nonterminal")
TABLE_DIRECT_FRAGMENT = {
'break_stmt': ( '%|%rbreak\n', ),
@@ -94,7 +96,7 @@ TABLE_DIRECT_FRAGMENT = {
'list_for': (' for %c%x in %c%c', 2, (2, (1, )), 0, 3 ),
'forstmt': ( '%|for%b %c%x in %c:\n%+%c%-\n\n', 0, 3, (3, (2, )), 1, 4 ),
'forelsestmt': (
'%|for %c in %c%x:\n%+%c%-%|else:\n%+%c%-\n\n', 3, (3, (2,)), 1, 4, -2),
'%|for %c%x in %c:\n%+%c%-%|else:\n%+%c%-\n\n', 3, (3, (2,)), 1, 4, -2),
'forelselaststmt': (
'%|for %c%x in %c:\n%+%c%-%|else:\n%+%c%-', 3, (3, (2,)), 1, 4, -2),
'forelselaststmtl': (
@@ -159,8 +161,9 @@ class FragmentsWalker(pysource.SourceWalker, object):
def set_pos_info(self, node, start, finish, name=None):
if name is None: name = self.name
if hasattr(node, 'offset'):
self.offsets[name, node.offset] = \
NodeInfo(node = node, start = start, finish = finish)
node.start = start
node.finish = finish
self.offsets[name, node.offset] = node
if hasattr(node, 'parent'):
assert node.parent != node
@@ -176,6 +179,34 @@ class FragmentsWalker(pysource.SourceWalker, object):
return
def table_r_node(self, node):
"""General pattern where the last node should should
get the text span attributes of the entire tree"""
start = len(self.f.getvalue())
try:
self.default(node)
except GenericASTTraversalPruningException:
final = len(self.f.getvalue())
self.set_pos_info(node, start, final)
self.set_pos_info(node[-1], start, final)
raise GenericASTTraversalPruningException
n_slice0 = n_slice1 = n_slice2 = n_slice3 = n_binary_subscr = table_r_node
n_augassign_1 = n_print_item = exec_stmt = print_to_item = del_stmt = table_r_node
n_classdefco1 = n_classdefco2 = except_cond1 = except_cond2 = table_r_node
def n_passtmt(self, node):
start = len(self.f.getvalue()) + len(self.indent)
self.set_pos_info(node, start, start+len("pass"))
self.default(node)
def n_trystmt(self, node):
start = len(self.f.getvalue()) + len(self.indent)
self.set_pos_info(node[0], start, start+len("try:"))
self.default(node)
n_tryelsestmt = n_tryelsestmtc = n_tryelsestmtl = n_tryfinallystmt = n_trystmt
def n_return_stmt(self, node):
start = len(self.f.getvalue()) + len(self.indent)
if self.params['isLambda']:
@@ -229,6 +260,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.write(' ')
node[0].parent = node
self.preorder(node[0])
self.set_pos_info(node[-1], start, len(self.f.getvalue()))
self.set_pos_info(node, start, len(self.f.getvalue()))
self.prune() # stop recursing
@@ -366,6 +398,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.write(sep); sep = ", "
self.preorder(subnode)
self.set_pos_info(node, start, len(self.f.getvalue()))
self.set_pos_info(node[-1], start, len(self.f.getvalue()))
self.println()
self.prune() # stop recursing
@@ -389,10 +422,10 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.write(self.indent, 'if ')
self.preorder(node[0])
self.println(':')
self.indentMore()
self.indent_more()
node[1].parent = node
self.preorder(node[1])
self.indentLess()
self.indent_less()
if_ret_at_end = False
if len(node[2][0]) >= 3:
@@ -411,17 +444,17 @@ class FragmentsWalker(pysource.SourceWalker, object):
prev_stmt_is_if_ret = False
if not past_else and not if_ret_at_end:
self.println(self.indent, 'else:')
self.indentMore()
self.indent_more()
past_else = True
n.parent = node
self.preorder(n)
if not past_else or if_ret_at_end:
self.println(self.indent, 'else:')
self.indentMore()
self.indent_more()
node[2][1].parent = node
self.preorder(node[2][1])
self.set_pos_info(node, start, len(self.f.getvalue()))
self.indentLess()
self.indent_less()
self.prune()
def n_elifelsestmtr(self, node):
@@ -438,20 +471,20 @@ class FragmentsWalker(pysource.SourceWalker, object):
node[0].parent = node
self.preorder(node[0])
self.println(':')
self.indentMore()
self.indent_more()
node[1].parent = node
self.preorder(node[1])
self.indentLess()
self.indent_less()
for n in node[2][0]:
n[0].type = 'elifstmt'
n.parent = node
self.preorder(n)
self.println(self.indent, 'else:')
self.indentMore()
self.indent_more()
node[2][1].parent = node
self.preorder(node[2][1])
self.indentLess()
self.indent_less()
self.set_pos_info(node, start, len(self.f.getvalue()))
self.prune()
@@ -495,7 +528,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.write(func_name)
self.set_pos_info(code_node, start, len(self.f.getvalue()))
self.indentMore()
self.indent_more()
start = len(self.f.getvalue())
self.make_function(node, isLambda=False, codeNode=code_node)
@@ -505,7 +538,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.write('\n\n')
else:
self.write('\n\n\n')
self.indentLess()
self.indent_less()
self.prune() # stop recursing
def n_list_compr(self, node):
@@ -945,9 +978,9 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.println(':')
# class body
self.indentMore()
self.indent_more()
self.build_class(subclass)
self.indentLess()
self.indent_less()
self.currentclass = cclass
self.set_pos_info(node, start, len(self.f.getvalue()))
@@ -1089,7 +1122,6 @@ class FragmentsWalker(pysource.SourceWalker, object):
def traverse(self, node, indent=None, isLambda=False):
'''Builds up a fragment which can be used inside a larger
block of code'''
self.param_stack.append(self.params)
if indent is None: indent = self.indent
p = self.pending_newlines
@@ -1184,16 +1216,38 @@ class FragmentsWalker(pysource.SourceWalker, object):
if elided: selectedLine += ' ...'
if isinstance(nodeInfo, Token):
nodeInfo = nodeInfo.parent
else:
nodeInfo = nodeInfo
if isinstance(nodeInfo, AST):
nonterminal = nodeInfo[0]
else:
nonterminal = nodeInfo.node
return ExtractInfo(lineNo = len(lines), lineStartOffset = lineStart,
markerLine = markerLine,
selectedLine = selectedLine,
selectedText = selectedText)
selectedText = selectedText,
nonterminal = nonterminal)
def extract_line_info(self, name, offset):
if (name, offset) not in list(self.offsets.keys()):
return None
return self.extract_node_info(self.offsets[name, offset])
def prev_node(self, node):
prev = None
if not hasattr(node, 'parent'):
return prev
p = node.parent
for n in p:
if node == n:
return prev
prev = n
return prev
def extract_parent_info(self, node):
if not hasattr(node, 'parent'):
return None, None
@@ -1263,7 +1317,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
p = self.prec
self.prec = 100
self.indentMore(INDENT_PER_LEVEL)
self.indent_more(INDENT_PER_LEVEL)
line_seperator = ',\n' + self.indent
sep = INDENT_PER_LEVEL[:-1]
start = len(self.f.getvalue())
@@ -1340,7 +1394,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
n.parent = node
self.set_pos_info(n, start, finish)
self.set_pos_info(node, start, finish)
self.indentLess(INDENT_PER_LEVEL)
self.indent_less(INDENT_PER_LEVEL)
self.prec = p
self.prune()
@@ -1376,7 +1430,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
else:
flat_elems.append(elem)
self.indentMore(INDENT_PER_LEVEL)
self.indent_more(INDENT_PER_LEVEL)
if len(node) > 3:
line_separator = ',\n' + self.indent
else:
@@ -1401,14 +1455,14 @@ class FragmentsWalker(pysource.SourceWalker, object):
n.parent = node.parent
self.set_pos_info(n, start, finish)
self.set_pos_info(node, start, finish)
self.indentLess(INDENT_PER_LEVEL)
self.indent_less(INDENT_PER_LEVEL)
self.prec = p
self.prune()
def engine(self, entry, startnode):
def template_engine(self, entry, startnode):
"""The format template interpetation engine. See the comment at the
beginning of this module for the how we interpret format specifications such as
%c, %C, and so on.
beginning of this module for how we interpret format
specifications such as %c, %C, and so on.
"""
# print("-----")
@@ -1445,8 +1499,8 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.write('%')
self.set_pos_info(node, start, len(self.f.getvalue()))
elif typ == '+': self.indentMore()
elif typ == '-': self.indentLess()
elif typ == '+': self.indent_more()
elif typ == '-': self.indent_less()
elif typ == '|': self.write(self.indent)
# no longer used, since BUILD_TUPLE_n is pretty printed:
elif typ == 'r': recurse_node = True
@@ -1557,25 +1611,14 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.set_pos_info(startnode, startnode_start, fin)
# FIXME rocky: figure out how to get these cases to be table driven.
#
# 1. for loops. For loops have two positions that correspond to a single text
# location. In "for i in ..." there is the initialization "i" code as well
# as the iteration code with "i". A "copy" spec like %X3,3 - copy param
# 3 to param 2 would work
#
# 2. subroutine calls. If the last op is the call, then for purposes of printing
# we don't need to print anything special there. However it encompasses the
# entire string of the node fn(...)
match = re.search(r'^try', startnode.type)
match = re.search(r'^call_function', startnode.type)
if match:
self.set_pos_info(node[0], startnode_start, startnode_start+len("try:"))
self.set_pos_info(node[2], node[3].finish, node[3].finish)
else:
match = re.search(r'^call_function', startnode.type)
if match:
last_node = startnode[-1]
# import traceback; traceback.print_stack()
self.set_pos_info(last_node, startnode_start, self.last_finish)
last_node = startnode[-1]
# import traceback; traceback.print_stack()
self.set_pos_info(last_node, startnode_start, self.last_finish)
return
@classmethod
@@ -1657,6 +1700,13 @@ def deparse_code(version, co, out=StringIO(), showasm=False, showast=False,
if deparsed.ERROR:
raise deparsed.ERROR
# To keep the API consistent with previous releases, convert
# deparse.offset values into NodeInfo items
for tup, node in deparsed.offsets.items():
deparsed.offsets[tup] = NodeInfo(node = node, start = node.start,
finish = node.finish)
deparsed.scanner = scanner
return deparsed
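A hedged usage sketch of the converted offsets table; the import path and attribute names follow this hunk, and the version arithmetic mirrors the deparse_test() helper further down:

    import sys
    from uncompyle6.semantics.fragments import deparse_code

    def show_fragments(co):
        version = sys.version_info.major + (sys.version_info.minor / 10.0)
        deparsed = deparse_code(version, co)
        source = deparsed.f.getvalue()
        # After the conversion above, each offsets value is a NodeInfo whose
        # .start and .finish give a text span in the deparsed source.
        for (name, offset), info in deparsed.offsets.items():
            print(name, offset, repr(source[info.start:info.finish]))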
from bisect import bisect_right
@@ -1691,6 +1741,7 @@ def deparse_code_around_offset(name, offset, version, co, out=StringIO(),
if __name__ == '__main__':
from uncompyle6 import IS_PYPY
def deparse_test(co, is_pypy=IS_PYPY):
sys_version = sys.version_info.major + (sys.version_info.minor / 10.0)
walk = deparse_code(sys_version, co, showasm=False, showast=False,

View File

@@ -3,7 +3,7 @@
"""
All the crazy things we have to do to handle Python functions
"""
from xdis.code import iscode
from xdis.code import iscode, code_has_star_arg, code_has_star_star_arg
from uncompyle6.scanner import Code
from uncompyle6.parsers.astnode import AST
from uncompyle6 import PYTHON3
@@ -45,17 +45,6 @@ def find_none(node):
return True
return False
# FIXME: put this in xdis
def code_has_star_arg(code):
"""Return True iff
the code object has a variable positional parameter (*args-like)"""
return (code.co_flags & 4) != 0
def code_has_star_star_arg(code):
"""Return True iff
The code object has a variable keyword parameter (**kwargs-like)."""
return (code.co_flags & 8) != 0
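The two helpers deleted here now come from xdis.code, per the changed import at the top of this file. A minimal hedged check of the co_flags bits they test:

    from xdis.code import code_has_star_arg, code_has_star_star_arg

    def f(*args, **kwargs):
        pass

    # Bit 0x04 of co_flags marks *args and bit 0x08 marks **kwargs, matching
    # the removed local definitions above.
    assert code_has_star_arg(f.__code__)
    assert code_has_star_star_arg(f.__code__)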
# FIXME: DRY the below code...
def make_function3_annotate(self, node, isLambda, nested=1,
@@ -163,6 +152,9 @@ def make_function3_annotate(self, node, isLambda, nested=1,
i = len(paramnames) - len(defparams)
suffix = ''
no_paramnames = len(paramnames[:i]) == 0
for param in paramnames[:i]:
self.write(suffix, param)
suffix = ', '
@@ -182,6 +174,7 @@ def make_function3_annotate(self, node, isLambda, nested=1,
suffix = ', ' if i > 0 else ''
for n in node:
if n == 'pos_arg':
no_paramnames = False
self.write(suffix)
param = paramnames[i]
self.write(param)
@@ -189,7 +182,11 @@ def make_function3_annotate(self, node, isLambda, nested=1,
aa = annotate_args[param]
if isinstance(aa, tuple):
aa = aa[0]
self.write(': "%s"' % aa)
self.write(': "%s"' % aa)
elif isinstance(aa, AST):
self.write(': ')
self.preorder(aa)
self.write('=')
i += 1
self.preorder(n)
@@ -202,64 +199,65 @@ def make_function3_annotate(self, node, isLambda, nested=1,
# self.println(indent, '#flags:\t', int(code.co_flags))
if kw_args + annotate_argc > 0:
if not code_has_star_arg(code):
if argc > 0:
self.write(", *, ")
else:
self.write("*, ")
pass
else:
self.write(", ")
kwargs = node[0]
last = len(kwargs)-1
i = 0
for n in node[0]:
if n == 'kwarg':
if (line_number != self.line_number):
self.write("\n" + indent)
line_number = self.line_number
self.write('%s=' % n[0].pattr)
self.preorder(n[1])
if i < last:
self.write(', ')
i += 1
if no_paramnames:
if not code_has_star_arg(code):
if argc > 0:
self.write(", *, ")
else:
self.write("*, ")
pass
pass
annotate_args = []
for n in node:
if n == 'annotate_arg':
annotate_args.append(n[0])
elif n == 'annotate_tuple':
t = n[0].attr
if t[-1] == 'return':
t = t[0:-1]
annotate_args = annotate_args[:-1]
pass
last = len(annotate_args) - 1
for i in range(len(annotate_args)):
self.write("%s: " % (t[i]))
self.preorder(annotate_args[i])
else:
self.write(", ")
kwargs = node[0]
last = len(kwargs)-1
i = 0
for n in node[0]:
if n == 'kwarg':
if (line_number != self.line_number):
self.write("\n" + indent)
line_number = self.line_number
self.write('%s=' % n[0].pattr)
self.preorder(n[1])
if i < last:
self.write(', ')
pass
i += 1
pass
break
pass
annotate_args = []
for n in node:
if n == 'annotate_arg':
annotate_args.append(n[0])
elif n == 'annotate_tuple':
t = n[0].attr
if t[-1] == 'return':
t = t[0:-1]
annotate_args = annotate_args[:-1]
pass
last = len(annotate_args) - 1
for i in range(len(annotate_args)):
self.write("%s: " % (t[i]))
self.preorder(annotate_args[i])
if i < last:
self.write(', ')
pass
pass
break
pass
pass
pass
if code_has_star_star_arg(code):
if argc > 0:
self.write(', ')
self.write('**%s' % code.co_varnames[argc + kw_pairs])
if code_has_star_star_arg(code):
if argc > 0:
self.write(', ')
self.write('**%s' % code.co_varnames[argc + kw_pairs])
if isLambda:
self.write(": ")
else:
self.write(')')
if 'return' in annotate_tuple[0].attr:
if (line_number != self.line_number):
if (line_number != self.line_number) and not no_paramnames:
self.write("\n" + indent)
line_number = self.line_number
self.write(' -> ')
@@ -402,7 +400,8 @@ def make_function2(self, node, isLambda, nested=1, codeNode=None):
if code_has_star_star_arg(code):
if argc > 0:
self.write(', ')
self.write('**%s' % code.co_varnames[argc + kw_pairs])
if argc + kw_pairs > 0:
self.write('**%s' % code.co_varnames[argc + kw_pairs])
if isLambda:
self.write(": ")
@@ -428,10 +427,34 @@ def make_function2(self, node, isLambda, nested=1, codeNode=None):
def make_function3(self, node, isLambda, nested=1, codeNode=None):
"""Dump function definition, doc string, and function body."""
"""Dump function definition, doc string, and function body in
Python version 3.0 and above
"""
# FIXME: call make_function3 if we are self.version >= 3.0
# and then simplify the below.
# For Python 3.3, the evaluation stack in MAKE_FUNCTION is:
# * default argument objects in positional order
# * pairs of name and default argument, with the name just below
# the object on the stack, for keyword-only parameters
# * parameter annotation objects
# * a tuple listing the parameter names for the annotations
# (only if there are any annotation objects)
# * the code associated with the function (at TOS1)
# * the qualified name of the function (at TOS)
# For Python 3.0 .. 3.2 the evaluation stack is:
# The function object is defined to have argc default parameters,
# which are found below TOS.
# * first come positional args in the order they are given in the source,
# * next come the keyword args in the order they are given in the source,
# * finally is the code associated with the function (at TOS)
#
# Note: There is no qualified name at TOS
# MAKE_CLOSURE adds an additional closure slot
# Thank you, Python, for such a well-thought-out system that has
# changed 4 or so times.
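For a concrete, hedged picture of the 3.3-style layout described above (the function and values are invented; ordering follows the comment, bottom of stack first):

    # def f(a, b=1, *, c=2) -> int: ...
    #
    # What MAKE_FUNCTION sees on the 3.3 evaluation stack, roughly:
    #   1                  default for b (positional defaults, in source order)
    #   'c', 2             name/default pair for the keyword-only parameter c
    #   int                annotation object for the return annotation
    #   ('return',)        tuple naming the annotated parameters
    #   <code object f>    the code associated with the function  (TOS1)
    #   'f'                the qualified name                     (TOS)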
def build_param(ast, name, default):
"""build parameters:
@@ -451,21 +474,31 @@ def make_function3(self, node, isLambda, nested=1, codeNode=None):
# MAKE_FUNCTION_... or MAKE_CLOSURE_...
assert node[-1].type.startswith('MAKE_')
# Python 3.3+ adds a qualified name at TOS (-1)
# moving down the LOAD_LAMBDA instruction
if 3.0 <= self.version <= 3.2:
lambda_index = -2
elif 3.03 <= self.version:
lambda_index = -3
else:
lambda_index = None
args_node = node[-1]
if isinstance(args_node.attr, tuple):
if self.version <= 3.3 and len(node) > 2 and node[-3] != 'LOAD_LAMBDA':
# positional args are after kwargs
pos_args, kw_args, annotate_argc = args_node.attr
if self.version <= 3.3 and len(node) > 2 and node[lambda_index] != 'LOAD_LAMBDA':
# args are after kwargs; kwargs are bundled as one node
defparams = node[1:args_node.attr[0]+1]
else:
# positional args are before kwargs
# args are before kwargs; kwargs are bundled as one node
defparams = node[:args_node.attr[0]]
pos_args, kw_args, annotate_argc = args_node.attr
else:
if self.version < 3.6:
defparams = node[:args_node.attr]
else:
default, kw, annotate, closure = args_node.attr
# FIXME: start here.
# FIXME: start here for Python 3.6 and above:
defparams = []
# if default:
# defparams = node[-(2 + kw + annotate + closure)]
@@ -475,12 +508,6 @@ def make_function3(self, node, isLambda, nested=1, codeNode=None):
kw_args = 0
pass
if 3.0 <= self.version <= 3.2:
lambda_index = -2
elif 3.03 <= self.version:
lambda_index = -3
else:
lambda_index = None
if lambda_index and isLambda and iscode(node[lambda_index].attr):
assert node[lambda_index].type == 'LOAD_LAMBDA'
@@ -496,7 +523,7 @@ def make_function3(self, node, isLambda, nested=1, codeNode=None):
paramnames = list(code.co_varnames[:argc])
# defaults are for last n parameters, thus reverse
if not 3.0 <= self.version <= 3.2:
if not 3.0 <= self.version <= 3.1:
paramnames.reverse(); defparams.reverse()
try:
@@ -510,58 +537,27 @@ def make_function3(self, node, isLambda, nested=1, codeNode=None):
return
kw_pairs = args_node.attr[1] if self.version >= 3.0 else 0
indent = self.indent
# build parameters
if self.version != 3.2:
params = [build_param(ast, name, default) for
name, default in zip_longest(paramnames, defparams, fillvalue=None)]
params = [build_param(ast, name, d) for
name, d in zip_longest(paramnames, defparams, fillvalue=None)]
if not 3.0 <= self.version <= 3.1:
params.reverse() # back to correct order
if code_has_star_arg(code):
if self.version > 3.0:
params.append('*%s' % code.co_varnames[argc + kw_pairs])
else:
params.append('*%s' % code.co_varnames[argc])
argc += 1
# dump parameter list (with default values)
if isLambda:
self.write("lambda ", ", ".join(params))
if code_has_star_arg(code):
if self.version > 3.0:
params.append('*%s' % code.co_varnames[argc + kw_pairs])
else:
self.write("(", ", ".join(params))
# self.println(indent, '#flags:\t', int(code.co_flags))
params.append('*%s' % code.co_varnames[argc])
argc += 1
# dump parameter list (with default values)
if isLambda:
self.write("lambda ", ", ".join(params))
else:
if isLambda:
self.write("lambda ")
else:
self.write("(")
pass
last_line = self.f.getvalue().split("\n")[-1]
l = len(last_line)
indent = ' ' * l
line_number = self.line_number
if code_has_star_arg(code):
self.write('*%s' % code.co_varnames[argc + kw_pairs])
argc += 1
i = len(paramnames) - len(defparams)
self.write(", ".join(paramnames[:i]))
suffix = ', ' if i > 0 else ''
for n in node:
if n == 'pos_arg':
self.write(suffix)
self.write(paramnames[i] + '=')
i += 1
self.preorder(n)
if (line_number != self.line_number):
suffix = ",\n" + indent
line_number = self.line_number
else:
suffix = ', '
self.write("(", ", ".join(params))
# self.println(indent, '#flags:\t', int(code.co_flags))
if kw_args > 0:
if not (4 & code.co_flags):

View File

@@ -11,62 +11,87 @@ and what they mean).
Upper levels of the grammar are a more-or-less conventional grammar for
Python.
Semantic action rules for nonterminal symbols can be specified here by
creating a method prefaced with "n_" for that nonterminal. For
example, "n_exec_stmt" handles the semantic actions for the
"exec_smnt" nonterminal symbol. Similarly if a method with the name
of the nonterminal is suffixed with "_exit" it will be called after
all of its children are called.
Another other way to specify a semantic rule for a nonterminal is via
rule given in one of the tables MAP_R0, MAP_R, or MAP_DIRECT.
These use a printf-like syntax to direct substitution from attributes
of the nonterminal and its children.
The rest of the below describes how table-driven semantic actions work
and gives a list of the format specifiers. The default() and engine()
methods implement most of the below.
Step 1 determines a table (T) and a path to a
table key (K) from the node type (N) (other nodes are shown as O):
N N N&K
/ | ... \ / | ... \ / | ... \
O O O O O K O O O
|
K
MAP_R0 (TABLE_R0) MAP_R (TABLE_R) MAP_DIRECT (TABLE_DIRECT)
The default is a direct mapping. The key K is then extracted from the
subtree and used to find a table entry T[K], if any. The result is a
format string and arguments (a la printf()) for the formatting engine.
Escapes in the format string are:
%c evaluate children N[A] recursively*
%C evaluate children N[A[0]]..N[A[1]-1] recursively, separate by A[2]*
%P same as %C but sets operator precedence
%D same as %C but is for left-recursive lists like kwargs which
goes to epsilon at the beginning. Using %C an extra separator
with an epsilon appears at the beginning
%, print ',' if last %C only printed one item. This is mostly for tuples
on the LHS of an assignment statement since BUILD_TUPLE_n pretty-prints
other tuples.
%| tab to current indentation level
%+ increase current indentation level
%- decrease current indentation level
%{...} evaluate ... in context of N
%% literal '%'
%p evaluate N setting precedence
* indicates an argument (A) required.
The '%' may optionally be followed by a number (C) in square brackets, which
makes the engine walk down to N[C] before evaluating the escape code.
"""
# The below is a bit long, but still it is somewhat abbreviated.
# See https://github.com/rocky/python-uncompyle6/wiki/Table-driven-semantic-actions.
# for a more complete explanation, nicely marked up and with examples.
#
#
# Semantic action rules for nonterminal symbols can be specified here by
# creating a method prefaced with "n_" for that nonterminal. For
# example, "n_exec_stmt" handles the semantic actions for the
# "exec_stmt" nonterminal symbol. Similarly if a method with the name
# of the nonterminal is suffixed with "_exit" it will be called after
# all of its children are called.
#
# However if this were done for all of the rules, this file would be even longer
# than it is already.
#
# Another more compact way to specify a semantic rule for a nonterminal is via a
# rule given in one of the tables MAP_R0, MAP_R, or MAP_DIRECT.
#
# These use a printf-like syntax to direct substitution from attributes
# of the nonterminal and its children.
#
# The rest of the below describes how table-driven semantic actions work
# and gives a list of the format specifiers. The default() and
# template_engine() methods implement most of the below.
#
# Step 1 determines a table (T) and a path to a
# table key (K) from the node type (N) (other nodes are shown as O):
#
# N&K N N
# / | ... \ / | ... \ / | ... \
# O O O O O K O O O
# |
# K
# TABLE_DIRECT TABLE_R TABLE_R0
#
# The default is a "TABLE_DIRECT" mapping. The key K is then extracted from the
# subtree and used to find a table entry T[K], if any. The result is a
# format string and arguments (a la printf()) for the formatting engine.
# Escapes in the format string are:
#
# %c evaluate the node recursively. Its argument is a single
# integer representing a node index.
# %p like %c but sets the operator precedence.
# Its argument then is a tuple indicating the node
# index and the precedence value, an integer.
#
# %C evaluate children recursively, with sibling children separated by the
# given string. It needs a 3-tuple: a starting node, the maximum
# value of an end node, and a string to be inserted between sibling children
#
# %, Append ',' if last %C only printed one item. This is mostly for tuples
# on the LHS of an assignment statement since BUILD_TUPLE_n pretty-prints
# other tuples. The specifier takes no arguments
#
# %P same as %C but sets operator precedence. Its argument is a 4-tuple:
# the node low and high indices, the separator string, and the precedence
# value, an integer.
#
# %D Same as `%C` but for left-recursive lists like kwargs, which go
# to epsilon at the beginning. It needs a 3-tuple: a starting node, the
# maximum value of an end node, and a string to be inserted between
# sibling children. If we were to use `%C` an extra separator with an
# epsilon would appear at the beginning.
#
# %| Insert spaces to the current indentation level. Takes no arguments.
#
# %+ increase current indentation level. Takes no arguments.
#
# %- decrease current indentation level. Takes no arguments.
#
# %{...} evaluate ... in context of N
#
# %% literal '%'. Takes no arguments.
#
#
# The '%' may optionally be followed by a number (C) in square
# brackets, which makes the template_engine walk down to N[C] before
# evaluating the escape code.
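As a worked example of the specifiers just described, using the 'ifstmt' entry from TABLE_DIRECT earlier in this diff (the rendered output is illustrative):

    # TABLE_DIRECT['ifstmt'] == ( '%|if %c:\n%+%c%-', 0, 1 )
    #
    #   %|   write the current indentation
    #   %c   recurse into child 0, the condition, e.g. "x > 0"
    #   %+   indent one level
    #   %c   recurse into child 1, the body suite
    #   %-   dedent again
    #
    # so an ifstmt node renders roughly as:
    #   if x > 0:
    #       return x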
from __future__ import print_function
import sys
@@ -124,6 +149,29 @@ class SourceWalker(GenericASTTraversal, object):
debug_parser=PARSER_DEFAULT_DEBUG,
compile_mode='exec', is_pypy=False,
linestarts={}):
"""version is the Python version (a float) of the Python dialect
of both the AST and language we should produce.
out is an IO-like file pointer to where the output should go. It
should have a getvalue() method.
scanner is a method to call when we need to scan tokens. Sometimes
in producing output we will run across further tokens that need
to be scanned.
If showast is True, we print the AST tree.
compile_mode is either 'exec' or 'single'. It is the compile
mode that was used to create the AST and specifies a grammar variant within
a Python version to use.
is_pypy should be True if the AST was generated for PyPy.
linestarts is a dictionary of line number to bytecode offset. This
can sometimes assist in determining which kind of source-code construct
to use when there is ambiguity.
"""
GenericASTTraversal.__init__(self, ast=None)
self.scanner = scanner
params = {
@@ -306,11 +354,18 @@ class SourceWalker(GenericASTTraversal, object):
# MAKE_FUNCTION ..
code = node[-3]
self.indentMore()
self.indent_more()
for annotate_last in range(len(node)-1, -1, -1):
if node[annotate_last] == 'annotate_tuple':
break
# FIXME: the real situation is that when derived from
# funcdef_annotate the name has been filled in.
# But when derived from funcdefdeco it hasn't. We would like a better
# way to distinguish.
if self.f.getvalue()[-4:] == 'def ':
self.write(code.attr.co_name)
# FIXME: handle and pass full annotate args
make_function3_annotate(self, node, isLambda=False,
codeNode=code, annotate_last=annotate_last)
@@ -319,7 +374,7 @@ class SourceWalker(GenericASTTraversal, object):
self.write('\n\n')
else:
self.write('\n\n\n')
self.indentLess()
self.indent_less()
self.prune() # stop recursing
self.n_mkfunc_annotate = n_mkfunc_annotate
@@ -354,13 +409,39 @@ class SourceWalker(GenericASTTraversal, object):
node.type == 'call_function'
p = self.prec
self.prec = 80
self.engine(('%c(%P)', 0, (1, -4, ', ', 100)), node)
self.template_engine(('%c(%P)', 0,
(1, -4, ', ', 100)), node)
self.prec = p
node.type == 'async_call_function'
self.prune()
self.n_async_call_function = n_async_call_function
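Reading the template passed to template_engine() above against the specifier list earlier in this diff (a hedged gloss, not new behavior):

    # ('%c(%P)', 0, (1, -4, ', ', 100)) expands as:
    #   %c  -> recurse into child 0, the callable being invoked
    #   (   -> literal open parenthesis
    #   %P  -> children 1 through -4, joined by ', ', rendered at precedence 100
    #   )   -> literal close parenthesis
    # so the async call prints as   <callee>(<arg>, <arg>, ...)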
self.n_build_list_unpack = self.n_build_list
if version == 3.5:
def n_call_function(node):
mapping = self._get_mapping(node)
table = mapping[0]
key = node
for i in mapping[1:]:
key = key[i]
pass
if key.type.startswith('CALL_FUNCTION_VAR_KW'):
# Python 3.5 changes the stack position of *args. kwargs come
# after *args whereas in earlier Pythons, *args is at the end
# which simplifies things from our perspective.
# Python 3.6+ replaces CALL_FUNCTION_VAR_KW with CALL_FUNCTION_EX
# We will just swap the order to make it look like earlier Python 3.
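# Illustrative sketch (the child labels are schematic, not exact
# grammar symbols): for a call like f(a, *args, x=1, y=2) the 3.5
# child order is roughly
#   expr(a), expr(*args), kwarg(x=1), kwarg(y=2), CALL_FUNCTION_VAR_KW
# and the swap below rearranges it to
#   expr(a), kwarg(x=1), kwarg(y=2), expr(*args), CALL_FUNCTION_VAR_KW
# which is the ordering the existing templates expect.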
entry = table[key.type]
kwarg_pos = entry[2][1]
args_pos = kwarg_pos - 1
# Put last node[args_pos] after subsequent kwargs
while kwarg_pos < len(node) and node[kwarg_pos] == 'kwarg':
# swap node[args_pos] with node[kwargs_pos]
node[kwarg_pos], node[args_pos] = node[args_pos], node[kwarg_pos]
args_pos = kwarg_pos
kwarg_pos += 1
self.default(node)
self.n_call_function = n_call_function
def n_funcdef(node):
if self.version == 3.6:
@@ -371,9 +452,11 @@ class SourceWalker(GenericASTTraversal, object):
is_code = hasattr(code_node, 'attr') and iscode(code_node.attr)
if (is_code and
(code_node.attr.co_flags & COMPILER_FLAG_BIT['COROUTINE'])):
self.engine(('\n\n%|async def %c\n', -2), node)
self.template_engine(('\n\n%|async def %c\n',
-2), node)
else:
self.engine(('\n\n%|def %c\n', -2), node)
self.template_engine(('\n\n%|def %c\n', -2),
node)
self.prune()
self.n_funcdef = n_funcdef
@@ -471,10 +554,10 @@ class SourceWalker(GenericASTTraversal, object):
super(SourceWalker, self).preorder(node)
self.set_pos_info(node)
def indentMore(self, indent=TAB):
def indent_more(self, indent=TAB):
self.indent += indent
def indentLess(self, indent=TAB):
def indent_less(self, indent=TAB):
self.indent = self.indent[:-len(indent)]
def traverse(self, node, indent=None, isLambda=False):
@@ -525,6 +608,8 @@ class SourceWalker(GenericASTTraversal, object):
if self.pending_newlines:
out = out[:-self.pending_newlines]
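# On Python 2, coerce byte strings to unicode (assuming UTF-8) so
# the underlying stream sees a consistent string type.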
if isinstance(out, str) and not PYTHON3:
out = unicode(out, 'utf-8')
self.f.write(out)
def println(self, *data):
@@ -547,26 +632,6 @@ class SourceWalker(GenericASTTraversal, object):
node == AST('return_stmt',
[AST('ret_expr', [NONE]), Token('RETURN_VALUE')]))
def n_continue_stmt(self, node):
if self.version >= 3.0 and node[0] == 'CONTINUE':
t = node[0]
if not t.linestart:
# Artificially-added "continue" statements derived from JUMP_ABSOLUTE
# don't have line numbers associated with them.
# If this CONTINUE jumps to the same target as a JUMP_ABSOLUTE following it,
# then the "continue" can be suppressed.
op, offset = t.op, t.offset
next_offset = self.scanner.next_offset(op, offset)
scanner = self.scanner
code = scanner.code
if next_offset < len(code):
next_inst = code[next_offset]
if (scanner.opc.opname[next_inst] == 'JUMP_ABSOLUTE'
and t.pattr == code[next_offset+1]):
# Suppress "continue"
self.prune()
self.default(node)
def n_return_stmt(self, node):
if self.params['isLambda']:
self.preorder(node[0])
@@ -810,9 +875,9 @@ class SourceWalker(GenericASTTraversal, object):
self.write(self.indent, 'if ')
self.preorder(node[0])
self.println(':')
self.indentMore()
self.indent_more()
self.preorder(node[1])
self.indentLess()
self.indent_less()
if_ret_at_end = False
if len(return_stmts_node[0]) >= 3:
@@ -831,14 +896,14 @@ class SourceWalker(GenericASTTraversal, object):
prev_stmt_is_if_ret = False
if not past_else and not if_ret_at_end:
self.println(self.indent, 'else:')
self.indentMore()
self.indent_more()
past_else = True
self.preorder(n)
if not past_else or if_ret_at_end:
self.println(self.indent, 'else:')
self.indentMore()
self.indent_more()
self.preorder(return_stmts_node[1])
self.indentLess()
self.indent_less()
self.prune()
n_ifelsestmtr2 = n_ifelsestmtr
@@ -860,17 +925,17 @@ class SourceWalker(GenericASTTraversal, object):
self.write(self.indent, 'elif ')
self.preorder(node[0])
self.println(':')
self.indentMore()
self.indent_more()
self.preorder(node[1])
self.indentLess()
self.indent_less()
for n in return_stmts_node[0]:
n[0].type = 'elifstmt'
self.preorder(n)
self.println(self.indent, 'else:')
self.indentMore()
self.indent_more()
self.preorder(return_stmts_node[1])
self.indentLess()
self.indent_less()
self.prune()
def n_import_as(self, node):
@@ -911,14 +976,14 @@ class SourceWalker(GenericASTTraversal, object):
func_name = code_node.attr.co_name
self.write(func_name)
self.indentMore()
self.indent_more()
self.make_function(node, isLambda=False, codeNode=code_node)
if len(self.param_stack) > 1:
self.write('\n\n')
else:
self.write('\n\n\n')
self.indentLess()
self.indent_less()
self.prune() # stop recursing
def make_function(self, node, isLambda, nested=1,
@@ -1389,9 +1454,9 @@ class SourceWalker(GenericASTTraversal, object):
self.println(':')
# class body
self.indentMore()
self.indent_more()
self.build_class(subclass_code)
self.indentLess()
self.indent_less()
self.currentclass = cclass
if len(self.param_stack) > 1:
@@ -1462,7 +1527,7 @@ class SourceWalker(GenericASTTraversal, object):
p = self.prec
self.prec = 100
self.indentMore(INDENT_PER_LEVEL)
self.indent_more(INDENT_PER_LEVEL)
sep = INDENT_PER_LEVEL[:-1]
self.write('{')
line_number = self.line_number
@@ -1600,7 +1665,7 @@ class SourceWalker(GenericASTTraversal, object):
if sep.startswith(",\n"):
self.write(sep[1:])
self.write('}')
self.indentLess(INDENT_PER_LEVEL)
self.indent_less(INDENT_PER_LEVEL)
self.prec = p
self.prune()
@@ -1651,7 +1716,7 @@ class SourceWalker(GenericASTTraversal, object):
else:
flat_elems.append(elem)
self.indentMore(INDENT_PER_LEVEL)
self.indent_more(INDENT_PER_LEVEL)
sep = ''
for elem in flat_elems:
@@ -1676,7 +1741,7 @@ class SourceWalker(GenericASTTraversal, object):
if lastnode.attr == 1 and lastnodetype.startswith('BUILD_TUPLE'):
self.write(',')
self.write(endchar)
self.indentLess(INDENT_PER_LEVEL)
self.indent_less(INDENT_PER_LEVEL)
self.prec = p
self.prune()
@@ -1727,11 +1792,12 @@ class SourceWalker(GenericASTTraversal, object):
node[-2][0].type = 'unpack_w_parens'
self.default(node)
def engine(self, entry, startnode):
def template_engine(self, entry, startnode):
"""The format template interpetation engine. See the comment at the
beginning of this module for the how we interpret format specifications such as
%c, %C, and so on.
beginning of this module for the how we interpret format
specifications such as %c, %C, and so on.
"""
# self.println("----> ", startnode.type, ', ', entry[0])
fmt = entry[0]
arg = 1
@@ -1750,10 +1816,10 @@ class SourceWalker(GenericASTTraversal, object):
if typ == '%': self.write('%')
elif typ == '+':
self.line_number += 1
self.indentMore()
self.indent_more()
elif typ == '-':
self.line_number += 1
self.indentLess()
self.indent_less()
elif typ == '|':
self.line_number += 1
self.write(self.indent)
@@ -1764,10 +1830,9 @@ class SourceWalker(GenericASTTraversal, object):
node[0].attr == 1):
self.write(',')
elif typ == 'c':
if isinstance(entry[arg], int):
entry_node = node[entry[arg]]
self.preorder(entry_node)
arg += 1
entry_node = node[entry[arg]]
self.preorder(entry_node)
arg += 1
elif typ == 'p':
p = self.prec
(index, self.prec) = entry[arg]
@@ -1834,7 +1899,7 @@ class SourceWalker(GenericASTTraversal, object):
pass
if key.type in table:
self.engine(table[key.type], node)
self.template_engine(table[key.type], node)
self.prune()
def customize(self, customize):
@@ -1858,7 +1923,7 @@ class SourceWalker(GenericASTTraversal, object):
'CALL_FUNCTION_VAR_KW', 'CALL_FUNCTION_KW'):
if v == 0:
str = '%c(%C' # '%C' is a dummy here ...
p2 = (0, 0, None) # .. because of this
p2 = (0, 0, None) # .. because of the None in this
else:
str = '%c(%C, '
p2 = (1, -2, ', ')
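# Sketch of the effect (the closing part of the template is assembled
# further down, outside this hunk): with v == 0 the '%C' spans the
# empty range (0, 0) and writes nothing between the parentheses, while
# with v > 0 it writes children 1 through -2 separated by ", ",
# e.g. rendering roughly as "name(a, b, ...)".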


@@ -45,7 +45,7 @@ BIN_OP_FUNCS = {
'BINARY_OR': operator.or_,
}
JUMP_OPs = None
JUMP_OPS = None
# --- exceptions ---
@@ -227,8 +227,8 @@ def cmp_code_objects(version, is_pypy, code_obj1, code_obj2,
import uncompyle6.scanners.scanner36 as scan
scanner = scan.Scanner36()
global JUMP_OPs
JUMP_OPs = list(scan.JUMP_OPs) + ['JUMP_BACK']
global JUMP_OPS
JUMP_OPS = list(scan.JUMP_OPS) + ['JUMP_BACK']
# use changed Token class
# We (re)set this here to save exception handling,
@@ -333,7 +333,7 @@ def cmp_code_objects(version, is_pypy, code_obj1, code_obj2,
else:
raise CmpErrorCode(name, tokens1[i1].offset, tokens1[i1],
tokens2[i2], tokens1, tokens2)
elif tokens1[i1].type in JUMP_OPs and tokens1[i1].pattr != tokens2[i2].pattr:
elif tokens1[i1].type in JUMP_OPS and tokens1[i1].pattr != tokens2[i2].pattr:
if tokens1[i1].type == 'JUMP_BACK':
dest1 = int(tokens1[i1].pattr)
dest2 = int(tokens2[i2].pattr)
@@ -396,7 +396,7 @@ class Token(scanner.Token):
return 0
if t == 'JUMP_IF_FALSE_OR_POP' and o.type == 'POP_JUMP_IF_FALSE':
return 0
if JUMP_OPs and t in JUMP_OPs:
if JUMP_OPS and t in JUMP_OPS:
# ignore offset
return t == o.type
return (t == o.type) or self.pattr == o.pattr


@@ -1,3 +1,3 @@
# This file is suitable for sourcing inside bash as
# well as importing into Python
VERSION='2.10.1'
VERSION='2.12.0'