Compare commits

..

43 Commits

Author SHA1 Message Date
rocky
d4dab54c7b Merge branch 'master' into python-2.4 2017-05-30 02:18:57 -04:00
rocky
5566b9ba6c Get ready for release 2.9.11 2017-05-06 07:49:09 -04:00
rocky
e56ab2dcd5 Sync with master 2017-05-06 07:17:04 -04:00
rocky
d6c45979ba Merge branch 'master' into python-2.4 2017-05-06 07:16:39 -04:00
rocky
a06e9bf32e Merge branch 'master' into python-2.4 2017-04-14 05:45:53 -04:00
rocky
7e8f7ba674 namedtuple25 -> namedtuple24 2017-04-14 05:42:44 -04:00
rocky
09eb7f7f78 Merge branch 'master' into python-2.4 2017-04-10 00:48:04 -04:00
rocky
f7a910ec66 Merge branch 'master' into python-2.4 2017-03-01 05:55:26 -05:00
rocky
6d6a73eea7 Merge branch 'master' into python-2.4 2017-02-25 21:02:12 -05:00
rocky
e4a7641927 Python <= 2.6 grammar fixes 2017-02-25 05:13:19 -05:00
rocky
b24b46d48c Merge branch 'master' into python-2.4 2017-02-25 04:48:06 -05:00
rocky
a65d7dce5b Python 2.5 was missing try else stmt 2017-02-22 05:30:07 -05:00
rocky
718a0a5d34 Merge branch 'master' into python-2.4 2017-02-22 05:29:49 -05:00
rocky
ea9e3ab3f5 Group coverage Makefile targets 2017-02-10 01:00:26 -05:00
rocky
770e988ff8 Changes based on coverage information 2017-01-29 22:54:30 -05:00
rocky
0fa0641974 Merge branch 'master' into python-2.4 2017-01-29 22:05:55 -05:00
rocky
c13e23cdae Get ready for release 2.9.9 2017-01-11 21:52:20 -05:00
rocky
fab4ebb768 Merge changes ...
* str() in Python 2.4 doesn't detect unicode.
* index() doesn't work on tuples
* ifelse change
2017-01-11 19:34:28 -05:00
rocky
89429339fa Merge branch 'master' into python-2.4 2017-01-11 19:25:44 -05:00
rocky
6ed129bd7a 2.4 verify hacks 2017-01-02 07:15:46 -05:00
rocky
c4fde6b53e Merge branch 'master' into python-2.4 2017-01-02 05:39:50 -05:00
rocky
a7d93e88b4 Merge branch 'master' into python-2.4 2017-01-02 05:39:13 -05:00
rocky
9891494142 We are version 2.9.9 2016-12-31 18:16:23 -05:00
rocky
f8544dfbbe 2.7->2.4 conversion 2016-12-31 10:56:43 -05:00
rocky
b00651d428 Merge master branch
Handle 2.2 list_if
2016-12-31 05:19:21 -05:00
rocky
da8dccbaca Merge branch 'master' into python-2.4 2016-12-29 02:08:12 -05:00
rocky
37272ae827 Merge commit '9b1dd0f' into python-2.4 2016-12-27 10:32:25 -05:00
rocky
7f2bee46b7 Bug in using python2 ast checking in python 2.5 2016-12-26 01:55:16 -05:00
rocky
c8a4dcf72b Removing NAME_MODULE, lint and bug fixes
scanner*.py: show_asm param is optional
verify.py: call correct scanners
main.py, verify.py: Use older Python print statements
2016-12-25 09:16:04 -05:00
rocky
012ff91cfb Merge branch 'master' into python-2.4 2016-12-25 07:57:17 -05:00
rocky
e690ddd50a Merge branch 'master' into python-2.4 2016-12-18 07:43:15 -05:00
rocky
45b7c1948c show-asm on python2.5 is optional
Make scanner2 a little more like scanner3.
2016-12-17 07:57:31 -05:00
rocky
e2fb7ca3d2 Python 2.6/2.7 tolerance in Python 2.4 branch 2016-12-17 06:51:47 -05:00
rocky
b3bda76582 Merge branch 'master' into python-2.4 2016-12-16 22:56:07 -05:00
rocky
ab6d322eca Get ready for release 2.9.7 2016-12-04 14:09:53 -05:00
rocky
1a8a0df107 Merge branch 'master' into python-2.4 2016-12-04 13:40:06 -05:00
rocky
0a37709b0a CircleCI build 2016-11-24 05:41:31 -05:00
rocky
98cd1417df Remove dup Python 3 grammar rule 2016-11-24 05:36:43 -05:00
rocky
460069ceaa Bug in 2.4 "if" dectection and...
Wrong language used in old-style exceptions: use "except Error,e" not
"except Error(e)""
2016-11-24 05:15:35 -05:00
rocky
316aa44f23 Python 2.6 grammar bug and..
__pkginfo__.py: Bump spark_parser version for parse_flags 'dups'
2016-11-24 04:09:32 -05:00
rocky
7133540c23 Make work on 2.4 2016-11-23 08:26:12 -05:00
rocky
590231741d Merge branch 'come-from-type' into python-2.4 2016-11-23 07:54:18 -05:00
rocky
a9349b8f3d Making it run on Python 2.4 and 2.5 2016-11-23 07:53:51 -05:00
57 changed files with 688 additions and 1149 deletions


@@ -3,16 +3,10 @@ language: python
sudo: false
python:
- '3.5'
- '2.7.12'
- '2.6'
- '3.3'
- '3.4'
- '3.2'
- '3.6'
- '2.7' # this is a cheat here because travis doesn't do 2.4-2.6
install:
- pip install -e .
- pip install -r requirements.txt
- pip install -r requirements-dev.txt
script:

608
ChangeLog

@@ -1,384 +1,3 @@
2017-06-25 rocky <rb@dustyfeet.com>
* uncompyle6/version.py: Get ready for release 2.11.1
2017-06-24 rocky <rb@dustyfeet.com>
* __pkginfo__.py, uncompyle6/scanner.py,
uncompyle6/scanners/scanner2.py, uncompyle6/scanners/scanner3.py,
uncompyle6/scanners/scanner30.py, uncompyle6/semantics/pysource.py:
Use xdis' instruction offset calculation fns.. next_offset, op_size, has_argument
2017-06-19 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: Python 2 sometimes needs
str->unicode in writing?
2017-06-19 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: Allow deparsed out to be str as
well as unicode
2017-06-18 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, uncompyle6/version.py: Get ready for release
2.11.0
2017-06-13 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Adjust nodeInfo if it is a
Token
2017-06-13 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Add nonterminal node in
extractInfo
2017-06-10 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py,
uncompyle6/semantics/make_function.py: Fragment tag more expressions Revise make_function3 comment wrt args and kwargs
2017-06-10 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Fragment tag array subscripts
2017-06-10 R. Bernstein <rocky@users.noreply.github.com>
* README.rst: Create README.rst
2017-06-10 R. Bernstein <rocky@users.noreply.github.com>
* README.rst: Create README.rst
2017-06-10 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Set YIELD_VALUE offset in a
<yield> expr
2017-06-10 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/make_function.py: Python 3.2 MAKE_FUNCTION
again.. Was handling bug32/01_named_and_kwargs.py wrong again
2017-06-09 R. Bernstein <rocky@users.noreply.github.com>
* : Merge pull request #119 from rocky/scan-longconstant Simplify access to L65536 ...
2017-06-09 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/make_function.py: Attempt to document the
MAKE_FUNCTION/MAKE_LAMBDA mess... in Python 3.0+
2017-06-08 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/make_function.py: Correct make_function3 for
Python 3.2
2017-06-08 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: Disable "continue" removal in
pysource.py "continue" could be the only statement and then removing it might
lead to a dangling "else".
2017-06-07 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Mark "pass" offsets. Start routine to find previous node.
2017-06-06 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py, uncompyle6/semantics/fragments.py:
Remove hacky fragments try fixup... hacky call_function code is also not needed or will be reinstated
properly. Better grammar structure for Python 3.6 call_function.
2017-06-05 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py, uncompyle6/parsers/parse36.py,
uncompyle6/scanners/scanner36.py: BUILD_{MAP,TUPLE}_UNPACK &
CALL_FUNCTION_EX_KW... Bang on these in 3.6. Not totally successful right now. In fact a
regression on one of the test cases
2017-06-05 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Important fragments bug fix... start, finish that had been adjusted wasn't getting reflected in
final returned deparsed.offsets dictionary. Redo keeping API
compatibility, i.e we still use namedtuple NodeInfo.
2017-06-04 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py, uncompyle6/semantics/pysource.py:
Python 3.5 *args with kwargs handling. 3.5 is a snowflake here. Thank you, Python. Fully fixes Issue 95. 3.6 is broken on this source, but for a *different* reason. Sigh.
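
As a hedged illustration of the call pattern this entry refers to (an invented example, not the repository's test case): a call that mixes positional arguments, *args and **kwargs is compiled by Python 3.5 with CALL_FUNCTION_VAR_KW and a different stack layout than 3.4 used.

def report(fmt, *args, **kwargs):
    return fmt.format(*args, **kwargs)

values = (1, 2)
options = {'x': 3}
# Positional, *args and **kwargs together in one call site is what
# exercises the 3.5-specific bytecode ordering.
print(report("{0}-{1}-{x}", *values, **options))
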
2017-06-03 rocky <rb@dustyfeet.com>
* README.rst, __pkginfo__.py,
test/simple_source/bug35/04_CALL_FUNCTION_VAR_KW.py,
uncompyle6/semantics/fragments.py: Small changes. fragment tag EXEC_STMT
2017-06-03 rocky <rb@dustyfeet.com>
* .travis.yml: Streamline .travis.yml a little bit
2017-06-03 rocky <rb@dustyfeet.com>
* __pkginfo__.py: We need six
2017-06-03 rocky <rb@dustyfeet.com>
* README.rst, circle.yml, requirements-dev.txt: Go over
administrivia
2017-06-03 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, uncompyle6/version.py: Get ready for release
2.10.1
2017-06-03 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py,
uncompyle6/semantics/pysource.py: Fragment bugs fragment.py: * deparse_code_around_offset: was sometimes returning the wrong type * capture function name offset * lint imports pysource.py: use a clearer variable name
2017-06-02 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Track changes in ifelstmtr.. in fragments from pysource
2017-05-30 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, uncompyle6/version.py: Get ready for release
2.10.0
2017-05-30 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py: Python 3.6 makefunction
handling for fragments
2017-05-23 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: Fix up 3.6 unmapexpr
2017-05-23 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: Fix up retrieving "async"
property on 3.6
2017-05-23 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: Fix bug in a 3.6 class name.
2017-05-23 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py,
uncompyle6/semantics/pysource.py: Add fuzzy offset deparse lookup
2017-05-21 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/scanner3.py: Correct EXTENDED_ARG handling on
Python 3.6... where it can appear several times and xdis may handle it as well.
This is possibly a bug in other versions too, but since EXTENDED_ARG is
used so rarely there (it has such a high value, 1<<16), it's hard to
test and determine that.
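
A small sketch of the EXTENDED_ARG arithmetic discussed here (illustrative code, not uncompyle6's scanner; the function name is invented): before 3.6 an oparg is 16 bits wide, so EXTENDED_ARG only appears for arguments of 1<<16 and up, while 3.6's wordcode uses one-byte opargs, so the threshold drops to 256.

def effective_arg(extended_arg, oparg, version):
    # Fold a preceding EXTENDED_ARG into the following opcode's argument.
    shift = 8 if version >= 3.6 else 16
    return (extended_arg << shift) | oparg

assert effective_arg(1, 2, 3.5) == (1 << 16) + 2
assert effective_arg(1, 2, 3.6) == (1 << 8) + 2
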
2017-05-20 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: Worse results. Revert some of
the last changes
2017-05-20 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py, uncompyle6/semantics/pysource.py:
More explicit about 3.5 UNMAP_PACK Have to reduce 3.5 bytecode testing for now, code is more solid.
2017-05-19 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py, uncompyle6/parsers/parse36.py,
uncompyle6/scanners/scanner3.py: Simplify EXTENDED_ARG on 3.x We largely remove them and fold them into the next op.
MAKE_FUNCTION though before 3.6 is an exception as that indicates an
annotated function
2017-05-19 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/scanner26.py: EXTENDED_ARG is implemented in
2.6
2017-05-19 rocky <rb@dustyfeet.com>
* test/simple_source/expression/06_huge_list.py,
uncompyle6/parsers/parse3.py, uncompyle6/semantics/pysource.py: Fix
EXTENDED_ARG for long lists, sets, maps
2017-05-19 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/scanner3.py: Another attempt at getting
get_target() correct
2017-05-19 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py: Bug in pypy JUMP_IF_NOT_DEBUG
handling
2017-05-19 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse36.py, uncompyle6/scanners/scanner3.py:
EXTENDED_ARG handling... get_target() wasn't taking into account EXTENDED_ARG before opcode. This is mostly relevant in Python 3.6 where the max size before
needing EXTENDED_ARG has been reduced to 256, but theoretically
possible in earlier versions.
2017-05-18 rocky <rb@dustyfeet.com>
* __pkginfo__.py: Enforce using xdis >=3.3.1 .. to pick up bug fixes to 3.6 in xdis
2017-05-17 rocky <rb@dustyfeet.com>
* __pkginfo__.py, uncompyle6/parsers/parse36.py,
uncompyle6/scanners/scanner3.py: Small changes.... * __pkginfo__.py: Need spark parser 1.6.1 for corrected
remove_rules() fn * parser36.py: remove replaced Python3 rules * scanner3.py: corrected comment. Thanks to moagstar here. *
2017-05-16 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse36.py: Fix broken CI on 3.6... Another grammar rule replacing SETUP_LOOP with setup_loop
2017-05-16 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse36.py: More EXTENDED_ARGS on 3.6
2017-05-16 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse36.py: extend use of EXTENDED_ARGS in 3.6 switching to a wordcode seems to have made opcode fields smaller so
we need EXTENDED_ARG more?
2017-05-16 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse36.py, uncompyle6/semantics/pysource.py:
Allow LOAD_CONST EXTENDED_ARG
2017-05-15 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py: Reinstate 3.6 listcomp rule
2017-05-15 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py: Bang on 3.6 MAKE_FUNCTION some more
2017-05-14 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: towards fixing a
3.5 CALL_FUNCTION_VAR bug
2017-05-14 rocky <rb@dustyfeet.com>
* test/simple_source/bug35/04_CALL_FUNCTION_VAR_KW.py,
uncompyle6/parsers/parse3.py: Python 3.5 kw arg can be an expr Fixes Issue #95
2017-05-14 R. Bernstein <rocky@users.noreply.github.com>
* : Merge pull request #117 from rocky/3.6-MAKE_FUNCTION 3.6 make function
2017-05-13 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/scanner3.py: MAKE_FUNCTION_FLAGS can be a
simpler tuple
2017-05-13 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py: Grammar rules for Python 3.6
MAKE_FUNCTION
2017-05-13 rocky <rb@dustyfeet.com>
* README.rst, uncompyle6/parsers/parse3.py,
uncompyle6/parsers/parse36.py, uncompyle6/semantics/pysource.py:
Bang on 3.6 MAKE_FUNCTION a bit more parse3.py, parse36.py: adding return_closure rule tags what's going
on with this rule pysource.py: start changing semantic rules to support code changed
by new make_function semantics README.rst: typo
2017-05-13 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/scanner3.py: Typo
2017-05-12 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse27.py: Bug in 2.7 decompiling ourself! Troublesome file was uncompyle6.semantics.pysource.engine()
2017-05-11 R. Bernstein <rocky@users.noreply.github.com>
* : Merge pull request #113 from grkov90/patch-1 Fixed out_base bug
2017-05-11 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py, uncompyle6/scanners/scanner3.py,
uncompyle6/semantics/make_function.py: WIP: start 3.6 MAKE_FUNCTION
handling
2017-05-11 Daniel Bradburn <moagstar@gmail.com>
* : Merge pull request #116 from moagstar/function_call_keyword_only Added support for Python 3.6 CALL_FUNCTION_KW
2017-05-10 Daniel Bradburn <moagstar@gmail.com>
* uncompyle6/semantics/pysource.py: Fixed bug in compiling double
star arg only function calls where the closing parenthesis would be
missed
2017-05-10 Daniel Bradburn <moagstar@gmail.com>
* requirements-dev.txt: Adding requirement for pytest >= 3.0 to fix
strange INTERNALERROR in combination with hypothesis when using
pytest 2.6.4
2017-05-10 Daniel Bradburn <moagstar@gmail.com>
* pytest/test_CALL_FUNCTION_KW.sh, pytest/test_function_call.py,
uncompyle6/parsers/parse3.py, uncompyle6/parsers/parse36.py,
uncompyle6/scanners/scanner36.py, uncompyle6/semantics/pysource.py:
Added support for Python 3.6 CALL_FUNCTION_KW
2017-05-08 rocky <rb@dustyfeet.com>
* appveyor.yml, test/test_pyenvlib.py,
uncompyle6/semantics/pysource.py: pysource guard and another
appveyor test
2017-05-08 rocky <rb@dustyfeet.com>
* appveyor.yml: appveyor take 2
2017-05-08 rocky <rb@dustyfeet.com>
* appveyor.yml, appveyor/install.ps1, appveyor/run_with_env.cmd: Try
appveyor
2017-05-07 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: More guarded CONTINUE deletion
2017-05-07 rocky <rb@dustyfeet.com>
* uncompyle6/scanner.py, uncompyle6/scanners/scanner3.py,
uncompyle6/semantics/pysource.py: Reduce spurious "continue"
statements
2017-05-07 rocky <rb@dustyfeet.com>
* test/Makefile: --weak-verify on 3.3 with inclusion of last commit. Note that the result is semantically equivalent, so it is correct.
2017-05-07 rocky <rb@dustyfeet.com>
* test/simple_source/looping/12_if_while_true_pass.py,
uncompyle6/scanners/scanner3.py: Python 3.x control-flow bug... "pass" statement inside "while True"
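
The kind of source the referenced test file covers, as a rough hypothetical reconstruction rather than its exact contents: a bare "pass" body inside "while True", which used to confuse the 3.x control-flow detection.

import sys

if len(sys.argv) > 1:   # guard so the illustration doesn't actually spin
    while True:
        pass
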
2017-05-07 rocky <rb@dustyfeet.com>
* HOW-TO-REPORT-A-BUG.md: Small typo
2017-05-07 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/scanner3.py: Fix improper COME_FROM_EXCEPT in
Python 3.3+
2017-05-06 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse33.py: python 3.3 while True parsing bug
2017-05-06 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, uncompyle6/version.py: Get ready for release
@@ -386,13 +5,14 @@
2017-05-06 rocky <rb@dustyfeet.com>
* test/Makefile: fix PYTHON variable setting in test/Makefile
* test/Makefile, uncompyle6/scanners/scanner2.py,
uncompyle6/scanners/scanner3.py, uncompyle6/semantics/fragments.py:
Sync with master
2017-05-06 rocky <rb@dustyfeet.com>
* test/simple_source/bug32/01_try_except_raise.py,
test/simple_source/bug32/03_if.py, uncompyle6/parsers/parse32.py,
uncompyle6/parsers/parse33.py: Fix more Python3.2 parser errors
* : commit 4a4782290490187ac2fcaaecd3ca808f933722b2 Author: rocky
<rb@dustyfeet.com> Date: Sat May 6 05:25:56 2017 -0400
2017-05-05 rocky <rb@dustyfeet.com>
@@ -403,14 +23,6 @@
* .travis.yml: Try CI testing on Python 3.6
2017-05-03 Gregory <grkov90@gmail.com>
* uncompyle6/main.py: Some fix
2017-05-03 Gregory <grkov90@gmail.com>
* uncompyle6/main.py: Fixed out_base bug: the filename variable handling meant that the uncompyle6 -o argument didn't work.
2017-05-02 rocky <rb@dustyfeet.com>
* test/simple_source/bug35/01_map_unpack.py, uncompyle6/parser.py,
@@ -580,6 +192,11 @@
haphazard way using real flow-control analysis. Hopefully that's on
the way. In the meantime we have this hack.
2017-04-14 rocky <rb@dustyfeet.com>
* : commit 7e8f7ba67431725fceec08344934c929a517efc5 Author: rocky
<rb@dustyfeet.com> Date: Fri Apr 14 05:42:44 2017 -0400
2017-04-14 rocky <rb@dustyfeet.com>
* test/simple_source/bug27+/03_if_1_else.py,
@@ -644,9 +261,10 @@
uncompyle6/parsers/parse2.py, uncompyle6/parsers/parse3.py,
uncompyle6/parsers/parse35.py: Add more while1else grammar rules Towards addressing issue #93
2017-04-09 rocky <rb@dustyfeet.com>
2017-04-10 rocky <rb@dustyfeet.com>
* : One more FUNCTION_VAR test for 3.3
* : commit b9703cf6b41138b717c282fc791c08d807692b07 Author: rocky
<rb@dustyfeet.com> Date: Sun Apr 9 06:58:41 2017 -0400
2017-04-09 rocky <rb@dustyfeet.com>
@@ -757,11 +375,8 @@
2017-03-01 rocky <rb@dustyfeet.com>
* uncompyle6/scanner.py, uncompyle6/scanners/scanner2.py,
uncompyle6/scanners/scanner3.py, uncompyle6/verify.py: COME_FROM for
3.x POP_EXCEPT, DRY with op_name() ... Start adding COME_FROMs for POP_EXCEPT in preparation for getting
tryelse blocks correct. Simpler opname access functions: - self.op_name(op) is self.opc.opname[op] - self.op_name_from_offset(offset) is
self.opc.opname[self.code[offset]] verify.py: not all offsets are ints
* : commit 160ec0d9cc5fe347f6e8bdb69515a28c76cfb368 Author: rocky
<rb@dustyfeet.com> Date: Wed Mar 1 05:50:31 2017 -0500
2017-02-28 rocky <rb@dustyfeet.com>
@@ -785,13 +400,17 @@
2017-02-25 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, __pkginfo__.py, uncompyle6/version.py: Get ready
for release 2.9.10
* : commit 1e3ea60055027dfd3f098661ac4f5979c5c48f7e Author: rocky
<rb@dustyfeet.com> Date: Sat Feb 25 20:18:03 2017 -0500
2017-02-25 rocky <rb@dustyfeet.com>
* uncompyle6/parser.py, uncompyle6/parsers/parse26.py: Python 2.6
parsing bugs .. and some parser list nonterminal cleanup
* uncompyle6/parser.py: Python <= 2.6 grammar fixes
2017-02-25 rocky <rb@dustyfeet.com>
* : commit 2fbbc728b10f0d3a754165708584bd80d33bc7f9 Author: rocky
<rb@dustyfeet.com> Date: Sat Feb 25 04:45:10 2017 -0500
2017-02-24 rocky <rb@dustyfeet.com>
@@ -805,9 +424,15 @@
uncompyle6/parsers/parse25.py: Python 2.5 wasn't handling tryelse
properly
2017-02-20 rocky <rb@dustyfeet.com>
2017-02-22 rocky <rb@dustyfeet.com>
* : New test doesn't --verify correctly. Sigh.
* test/Makefile, test/simple_source/bug25/02_try_else.py,
uncompyle6/parsers/parse25.py: Python 2.5 was missing try else stmt
2017-02-22 rocky <rb@dustyfeet.com>
* : commit b043f6bafc9b9ae26e64dc0f1441d7abae894c37 Author: rocky
<rb@dustyfeet.com> Date: Mon Feb 20 09:22:01 2017 -0500
2017-02-20 rocky <rb@dustyfeet.com>
@@ -846,6 +471,10 @@
* test/simple_source/bug22/01_ops.py, test/test_pythonlib.py: Beef
up grammar coverage
2017-02-10 rocky <rb@dustyfeet.com>
* test/Makefile: Group coverage Makefile targets
2017-01-29 rocky <rb@dustyfeet.com>
* test/Makefile, test/simple_source/bug22/01_ops.py,
@@ -853,9 +482,24 @@
uncompyle6/semantics/pysource.py: Changes based on grammar coverage
info
2017-01-29 R. Bernstein <rocky@users.noreply.github.com>
2017-01-29 rocky <rb@dustyfeet.com>
* : Merge pull request #83 from rocky/coverage Coverage
* test/Makefile, test/simple_source/bug22/01_ops.py,
uncompyle6/parsers/parse25.py, uncompyle6/semantics/consts.py,
uncompyle6/semantics/pysource.py: Changes based on coverage
information
2017-01-29 rocky <rb@dustyfeet.com>
* : commit 9348411056cbe809e07c4ef341effa17bca90e2f Merge: 3dc766d
e71dd01 Author: R. Bernstein <rocky@users.noreply.github.com> Date:
Sun Jan 29 21:54:45 2017 -0500
2017-01-29 rocky <rb@dustyfeet.com>
* test/Makefile, test/simple_source/bug22/01_ops.py,
test/test_pyenvlib.py, test/test_pythonlib.py,
uncompyle6/semantics/consts.py: Simplify getting coverage. consts.py: notes on which versions use which ops
2017-01-29 rocky <rb@dustyfeet.com>
@@ -959,6 +603,10 @@
* uncompyle6/__init__.py: sys.recursionlimit is optional, not
essential
2017-01-11 rocky <rb@dustyfeet.com>
* NEWS, uncompyle6/version.py: Get ready for release 2.9.9
2017-01-11 rocky <rb@dustyfeet.com>
* : commit b131c20e99514d3a969a51e841d3a823017f1beb Author: rocky
@@ -968,9 +616,22 @@
* ChangeLog, NEWS: Get ready for release 2.10.9
2017-01-11 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py, uncompyle6/scanner.py,
uncompyle6/semantics/make_function.py, uncompyle6/version.py: Merge
changes ... * str() in Python 2.4 doesn't detect unicode. * index() doesn't work on tuples * ifelse change
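
A rough Python 2 sketch of the two 2.4 portability points in this entry (the helper functions are hypothetical, not code from the repository):

def to_text(value):
    # On 2.4, str() alone doesn't "detect" unicode; check the type explicitly.
    if isinstance(value, unicode):
        return value.encode('utf-8')
    return str(value)

def index_of(seq, item):
    # tuple.index() only appeared in Python 2.6; fall back to a list.
    return list(seq).index(item)

assert to_text(u'abc') == 'abc'
assert index_of((10, 20, 30), 20) == 1
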
2017-01-11 rocky <rb@dustyfeet.com>
* : commit 7ece296f7638e71fad1117b940f7ffddbe095b1f Merge: 78a5b62
5035d54 Author: R. Bernstein <rocky@users.noreply.github.com> Date:
Wed Jan 11 07:10:23 2017 -0500
2017-01-11 R. Bernstein <rocky@users.noreply.github.com>
* : Merge pull request #79 from rocky/revert-78-patch-1 Revert "fix bug : not generate all files when use "-ro""
* uncompyle6/main.py: Revert "fix bug : not generate all files when
use "-ro""
2017-01-11 R. Bernstein <rocky@users.noreply.github.com>
@@ -1069,6 +730,16 @@
* uncompyle6/parsers/parse35.py, uncompyle6/scanners/scanner3.py:
Python 3.5 continue detection bug
2017-01-02 rocky <rb@dustyfeet.com>
* uncompyle6/verify.py: 2.4 verify hacks
2017-01-02 rocky <rb@dustyfeet.com>
* : commit a7d93e88b4e0dfd6876a7a31bd201a0e40f24bea Merge: 9891494
136f42a Author: rocky <rb@dustyfeet.com> Date: Mon Jan 2 05:39:13
2017 -0500
2017-01-01 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/scanner3.py: add come_from for setup_finally
@@ -1083,6 +754,14 @@
* README.rst: Note how to verify correctness ... with --verify, --weak-verify and cross checking with pycdc
2016-12-31 rocky <rb@dustyfeet.com>
* uncompyle6/version.py: We are version 2.9.9
2016-12-31 rocky <rb@dustyfeet.com>
* uncompyle6/main.py: 2.7->2.4 conversion
2016-12-31 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, uncompyle6/version.py: Get ready for release
@@ -1092,6 +771,13 @@
* uncompyle6/parsers/parse26.py: 2.x list_if may have a THEN in it
2016-12-31 rocky <rb@dustyfeet.com>
* test/Makefile, uncompyle6/main.py, uncompyle6/parsers/parse26.py,
uncompyle6/scanners/scanner2.py, uncompyle6/scanners/scanner26.py,
uncompyle6/scanners/scanner3.py, uncompyle6/semantics/pysource.py,
uncompyle6/verify.py: Merge master branch. Handle 2.2 list_if
2016-12-31 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/scanner3.py: Towards fixing a Python 3.3
@@ -1120,10 +806,9 @@
* : Merge pull request #73 from rocky/then-crap Add THEN token to improve Python 2.2-2.6 control flow detection
2016-12-28 rocky <rb@dustyfeet.com>
2016-12-29 R. Bernstein <rocky@users.noreply.github.com>
* uncompyle6/parsers/parse3.py, uncompyle6/scanners/tok.py: Misc
bugs
* : Merge pull request #72 from rocky/master THEN pseudo-ops for Python 2.x
2016-12-28 rocky <rb@dustyfeet.com>
@@ -1150,8 +835,7 @@
2016-12-27 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/scanner2.py: Make 2.6 and 2.7 ingest more
alike
* : Merge commit '9b1dd0f' into python-2.4
2016-12-26 rocky <rb@dustyfeet.com>
@@ -1171,6 +855,11 @@
* uncompyle6/parsers/parse25.py: fix bug in using python2 AST rules
in python 2.5
2016-12-26 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse25.py: Bug in using python2 ast checking
in python 2.5
2016-12-26 rocky <rb@dustyfeet.com>
* : commit f1a947f106b231fb1480ba301b15e3ceaf78c94f Author: rocky
@@ -1183,15 +872,19 @@
uncompyle6/verify.py: Scanner call fixes. NAME_MODULE removal for
<=2.4
2016-12-24 rocky <rb@dustyfeet.com>
2016-12-25 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/astnode.py, uncompyle6/parsers/parse2.py,
uncompyle6/parsers/parse26.py, uncompyle6/parsers/parse3.py,
uncompyle6/parsers/parse36.py, uncompyle6/scanners/scanner15.py,
uncompyle6/scanners/scanner2.py, uncompyle6/scanners/scanner21.py,
uncompyle6/scanners/scanner22.py,
uncompyle6/semantics/fragments.py, uncompyle6/semantics/pysource.py:
Lint
* uncompyle6/main.py, uncompyle6/scanners/scanner15.py,
uncompyle6/scanners/scanner21.py, uncompyle6/scanners/scanner22.py,
uncompyle6/scanners/scanner23.py, uncompyle6/scanners/scanner24.py,
uncompyle6/semantics/pysource.py, uncompyle6/verify.py: Removing
NAME_MODULE, lint and bug fixes scanner*.py: show_asm param is optional verify.py: call correct
scanners main.py, verify.py: Use older Python print statements
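
For the "older Python print statements" part, the change is the pattern visible in the file diffs further down: the print function (via __future__) needs Python 2.6+, so output goes through sys.stderr.write() instead. An illustrative before/after, not the exact repository lines:

import sys

# Python 2.6+ only:
#     from __future__ import print_function
#     print("No files given", file=sys.stderr)

# Works back to Python 2.4:
sys.stderr.write("No files given\n")
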
2016-12-25 rocky <rb@dustyfeet.com>
* : commit e3f4beeb74e33d5b404094765cc63040f62a0b41 Author: rocky
<rb@dustyfeet.com> Date: Sat Dec 24 07:45:02 2016 -0500
2016-12-24 rocky <rb@dustyfeet.com>
@@ -1212,22 +905,34 @@
2016-12-18 rocky <rb@dustyfeet.com>
* pytest/.gitignore, test/simple_source/bug25/02_try_else.py,
uncompyle6/parsers/parse25.py, uncompyle6/parsers/parse26.py: Python
2.5 mistaken try/else
* : commit c7c0a989829a9a625333665516387c1177c611c2 Author: rocky
<rb@dustyfeet.com> Date: Sun Dec 18 00:56:07 2016 -0500
2016-12-17 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/scanner2.py, uncompyle6/scanners/scanner25.py:
show-asm on python2.5 is optional make scanner2 look a little more like scanner3
2016-12-17 rocky <rb@dustyfeet.com>
* uncompyle6/scanners/scanner2.py, uncompyle6/scanners/scanner25.py:
show-asm on python2.5 is optional Make scanner2 a little more like scanner3.
2016-12-17 rocky <rb@dustyfeet.com>
* uncompyle6/parser.py, uncompyle6/scanner.py,
uncompyle6/scanners/scanner2.py, uncompyle6/scanners/scanner3.py,
uncompyle6/semantics/fragments.py: Python 2.6/2.7 tolerance in
Python 2.4 branch
2016-12-16 rocky <rb@dustyfeet.com>
* NEWS: Release 2.9.8 news
2016-12-16 rocky <rb@dustyfeet.com>
* __pkginfo__.py, uncompyle6/version.py: Get ready for release 2.9.8
* : commit 13d5cd1a588b7f4f2c233c436ce6b0b39db9950e Author: rocky
<rb@dustyfeet.com> Date: Fri Dec 16 22:42:46 2016 -0500
2016-12-16 rocky <rb@dustyfeet.com>
@@ -1295,15 +1000,12 @@
2016-12-04 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS, uncompyle6/main.py, uncompyle6/parser.py,
uncompyle6/parsers/parse26.py, uncompyle6/parsers/parse3.py,
uncompyle6/parsers/parse34.py, uncompyle6/parsers/parse35.py,
uncompyle6/scanner.py, uncompyle6/scanners/scanner2.py,
uncompyle6/scanners/scanner23.py, uncompyle6/scanners/scanner24.py,
uncompyle6/scanners/scanner3.py, uncompyle6/scanners/tok.py,
uncompyle6/semantics/make_function.py,
uncompyle6/semantics/pysource.py, uncompyle6/verify.py,
uncompyle6/version.py: Get ready for release 2.9.7 Some of the many lint things. Linting is kind of stupid though.
* ChangeLog: Get ready for release 2.9.7
2016-12-04 rocky <rb@dustyfeet.com>
* : commit d22931cb49f0e28a0fbe48a7c1526b1f170a5b52 Author: rocky
<rb@dustyfeet.com> Date: Sun Dec 4 07:31:34 2016 -0500
2016-11-28 rocky <rb@dustyfeet.com>
@@ -1354,11 +1056,34 @@
* uncompyle6/semantics/pysource.py: Better line number tracking Indent Python 2 list comprehensions, albeit badly. DRY code a
little via indent_if_source_nl
2016-11-24 rocky <rb@dustyfeet.com>
* circle.yml: CircleCI build
2016-11-24 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py: Remove dup Python 3 grammar rule
2016-11-24 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py, uncompyle6/scanners/scanner2.py:
<2.7 "if" detection and dup Python 3 grammar rule
2016-11-24 rocky <rb@dustyfeet.com>
* test/test_pyenvlib.py, uncompyle6/linenumbers.py,
uncompyle6/main.py, uncompyle6/scanners/scanner2.py,
uncompyle6/semantics/make_function.py,
uncompyle6/semantics/pysource.py, uncompyle6/verify.py: Bug in 2.4
"if" dectection and... Wrong language used in old-style exceptions: use "except Error,e"
not "except Error(e)""
2016-11-24 rocky <rb@dustyfeet.com>
* __pkginfo__.py, pytest/test_grammar.py, uncompyle6/parser.py,
uncompyle6/parsers/parse26.py: Python 2.6 grammar bug and.. __pkginfo__.py: Bump spark_parser
version for parse_flags 'dups'
2016-11-23 rocky <rb@dustyfeet.com>
* __pkginfo__.py, pytest/test_grammar.py, uncompyle6/parser.py,
@@ -1370,8 +1095,17 @@
2016-11-23 rocky <rb@dustyfeet.com>
* : commit 6aa1531972de83ecab15b4c96b89c873ea5a7458 Author: rocky
<rb@dustyfeet.com> Date: Wed Nov 23 00:48:38 2016 -0500
* : commit 7133540c235e16f02d2db62cb903b70aa311de20 Author: rocky
<rb@dustyfeet.com> Date: Wed Nov 23 08:26:12 2016 -0500
2016-11-23 rocky <rb@dustyfeet.com>
* : commit a9349b8f3d12b2aa0cd88286617c1af9cccad018 Author: rocky
<rb@dustyfeet.com> Date: Tue Nov 22 17:49:47 2016 -0500
2016-11-23 rocky <rb@dustyfeet.com>
* circle.yml: Circle CI uses 2.7.10 and 2.7.12 is not available
2016-11-22 rocky <rb@dustyfeet.com>


@@ -37,7 +37,7 @@ check-3.0 check-3.1 check-3.2 check-3.5 check-3.6:
$(MAKE) -C test $@
#:Tests for Python 2.6 (doesn't have pytest)
check-2.6:
check-2.4 check-2.5 check-2.6:
$(MAKE) -C test $@
#:PyPy 2.6.1 or PyPy 5.0.1
@@ -59,7 +59,7 @@ clean: clean_pyc
#: Create source (tarball) and wheel distribution
dist:
$(PYTHON) ./setup.py sdist bdist_wheel
$(PYTHON) ./setup.py sdist bdist_egg
#: Remove .pyc files
clean_pyc:

23
NEWS

@@ -1,26 +1,3 @@
uncompyle6 2.11.1 2016-06-22
- Python 3.x annotation and function signature fixes
- Bump xdis version
- Small pysource bug fixes
uncompyle6 2.11.0 2016-06-18 Fleetwood
- Major improvements in fragment tracking
* Add nonterminal node in extractInfo
* tag more offsets in expressions
* tag array subscripts
* set YIELD value offset in a <yield> expr
* fix a long-standing bug in not adjusting final AST when melding other deparse ASTs
- Fixes yet again for make_function node handling; document what's up here
- Fix bug in snowflake Python 3.5 *args kwargs
uncompyle6 2.10.1 2016-06-3 Marylin Frankel
- fix some fragments parsing bugs
- was returning the wrong type sometimes in deparse_code_around_offset()
- capture function name in offsets
- track changes to ifelstrmtr node from pysource into fragments
uncompyle6 2.10.0 2016-05-30 Elaine Gordon
- Add fuzzy offset deparse lookup


@@ -56,7 +56,7 @@ This uses setup.py, so it follows the standard Python routine:
::
pip install -e .
pip install -r requirements.txt
pip install -r requirements-dev.txt
python setup.py install # may need sudo
# or if you have pyenv:
@@ -171,7 +171,7 @@ See Also
* https://code.google.com/archive/p/unpyc3/ : supports Python 3.2 only. The above projects use a different decompiling technique than what is used here.
* https://github.com/figment/unpyc3/ : fork of above, but supports Python 3.3 only. Include some fixes like supporting function annotations
* The HISTORY_ file.
* `How to report a bug <https://github.com/rocky/python-uncompyle6/blob/master/HISTORY.md>`_
.. |downloads| image:: https://img.shields.io/pypi/dd/uncompyle6.svg
.. _trepan: https://pypi.python.org/pypi/trepan
.. _HISTORY: https://github.com/rocky/python-uncompyle6/blob/master/HISTORY.md


@@ -33,14 +33,14 @@ classifiers = ['Development Status :: 5 - Production/Stable',
# The rest in alphabetic order
author = "Rocky Bernstein, Hartmut Goebel, John Aycock, and others"
author_email = "rb@dustyfeet.com"
entry_points = {
entry_points={
'console_scripts': [
'uncompyle6=uncompyle6.bin.uncompile:main_bin',
'pydisassemble=uncompyle6.bin.pydisassemble:main',
]}
ftp_url = None
install_requires = ['spark-parser >= 1.6.1, < 1.7.0',
'xdis >= 3.4.0, < 3.5.0', 'six']
'xdis >= 3.3.1, < 3.4.0']
license = 'MIT'
mailing_list = 'python-debugger@googlegroups.com'
modname = 'uncompyle6'


@@ -6,8 +6,8 @@ machine:
dependencies:
override:
- pip install -e .
- pip install -r requirements.txt
- pip install -r requirements-dev.txt
test:
override:
- python ./setup.py develop && make check-2.7
- python ./setup.py develop && make check-2.6


@@ -1,150 +0,0 @@
# std
import os
# test
import pytest
import hypothesis
from hypothesis import strategies as st
# uncompyle6
from uncompyle6 import PYTHON_VERSION, deparse_code
@st.composite
def expressions(draw):
# todo : would be nice to generate expressions using hypothesis however
# this is pretty involved so for now just use a corpus of expressions
# from which to select.
return draw(st.sampled_from((
'abc',
'len(items)',
'x + 1',
'lineno',
'container',
'self.attribute',
'self.method()',
# These expressions are failing, I think these are control
# flow problems rather than problems with FORMAT_VALUE,
# however I need to confirm this...
#'sorted(items, key=lambda x: x.name)',
#'func(*args, **kwargs)',
#'text or default',
#'43 if life_the_universe and everything else None'
)))
@st.composite
def format_specifiers(draw):
"""
Generate a valid format specifier using the rules:
format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]
fill ::= <any character>
align ::= "<" | ">" | "=" | "^"
sign ::= "+" | "-" | " "
width ::= integer
precision ::= integer
type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"
See https://docs.python.org/2/library/string.html
:param draw: Let hypothesis draw from other strategies.
:return: An example format_specifier.
"""
alphabet_strategy = st.characters(min_codepoint=ord('a'), max_codepoint=ord('z'))
fill = draw(st.one_of(alphabet_strategy, st.none()))
align = draw(st.sampled_from(list('<>=^')))
fill_align = (fill + align or '') if fill else ''
type_ = draw(st.sampled_from('bcdeEfFgGnosxX%'))
can_have_sign = type_ in 'deEfFgGnoxX%'
can_have_comma = type_ in 'deEfFgG%'
can_have_precision = type_ in 'fFgG'
can_have_pound = type_ in 'boxX%'
can_have_zero = type_ in 'oxX'
sign = draw(st.sampled_from(list('+- ') + [''])) if can_have_sign else ''
pound = draw(st.sampled_from(('#', '',))) if can_have_pound else ''
zero = draw(st.sampled_from(('0', '',))) if can_have_zero else ''
int_strategy = st.integers(min_value=1, max_value=1000)
width = draw(st.one_of(int_strategy, st.none()))
width = str(width) if width is not None else ''
comma = draw(st.sampled_from((',', '',))) if can_have_comma else ''
if can_have_precision:
precision = draw(st.one_of(int_strategy, st.none()))
precision = '.' + str(precision) if precision else ''
else:
precision = ''
return ''.join((fill_align, sign, pound, zero, width, comma, precision, type_,))
@st.composite
def fstrings(draw):
"""
Generate a valid f-string.
See https://www.python.org/dev/peps/pep-0498/#specification
:param draw: Let hypothesis draw from other strategies.
:return: A valid f-string.
"""
character_strategy = st.characters(
blacklist_characters='\r\n\'\\s{}',
min_codepoint=1,
max_codepoint=1000,
)
is_raw = draw(st.booleans())
integer_strategy = st.integers(min_value=0, max_value=3)
expression_count = draw(integer_strategy)
content = []
for _ in range(expression_count):
expression = draw(expressions())
conversion = draw(st.sampled_from(('', '!s', '!r', '!a',)))
has_specifier = draw(st.booleans())
specifier = ':' + draw(format_specifiers()) if has_specifier else ''
content.append('{{{}{}}}'.format(expression, conversion, specifier))
content.append(draw(st.text(character_strategy)))
content = ''.join(content)
return "f{}'{}'".format('r' if is_raw else '', content)
@pytest.mark.skipif(PYTHON_VERSION < 3.6, reason='need at least python 3.6')
@hypothesis.given(format_specifiers())
def test_format_specifiers(format_specifier):
"""Verify that format_specifiers generates valid specifiers"""
try:
exec('"{:' + format_specifier + '}".format(0)')
except ValueError as e:
if 'Unknown format code' not in str(e):
raise
def run_test(text):
hypothesis.assume(len(text))
hypothesis.assume("f'{" in text)
expr = text + '\n'
code = compile(expr, '<string>', 'single')
deparsed = deparse_code(PYTHON_VERSION, code, compile_mode='single')
recompiled = compile(deparsed.text, '<string>', 'single')
if recompiled != code:
assert 'dis(' + deparsed.text.strip('\n') + ')' == 'dis(' + expr.strip('\n') + ')'
@pytest.mark.skipif(PYTHON_VERSION < 3.6, reason='need at least python 3.6')
@hypothesis.given(fstrings())
def test_uncompyle_fstring(fstring):
"""Verify uncompyling fstring bytecode"""
run_test(fstring)
@pytest.mark.skipif(PYTHON_VERSION < 3.6, reason='need at least python 3.6')
@pytest.mark.parametrize('fstring', [
"f'{abc}{abc!s}'",
"f'{abc}0'",
])
def test_uncompyle_direct(fstring):
"""useful for debugging"""
run_test(fstring)


@@ -1,3 +1,4 @@
pytest>=3.0.0
flake8
hypothesis
six


@@ -19,7 +19,7 @@ check:
$(MAKE) check-$(PYTHON_VERSION)
#: Run working tests from Python 2.6 or 2.7
check-2.6 check-2.7: check-bytecode-2 check-bytecode-3 check-bytecode-1 check-native-short
check-2.4 check-2.5 check-2.6 check-2.7: check-bytecode-2 check-bytecode-3 check-bytecode-1 check-native-short
#: Run working tests from Python 3.0
check-3.0: check-bytecode
@@ -67,7 +67,7 @@ check-bytecode-2:
check-bytecode-3:
$(PYTHON) test_pythonlib.py --bytecode-3.0 \
--bytecode-3.1 --bytecode-3.2 --bytecode-3.3 \
--bytecode-3.4 --bytecode-3.5 --bytecode-pypy3.2
--bytecode-3.4 --bytecode-3.5 --bytecode-3.6 --bytecode-pypy3.2
#: Check deparsing bytecode that works running Python 2 and Python 3
check-bytecode: check-bytecode-3
@@ -97,29 +97,6 @@ check-bytecode-2.4:
check-bytecode-2.5:
$(PYTHON) test_pythonlib.py --bytecode-2.5
#: Get grammar coverage for Python 2.5
grammar-coverage-2.5:
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-25.cover $(PYTHON) test_pythonlib.py --bytecode-2.5
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-25.cover $(PYTHON) test_pyenvlib.py --2.5.6
#: Get grammar coverage for Python 2.6
grammar-coverage-2.6:
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-26.cover $(PYTHON) test_pythonlib.py --bytecode-2.6
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-26.cover $(PYTHON) test_pyenvlib.py --2.6.9
#: Get grammar coverage for Python 2.7
grammar-coverage-2.7:
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-27.cover $(PYTHON) test_pythonlib.py --bytecode-2.7
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-27.cover $(PYTHON) test_pyenvlib.py --2.7.13
#: Check deparsing Python 2.6
check-bytecode-2.6:
$(PYTHON) test_pythonlib.py --bytecode-2.6 --weak-verify
#: Check deparsing Python 2.7
check-bytecode-2.7:
$(PYTHON) test_pythonlib.py --bytecode-2.7 --verify
#: Check deparsing Python 3.0
check-bytecode-3.0:
$(PYTHON) test_pythonlib.py --bytecode-3.0
@@ -148,9 +125,33 @@ check-bytecode-3.5:
check-bytecode-3.6:
$(PYTHON) test_pythonlib.py --bytecode-3.6
#: Get grammar coverage for Python 2.4
grammar-coverage-2.4:
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-24.cover $(PYTHON) test_pythonlib.py --bytecode-2.4
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-24.cover $(PYTHON) test_pyenvlib.py --2.4.6
#: Get grammar coverage for Python 2.5
grammar-coverage-2.5:
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-25.cover $(PYTHON) test_pythonlib.py --bytecode-2.5
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-25.cover $(PYTHON) test_pyenvlib.py --2.5.6
#: Get grammar coverage for Python 2.6
grammar-coverage-2.6:
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-26.cover $(PYTHON) test_pythonlib.py --bytecode-2.6
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-26.cover $(PYTHON) test_pyenvlib.py --2.6.9
#: Get grammar coverage for Python 2.7
grammar-coverage-2.7:
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-27.cover $(PYTHON) test_pythonlib.py --bytecode-2.7
SPARK_PARSER_COVERAGE=/tmp/spark-grammar-27.cover $(PYTHON) test_pyenvlib.py --2.7.13
#: short tests for bytecodes only for this version of Python
check-native-short:
$(PYTHON) test_pythonlib.py --bytecode-$(PYTHON_VERSION) --verify $(COMPILE)
$(PYTHON) test_pythonlib.py --bytecode-$(PYTHON_VERSION) --weak-verify $(COMPILE)
#: Run longer Python 2.4's lib files known to be okay
check-2.4-ok:
$(PYTHON) test_pythonlib.py --ok-2.4 --verify $(COMPILE)
#: Run longer Python 2.6's lib files known to be okay
check-2.6-ok:

Binary file not shown.

Binary file not shown.

Binary file not shown.


@@ -9,7 +9,7 @@ def open(file, mode = "r", buffering = None,
newline = None, closefd = True) -> "IOBase":
return text
def foo1(x: 'an argument that defaults to 5' = 5):
def foo(x: 'an argument that defaults to 5' = 5):
print(x)
def div(a: dict(type=float, help='the dividend'),


@@ -1,5 +1,4 @@
# sql/schema.py
# Note that kwargs comes before "positional" args
def tometadata(self, metadata, schema, Table, args, name=None):
table = Table(
name, metadata, schema=schema,


@@ -19,8 +19,6 @@ Step 2: Run the test:
test_pyenvlib --mylib --verify # decompile verify 'mylib'
"""
from __future__ import print_function
from uncompyle6 import main, PYTHON3
import os, time, shutil, sys
from fnmatch import fnmatch


@@ -27,8 +27,6 @@ Step 2: Run the test:
test_pythonlib.py --mylib --verify # decompile verify 'mylib'
"""
from __future__ import print_function
import getopt, os, py_compile, sys, shutil, tempfile, time
from uncompyle6 import PYTHON_VERSION
@@ -127,8 +125,10 @@ def do_tests(src_dir, obj_patterns, target_dir, opts):
if opts['do_compile']:
compiled_version = opts['compiled_version']
if compiled_version and PYTHON_VERSION != compiled_version:
print("Not compiling: desired Python version is %s but we are running %s" %
(compiled_version, PYTHON_VERSION), file=sys.stderr)
sys.stderr.write("Not compiling: "
"desired Python version is %s "
"but we are running %s" %
(compiled_version, PYTHON_VERSION))
else:
for root, dirs, basenames in os.walk(src_dir):
file_matches(files, root, basenames, PY)
@@ -146,8 +146,8 @@ def do_tests(src_dir, obj_patterns, target_dir, opts):
file_matches(files, dirname, basenames, obj_patterns)
if not files:
print("Didn't come up with any files to test! Try with --compile?",
file=sys.stderr)
sys.stderr.write("Didn't come up with any files to test! "
"Try with --compile?")
exit(1)
os.chdir(cwd)
@@ -161,9 +161,9 @@ def do_tests(src_dir, obj_patterns, target_dir, opts):
except ValueError:
pass
print(time.ctime())
print('Source directory: ', src_dir)
print('Output directory: ', target_dir)
print time.ctime()
print 'Source directory: ', src_dir
print 'Output directory: ', target_dir
try:
_, _, failed_files, failed_verify = \
main(src_dir, target_dir, files, [],
@@ -236,14 +236,13 @@ if __name__ == '__main__':
if os.path.isdir(src_dir):
checked_dirs.append([src_dir, pattern, target_dir])
else:
print("Can't find directory %s. Skipping" % src_dir,
file=sys.stderr)
sys.stderr.write("Can't find directory %s. Skipping" % src_dir)
continue
last_compile_version = compiled_version
pass
if not checked_dirs:
print("No directories found to check", file=sys.stderr)
sys.stderr.write("No directories found to check\n")
sys.exit(1)
test_opts['compiled_version'] = last_compile_version


@@ -3,7 +3,6 @@
#
# Copyright (c) 2015-2016 by Rocky Bernstein <rb@dustyfeet.com>
#
from __future__ import print_function
import sys, os, getopt
from uncompyle6.disas import disassemble_file
@@ -26,7 +25,7 @@ Options:
-V | --version show version and stop
-h | --help show this message
""".format(program)
""" % (program, program)
PATTERNS = ('*.pyc', '*.pyo')
@@ -37,15 +36,15 @@ Type -h for for full help.""" % program
native = True
if len(sys.argv) == 1:
print("No file(s) given", file=sys.stderr)
print(Usage_short, file=sys.stderr)
sys.stderr.write("No file(s) given\n")
sys.stderr.write(Usage_short)
sys.exit(1)
try:
opts, files = getopt.getopt(sys.argv[1:], 'hVU',
['help', 'version', 'uncompyle6'])
except getopt.GetoptError as e:
print('%s: %s' % (os.path.basename(sys.argv[0]), e), file=sys.stderr)
except getopt.GetoptError(e):
sys.stderr.write('%s: %s' % (os.path.basename(sys.argv[0]), e))
sys.exit(-1)
for opt, val in opts:
@@ -59,15 +58,14 @@ Type -h for for full help.""" % program
native = False
else:
print(opt)
print(Usage_short, file=sys.stderr)
sys.stderr.write(Usage_short)
sys.exit(1)
for file in files:
if os.path.exists(files[0]):
disassemble_file(file, sys.stdout, native)
else:
print("Can't read %s - skipping" % files[0],
file=sys.stderr)
sys.stderr.write("Can't read %s - skipping\n" % files[0])
pass
pass
return


@@ -4,7 +4,6 @@
# Copyright (c) 2015-2016 by Rocky Bernstein
# Copyright (c) 2000-2002 by hartmut Goebel <h.goebel@crazy-compilers.com>
#
from __future__ import print_function
import sys, os, getopt, time
program, ext = os.path.splitext(os.path.basename(__file__))
@@ -65,11 +64,11 @@ def usage():
def main_bin():
if not (sys.version_info[0:2] in ((2, 6), (2, 7),
(3, 1), (3, 2), (3, 3),
if not (sys.version_info[0:2] in ((2, 4), (2, 5), (2, 6), (2, 7),
(3, 2), (3, 3),
(3, 4), (3, 5), (3, 6))):
print('Error: %s requires Python 2.6-2.7, or 3.1-3.6' % program,
file=sys.stderr)
sys.stderr.write('Error: %s requires Python 2.4 2.5 2.6, 2.7, '
'3.2, 3.3, 3.4, 3.5, or 3.6' % program)
sys.exit(-1)
do_verify = recurse_dirs = False
@@ -84,8 +83,8 @@ def main_bin():
opts, files = getopt.getopt(sys.argv[1:], 'hagtdrVo:c:p:',
'help asm grammar linemaps recurse timestamp tree '
'verify version showgrammar'.split(' '))
except getopt.GetoptError as e:
print('%s: %s' % (os.path.basename(sys.argv[0]), e), file=sys.stderr)
except getopt.GetoptError(e):
sys.stderr.write('%s: %s\n' % (os.path.basename(sys.argv[0]), e))
sys.exit(-1)
options = {}
@@ -119,7 +118,7 @@ def main_bin():
elif opt in ('--recurse', '-r'):
recurse_dirs = True
else:
print(opt, file=sys.stderr)
sys.stderr.write(opt)
usage()
# expand directory if specified
@@ -144,7 +143,7 @@ def main_bin():
files = [f[sb_len:] for f in files]
if not files:
print("No files given", file=sys.stderr)
sys.stderr.write("No files given\n")
usage()
if outfile == '-':


@@ -16,8 +16,6 @@ Second, we need structured instruction information for the
want to run on Python 2.7.
"""
from __future__ import print_function
import sys
from collections import deque
@@ -37,10 +35,9 @@ def disco(version, co, out=None, is_pypy=False):
# store final output stream for case of error
real_out = out or sys.stdout
print('# Python %s' % version, file=real_out)
real_out.write('# Python %s\n' % version)
if co.co_filename:
print('# Embedded file name: %s' % co.co_filename,
file=real_out)
real_out.write('# Embedded file name: %s\n' % co.co_filename)
scanner = get_scanner(version, is_pypy=is_pypy)
@@ -52,16 +49,15 @@ def disco_loop(disasm, queue, real_out):
while len(queue) > 0:
co = queue.popleft()
if co.co_name != '<module>':
print('\n# %s line %d of %s' %
(co.co_name, co.co_firstlineno, co.co_filename),
file=real_out)
real_out.write('\n# %s line %d of %s\n' %
(co.co_name, co.co_firstlineno, co.co_filename))
tokens, customize = disasm(co)
for t in tokens:
if iscode(t.pattr):
queue.append(t.pattr)
elif iscode(t.attr):
queue.append(t.attr)
print(t, file=real_out)
real_out.write(t)
pass
pass


@@ -10,7 +10,7 @@ def line_number_mapping(pyc_filename, src_filename):
source_size) = load_module(pyc_filename)
try:
code2 = load_file(src_filename)
except SyntaxError as e:
except SyntaxError, e:
return str(e)
queue = deque([code1, code2])


@@ -1,4 +1,3 @@
from __future__ import print_function
import datetime, os, subprocess, sys, tempfile
from uncompyle6 import verify, IS_PYPY
@@ -22,31 +21,36 @@ def decompile(
# store final output stream for case of error
real_out = out or sys.stdout
co_pypy_str = 'PyPy ' if is_pypy else ''
run_pypy_str = 'PyPy ' if IS_PYPY else ''
print('# uncompyle6 version %s\n'
'# %sPython bytecode %s%s\n# Decompiled from: %sPython %s' %
(VERSION, co_pypy_str, bytecode_version,
" (%d)" % magic_int if magic_int else "",
run_pypy_str, '\n# '.join(sys.version.split('\n'))),
file=real_out)
if co.co_filename:
print('# Embedded file name: %s' % co.co_filename,
file=real_out)
if timestamp:
print('# Compiled at: %s' % datetime.datetime.fromtimestamp(timestamp),
file=real_out)
if source_size:
print('# Size of source mod 2**32: %d bytes' % source_size,
file=real_out)
if is_pypy:
co_pypy_str = 'PyPy '
else:
co_pypy_str = ''
try:
pysource.deparse_code(bytecode_version, co, out, showasm, showast,
showgrammar, code_objects=code_objects,
is_pypy=is_pypy)
except pysource.SourceWalkerError as e:
# deparsing failed
raise pysource.SourceWalkerError(str(e))
if IS_PYPY:
run_pypy_str = 'PyPy '
else:
run_pypy_str = ''
if magic_int:
m = str(magic_int)
else:
m = ""
real_out.write('# uncompyle6 version %s\n'
'# %sPython bytecode %s%s\n# Decompiled from: %sPython %s\n' %
(VERSION, co_pypy_str, bytecode_version,
" (%s)" % m, run_pypy_str,
'\n# '.join(sys.version.split('\n'))))
if co.co_filename:
real_out.write('# Embedded file name: %s\n' % co.co_filename)
if timestamp:
real_out.write('# Compiled at: %s\n' %
datetime.datetime.fromtimestamp(timestamp))
if source_size:
real_out.write('# Size of source mod 2**32: %d bytes\n' % source_size)
pysource.deparse_code(bytecode_version, co, out, showasm, showast,
showgrammar, code_objects=code_objects,
is_pypy=is_pypy)
# For compatibility
uncompyle = decompile
@@ -128,7 +132,10 @@ def main(in_base, out_base, files, codes, outfile=None,
junk, outfile = tempfile.mkstemp(suffix=".py",
prefix=prefix)
# Unbuffer output if possible
buffering = -1 if sys.stdout.isatty() else 0
if sys.stdout.isatty():
buffering = -1
else:
buffering = 0
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', buffering)
tee = subprocess.Popen(["tee", outfile], stdin=subprocess.PIPE)
os.dup2(tee.stdin.fileno(), sys.stdout.fileno())
@@ -145,9 +152,9 @@ def main(in_base, out_base, files, codes, outfile=None,
try:
decompile_file(infile, outstream, showasm, showast, showgrammar)
tot_files += 1
except (ValueError, SyntaxError, ParserError, pysource.SourceWalkerError) as e:
except (ValueError, SyntaxError, ParserError, pysource.SourceWalkerError):
sys.stdout.write("\n")
sys.stderr.write("\n# file %s\n# %s\n" % (infile, e))
sys.stderr.write("# file %s\n" % (infile))
failed_files += 1
except KeyboardInterrupt:
if outfile:
@@ -181,24 +188,28 @@ def main(in_base, out_base, files, codes, outfile=None,
msg = verify.compare_code_with_srcfile(infile, current_outfile, weak_verify=weak_verify)
if not current_outfile:
if not msg:
print('\n# okay decompiling %s' % infile)
print '\n# okay decompiling %s' % infile
okay_files += 1
else:
print('\n# %s\n\t%s', infile, msg)
except verify.VerifyCmpError as e:
print '\n# %s\n\t%s', infile, msg
except verify.VerifyCmpError, e:
print(e)
verify_failed_files += 1
os.rename(current_outfile, current_outfile + '_unverified')
sys.stderr.write("### Error Verifying %s\n" % filename)
sys.stderr.write(str(e) + "\n")
if not outfile:
sys.stder.write("### Error Verifiying %s" %
filename)
sys.stderr.write(e)
if raise_on_error:
raise
pass
pass
pass
elif do_verify:
sys.stderr.write("\n### uncompile successful, but no file to compare against\n")
sys.stderr.write("\n### uncompile successful, "
"but no file to compare against")
pass
else:
okay_files += 1


@@ -6,8 +6,6 @@
Common uncompyle parser routines.
"""
from __future__ import print_function
import sys
from xdis.code import iscode
@@ -30,13 +28,13 @@ class PythonParser(GenericASTBuilder):
def __init__(self, AST, start, debug):
super(PythonParser, self).__init__(AST, start, debug)
self.collect = frozenset(
['stmts', 'except_stmts', '_stmts', 'load_attrs',
'exprlist', 'kvlist', 'kwargs', 'come_froms', '_come_from',
# Python < 3
'print_items',
# PyPy:
'kvlist_n'])
self.collect = [
'stmts', 'except_stmts', '_stmts', 'load_attrs',
'exprlist', 'kvlist', 'kwargs', 'come_froms', '_come_from',
# Python < 3
'print_items',
# PyPy:
'kvlist_n']
def ast_first_offset(self, ast):
if hasattr(ast, 'offset'):
@@ -92,7 +90,10 @@ class PythonParser(GenericASTBuilder):
def fix(c):
s = str(c)
i = s.find('_')
return s if i == -1 else s[:i]
if i == -1:
return s
else:
return s[:i]
prefix = ''
if parent and tokens:
@@ -123,7 +124,10 @@ class PythonParser(GenericASTBuilder):
err_token = instructions[index]
print("Instruction context:")
for i in range(start, finish):
indent = ' ' if i != index else '-> '
if i != index:
indent = ' '
else:
indent = '-> '
print("%s%s" % (indent, instructions[i]))
raise ParserError(err_token, err_token.offset)


@@ -12,8 +12,6 @@ If we succeed in creating a parse tree, then we have a Python program
that a later phase can turn into a sequence of ASCII text.
"""
from __future__ import print_function
from uncompyle6.parser import PythonParser, PythonParserSingle, nop_func
from uncompyle6.parsers.astnode import AST
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG


@@ -15,8 +15,6 @@ If we succeed in creating a parse tree, then we have a Python program
that a later phase can turn into a sequence of ASCII text.
"""
from __future__ import print_function
from uncompyle6.parser import PythonParser, PythonParserSingle, nop_func
from uncompyle6.parsers.astnode import AST
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
@@ -496,8 +494,8 @@ class Python3Parser(PythonParser):
token.type = self.call_fn_name(token)
uniq_param = args_kw + args_pos
if self.version == 3.5 and opname.startswith('CALL_FUNCTION_VAR'):
# Python 3.5 changes the stack position of *args. KW args come
# after *args.
# Python 3.5 changes the stack position of where * args, the
# first LOAD_FAST, below are located.
# Python 3.6+ replaces CALL_FUNCTION_VAR_KW with CALL_FUNCTION_EX
if opname.endswith('KW'):
kw = 'expr '
@@ -507,17 +505,11 @@ class Python3Parser(PythonParser):
('pos_arg ' * args_pos) +
('kwarg ' * args_kw) + kw + token.type)
self.add_unique_rule(rule, token.type, uniq_param, customize)
if self.version >= 3.6 and opname == 'CALL_FUNCTION_EX_KW':
rule = ('call_function36 ::= '
'expr build_tuple_unpack_with_call build_map_unpack_with_call '
'CALL_FUNCTION_EX_KW_1')
self.add_unique_rule(rule, token.type, uniq_param, customize)
rule = 'call_function ::= call_function36'
else:
rule = ('call_function ::= expr ' +
('pos_arg ' * args_pos) +
('kwarg ' * args_kw) +
'expr ' * nak + token.type)
rule = ('call_function ::= expr ' +
('pos_arg ' * args_pos) +
('kwarg ' * args_kw) +
'expr ' * nak + token.type)
self.add_unique_rule(rule, token.type, uniq_param, customize)
if self.version >= 3.5:
@@ -537,7 +529,10 @@ class Python3Parser(PythonParser):
"""Python 3.3 added a an addtional LOAD_CONST before MAKE_FUNCTION and
this has an effect on many rules.
"""
new_rule = rule % (('LOAD_CONST ') * (1 if self.version >= 3.3 else 0))
if self.version >= 3.3:
new_rule = rule % (('LOAD_CONST ') * 1)
else:
new_rule = rule % (('LOAD_CONST ') * 0)
self.add_unique_rule(new_rule, opname, attr, customize)
def add_custom_rules(self, tokens, customize):
@@ -616,9 +611,9 @@ class Python3Parser(PythonParser):
assign2_pypy ::= expr expr designator designator
""", nop_func)
continue
elif (opname in ('CALL_FUNCTION', 'CALL_FUNCTION_VAR',
'CALL_FUNCTION_VAR_KW', 'CALL_FUNCTION_EX_KW')
or opname.startswith('CALL_FUNCTION_KW')):
elif opname in ('CALL_FUNCTION', 'CALL_FUNCTION_VAR',
'CALL_FUNCTION_VAR_KW') \
or opname.startswith('CALL_FUNCTION_KW'):
self.custom_classfunc_rule(opname, token, customize)
elif opname == 'LOAD_DICTCOMP':
rule_pat = ("dictcomp ::= LOAD_DICTCOMP %sMAKE_FUNCTION_0 expr "
@@ -639,18 +634,6 @@ class Python3Parser(PythonParser):
self.add_unique_rule(rule, opname, token.attr, customize)
rule = 'expr ::= build_list_unpack'
self.add_unique_rule(rule, opname, token.attr, customize)
elif opname.startswith('BUILD_TUPLE_UNPACK_WITH_CALL'):
v = token.attr
rule = ('build_tuple_unpack_with_call ::= ' + 'expr1024 ' * int(v//1024) +
'expr32 ' * int((v//32) % 32) +
'expr ' * (v % 32) + opname)
self.add_unique_rule(rule, opname, token.attr, customize)
elif opname.startswith('BUILD_MAP_UNPACK_WITH_CALL'):
v = token.attr
rule = ('build_map_unpack_with_call ::= ' + 'expr1024 ' * int(v//1024) +
'expr32 ' * int((v//32) % 32) +
'expr ' * (v % 32) + opname)
self.add_unique_rule(rule, opname, token.attr, customize)
elif opname_base in ('BUILD_LIST', 'BUILD_TUPLE', 'BUILD_SET'):
v = token.attr
rule = ('build_list ::= ' + 'expr1024 ' * int(v//1024) +
@@ -660,10 +643,7 @@ class Python3Parser(PythonParser):
if opname_base == 'BUILD_TUPLE':
rule = ('load_closure ::= %s%s' % (('LOAD_CLOSURE ' * v), opname))
self.add_unique_rule(rule, opname, token.attr, customize)
rule = ('build_tuple ::= ' + 'expr1024 ' * int(v//1024) +
'expr32 ' * int((v//32) % 32) +
'expr ' * (v % 32) + opname)
self.add_unique_rule(rule, opname, token.attr, customize)
elif opname == 'LOOKUP_METHOD':
# A PyPy speciality - DRY with parse2
self.add_unique_rule("load_attr ::= expr LOOKUP_METHOD",
@@ -889,7 +869,8 @@ class Python3Parser(PythonParser):
elif lhs == 'annotate_tuple':
return not isinstance(tokens[first].attr, tuple)
elif lhs == 'kwarg':
return not isinstance(tokens[first].attr, str)
return not (isinstance(tokens[first].attr, unicode) or
isinstance(tokens[first].attr, str))
elif lhs == 'while1elsestmt':
# if SETUP_LOOP target spans the else part, then this is
# not while1else. Also do for whileTrue?

View File

@@ -2,7 +2,6 @@
"""
spark grammar differences over Python 3.1 for Python 3.0.
"""
from __future__ import print_function
from uncompyle6.parser import PythonParserSingle
from uncompyle6.parsers.parse31 import Python31Parser

View File

@@ -2,7 +2,6 @@
"""
spark grammar differences over Python 3.2 for Python 3.1.
"""
from __future__ import print_function
from uncompyle6.parser import PythonParserSingle
from uncompyle6.parsers.parse32 import Python32Parser

View File

@@ -2,8 +2,6 @@
"""
spark grammar differences over Python 3 for Python 3.2.
"""
from __future__ import print_function
from uncompyle6.parser import PythonParserSingle
from uncompyle6.parsers.parse3 import Python3Parser

View File

@@ -2,7 +2,6 @@
"""
spark grammar differences over Python 3.2 for Python 3.3.
"""
from __future__ import print_function
from uncompyle6.parser import PythonParserSingle
from uncompyle6.parsers.parse32 import Python32Parser

View File

@@ -2,7 +2,6 @@
"""
spark grammar differences over Python 3.4 for Python 3.5.
"""
from __future__ import print_function
from uncompyle6.parser import PythonParserSingle
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG

View File

@@ -2,7 +2,6 @@
"""
spark grammar differences over Python 3.5 for Python 3.6.
"""
from __future__ import print_function
from uncompyle6.parser import PythonParserSingle
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
@@ -25,13 +24,12 @@ class Python36Parser(Python35Parser):
func_args36 ::= expr BUILD_TUPLE_0
call_function ::= func_args36 unmapexpr CALL_FUNCTION_EX
call_function ::= func_args36 build_map_unpack_with_call CALL_FUNCTION_EX_KW_1
withstmt ::= expr SETUP_WITH POP_TOP suite_stmts_opt POP_BLOCK LOAD_CONST
WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
call_function ::= expr expr CALL_FUNCTION_EX
call_function ::= expr expr expr CALL_FUNCTION_EX_KW_1
call_function ::= expr expr expr CALL_FUNCTION_EX_KW
"""
def add_custom_rules(self, tokens, customize):
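
The rules above track CPython 3.6's new calling convention, where star-args calls compile to CALL_FUNCTION_EX (with the low argument bit set when a **-mapping is passed). A hedged way to observe this on a 3.6+ interpreter:

import dis

def forward(f, args, kwargs):
    return f(*args, **kwargs)

# On CPython 3.6+ this should show CALL_FUNCTION_EX 1 rather than the
# older CALL_FUNCTION_VAR_KW opcode used by 3.5 and earlier.
dis.dis(forward)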

View File

@@ -10,13 +10,10 @@ scanner/ingestion module. From here we call various version-specific
scanners, e.g. for Python 2.7 or 3.4.
"""
from __future__ import print_function
import sys
from uncompyle6 import PYTHON3, IS_PYPY
from uncompyle6.scanners.tok import Token
from xdis.bytecode import op_size
# The byte code versions we support
PYTHON_VERSIONS = (1.5,
@@ -215,6 +212,9 @@ class Scanner(object):
result.append(offset)
return result
def op_hasArgument(self, op):
return self.op_size(op) > 1
def op_range(self, start, end):
"""
Iterate through positions of opcodes, skipping
@@ -222,7 +222,26 @@ class Scanner(object):
"""
while start < end:
yield start
start += op_size(self.code[start], self.opc)
start += self.op_size(self.code[start])
def next_offset(self, op, offset):
return offset + self.op_size(op)
def op_size(self, op):
"""
Return size of operator with its arguments
for given opcode <op>.
"""
if op < self.opc.HAVE_ARGUMENT:
if self.version >= 3.6:
return 2
else:
return 1
else:
if self.version >= 3.6:
return 2
else:
return 3
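
A hedged sketch of what these sizes mean when walking pre-3.6 bytecode; the opcode numbers below are the standard CPython values, and from 3.6 on every instruction is instead a fixed two-byte "wordcode":

HAVE_ARGUMENT = 90                    # pre-3.6: ops >= 90 carry a 2-byte argument
code = [100, 0, 0,                    # LOAD_CONST 0   -> 3 bytes
        83]                           # RETURN_VALUE   -> 1 byte
offset, offsets = 0, []
while offset < len(code):
    offsets.append(offset)
    if code[offset] < HAVE_ARGUMENT:
        offset += 1
    else:
        offset += 3
# offsets == [0, 3]; under 3.6+ wordcode each step would be 2 bytes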
def remove_mid_line_ifs(self, ifs):
"""
@@ -254,6 +273,9 @@ class Scanner(object):
self.Token = tokenClass
return self.Token
def op_has_argument(op, opc):
return op >= opc.HAVE_ARGUMENT
def parse_fn_counts(argc):
return ((argc & 0xFF), (argc >> 8) & 0xFF, (argc >> 16) & 0x7FFF)

View File

@@ -13,13 +13,13 @@ import uncompyle6.scanners.scanner21 as scan
from xdis.opcodes import opcode_15
JUMP_OPs = opcode_15.JUMP_OPs
# We base this off of 2.2 instead of the other way around
# We base this off of 2.1 instead of the other way around
# because we cleaned things up this way.
# The history is that 2.7 support is the cleanest,
# then from that we got 2.6 and so on.
class Scanner15(scan.Scanner21):
def __init__(self, show_asm=False):
scan.Scanner21.__init__(self, show_asm)
scan.Scanner21.__init__(self, show_asm=False)
self.opc = opcode_15
self.opname = opcode_15.opname
self.version = 1.5

View File

@@ -20,20 +20,23 @@ For example:
Finally we save token information.
"""
from __future__ import print_function
from uncompyle6 import PYTHON_VERSION
if PYTHON_VERSION < 2.6:
from xdis.namedtuple25 import namedtuple
else:
from collections import namedtuple
from collections import namedtuple
from array import array
from uncompyle6.scanner import L65536
from uncompyle6.scanner import op_has_argument
from xdis.code import iscode
from xdis.bytecode import op_has_argument, op_size
from uncompyle6.scanner import Scanner
import uncompyle6.scanner as scan
class Scanner2(Scanner):
class Scanner2(scan.Scanner):
def __init__(self, version, show_asm=None, is_pypy=False):
Scanner.__init__(self, version, show_asm, is_pypy)
scan.Scanner.__init__(self, version, show_asm, is_pypy)
self.pop_jump_if = frozenset([self.opc.PJIF, self.opc.PJIT])
self.jump_forward = frozenset([self.opc.JUMP_ABSOLUTE, self.opc.JUMP_FORWARD])
# This is the 2.5+ default
@@ -85,7 +88,9 @@ class Scanner2(Scanner):
cause specific rules for the specific number of arguments they take.
"""
show_asm = self.show_asm if not show_asm else show_asm
if not show_asm:
show_asm = self.show_asm
# show_asm = 'after'
if show_asm in ('both', 'before'):
from xdis.bytecode import Bytecode
@@ -187,7 +192,7 @@ class Scanner2(Scanner):
oparg = self.get_argument(offset) + extended_arg
extended_arg = 0
if op == self.opc.EXTENDED_ARG:
extended_arg = oparg * L65536
extended_arg = oparg * scan.L65536
continue
if op in self.opc.hasconst:
const = co.co_consts[oparg]
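
L65536 (defined in uncompyle6.scanner) is just 1 << 16; on pre-3.6 bytecode an EXTENDED_ARG instruction supplies the high 16 bits of the following instruction's argument. A minimal arithmetic sketch:

L65536 = 65536                 # 1 << 16, mirroring uncompyle6.scanner
ext_arg = 2                    # argument of the EXTENDED_ARG instruction
next_arg = 5                   # raw argument of the instruction that follows
effective = ext_arg * L65536 + next_arg
# effective == 131077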
@@ -328,7 +333,7 @@ class Scanner2(Scanner):
for i in self.op_range(0, n):
op = self.code[i]
self.prev.append(i)
if op_has_argument(op, self.opc):
if self.op_hasArgument(op):
self.prev.append(i)
self.prev.append(i)
pass
@@ -381,7 +386,7 @@ class Scanner2(Scanner):
if elem != code[i]:
match = False
break
i += op_size(code[i], self.opc)
i += self.op_size(code[i])
if match:
i = self.prev[i]
@@ -618,7 +623,7 @@ class Scanner2(Scanner):
'start': jump_back+3,
'end': end})
elif op == self.opc.SETUP_EXCEPT:
start = offset + op_size(op, self.opc)
start = offset + self.op_size(op)
target = self.get_target(offset, op)
end = self.restrict_to_parent(target, parent)
if target != end:
@@ -642,7 +647,7 @@ class Scanner2(Scanner):
setup_except_nest -= 1
elif self.code[end_finally_offset] == self.opc.SETUP_EXCEPT:
setup_except_nest += 1
end_finally_offset += op_size(code[end_finally_offset], self.opc)
end_finally_offset += self.op_size(code[end_finally_offset])
pass
# Add the except blocks
@@ -843,7 +848,7 @@ class Scanner2(Scanner):
else:
# We still have the case in 2.7 that the next instruction
# is a jump to a SETUP_LOOP target.
next_offset = target + op_size(self.code[target], self.opc)
next_offset = target + self.op_size(self.code[target])
next_op = self.code[next_offset]
if self.op_name(next_op) == 'JUMP_FORWARD':
jump_target = self.get_target(next_offset, next_op)
@@ -1024,8 +1029,10 @@ class Scanner2(Scanner):
# FIXME: rocky: I think we need something like this...
if offset not in set(self.ignore_if) or self.version == 2.7:
source = (self.setup_loops[label]
if label in self.setup_loops else offset)
if label in self.setup_loops:
source = self.setup_loops[label]
else:
source = offset
targets[label] = targets.get(label, []) + [source]
pass

View File

@@ -19,7 +19,7 @@ JUMP_OPs = opcode_21.JUMP_OPs
# then from that we got 2.6 and so on.
class Scanner21(scan.Scanner22):
def __init__(self, show_asm=False):
scan.Scanner22.__init__(self, show_asm)
scan.Scanner22.__init__(self, show_asm=False)
self.opc = opcode_21
self.opname = opcode_21.opname
self.version = 2.1

View File

@@ -19,7 +19,7 @@ JUMP_OPs = opcode_22.JUMP_OPs
# then from that we got 2.6 and so on.
class Scanner22(scan.Scanner23):
def __init__(self, show_asm=False):
scan.Scanner23.__init__(self, show_asm)
scan.Scanner23.__init__(self, show_asm=False)
self.opc = opcode_22
self.opname = opcode_22.opname
self.version = 2.2

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2015-2017 by Rocky Bernstein
# Copyright (c) 2015, 2016 by Rocky Bernstein
# Copyright (c) 2005 by Dan Pascu <dan@windowmaker.org>
# Copyright (c) 2000-2002 by hartmut Goebel <h.goebel@crazy-compilers.com>
"""
@@ -15,7 +15,6 @@ if PYTHON3:
intern = sys.intern
import uncompyle6.scanners.scanner2 as scan
from uncompyle6.scanner import L65536
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_26
@@ -87,7 +86,9 @@ class Scanner26(scan.Scanner2):
cause specific rules for the specific number of arguments they take.
"""
show_asm = self.show_asm if not show_asm else show_asm
if not show_asm:
show_asm = self.show_asm
# show_asm = 'after'
if show_asm in ('both', 'before'):
from xdis.bytecode import Bytecode
@@ -179,7 +180,7 @@ class Scanner26(scan.Scanner2):
oparg = self.get_argument(offset) + extended_arg
extended_arg = 0
if op == self.opc.EXTENDED_ARG:
extended_arg = oparg * L65536
extended_arg = oparg * scan.L65536
continue
if op in self.opc.hasconst:
const = co.co_consts[oparg]

View File

@@ -7,8 +7,6 @@ grammar parsing.
"""
from __future__ import print_function
from uncompyle6.scanners.scanner2 import Scanner2
from uncompyle6 import PYTHON3

View File

@@ -20,16 +20,19 @@ For example:
Finally we save token information.
"""
from __future__ import print_function
from uncompyle6 import PYTHON_VERSION
if PYTHON_VERSION < 2.6:
from xdis.namedtuple25 import namedtuple
else:
from collections import namedtuple
from collections import namedtuple
from array import array
from uncompyle6.scanner import Scanner
from uncompyle6.scanner import Scanner, op_has_argument
from xdis.code import iscode
from xdis.bytecode import Bytecode, op_has_argument, op_size
from xdis.bytecode import Bytecode
from uncompyle6.scanner import Token, parse_fn_counts
import xdis
# Get all the opcodes into globals
import xdis.opcodes.opcode_33 as op3
@@ -162,8 +165,10 @@ class Scanner3(Scanner):
cause specific rules for the specific number of arguments they take.
"""
show_asm = self.show_asm if not show_asm else show_asm
# show_asm = 'both'
if not show_asm:
show_asm = self.show_asm
# show_asm = 'after'
if show_asm in ('both', 'before'):
bytecode = Bytecode(co, self.opc)
for instr in bytecode.get_instructions(co):
@@ -469,7 +474,7 @@ class Scanner3(Scanner):
self.prev = self.prev_op = [0]
for offset in self.op_range(0, codelen):
op = code[offset]
for _ in range(op_size(op, self.opc)):
for _ in range(self.op_size(op)):
self.prev_op.append(offset)
def find_jump_targets(self, debug):
@@ -519,7 +524,7 @@ class Scanner3(Scanner):
oparg = code[offset+1]
else:
oparg = code[offset+1] + code[offset+2] * 256
next_offset = xdis.next_offset(op, self.opc, offset)
next_offset = self.next_offset(op, offset)
if label is None:
if op in op3.hasjrel and op != self.opc.FOR_ITER:
@@ -565,7 +570,7 @@ class Scanner3(Scanner):
if elem != code[i]:
match = False
break
i += op_size(code[i], self.opc)
i += self.op_size(code[i])
if match is True:
i = self.prev_op[i]
@@ -758,7 +763,7 @@ class Scanner3(Scanner):
'start': jump_back+3,
'end': end})
elif op in self.pop_jump_tf:
start = offset + op_size(op, self.opc)
start = offset + self.op_size(op)
target = self.get_target(offset)
rtarget = self.restrict_to_parent(target, parent)
prev_op = self.prev_op
@@ -933,9 +938,9 @@ class Scanner3(Scanner):
# not from SETUP_EXCEPT
next_op = rtarget
if code[next_op] == self.opc.POP_BLOCK:
next_op += op_size(self.code[next_op], self.opc)
next_op += self.op_size(self.code[next_op])
if code[next_op] == self.opc.JUMP_ABSOLUTE:
next_op += op_size(self.code[next_op], self.opc)
next_op += self.op_size(self.code[next_op])
if next_op in targets:
for try_op in targets[next_op]:
come_from_op = code[try_op]
@@ -958,12 +963,12 @@ class Scanner3(Scanner):
end = self.restrict_to_parent(target, parent)
self.fixed_jumps[offset] = end
elif op == self.opc.POP_EXCEPT:
next_offset = xdis.next_offset(op, self.opc, offset)
next_offset = self.next_offset(op, offset)
target = self.get_target(next_offset)
if target > next_offset:
next_op = code[next_offset]
if (self.opc.JUMP_ABSOLUTE == next_op and
END_FINALLY != code[xdis.next_offset(next_op, self.opc, next_offset)]):
END_FINALLY != code[self.next_offset(next_op, next_offset)]):
self.fixed_jumps[next_offset] = target
self.except_targets[target] = next_offset

View File

@@ -6,12 +6,8 @@ This sets up opcodes Python's 3.0 and calls a generalized
scanner routine for Python 3.
"""
from __future__ import print_function
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_30 as opc
from xdis.bytecode import op_size
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs)
JUMP_TF = frozenset([opc.JUMP_IF_FALSE, opc.JUMP_IF_TRUE])
@@ -135,7 +131,7 @@ class Scanner30(Scanner3):
'start': jump_back+3,
'end': end})
elif op in JUMP_TF:
start = offset + op_size(op, self.opc)
start = offset + self.op_size(op)
target = self.get_target(offset)
rtarget = self.restrict_to_parent(target, parent)
prev_op = self.prev_op
@@ -307,9 +303,9 @@ class Scanner30(Scanner3):
# not from SETUP_EXCEPT
next_op = rtarget
if code[next_op] == self.opc.POP_BLOCK:
next_op += op_size(self.code[next_op], self.opc)
next_op += self.op_size(self.code[next_op])
if code[next_op] == self.opc.JUMP_ABSOLUTE:
next_op += op_size(self.code[next_op], self.opc)
next_op += self.op_size(self.code[next_op])
if next_op in targets:
for try_op in targets[next_op]:
come_from_op = code[try_op]

View File

@@ -6,8 +6,6 @@ This sets up opcodes Python's 3.1 and calls a generalized
scanner routine for Python 3.
"""
from __future__ import print_function
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_31 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs)

View File

@@ -6,8 +6,6 @@ This sets up opcodes Python's 3.2 and calls a generalized
scanner routine for Python 3.
"""
from __future__ import print_function
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_32 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs)

View File

@@ -6,8 +6,6 @@ This sets up opcodes Python's 3.3 and calls a generalized
scanner routine for Python 3.
"""
from __future__ import print_function
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_33 as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs)

View File

@@ -6,8 +6,6 @@ This sets up opcodes Python's 3.4 and calls a generalized
scanner routine for Python 3.
"""
from __future__ import print_function
from xdis.opcodes import opcode_34 as opc
# bytecode verification, verify(), uses JUMP_OPs from here

View File

@@ -6,8 +6,6 @@ This sets up opcodes Python's 3.5 and calls a generalized
scanner routine for Python 3.
"""
from __future__ import print_function
from uncompyle6.scanners.scanner3 import Scanner3
# bytecode verification, verify(), uses JUMP_OPs from here

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016-2017 by Rocky Bernstein
# Copyright (c) 2016 by Rocky Bernstein
"""
Python 3.6 bytecode scanner/deparser
@@ -6,8 +6,6 @@ This sets up opcodes Python's 3.6 and calls a generalized
scanner routine for Python 3.
"""
from __future__ import print_function
from uncompyle6.scanners.scanner3 import Scanner3
# bytecode verification, verify(), uses JUMP_OPs from here
@@ -28,12 +26,8 @@ class Scanner36(Scanner3):
if t.op == self.opc.CALL_FUNCTION_EX and t.attr & 1:
t.type = 'CALL_FUNCTION_EX_KW'
pass
elif t.op == self.opc.CALL_FUNCTION_KW:
if t.op == self.opc.CALL_FUNCTION_KW:
t.type = 'CALL_FUNCTION_KW_{t.attr}'.format(**locals())
elif t.op == self.opc.BUILD_TUPLE_UNPACK_WITH_CALL:
t.type = 'BUILD_TUPLE_UNPACK_WITH_CALL_%d' % t.attr
elif t.op == self.opc.BUILD_MAP_UNPACK_WITH_CALL:
t.type = 'BUILD_MAP_UNPACK_WITH_CALL_%d' % t.attr
pass
return tokens, customize
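
The renaming above specializes generic 3.6 opcodes into argument-count-specific token types so grammar rules can match on them. A hedged illustration of the resulting names:

attr = 3
t_type = 'CALL_FUNCTION_KW_%d' % attr            # -> 'CALL_FUNCTION_KW_3'
u_type = 'BUILD_MAP_UNPACK_WITH_CALL_%d' % 2     # -> 'BUILD_MAP_UNPACK_WITH_CALL_2'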

View File

@@ -56,12 +56,18 @@ class Token:
return self.format(line_prefix='')
def format(self, line_prefix=''):
prefix = ('\n%s%4d ' % (line_prefix, self.linestart)
if self.linestart else (' ' * (6 + len(line_prefix))))
if self.linestart:
prefix = '\n%s%4d ' % (line_prefix, self.linestart)
else:
prefix = ' ' * (6 + len(line_prefix))
offset_opname = '%6s %-17s' % (self.offset, self.type)
if not self.has_arg:
return "%s%s" % (prefix, offset_opname)
argstr = "%6d " % self.attr if isinstance(self.attr, int) else (' '*7)
if isinstance(self.attr, int):
argstr = "%6d " % self.attr
else:
argstr = ' '*7
if self.pattr:
pattr = self.pattr
if self.opc:

View File

@@ -44,45 +44,53 @@ do it recursively which is where offsets are probably located.
For example in:
'importmultiple': ( '%|import%b %c%c\n', 0, 2, 3 ),
The node position 0 will be associated with "import".
"""
# FIXME: DRY code with pysource
from __future__ import print_function
import re, sys
from uncompyle6 import PYTHON3, IS_PYPY, PYTHON_VERSION
from xdis.code import iscode
from uncompyle6.semantics import pysource
from uncompyle6 import parser
from uncompyle6.scanner import Token, Code, get_scanner
from uncompyle6.semantics.check_ast import checker
from uncompyle6.semantics.helper import print_docstring
from uncompyle6.show import (
maybe_show_asm,
maybe_show_ast,
maybe_show_ast_param_default,
)
from uncompyle6.parsers.astnode import AST
from uncompyle6.semantics.pysource import (
ParserError, StringIO)
ParserError, find_globals, StringIO)
from uncompyle6.semantics.consts import (
INDENT_PER_LEVEL, NONE, PRECEDENCE,
TABLE_DIRECT, escape, minint, MAP
)
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
from spark_parser.ast import GenericASTTraversalPruningException
from uncompyle6.semantics.make_function import (
find_all_globals, find_none, code_has_star_arg, code_has_star_star_arg
)
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
if PYTHON_VERSION < 2.6:
from xdis.namedtuple25 import namedtuple
else:
from collections import namedtuple
from collections import namedtuple
NodeInfo = namedtuple("NodeInfo", "node start finish")
ExtractInfo = namedtuple("ExtractInfo",
"lineNo lineStartOffset markerLine selectedLine selectedText nonterminal")
"lineNo lineStartOffset markerLine selectedLine selectedText")
TABLE_DIRECT_FRAGMENT = {
'break_stmt': ( '%|%rbreak\n', ),
@@ -160,9 +168,8 @@ class FragmentsWalker(pysource.SourceWalker, object):
def set_pos_info(self, node, start, finish, name=None):
if name is None: name = self.name
if hasattr(node, 'offset'):
node.start = start
node.finish = finish
self.offsets[name, node.offset] = node
self.offsets[name, node.offset] = \
NodeInfo(node = node, start = start, finish = finish)
if hasattr(node, 'parent'):
assert node.parent != node
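
The offsets table built here maps a (code-object name, bytecode offset) pair to a NodeInfo whose start/finish index into the generated source text. A self-contained, hedged sketch of that lookup (the dictionary contents below are illustrative, not produced by a real run):

from collections import namedtuple

NodeInfo = namedtuple("NodeInfo", "node start finish")
source_text = "x = foo(1, 2)\n"                   # stand-in for the deparsed output
offsets = {('some_name', 42): NodeInfo(node=None, start=4, finish=13)}

info = offsets['some_name', 42]
print(source_text[info.start:info.finish])        # -> 'foo(1, 2)'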
@@ -178,34 +185,6 @@ class FragmentsWalker(pysource.SourceWalker, object):
return
def table_r_node(self, node):
"""General pattern where the last node should should
get the text span attributes of the entire tree"""
start = len(self.f.getvalue())
try:
self.default(node)
except GenericASTTraversalPruningException:
final = len(self.f.getvalue())
self.set_pos_info(node, start, final)
self.set_pos_info(node[-1], start, final)
raise GenericASTTraversalPruningException
n_slice0 = n_slice1 = n_slice2 = n_slice3 = n_binary_subscr = table_r_node
n_augassign_1 = n_print_item = exec_stmt = print_to_item = del_stmt = table_r_node
n_classdefco1 = n_classdefco2 = except_cond1 = except_cond2 = table_r_node
def n_passtmt(self, node):
start = len(self.f.getvalue()) + len(self.indent)
self.set_pos_info(node, start, start+len("pass"))
self.default(node)
def n_trystmt(self, node):
start = len(self.f.getvalue()) + len(self.indent)
self.set_pos_info(node[0], start, start+len("try:"))
self.default(node)
n_tryelsestmt = n_tryelsestmtc = n_tryelsestmtl = n_tryfinallystmt = n_trystmt
def n_return_stmt(self, node):
start = len(self.f.getvalue()) + len(self.indent)
if self.params['isLambda']:
@@ -259,7 +238,6 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.write(' ')
node[0].parent = node
self.preorder(node[0])
self.set_pos_info(node[-1], start, len(self.f.getvalue()))
self.set_pos_info(node, start, len(self.f.getvalue()))
self.prune() # stop recursing
@@ -397,23 +375,15 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.write(sep); sep = ", "
self.preorder(subnode)
self.set_pos_info(node, start, len(self.f.getvalue()))
self.set_pos_info(node[-1], start, len(self.f.getvalue()))
self.println()
self.prune() # stop recursing
def n_ifelsestmtr(self, node):
if node[2] == 'COME_FROM':
return_stmts_node = node[3]
node.type = 'ifelsestmtr2'
else:
return_stmts_node = node[2]
if len(return_stmts_node) != 2:
if len(node[2]) != 2:
self.default(node)
if (not (return_stmts_node[0][0][0] == 'ifstmt'
and return_stmts_node[0][0][0][1][0] == 'return_if_stmts')
and not (return_stmts_node[0][-1][0] == 'ifstmt'
and return_stmts_node[0][-1][0][1][0] == 'return_if_stmts')):
if not (node[2][0][0][0] == 'ifstmt' and node[2][0][0][0][1][0] == 'return_if_stmts') \
and not (node[2][0][-1][0] == 'ifstmt' and node[2][0][-1][0][1][0] == 'return_if_stmts'):
self.default(node)
return
@@ -434,7 +404,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
past_else = False
prev_stmt_is_if_ret = True
for n in return_stmts_node[0]:
for n in node[2][0]:
if (n[0] == 'ifstmt' and n[0][1][0] == 'return_if_stmts'):
if prev_stmt_is_if_ret:
n[0].type = 'elifstmt'
@@ -516,20 +486,17 @@ class FragmentsWalker(pysource.SourceWalker, object):
# LOAD_CONST code object ..
# LOAD_CONST 'x0' if >= 3.3
# MAKE_FUNCTION ..
code_node = node[-3]
elif node[-2] == 'expr':
code_node = node[-2][0]
code_index = -3
else:
# LOAD_CONST code object ..
# MAKE_FUNCTION ..
code_node = node[-2]
func_name = code_node.attr.co_name
code_index = -2
code = node[code_index]
func_name = code.attr.co_name
self.write(func_name)
self.set_pos_info(code_node, start, len(self.f.getvalue()))
self.indentMore()
start = len(self.f.getvalue())
self.make_function(node, isLambda=False, codeNode=code_node)
self.make_function(node, isLambda=False, codeNode=code)
self.set_pos_info(node, start, len(self.f.getvalue()))
@@ -772,7 +739,10 @@ class FragmentsWalker(pysource.SourceWalker, object):
def n_genexpr(self, node):
start = len(self.f.getvalue())
self.write('(')
code_index = -6 if self.version > 3.2 else -5
if self.version > 3.2:
code_index = -6
else:
code_index = -5
self.comprehension_walk(node, iter_index=3, code_index=code_index)
self.write(')')
self.set_pos_info(node, start, len(self.f.getvalue()))
@@ -926,7 +896,10 @@ class FragmentsWalker(pysource.SourceWalker, object):
subclass = n.attr
break
pass
subclass_info = node if node == 'classdefdeco2' else node[0]
if node == 'classdefdeco2':
subclass_info = node
else:
subclass_info = node[0]
elif buildclass[1][0] == 'load_closure':
# Python 3 with closures not functions
load_closure = buildclass[1]
@@ -950,7 +923,10 @@ class FragmentsWalker(pysource.SourceWalker, object):
subclass = buildclass[1][0].attr
subclass_info = node[0]
else:
buildclass = node if (node == 'classdefdeco2') else node[0]
if node == 'classdefdeco2':
buildclass = node
else:
buildclass = node[0]
build_list = buildclass[1][0]
if hasattr(buildclass[-3][0], 'attr'):
subclass = buildclass[-3][0].attr
@@ -1016,7 +992,9 @@ class FragmentsWalker(pysource.SourceWalker, object):
tokens.append(Token('LAMBDA_MARKER'))
try:
ast = parser.parse(self.p, tokens, customize)
except (parser.ParserError, AssertionError) as e:
except parser.ParserError(e):
raise ParserError(e, tokens)
except AssertionError(e):
raise ParserError(e, tokens)
maybe_show_ast(self.showast, ast)
return ast
@@ -1041,7 +1019,9 @@ class FragmentsWalker(pysource.SourceWalker, object):
# Build AST from disassembly.
try:
ast = parser.parse(self.p, tokens, customize)
except (parser.ParserError, AssertionError) as e:
except parser.ParserError(e):
raise ParserError(e, tokens)
except AssertionError(e):
raise ParserError(e, tokens)
maybe_show_ast(self.showast, ast)
@@ -1121,6 +1101,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
def traverse(self, node, indent=None, isLambda=False):
'''Builds up a fragment which can be used inside a larger
block of code'''
self.param_stack.append(self.params)
if indent is None: indent = self.indent
p = self.pending_newlines
@@ -1215,38 +1196,16 @@ class FragmentsWalker(pysource.SourceWalker, object):
if elided: selectedLine += ' ...'
if isinstance(nodeInfo, Token):
nodeInfo = nodeInfo.parent
else:
nodeInfo = nodeInfo
if isinstance(nodeInfo, AST):
nonterminal = nodeInfo[0]
else:
nonterminal = nodeInfo.node
return ExtractInfo(lineNo = len(lines), lineStartOffset = lineStart,
markerLine = markerLine,
selectedLine = selectedLine,
selectedText = selectedText,
nonterminal = nonterminal)
selectedText = selectedText)
def extract_line_info(self, name, offset):
if (name, offset) not in list(self.offsets.keys()):
return None
return self.extract_node_info(self.offsets[name, offset])
def prev_node(self, node):
prev = None
if not hasattr(node, 'parent'):
return prev
p = node.parent
for n in p:
if node == n:
return prev
prev = n
return prev
def extract_parent_info(self, node):
if not hasattr(node, 'parent'):
return None, None
@@ -1610,14 +1569,25 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.set_pos_info(startnode, startnode_start, fin)
# FIXME rocky: figure out how to get these cases to be table driven.
#
# 1. for loops. For loops have two positions that correspond to a single text
# location. In "for i in ..." there is the initialization "i" code as well
# as the iteration code with "i". A "copy" spec like %X3,3 - copy param
# 3 to param 2 would work
#
# 2. subroutine calls. If the last op is the call, then for purposes of printing
# we don't need to print anything special there. However it encompasses the
# entire string of the node fn(...)
match = re.search(r'^call_function', startnode.type)
match = re.search(r'^try', startnode.type)
if match:
last_node = startnode[-1]
# import traceback; traceback.print_stack()
self.set_pos_info(last_node, startnode_start, self.last_finish)
self.set_pos_info(node[0], startnode_start, startnode_start+len("try:"))
self.set_pos_info(node[2], node[3].finish, node[3].finish)
else:
match = re.search(r'^call_function', startnode.type)
if match:
last_node = startnode[-1]
# import traceback; traceback.print_stack()
self.set_pos_info(last_node, startnode_start, self.last_finish)
return
@classmethod
@@ -1699,13 +1669,6 @@ def deparse_code(version, co, out=StringIO(), showasm=False, showast=False,
if deparsed.ERROR:
raise deparsed.ERROR
# To keep the API consistent with previous releases, convert
# deparse.offset values into NodeInfo items
for tup, node in deparsed.offsets.items():
deparsed.offsets[tup] = NodeInfo(node = node, start = node.start,
finish = node.finish)
deparsed.scanner = scanner
return deparsed
from bisect import bisect_right
@@ -1727,7 +1690,7 @@ def deparse_code_around_offset(name, offset, version, co, out=StringIO(),
deparsed = deparse_code(version, co, out, showasm, showast, showgrammar, is_pypy)
if (name, offset) in deparsed.offsets.keys():
# This is the easy case
return deparsed
return deparsed.offsets[name, offset]
valid_offsets = [t for t in deparsed.offsets if isinstance(t[1], int)]
offset_list = sorted([t[1] for t in valid_offsets if t[0] == name])
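
A hedged usage sketch of the helper above: given a code object and its bytecode version, it returns the deparsed fragment information nearest the requested offset. The module path and the offset 0 are assumptions for the example, not verified output:

import sys
from uncompyle6.semantics.fragments import deparse_code_around_offset

def demo():
    co = demo.__code__                                   # func_code on very old Pythons
    version = sys.version_info[0] + sys.version_info[1] / 10.0
    return deparse_code_around_offset(co.co_name, 0, version, co)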
@@ -1740,7 +1703,6 @@ def deparse_code_around_offset(name, offset, version, co, out=StringIO(),
if __name__ == '__main__':
from uncompyle6 import IS_PYPY
def deparse_test(co, is_pypy=IS_PYPY):
sys_version = sys.version_info.major + (sys.version_info.minor / 10.0)
walk = deparse_code(sys_version, co, showasm=False, showast=False,

View File

@@ -6,14 +6,9 @@ All the crazy things we have to do to handle Python functions
from xdis.code import iscode
from uncompyle6.scanner import Code
from uncompyle6.parsers.astnode import AST
from uncompyle6 import PYTHON3
from uncompyle6.semantics.parser_error import ParserError
from uncompyle6.semantics.helper import print_docstring
if PYTHON3:
from itertools import zip_longest
else:
from itertools import izip_longest as zip_longest
from uncompyle6.show import maybe_show_ast_param_default
@@ -139,7 +134,7 @@ def make_function3_annotate(self, node, isLambda, nested=1,
code._customize,
isLambda = isLambda,
noneInNames = ('None' in code.co_names))
except ParserError as p:
except ParserError, p:
self.write(str(p))
self.ERROR = p
return
@@ -163,14 +158,11 @@ def make_function3_annotate(self, node, isLambda, nested=1,
i = len(paramnames) - len(defparams)
suffix = ''
no_paramnames = len(paramnames[:i]) == 0
for param in paramnames[:i]:
self.write(suffix, param)
suffix = ', '
if param in annotate_tuple[0].attr:
p = annotate_tuple[0].attr.index(param)
p = [x for x in annotate_tuple[0].attr].index(param)
self.write(': ')
self.preorder(node[p])
if (line_number != self.line_number):
@@ -182,10 +174,12 @@ def make_function3_annotate(self, node, isLambda, nested=1,
# else:
# self.write(': %s' % value)
suffix = ', ' if i > 0 else ''
if i > 0:
suffix = ', '
else:
suffix = ''
for n in node:
if n == 'pos_arg':
no_paramnames = False
self.write(suffix)
param = paramnames[i]
self.write(param)
@@ -193,11 +187,7 @@ def make_function3_annotate(self, node, isLambda, nested=1,
aa = annotate_args[param]
if isinstance(aa, tuple):
aa = aa[0]
self.write(': "%s"' % aa)
elif isinstance(aa, AST):
self.write(': ')
self.preorder(aa)
self.write(': "%s"' % aa)
self.write('=')
i += 1
self.preorder(n)
@@ -210,65 +200,64 @@ def make_function3_annotate(self, node, isLambda, nested=1,
# self.println(indent, '#flags:\t', int(code.co_flags))
if kw_args + annotate_argc > 0:
if no_paramnames:
if not code_has_star_arg(code):
if argc > 0:
self.write(", *, ")
else:
self.write("*, ")
pass
else:
self.write(", ")
if not code_has_star_arg(code):
if argc > 0:
kwargs = node[0]
last = len(kwargs)-1
i = 0
for n in node[0]:
if n == 'kwarg':
if (line_number != self.line_number):
self.write("\n" + indent)
line_number = self.line_number
self.write('%s=' % n[0].pattr)
self.preorder(n[1])
if i < last:
self.write(', ')
i += 1
pass
pass
annotate_args = []
for n in node:
if n == 'annotate_arg':
annotate_args.append(n[0])
elif n == 'annotate_tuple':
t = n[0].attr
if t[-1] == 'return':
t = t[0:-1]
annotate_args = annotate_args[:-1]
pass
last = len(annotate_args) - 1
for i in range(len(annotate_args)):
self.write("%s: " % (t[i]))
self.preorder(annotate_args[i])
if i < last:
self.write(', ')
pass
pass
break
self.write(", *, ")
else:
self.write("*, ")
pass
else:
self.write(", ")
kwargs = node[0]
last = len(kwargs)-1
i = 0
for n in node[0]:
if n == 'kwarg':
if (line_number != self.line_number):
self.write("\n" + indent)
line_number = self.line_number
self.write('%s=' % n[0].pattr)
self.preorder(n[1])
if i < last:
self.write(', ')
i += 1
pass
pass
annotate_args = []
for n in node:
if n == 'annotate_arg':
annotate_args.append(n[0])
elif n == 'annotate_tuple':
t = n[0].attr
if t[-1] == 'return':
t = t[0:-1]
annotate_args = annotate_args[:-1]
pass
last = len(annotate_args) - 1
for i in range(len(annotate_args)):
self.write("%s: " % (t[i]))
self.preorder(annotate_args[i])
if i < last:
self.write(', ')
pass
pass
break
pass
pass
if code_has_star_star_arg(code):
if argc > 0:
self.write(', ')
self.write('**%s' % code.co_varnames[argc + kw_pairs])
if code_has_star_star_arg(code):
if argc > 0:
self.write(', ')
self.write('**%s' % code.co_varnames[argc + kw_pairs])
if isLambda:
self.write(": ")
else:
self.write(')')
if 'return' in annotate_tuple[0].attr:
if (line_number != self.line_number) and not no_paramnames:
if (line_number != self.line_number):
self.write("\n" + indent)
line_number = self.line_number
self.write(' -> ')
@@ -367,17 +356,21 @@ def make_function2(self, node, isLambda, nested=1, codeNode=None):
code._customize,
isLambda = isLambda,
noneInNames = ('None' in code.co_names))
except ParserError as p:
except ParserError, p:
self.write(str(p))
self.ERROR = p
return
kw_pairs = args_node.attr[1] if self.version >= 3.0 else 0
if self.version >= 3.0:
kw_pairs = args_node.attr[1]
else:
kw_pairs = 0
indent = self.indent
# build parameters
tup = [paramnames, defparams]
params = [build_param(ast, name, default) for
name, default in zip_longest(paramnames, defparams, fillvalue=None)]
name, default in map(lambda *tup:tup, *tup)]
params.reverse() # back to correct order
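
The map(lambda *tup: tup, ...) form above relies on Python 2 behavior: when map() is given a function and several sequences, it pads the shorter ones with None, which lets it stand in for itertools.izip_longest (absent in 2.4). A small, hedged check of the equivalence; note that Python 3's map() stops at the shortest sequence instead:

paramnames = ['a', 'b', 'c']
defparams = [10]                       # only one default supplied
pairs = map(lambda *tup: tup, paramnames, defparams)
# Under Python 2: pairs == [('a', 10), ('b', None), ('c', None)],
# the same pairing zip_longest(..., fillvalue=None) would give.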
if code_has_star_arg(code):
@@ -437,34 +430,10 @@ def make_function2(self, node, isLambda, nested=1, codeNode=None):
def make_function3(self, node, isLambda, nested=1, codeNode=None):
"""Dump function definition, doc string, and function body in
Python version 3.0 and above
"""
"""Dump function definition, doc string, and function body."""
# For Python 3.3, the evaluation stack in MAKE_FUNCTION is:
# * default argument objects in positional order
# * pairs of name and default argument, with the name just below
# the object on the stack, for keyword-only parameters
# * parameter annotation objects
# * a tuple listing the parameter names for the annotations
# (only if there are any annotation objects)
# * the code associated with the function (at TOS1)
# * the qualified name of the function (at TOS)
# For Python 3.0 .. 3.2 the evaluation stack is:
# The function object is defined to have argc default parameters,
# which are found below TOS.
# * first come positional args in the order they are given in the source,
# * next come the keyword args in the order they are given in the source,
# * finally is the code associated with the function (at TOS)
#
# Note: There is no qualified name at TOS
# MAKE_CLOSURE adds an additional closure slot
# Thank you, Python, for such a well-thought-out system that has
# changed 4 or so times.
# FIXME: call make_function3 if we are self.version >= 3.0
# and then simplify the below.
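
To see the 3.3-era layout the comment above describes, disassembling a definition that uses a default, a keyword-only default, and annotations is a reasonable experiment; run it on a 3.3/3.4 interpreter, and treat the expected output as hedged rather than exact:

import dis

source = "def f(a=1, *, b=2) -> int: return a + b"
code_obj = compile(source, "<demo>", "exec")
# The output should show the positional default, the name/value pair for 'b',
# the annotation values and names tuple, the code object, and the qualified
# name being pushed before MAKE_FUNCTION.
dis.dis(code_obj)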
def build_param(ast, name, default):
"""build parameters:
@@ -484,31 +453,21 @@ def make_function3(self, node, isLambda, nested=1, codeNode=None):
# MAKE_FUNCTION_... or MAKE_CLOSURE_...
assert node[-1].type.startswith('MAKE_')
# Python 3.3+ adds a qualified name at TOS (-1)
# moving down the LOAD_LAMBDA instruction
if 3.0 <= self.version <= 3.2:
lambda_index = -2
elif 3.03 <= self.version:
lambda_index = -3
else:
lambda_index = None
args_node = node[-1]
if isinstance(args_node.attr, tuple):
pos_args, kw_args, annotate_argc = args_node.attr
if self.version <= 3.3 and len(node) > 2 and node[lambda_index] != 'LOAD_LAMBDA':
# args are after kwargs; kwargs are bundled as one node
if self.version <= 3.3 and len(node) > 2 and node[-3] != 'LOAD_LAMBDA':
# positional args are after kwargs
defparams = node[1:args_node.attr[0]+1]
else:
# args are before kwargs; kwargs are bundled as one node
# positional args are before kwargs
defparams = node[:args_node.attr[0]]
pos_args, kw_args, annotate_argc = args_node.attr
else:
if self.version < 3.6:
defparams = node[:args_node.attr]
else:
default, kw, annotate, closure = args_node.attr
# FIXME: start here for Python 3.6 and above:
# FIXME: start here.
defparams = []
# if default:
# defparams = node[-(2 + kw + annotate + closure)]
@@ -518,6 +477,12 @@ def make_function3(self, node, isLambda, nested=1, codeNode=None):
kw_args = 0
pass
if 3.0 <= self.version <= 3.2:
lambda_index = -2
elif 3.03 <= self.version:
lambda_index = -3
else:
lambda_index = None
if lambda_index and isLambda and iscode(node[lambda_index].attr):
assert node[lambda_index].type == 'LOAD_LAMBDA'
@@ -533,7 +498,7 @@ def make_function3(self, node, isLambda, nested=1, codeNode=None):
paramnames = list(code.co_varnames[:argc])
# defaults are for last n parameters, thus reverse
if not 3.0 <= self.version <= 3.1:
if not 3.0 <= self.version <= 3.2:
paramnames.reverse(); defparams.reverse()
try:
@@ -541,33 +506,71 @@ def make_function3(self, node, isLambda, nested=1, codeNode=None):
code._customize,
isLambda = isLambda,
noneInNames = ('None' in code.co_names))
except ParserError as p:
except ParserError, p:
self.write(str(p))
self.ERROR = p
return
kw_pairs = args_node.attr[1] if self.version >= 3.0 else 0
if self.version >= 3.0:
kw_pairs = args_node.attr[1]
else:
kw_pairs = 0
indent = self.indent
# build parameters
params = [build_param(ast, name, d) for
name, d in zip_longest(paramnames, defparams, fillvalue=None)]
if not 3.0 <= self.version <= 3.1:
if self.version != 3.2:
tup = [paramnames, defparams]
params = [build_param(ast, name, default) for
name, default in map(lambda *tup:tup, *tup)]
params.reverse() # back to correct order
if code_has_star_arg(code):
if self.version > 3.0:
params.append('*%s' % code.co_varnames[argc + kw_pairs])
else:
params.append('*%s' % code.co_varnames[argc])
argc += 1
if code_has_star_arg(code):
if self.version > 3.0:
params.append('*%s' % code.co_varnames[argc + kw_pairs])
else:
params.append('*%s' % code.co_varnames[argc])
argc += 1
# dump parameter list (with default values)
if isLambda:
self.write("lambda ", ", ".join(params))
else:
self.write("(", ", ".join(params))
# self.println(indent, '#flags:\t', int(code.co_flags))
# dump parameter list (with default values)
if isLambda:
self.write("lambda ", ", ".join(params))
else:
self.write("(", ", ".join(params))
# self.println(indent, '#flags:\t', int(code.co_flags))
if isLambda:
self.write("lambda ")
else:
self.write("(")
pass
last_line = self.f.getvalue().split("\n")[-1]
l = len(last_line)
indent = ' ' * l
line_number = self.line_number
if code_has_star_arg(code):
self.write('*%s' % code.co_varnames[argc + kw_pairs])
argc += 1
i = len(paramnames) - len(defparams)
self.write(", ".join(paramnames[:i]))
if i > 0:
suffix = ', '
else:
suffix = ''
for n in node:
if n == 'pos_arg':
self.write(suffix)
self.write(paramnames[i] + '=')
i += 1
self.preorder(n)
if (line_number != self.line_number):
suffix = ",\n" + indent
line_number = self.line_number
else:
suffix = ', '
if kw_args > 0:
if not (4 & code.co_flags):

View File

@@ -67,8 +67,6 @@ methods implement most of the below.
The '%' may optionally be followed by a number (C) in square brackets, which
makes the engine walk down to N[C] before evaluating the escape code.
"""
from __future__ import print_function
import sys
from uncompyle6 import PYTHON3
@@ -311,13 +309,6 @@ class SourceWalker(GenericASTTraversal, object):
if node[annotate_last] == 'annotate_tuple':
break
# FIXME: the real situation is that when derived from
# funcdef_annotate the name has been filled in,
# but when derived from funcdefdeco it hasn't. Would like a better
# way to distinguish the two.
if self.f.getvalue()[-4:] == 'def ':
self.write(code.attr.co_name)
# FIXME: handle and pass full annotate args
make_function3_annotate(self, node, isLambda=False,
codeNode=code, annotate_last=annotate_last)
@@ -365,33 +356,9 @@ class SourceWalker(GenericASTTraversal, object):
self.prec = p
self.prune()
self.n_async_call_function = n_async_call_function
self.n_build_list_unpack = self.n_build_list
if version == 3.5:
def n_call_function(node):
mapping = self._get_mapping(node)
table = mapping[0]
key = node
for i in mapping[1:]:
key = key[i]
pass
if key.type.startswith('CALL_FUNCTION_VAR_KW'):
# Python 3.5 changes the stack position of *args. kwargs come
# after *args whereas in earlier Pythons, *args is at the end
# which simplifies things from our perspective.
# Python 3.6+ replaces CALL_FUNCTION_VAR_KW with CALL_FUNCTION_EX
# We will just swap the order to make it look like earlier Python 3.
entry = table[key.type]
kwarg_pos = entry[2][1]
args_pos = kwarg_pos - 1
# Put last node[args_pos] after subsequent kwargs
while node[kwarg_pos] == 'kwarg' and kwarg_pos < len(node):
# swap node[args_pos] with node[kwargs_pos]
node[kwarg_pos], node[args_pos] = node[args_pos], node[kwarg_pos]
args_pos = kwarg_pos
kwarg_pos += 1
self.default(node)
self.n_call_function = n_call_function
def n_funcdef(node):
if self.version == 3.6:
@@ -556,8 +523,6 @@ class SourceWalker(GenericASTTraversal, object):
if self.pending_newlines:
out = out[:-self.pending_newlines]
if isinstance(out, str) and not PYTHON3:
out = unicode(out, 'utf-8')
self.f.write(out)
def println(self, *data):
@@ -580,6 +545,26 @@ class SourceWalker(GenericASTTraversal, object):
node == AST('return_stmt',
[AST('ret_expr', [NONE]), Token('RETURN_VALUE')]))
def n_continue_stmt(self, node):
if self.version >= 3.0 and node[0] == 'CONTINUE':
t = node[0]
if not t.linestart:
# Artificially-added "continue" statements derived from JUMP_ABSOLUTE
# don't have line numbers associated with them.
# If this CONTINUE is to the same target as a JUMP_ABSOLUTE following it,
# then the "continue" can be suppressed.
op, offset = t.op, t.offset
next_offset = self.scanner.next_offset(op, offset)
scanner = self.scanner
code = scanner.code
if next_offset < len(code):
next_inst = code[next_offset]
if (scanner.opc.opname[next_inst] == 'JUMP_ABSOLUTE'
and t.pattr == code[next_offset+1]):
# Suppress "continue"
self.prune()
self.default(node)
def n_return_stmt(self, node):
if self.params['isLambda']:
self.preorder(node[0])
@@ -913,19 +898,19 @@ class SourceWalker(GenericASTTraversal, object):
# LOAD_CONST code object ..
# LOAD_CONST 'x0' if >= 3.3
# MAKE_FUNCTION ..
code_node = node[-3]
code = node[-3]
elif node[-2] == 'expr':
code_node = node[-2][0]
code = node[-2][0]
else:
# LOAD_CONST code object ..
# MAKE_FUNCTION ..
code_node = node[-2]
code = node[-2]
func_name = code_node.attr.co_name
func_name = code.attr.co_name
self.write(func_name)
self.indentMore()
self.make_function(node, isLambda=False, codeNode=code_node)
self.make_function(node, isLambda=False, codeNode=code)
if len(self.param_stack) > 1:
self.write('\n\n')
@@ -956,7 +941,10 @@ class SourceWalker(GenericASTTraversal, object):
return
n = node[-1]
elif node[-1] == 'del_stmt':
n = node[-3] if node[-2] == 'JUMP_BACK' else node[-2]
if node[-2] == 'JUMP_BACK':
n = node[-3]
else:
n = node[-2]
assert n == 'list_iter'
@@ -974,7 +962,10 @@ class SourceWalker(GenericASTTraversal, object):
list_iter = node[-1]
else:
expr = n[1]
list_iter = node[-3] if node[-2] == 'JUMP_BACK' else node[-2]
if node[-2] == 'JUMP_BACK':
list_iter = node[-3]
else:
list_iter = node[-2]
assert expr == 'expr'
assert list_iter == 'list_iter'
@@ -1026,7 +1017,10 @@ class SourceWalker(GenericASTTraversal, object):
self.write( '[ ')
expr = n[0]
list_iter = node[-2] if self.is_pypy and node[-1] == 'JUMP_BACK' else node[-1]
if self.is_pypy and node[-1] == 'JUMP_BACK':
list_iter = node[-2]
else:
list_iter = node[-1]
assert expr == 'expr'
assert list_iter == 'list_iter'
@@ -1100,7 +1094,10 @@ class SourceWalker(GenericASTTraversal, object):
self.write(' for ')
self.preorder(ast[iter_index-1])
self.write(' in ')
iter_expr = node[2] if node[2] == 'expr' else node[-3]
if node[2] == 'expr':
iter_expr = node[2]
else:
iter_expr = node[-3]
assert iter_expr == 'expr'
self.preorder(iter_expr)
self.preorder(ast[iter_index])
@@ -1108,7 +1105,10 @@ class SourceWalker(GenericASTTraversal, object):
def n_genexpr(self, node):
self.write('(')
code_index = -6 if self.version > 3.2 else -5
if self.version > 3.2:
code_index = -6
else:
code_index = -5
self.comprehension_walk(node, iter_index=3, code_index=code_index)
self.write(')')
self.prune()
@@ -1349,7 +1349,10 @@ class SourceWalker(GenericASTTraversal, object):
break
pass
pass
subclass_info = node if node == 'classdefdeco2' else node[0]
if node == 'classdefdeco2':
subclass_info = node
else:
subclass_info = node[0]
elif buildclass[1][0] == 'load_closure':
# Python 3 with closures not functions
load_closure = buildclass[1]
@@ -1376,7 +1379,10 @@ class SourceWalker(GenericASTTraversal, object):
subclass_code = buildclass[1][0].attr
subclass_info = node[0]
else:
buildclass = node if (node == 'classdefdeco2') else node[0]
if node == 'classdefdeco2':
buildclass = node
else:
buildclass = node[0]
build_list = buildclass[1][0]
if hasattr(buildclass[-3][0], 'attr'):
subclass_code = buildclass[-3][0].attr
@@ -2069,7 +2075,9 @@ class SourceWalker(GenericASTTraversal, object):
tokens.append(Token('LAMBDA_MARKER'))
try:
ast = python_parser.parse(self.p, tokens, customize)
except (python_parser.ParserError, AssertionError) as e:
except python_parser.ParserError, e:
raise ParserError(e, tokens)
except AssertionError, e:
raise ParserError(e, tokens)
maybe_show_ast(self.showast, ast)
return ast
@@ -2096,7 +2104,9 @@ class SourceWalker(GenericASTTraversal, object):
# Build AST from disassembly.
try:
ast = python_parser.parse(self.p, tokens, customize)
except (python_parser.ParserError, AssertionError) as e:
except python_parser.ParserError, e:
raise ParserError(e, tokens)
except AssertionError, e:
raise ParserError(e, tokens)
maybe_show_ast(self.showast, ast)
@@ -2174,10 +2184,10 @@ def deparse_code(version, co, out=sys.stdout, showasm=None, showast=False,
if __name__ == '__main__':
def deparse_test(co):
"This is a docstring"
sys_version = sys.version_info.major + (sys.version_info.minor / 10.0)
sys_version = float(sys.version[0:3])
deparsed = deparse_code(sys_version, co, showasm='after', showast=True)
# deparsed = deparse_code(sys_version, co, showasm=None, showast=False,
# showgrammar=True)
print(deparsed.text)
return
deparse_test(deparse_test.__code__)
deparse_test(deparse_test.func_code)

View File

@@ -12,7 +12,10 @@ def maybe_show_asm(showasm, tokens):
:param tokens: The asm tokens to show.
"""
if showasm:
stream = showasm if hasattr(showasm, 'write') else sys.stdout
if hasattr(showasm, 'write'):
stream = showasm
else:
stream = sys.stdout
for t in tokens:
stream.write(str(t))
stream.write('\n')
@@ -29,7 +32,10 @@ def maybe_show_ast(showast, ast):
:param ast: The ast to show.
"""
if showast:
stream = showast if hasattr(showast, 'write') else sys.stdout
if hasattr(showast, 'write'):
stream = showast
else:
stream = sys.stdout
stream.write(str(ast))
stream.write('\n')
@@ -48,7 +54,10 @@ def maybe_show_ast_param_default(showast, name, default):
:param default: The function parameter default.
"""
if showast:
stream = showast if hasattr(showast, 'write') else sys.stdout
if hasattr(showast, 'write'):
stream = showast
else:
stream = sys.stdout
stream.write('\n')
stream.write('--' + name)
stream.write('\n')

View File

@@ -6,8 +6,6 @@
byte-code verification
"""
from __future__ import print_function
import dis, operator
import uncompyle6
@@ -365,8 +363,10 @@ def cmp_code_objects(version, is_pypy, code_obj1, code_obj2,
elif member == 'co_flags':
flags1 = code_obj1.co_flags
flags2 = code_obj2.co_flags
if is_pypy:
if is_pypy or version == 2.4:
# For PYPY for now we don't care about PYPY_SOURCE_IS_UTF8:
# Python 2.4 also sets this flag and I am not sure
# where or why
flags2 &= ~0x0100 # PYPY_SOURCE_IS_UTF8
# We also don't care about COROUTINE or GENERATOR for now
flags1 &= ~0x000000a0
@@ -375,6 +375,8 @@ def cmp_code_objects(version, is_pypy, code_obj1, code_obj2,
raise CmpErrorMember(name, 'co_flags',
pretty_flags(flags1),
pretty_flags(flags2))
else:
# all other members must be equal
if getattr(code_obj1, member) != getattr(code_obj2, member):
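
A hedged sketch of the normalization done above before co_flags are compared; the bit values are the standard CPython ones, and PYPY_SOURCE_IS_UTF8 is the 0x0100 bit named in the hunk:

CO_GENERATOR = 0x20
CO_COROUTINE = 0x80
PYPY_SOURCE_IS_UTF8 = 0x0100

def normalize_co_flags(flags, strip_utf8_bit=False):
    if strip_utf8_bit:                          # PyPy, and apparently Python 2.4, set it
        flags &= ~PYPY_SOURCE_IS_UTF8
    # generator/coroutine bits are ignored for the comparison
    return flags & ~(CO_GENERATOR | CO_COROUTINE)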
@@ -418,8 +420,10 @@ def compare_code_with_srcfile(pyc_filename, src_filename, weak_verify=False):
return msg
try:
code_obj2 = load_file(src_filename)
except SyntaxError as e:
except SyntaxError, e:
# src_filename can be the first of a group sometimes
if version == 2.4:
print(pyc_filename)
return str(e).replace(src_filename, pyc_filename)
cmp_code_objects(version, is_pypy, code_obj1, code_obj2, ignore_code=weak_verify)
return None

View File

@@ -1,3 +1,3 @@
# This file is suitable for sourcing inside bash as
# well as importing into Python
VERSION='2.11.1'
VERSION='2.10.0'