Compare commits

...

55 Commits

Author SHA1 Message Date
rocky
5753f8114c Merge branch 'master' into python-2.4 2020-06-12 21:18:55 -04:00
rocky
02f502c40a New grammar rule often implies expanded reduce rule 2020-06-12 21:12:02 -04:00
rocky
de4fbb08f2 Get ready for release 3.7.1 2020-06-12 20:20:58 -04:00
rocky
e14675c2dc Handle 3.7+ "else" branch removal...
As seen in _cmp() of python3.8/distutils/version.py with optimization -O2
2020-06-12 13:18:33 -04:00
rocky
3449be024b CI take 3. 2020-06-10 22:18:28 -04:00
rocky
8b50b15f0a CI update take 2 2020-06-10 22:17:13 -04:00
rocky
e2e925679d Update CI to use git xdis 2020-06-10 22:15:55 -04:00
rocky
7deeee8502 Push "with" grammar improvements back to 3.6 2020-06-04 05:53:21 -04:00
rocky
acdd025162 ast-check "for" is a loop; sync "withas" test ..
with decompyle3.
2020-06-04 05:34:19 -04:00
rocky
9acb3cf068 Fix bug in 3.8 with .. as 2020-06-04 05:24:22 -04:00
rocky
40a653cd3b Bump min xdis version...
it fixes a bug in stdlib testing
2020-05-31 03:17:09 -04:00
rocky
3ac3979535 With a newer xdis, some stdlib test work now 2020-05-31 03:10:52 -04:00
rocky
7eba933cfa More excludes 2020-05-24 21:25:07 -04:00
rocky
ad5d3333da A regression regarding "and"/"or" with "continue" 2020-05-19 10:20:08 -04:00
rocky
e046323b31 Some typos 2020-05-19 01:35:50 -04:00
rocky
e80c13170a Administrivia 2020-05-19 01:29:09 -04:00
rocky
1bfa4228d6 Administrivia 2020-05-19 01:27:54 -04:00
rocky
6116eb64d1 Bump version 2020-05-19 01:25:38 -04:00
rocky
cb411bcd04 Merge branch 'master' into python-2.4 2020-05-19 01:24:08 -04:00
rocky
889417caeb Get ready for release 3.7.0 2020-05-19 01:17:58 -04:00
rocky
5a83c7c643 Simplify imports again using xdis 4.6.0 2020-05-19 00:53:53 -04:00
rocky
31db2f3e04 Small typo 2020-05-18 23:29:33 -04:00
rocky
527d1b4163 Merge branch 'master' into python-2.4 2020-05-18 23:25:53 -04:00
rocky
7fa851765d Regularize "or" so args are in 1..2 and ...
correct "return None" semantic action
2020-05-18 22:55:26 -04:00
rocky
d7c3b8454b 3.8 needs call_stmt -> call
Simplify/regularize how "return" works
2020-05-18 22:26:18 -04:00
rocky
3fb8d90407 Revise for xdis 3.6.0 ...
Simplify xdis imports where we can.
Blacken (most) of those buffers too
2020-05-18 21:49:16 -04:00
rocky
ff43565981 3.4-3.4 mixed "and"/"or" parsing ...
Fix by limiting more the bogus come from.
2020-05-18 05:33:57 -04:00
rocky
4365022f40 Adapt decompyle3's 3.8 try/return grammar rules 2020-05-17 10:18:10 -04:00
rocky
d343384db7 A runnable "async" and "async with" test 2020-05-16 07:55:51 -04:00
rocky
87a891ca54 Skip 2.6 test until I can get around to it. 2020-05-14 23:50:55 -04:00
rocky
b94c649776 3.7 change rule to match op "or" expr's 2020-05-14 21:32:45 -04:00
R. Bernstein
f34375ba99 Create FUNDING.yml 2020-05-14 12:12:18 -04:00
rocky
81b704f597 Simplify an import, blacken a file. 2020-05-09 09:32:44 -04:00
rocky
5233a0716b Correct wrong class names in super() 2020-05-08 05:59:20 -04:00
rocky
f82b862c25 Merge branch 'master' into python-2.4 2020-05-05 22:20:54 -04:00
rocky
a810ed1280 Merge branch 'master' of github.com:rocky/python-uncompyle6 2020-05-05 22:18:22 -04:00
rocky
ab54caae34 Runtest.sh improvements 2020-05-05 22:18:15 -04:00
rocky
d3cf87e2d9 Start marking test suite since this is going to be copied 2020-05-04 11:43:16 -04:00
rocky
c5228dbdc4 Small test doc typo 2020-05-01 23:19:31 -04:00
rocky
cafe96a44a Merge branch 'master' into python-2.4 2020-04-30 18:00:37 -04:00
rocky
a72163f6f9 lint 2020-04-30 18:00:04 -04:00
rocky
3e1300eb23 Bugs in nested async for...
* Generalize async_for rule
 Fix bug in picking out comprehension iterator in async for
* fix bug in getting expression in such a comprehension
* Add %[n]{%x} pattern to template_engine()
2020-04-29 10:12:54 -04:00
rocky
a4eaeea5b2 See above. 2020-04-27 23:05:05 -04:00
rocky
1141dfefc2 Typo in appveyor config 2020-04-27 23:03:46 -04:00
rocky
fe5cea7042 Merge branch 'master' into python-2.4 2020-04-27 23:01:53 -04:00
rocky
302a5d53d4 Get ready for release 3.6.7 2020-04-27 22:52:39 -04:00
R. Bernstein
698a3073d0 Merge pull request #313 from rocky/task/separate-dis
Task/separate dis
2020-04-24 02:29:52 -04:00
rocky
e6adf822cc Bump xdis version now that this is released 2020-04-24 02:25:07 -04:00
rocky
8c5acef792 Appveyor needs to install xdis from github 2020-04-21 23:03:00 -04:00
rocky
7578253f7d CI from xdis *branch* 2020-04-21 22:49:14 -04:00
rocky
9e193fd7cb Track branch changes in xdis 2020-04-21 22:42:57 -04:00
rocky
ab6b12be56 Small fixes in fragment parser 2020-04-21 19:58:03 -04:00
rocky
6981743788 Merge branch 'master' into python-2.4 2020-04-21 13:49:52 -04:00
rocky
5bd97aa756 lint 2020-04-21 13:49:05 -04:00
rocky
5237d704fa Remove stray debug stmt 2020-04-20 23:13:06 -04:00
75 changed files with 1578 additions and 802 deletions

View File

@@ -31,7 +31,6 @@ jobs:
# fallback to using the latest cache if no exact match is found
- v2-dependencies-
# This is based on your 1.0 configuration file or project settings
- run:
command: sudo easy_install xdis spark-parser && sudo pip install -e . && sudo pip install -r requirements-dev.txt

12
.github/FUNDING.yml vendored Normal file
View File

@@ -0,0 +1,12 @@
# These are supported funding model platforms
github: [rocky]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

1
.gitignore vendored
View File

@@ -10,6 +10,7 @@
/.pytest_cache
/.python-version
/.tox
.mypy_cache
/.venv*
/README
/__pkginfo__.pyc

23
NEWS.md
View File

@@ -1,3 +1,26 @@
3.7.1: 2020-6-12 Fleetwood66
====================================================
Released to pick up the new xdis version, which has fixes to read bytestrings better on 3.x
* Handle 3.7+ "else" branch removal, as seen in `_cmp()` of `python3.8/distutils/version.py` with optimization `-O2`
* 3.6+ "with" and "with .. as" grammar improvements
* ast-check for "for" loop was missing some grammar rules
3.7.0: 2020-5-19 Primidi 1st Prairial - Alfalfa - HF
====================================================
The main impetus for this release is to pull in the recent changes from xdis.
We simplify imports using xdis 4.6.0.
There were some bugfixes to Python 3.4-3.8. See the ChangeLog for details.
3.6.7: 2020-4-27 xdis again
===========================
More upheaval in xdis which we need to track here.
3.6.6: 2020-4-20 Love in the time of Cholera
============================================

View File

@@ -210,7 +210,7 @@ however that the magic of a released version is usually the same as
the *last* candidate version prior to release.
There are also customized Python interpreters, notably Dropbox,
which use their own magic and encrypt bytcode. With the exception of
which use their own magic and encrypt bytecode. With the exception of
the Dropbox's old Python 2.5 interpreter this kind of thing is not
handled.
@@ -229,7 +229,7 @@ There is lots to do, so please dig in and help.
See Also
--------
* https://github.com/rocky/python-decompile3 : Much smaller and more modern code, focusing on 3.7+. Changes in that will get migrated back ehre.
* https://github.com/rocky/python-decompile3 : Much smaller and more modern code, focusing on 3.7+. Changes in that will get migrated back here.
* https://code.google.com/archive/p/unpyc3/ : supports Python 3.2 only. The above projects use a different decompiling technique than what is used here. Currently unmaintained.
* https://github.com/figment/unpyc3/ : fork of above, but supports Python 3.3 only. Includes some fixes like supporting function annotations. Currently unmaintained.
* https://github.com/wibiti/uncompyle2 : supports Python 2.7 only, but does that fairly well. There are situations where :code:`uncompyle6` results are incorrect while :code:`uncompyle2` results are not, but more often uncompyle6 is correct when uncompyle2 is not. Because :code:`uncompyle6` adheres to accuracy over idiomatic Python, :code:`uncompyle2` can produce more natural-looking code when it is correct. Currently :code:`uncompyle2` is lightly maintained. See its issue `tracker <https://github.com/wibiti/uncompyle2/issues>`_ for more details

View File

@@ -21,6 +21,17 @@
# less elegant than having it here with reduced code, albeit there
# still is some room for improvement.
# Python-version | package | last-version |
# -----------------------------------------
# 2.5 | pip | 1.1 |
# 2.6 | pip | 1.5.6 |
# 2.7 | pip | 19.2.3 |
# 3.0 | pip | 1.2.1 |
# 3.1 | pip | 1.5.6 |
# 3.2 | pip | 7.1.2 |
# 3.3 | pip | 10.0.1 |
# 3.4 | pip | 19.1.1 |
# Things that change more often go here.
copyright = """
Copyright (C) 2015-2020 Rocky Bernstein <rb@dustyfeet.com>.
@@ -58,7 +69,7 @@ entry_points = {
]}
ftp_url = None
install_requires = ["spark-parser >= 1.8.9, < 1.9.0",
"xdis >= 4.4.0, < 4.5.0"]
"xdis >= 4.6.1, < 4.8.0"]
license = "GPL3"
mailing_list = "python-debugger@googlegroups.com"

View File

@@ -55,13 +55,10 @@
# Make packages and tag
$ . ./admin-tools/make-dist-older.sh
$ pyenv local 3.8.2
$ twine check dist/uncompyle6-$VERSION*
$ git tag release-python-2.4-$VERSION
$ pyenv local 3.8.3
$ twine check dist/uncompyle6-$VERSION*
$ git tag release-python-2.4-$VERSION
$ . ./admin-tools/make-dist-newer.sh
$ twine check dist/uncompyle6-$VERSION*
# Upload single package and look at Rst Formating

View File

@@ -5,4 +5,4 @@ if [[ $0 == ${BASH_SOURCE[0]} ]] ; then
echo "This script should be *sourced* rather than run directly through bash"
exit 1
fi
export PYVERSIONS='3.5.9 3.6.10 2.6.9 3.3.7 2.7.17 3.2.6 3.1.5 3.4.10 3.7.7 3.8.2'
export PYVERSIONS='3.5.9 3.6.10 2.6.9 3.3.7 2.7.18 3.2.6 3.1.5 3.4.10 3.7.7 3.8.3'

View File

@@ -53,6 +53,7 @@ install:
# compiled extensions and are not provided as pre-built wheel packages,
# pip will build them from source using the MSVC compiler matching the
# target Python version and architecture
- "%CMD_IN_ENV% pip install git+git://github.com/rocky/python-uncompyle6.git#egg=uncompyle6-3.6.6"
- "%CMD_IN_ENV% pip install -r requirements.txt"
build_script:

View File

@@ -12,8 +12,7 @@ import functools
from uncompyle6 import PYTHON_VERSION, PYTHON3, IS_PYPY, code_deparse
# TODO : I think we can get xdis to support the dis api (python 3 version) by doing something like this there
from xdis.bytecode import Bytecode
from xdis.main import get_opcode
from xdis import Bytecode, get_opcode
opc = get_opcode(PYTHON_VERSION, IS_PYPY)
Bytecode = functools.partial(Bytecode, opc=opc)

View File

@@ -15,29 +15,40 @@ if not ((2, 4) <= SYS_VERSION <= (2, 7)):
print(mess)
raise Exception(mess)
from __pkginfo__ import \
author, author_email, install_requires, \
license, long_description, classifiers, \
entry_points, modname, py_modules, \
short_desc, VERSION, web, \
zip_safe
from __pkginfo__ import (
author,
author_email,
install_requires,
license,
long_description,
classifiers,
entry_points,
modname,
py_modules,
short_desc,
VERSION,
web,
zip_safe,
)
from setuptools import setup, find_packages
setup(
author = author,
author_email = author_email,
classifiers = classifiers,
description = short_desc,
entry_points = entry_points,
install_requires = install_requires,
license = license,
long_description = long_description,
long_description_content_type = "text/x-rst",
name = modname,
packages = find_packages(),
py_modules = py_modules,
test_suite = 'nose.collector',
url = web,
tests_require = ['nose>=1.0'],
version = VERSION,
zip_safe = zip_safe)
author=author,
author_email=author_email,
classifiers=classifiers,
description=short_desc,
entry_points=entry_points,
install_requires=install_requires,
license=license,
long_description=long_description,
long_description_content_type="text/x-rst",
name=modname,
packages=find_packages(),
py_modules=py_modules,
test_suite="nose.collector",
url=web,
tests_require=["nose>=1.0"],
version=VERSION,
zip_safe=zip_safe,
)

5
test/.gitignore vendored
View File

@@ -1,3 +1,8 @@
/.coverage
/.python-version
/nohup.out
/pycdc
/test_pycdc_tests.sh
/test_uncompyle2.py
/test_unpy33.py
/test_unpy37.py

View File

@@ -19,6 +19,7 @@ for path in py_source:
else:
cfile = "bytecode_%s%s/%s" % (version, suffix, short) + "c"
print("byte-compiling %s to %s" % (path, cfile))
py_compile.compile(path, cfile)
optimize = 2
py_compile.compile(path, cfile, optimize=optimize)
if isinstance(version, str) or version >= (2, 6, 0):
os.system("../bin/uncompyle6 -a -T %s" % cfile)
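A hedged sketch of the `py_compile.compile()` call the change above introduces: since Python 3.2 the function accepts an `optimize` argument, and level 2 corresponds to running the interpreter with `-OO` (asserts and docstrings stripped). The file paths below are placeholders, not paths from this test suite.

```python
import os
import py_compile

# Placeholder source file (illustrative only).
with open("example.py", "w") as f:
    f.write("assert True, 'stripped at -OO'\nx = 1\n")

os.makedirs("bytecode_O2", exist_ok=True)

# optimize=2 mirrors running the interpreter with -OO:
# asserts and docstrings are removed from the compiled bytecode.
py_compile.compile(
    "example.py",
    cfile="bytecode_O2/example.pyc",
    optimize=2,
    doraise=True,  # raise PyCompileError on a syntax error instead of printing
)
```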

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@@ -2,7 +2,7 @@
from uncompyle6 import uncompyle
from uncompyle6.main import decompile
from xdis.magics import sysinfo2float
from xdis import sysinfo2float
import sys, inspect
def uncompyle_test():

View File

@@ -1,6 +1,8 @@
# Python 3.3+
#
# From Python 3.3.6 hmac.py
# Problem was getting wrong placement of positional args.
# In 3.6+ paramter handling changes
# In 3.6+ parameter handling changes
# RUNNABLE!

View File

@@ -2,6 +2,7 @@
# Bug was code not knowing which Python versions
# have kwargs coming before positional args in code.
"""This program is self-checking!"""
# RUNNABLE!
def tometadata(self, metadata, schema, Table, args, name=None):

View File

@@ -1,6 +1,9 @@
# 3.6+ type annotations on variables
from typing import List
# This test program is part of the uncompyle6 test suite
# tests STORE_ANNOTATION and SETUP_ANOTATIONS
# RUNNABLE!
y = 2
x: bool

View File

@@ -0,0 +1,12 @@
# From python3.8/distutils/version.py with optimization -O2
# The bug was that the other "else" branch gets removed by constant propagation.
# NOTE: this program needs to be compiled with optimization
def _cmp (b, c):
if b:
if c:
return 0
else:
return 1
else:
assert False, "never get here"
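To see the condition this test exercises, here is a minimal sketch (not part of the test suite): compiling the same function at optimization level 2 strips the `assert`, so the outer "else" body effectively disappears from the bytecode, which is the shape the new grammar rule has to recognize.

```python
import dis

source = '''
def _cmp(b, c):
    if b:
        if c:
            return 0
        else:
            return 1
    else:
        assert False, "never get here"
'''

# optimize=2 corresponds to the interpreter's -OO mode: asserts (and
# docstrings) are stripped, leaving the outer "else" branch with no body
# in the compiled code.
module_code = compile(source, "<example>", "exec", optimize=2)
cmp_code = next(c for c in module_code.co_consts if hasattr(c, "co_code"))
dis.dis(cmp_code)
```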

View File

@@ -1,23 +0,0 @@
# from 3.7 test_contextlib_async.py
# Bugs were not adding "async" when a function is a decorator,
# and a misaligment when using "async with as".
@_async_test
async def test_enter(self):
self.assertIs(await manager.__aenter__(), manager)
async with manager as context:
async with woohoo() as x:
x = 1
y = 2
assert manager is context
# From 3.7.6 test_coroutines.py
# Bug was different form of code for "async with" below
class CoroutineTest():
def test_with_8(self):
CNT = 0
async def foo():
nonlocal CNT
async with CM():
CNT += 1
return

View File

@@ -0,0 +1,70 @@
# from 3.7 test_contextlib_async.py
# Bugs were not adding "async" when a function is a decorator,
# and a misaligment when using "async with ... as".
"""This program is self-checking!"""
import asyncio
from contextlib import asynccontextmanager, AbstractAsyncContextManager
import functools
def _async_test(func):
"""Decorator to turn an async function into a test case."""
@functools.wraps(func)
def wrapper(*args, **kwargs):
coro = func(*args, **kwargs)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
return loop.run_until_complete(coro)
finally:
loop.close()
asyncio.set_event_loop(None)
return wrapper
state = []
@asynccontextmanager
async def woohoo():
state.append(1)
yield 42
state.append(999)
@_async_test
async def test_enter():
class DefaultEnter(AbstractAsyncContextManager):
async def __aexit__(*args):
return
# await super().__aexit__(*args)
manager = DefaultEnter()
got_manager = await manager.__aenter__()
# print(got_manager, manager)
assert got_manager is manager
async with manager as context:
async with woohoo() as x:
x = 1
y = 2
assert manager is context
# From 3.7.6 test_coroutines.py
# Bug was different form of code for "async with" below
class CoroutineTest:
def test_with_8(self):
CNT = 0
async def foo():
nonlocal CNT
async with CM():
CNT += 1
return
test_enter()

View File

@@ -16,3 +16,25 @@ def withas_bug(self, nested, a, b):
with self.assertRaises(ZeroDivisionError):
with nested(a(), b()) as (x, y):
1 // 0
# From 3.7.7 test_functools.py
# Bug is a unreachable code after "return"
def test_invalid_registrations(x):
return
with x:
x = 1
# From 3.7.7 test_re.py
# Bug was hooking in c_with.
def test_re_tests(tests):
for t in tests:
with a:
continue
# Adapted from 3.8 distutils/command/config.py
# In 3.8 the problem was in handling "with .. as" code
def _gen_temp_sourcefile(x, a, headers, lang):
with x as y:
if a:
y = 2
return 5

View File

@@ -1 +1,3 @@
/.python-version
/runun33.sh
/runun7.sh

View File

@@ -1,4 +1,12 @@
SKIP_TESTS=(
# ifelsestmt is borked in:
# if filename == 'srcfile':
# return srcfile
# if filename == 'destfile':
# return destfile
# assert 0 # shouldn't reach here.
[test_shutil.py]=1
[test___all__.py]=1 # it fails on its own
[test___all__.py]=1 # it fails on its own
@@ -59,6 +67,7 @@ SKIP_TESTS=(
[test_scriptpackages.py]=1 # it fails on its own
[test_select.py]=1 # test takes too long to run: 11 seconds
[test_socket.py]=1 # test takes too long to run: 12 seconds
[test_startfile.py]=1 # it fails on its own
[test_structmembers.py]=1 # it fails on its own

View File

@@ -17,8 +17,6 @@ SKIP_TESTS=(
[test_peepholer.py]=1
[test_pep352.py]=1
[test_quopri.py]=1 # TypeError: Can't convert 'bytes' object to str implicitly
[test_runpy.py]=1
[test_ssl.py]=1 # too installation specific
@@ -35,5 +33,4 @@ if (( BATCH )) ; then
# Fails in crontab environment?
# Figure out what's up here
SKIP_TESTS[test_exception_variations.py]=1
SKIP_TESTS[test_quopri.py]=1
fi

View File

@@ -1,4 +1,15 @@
SKIP_TESTS=(
# FIXME: Did this work sometime in the past ?
# for elem in g(s):
# if not tgt and isOdd(elem): continue
# is erroneously:
# for elem in g(s):
# if tgt or isOdd(elem):
# pass
# else:
# tgt.append(elem)
[test_itertools.py]=1
[test_buffer.py]=1 # FIXME: Works on c90ff51
[test_cmath.py]=1 # FIXME: Works on c90ff51

View File

@@ -1,4 +1,15 @@
SKIP_TESTS=(
# FIXME: Did this work sometime in the past ?
# for elem in g(s):
# if not tgt and isOdd(elem): continue
# is erroneously:
# for elem in g(s):
# if tgt or isOdd(elem):
# pass
# else:
# tgt.append(elem)
[test_itertools.py]=1
[test_buffer.py]=1 # FIXME: Works on c90ff51
[test_cmath.py]=1 # FIXME: Works on c90ff51
[test_strftime.py]=1 # FIXME: Works on c90ff51
@@ -87,5 +98,4 @@ if (( batch )) ; then
# Figure out what's up here
SKIP_TESTS[test_exception_variations.py]=1
SKIP_TESTS[test_mailbox.py]=1 # Takes to long on POWER; over 15 secs
SKIP_TESTS[test_quopri.py]=1
fi

View File

@@ -138,7 +138,6 @@ if (( BATCH )) ; then
SKIP_TESTS[test_ioctl.py]=1 # it fails on its own
SKIP_TESTS[test_poplib.py]=1 # May be a result of POWER installation
SKIP_TESTS[test_quopri.py]=1
SKIP_TESTS[test_sysconfig.py]=1 # POWER extension fails
SKIP_TESTS[test_tarfile.py]=1 # too long to run on POWER 15 secs
SKIP_TESTS[test_venv.py]=1 # takes too long 11 seconds

View File

@@ -124,8 +124,6 @@ SKIP_TESTS=(
[test_pyclbr.py]=1 # it fails on its own
[test_pydoc.py]=1 # it fails on its own
[test_quopri.py]=1 # AssertionError: b'123=four' != '123=four'
[test_random.py]=1 # it fails on its own
[test_range.py]=1
[test_regrtest.py]=1 # test takes too long to run: 12 seconds

View File

@@ -1,4 +1,19 @@
SKIP_TESTS=(
# FIXME: Did this work sometime in the past ?
# for elem in g(s):
# if not tgt and isOdd(elem): continue
# is erroneously:
# for elem in g(s):
# if tgt or isOdd(elem):
# pass
# else:
# tgt.append(elem)
[test_itertools.py]=1
# Fails on decompyle3 as well.
# complicated control flow and "and/or" expressions
[test_pickle.py]=1
[test_builtin.py]=1 # FIXME works on decompyle6
[test_context.py]=1 # FIXME works on decompyle6
[test_doctest2.py]=1 # FIXME works on decompyle6

View File

@@ -78,7 +78,6 @@ case $PYVERSION in
# Fails in crontab environment?
# Figure out what's up here
SKIP_TESTS[test_exception_variations.py]=1
SKIP_TESTS[test_quopri.py]=1
fi
;;
3.1)
@@ -91,7 +90,6 @@ case $PYVERSION in
# Fails in crontab environment?
# Figure out what's up here
SKIP_TESTS[test_exception_variations.py]=1
SKIP_TESTS[test_quopri.py]=1
fi
;;
3.2)
@@ -131,6 +129,7 @@ fulldir=$(pwd)
# DECOMPILER=uncompyle2
DECOMPILER=${DECOMPILER:-"$fulldir/../../bin/uncompyle6"}
OPTS=${OPTS:-""}
TESTDIR=/tmp/test${PYVERSION}
if [[ -e $TESTDIR ]] ; then
rm -fr $TESTDIR
@@ -139,6 +138,8 @@ fi
PYENV_ROOT=${PYENV_ROOT:-$HOME/.pyenv}
pyenv_local=$(pyenv local)
echo Python version is $pyenv_local
# pyenv version update
for dir in ../ ../../ ; do
cp -v .python-version $dir
@@ -147,18 +148,25 @@ done
mkdir $TESTDIR || exit $?
cp -r ${PYENV_ROOT}/versions/${PYVERSION}.${MINOR}/lib/python${PYVERSION}/test $TESTDIR
cd $TESTDIR/test
if [[ $PYVERSION == 3.2 ]] ; then
cp ${PYENV_ROOT}/versions/${PYVERSION}.${MINOR}/lib/python${PYVERSION}/test/* $TESTDIR
cd $TESTDIR
else
cd $TESTDIR/test
fi
pyenv local $FULLVERSION
export PYTHONPATH=$TESTDIR
export PATH=${PYENV_ROOT}/shims:${PATH}
DONT_SKIP_TESTS=${DONT_SKIP_TESTS:-0}
# Run tests
typeset -i i=0
typeset -i allerrs=0
if [[ -n $1 ]] ; then
files=$1
typeset -a files_ary=( $(echo $1) )
if (( ${#files_ary[@]} == 1 )) ; then
files=$@
typeset -a files_ary=( $(echo $@) )
if (( ${#files_ary[@]} == 1 || DONT_SKIP_TESTS == 1 )) ; then
SKIP_TESTS=()
fi
else
@@ -190,7 +198,7 @@ for file in $files; do
typeset -i ENDTIME=$(date +%s)
typeset -i time_diff
(( time_diff = ENDTIME - STARTTIME))
if (( time_diff > 10 )) ; then
if (( time_diff > $timeout )) ; then
echo "Skipping test $file -- test takes too long to run: $time_diff seconds"
continue
fi
@@ -202,7 +210,7 @@ for file in $files; do
$fulldir/compile-file.py $file && \
mv $file{,.orig} && \
echo ========== $(date +%X) Decompiling $file ===========
$DECOMPILER $decompiled_file > $file
$DECOMPILER $OPTS $decompiled_file > $file
rc=$?
if (( rc == 0 )) ; then
echo ========== $(date +%X) Running $file ===========

View File

@@ -32,8 +32,7 @@ want to run on earlier Python versions.
import sys
from collections import deque
from xdis import iscode
from xdis.load import check_object_path, load_module
from xdis import check_object_path, iscode, load_module
from uncompyle6.scanner import get_scanner

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2015-2016, 2818 by Rocky Bernstein
# Copyright (c) 2015-2016, 2818, 2020 by Rocky Bernstein
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -15,14 +15,27 @@
from collections import deque
from xdis import iscode
from xdis.load import load_file, load_module
from xdis.main import get_opcode
from xdis.bytecode import Bytecode, findlinestarts, offset2line
from xdis import (
Bytecode,
iscode,
findlinestarts,
get_opcode,
offset2line,
load_file,
load_module,
)
def line_number_mapping(pyc_filename, src_filename):
(version, timestamp, magic_int, code1, is_pypy,
source_size, sip_hash) = load_module(pyc_filename)
(
version,
timestamp,
magic_int,
code1,
is_pypy,
source_size,
sip_hash,
) = load_module(pyc_filename)
try:
code2 = load_file(src_filename)
except SyntaxError, e:
@@ -44,7 +57,10 @@ def number_loop(queue, mappings, opc):
assert code1.co_name == code2.co_name
linestarts_orig = findlinestarts(code1)
linestarts_uncompiled = list(findlinestarts(code2))
mappings += [[line, offset2line(offset, linestarts_uncompiled)] for offset, line in linestarts_orig]
mappings += [
[line, offset2line(offset, linestarts_uncompiled)]
for offset, line in linestarts_orig
]
bytecode1 = Bytecode(code1, opc)
bytecode2 = Bytecode(code2, opc)
instr2s = bytecode2.get_instructions(code2)

View File

@@ -15,8 +15,7 @@
import datetime, os, subprocess, sys
from uncompyle6 import verify, IS_PYPY, PYTHON_VERSION
from xdis import iscode
from xdis.magics import sysinfo2float
from xdis import iscode, sysinfo2float
from uncompyle6.disas import check_object_path
from uncompyle6.semantics import pysource
from uncompyle6.parser import ParserError

View File

@@ -21,14 +21,13 @@ Common uncompyle6 parser routines.
import sys
from xdis import iscode
from xdis.magics import py_str2float
from xdis import iscode, py_str2float
from spark_parser import GenericASTBuilder, DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
from uncompyle6.show import maybe_show_asm
class ParserError(Exception):
def __init__(self, token, offset, debug):
def __init__(self, token, offset, debug=PARSER_DEFAULT_DEBUG):
self.token = token
self.offset = offset
self.debug = debug

View File

@@ -7,7 +7,7 @@ from uncompyle6.parsers.parse11 import Python11Parser
class Python10Parser(Python11Parser):
def __init__(self, debug_parser=PARSER_DEFAULT_DEBUG):
super(Python11Parser, self).__init__(debug_parser)
super(Python10Parser, self).__init__(debug_parser)
self.customized = {}
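The one-line changes in this and the following parser modules all follow the same pattern, sketched below with made-up class names: in the two-argument form of `super()`, the first argument must be the class being defined; naming its parent instead starts the MRO lookup one level too high and silently skips that parent's `__init__`.

```python
class Base(object):
    def __init__(self):
        print("Base.__init__")


class Middle(Base):
    def __init__(self):
        print("Middle.__init__")
        super(Middle, self).__init__()


class Child(Middle):
    def __init__(self):
        # Correct: start the MRO search just after Child, so Middle.__init__ runs.
        # The bug pattern was super(Middle, self).__init__(), which starts the
        # search after Middle and jumps straight to Base.__init__.
        super(Child, self).__init__()


Child()  # prints "Middle.__init__" then "Base.__init__"
```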

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2019 Rocky Bernstein
# Copyright (c) 2019-2020 Rocky Bernstein
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
from uncompyle6.parser import PythonParserSingle
@@ -7,7 +7,7 @@ from uncompyle6.parsers.parse12 import Python12Parser
class Python11Parser(Python12Parser):
def __init__(self, debug_parser=PARSER_DEFAULT_DEBUG):
super(Python12Parser, self).__init__(debug_parser)
super(Python11Parser, self).__init__(debug_parser)
self.customized = {}

View File

@@ -8,7 +8,7 @@ from uncompyle6.parsers.parse22 import Python22Parser
class Python21Parser(Python22Parser):
def __init__(self, debug_parser=PARSER_DEFAULT_DEBUG):
super(Python22Parser, self).__init__(debug_parser)
super(Python21Parser, self).__init__(debug_parser)
self.customized = {}
def p_forstmt21(self, args):

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016-2017 Rocky Bernstein
# Copyright (c) 2016-2017, 2020 Rocky Bernstein
# Copyright (c) 2000-2002 by hartmut Goebel <hartmut@goebel.noris.de>
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
@@ -8,7 +8,7 @@ from uncompyle6.parsers.parse23 import Python23Parser
class Python22Parser(Python23Parser):
def __init__(self, debug_parser=PARSER_DEFAULT_DEBUG):
super(Python23Parser, self).__init__(debug_parser)
super(Python22Parser, self).__init__(debug_parser)
self.customized = {}
def p_misc22(self, args):

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016-2018 Rocky Bernstein
# Copyright (c) 2016-2018, 2020 Rocky Bernstein
# Copyright (c) 2000-2002 by hartmut Goebel <hartmut@goebel.noris.de>
# Copyright (c) 1999 John Aycock
@@ -9,7 +9,7 @@ from uncompyle6.parsers.parse24 import Python24Parser
class Python23Parser(Python24Parser):
def __init__(self, debug_parser=PARSER_DEFAULT_DEBUG):
super(Python24Parser, self).__init__(debug_parser)
super(Python23Parser, self).__init__(debug_parser)
self.customized = {}
def p_misc23(self, args):

View File

@@ -681,6 +681,7 @@ class Python3Parser(PythonParser):
"RAISE",
"SETUP",
"UNPACK",
"WITH",
)
)

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016-2019 Rocky Bernstein
# Copyright (c) 2016-2020 Rocky Bernstein
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -304,28 +304,25 @@ class Python36Parser(Python35Parser):
self.addRule(rule, nop_func)
# Check to combine assignment + annotation into one statement
self.check_reduce['assign'] = 'token'
elif opname == "WITH_CLEANUP_START":
rules_str = """
stmt ::= with_null
with_null ::= with_suffix
with_suffix ::= WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
"""
self.addRule(rules_str, nop_func)
elif opname == 'SETUP_WITH':
rules_str = """
with ::= expr SETUP_WITH POP_TOP suite_stmts_opt COME_FROM_WITH
WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
with ::= expr SETUP_WITH POP_TOP suite_stmts_opt COME_FROM_WITH
with_suffix
# Removes POP_BLOCK LOAD_CONST from 3.6-
withasstmt ::= expr SETUP_WITH store suite_stmts_opt COME_FROM_WITH
WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
# Removes POP_BLOCK LOAD_CONST from 3.6-
withasstmt ::= expr SETUP_WITH store suite_stmts_opt COME_FROM_WITH
with_suffix
with ::= expr SETUP_WITH POP_TOP suite_stmts_opt POP_BLOCK
BEGIN_FINALLY COME_FROM_WITH
with_suffix
"""
if self.version < 3.8:
rules_str += """
with ::= expr SETUP_WITH POP_TOP suite_stmts_opt POP_BLOCK
LOAD_CONST
WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
"""
else:
rules_str += """
with ::= expr SETUP_WITH POP_TOP suite_stmts_opt POP_BLOCK
BEGIN_FINALLY COME_FROM_WITH
WITH_CLEANUP_START WITH_CLEANUP_FINISH
END_FINALLY
"""
self.addRule(rules_str, nop_func)
pass
pass

View File

@@ -93,6 +93,9 @@ class Python37Parser(Python37BaseParser):
else_suitec ::= c_stmts
else_suitec ::= returns
else_suite_opt ::= else_suite
else_suite_opt ::= pass
stmt ::= classdef
stmt ::= call_stmt
@@ -634,6 +637,12 @@ class Python37Parser(Python37BaseParser):
if_exp37 ::= expr expr jf_cfs expr COME_FROM
jf_cfs ::= JUMP_FORWARD _come_froms
ifelsestmt ::= testexpr c_stmts_opt jf_cfs else_suite opt_come_from_except
# This is probably more realistically an "ifstmt" (with a null else)
# see _cmp() of python3.8/distutils/__pycache__/version.cpython-38.opt-1.pyc
ifelsestmt ::= testexpr stmts jf_cfs else_suite_opt opt_come_from_except
expr_pjit ::= expr POP_JUMP_IF_TRUE
expr_jit ::= expr JUMP_IF_TRUE
expr_jt ::= expr jmp_true
@@ -929,8 +938,10 @@ class Python37Parser(Python37BaseParser):
jitop_come_from_expr ::= JUMP_IF_TRUE_OR_POP come_froms expr
jifop_come_from ::= JUMP_IF_FALSE_OR_POP come_froms
expr_jitop ::= expr JUMP_IF_TRUE_OR_POP
or ::= and jitop_come_from_expr COME_FROM
or ::= expr JUMP_IF_TRUE_OR_POP expr COME_FROM
or ::= expr_jitop expr COME_FROM
or ::= expr_jit expr COME_FROM
or ::= expr_pjit expr POP_JUMP_IF_FALSE COME_FROM
@@ -953,8 +964,8 @@ class Python37Parser(Python37BaseParser):
and ::= expr JUMP_IF_FALSE_OR_POP expr come_from_opt
and ::= expr jifop_come_from expr
pjit_come_from ::= POP_JUMP_IF_TRUE COME_FROM
or ::= expr pjit_come_from expr
expr_pjit_come_from ::= expr POP_JUMP_IF_TRUE COME_FROM
or ::= expr_pjit_come_from expr
## Note that "jmp_false" is what we check on in the "and" reduce rule.
and ::= expr jmp_false expr COME_FROM

View File

@@ -2,10 +2,10 @@
"""
Python 3.7 base code. We keep non-custom-generated grammar rules out of this file.
"""
from uncompyle6.scanners.tok import Token
from uncompyle6.parser import ParserError, PythonParser, PythonParserSingle, nop_func
from uncompyle6.parser import ParserError, PythonParser, nop_func
from uncompyle6.parsers.treenode import SyntaxTree
from spark_parser import DEFAULT_DEBUG as PARSER_DEFAULT_DEBUG
from spark_parser.spark import rule2str
from uncompyle6.parsers.reducecheck import (
and_check,
@@ -127,6 +127,7 @@ class Python37BaseParser(PythonParser):
"RAISE",
"SETUP",
"UNPACK",
"WITH",
)
)
@@ -614,7 +615,7 @@ class Python37BaseParser(PythonParser):
JUMP_BACK COME_FROM
POP_TOP POP_TOP POP_TOP POP_EXCEPT POP_TOP
list_comp_async ::= BUILD_LIST_0 LOAD_FAST list_afor2
get_aiter ::= LOAD_DEREF GET_AITER
get_aiter ::= expr GET_AITER
list_afor ::= get_aiter list_afor2
list_iter ::= list_afor
""",
@@ -993,55 +994,70 @@ class Python37BaseParser(PythonParser):
)
custom_ops_processed.add(opname)
elif opname == "WITH_CLEANUP_START":
rules_str = """
stmt ::= with_null
with_null ::= with_suffix
with_suffix ::= WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
"""
self.addRule(rules_str, nop_func)
elif opname == "SETUP_WITH":
rules_str = """
stmt ::= with
stmt ::= withasstmt
with ::= expr SETUP_WITH POP_TOP suite_stmts_opt COME_FROM_WITH
WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
with ::= expr
SETUP_WITH POP_TOP
suite_stmts_opt
COME_FROM_WITH
with_suffix
withasstmt ::= expr SETUP_WITH store suite_stmts_opt COME_FROM_WITH
WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
with_suffix
with ::= expr
SETUP_WITH POP_TOP
suite_stmts_opt
POP_BLOCK LOAD_CONST COME_FROM_WITH
with_suffix
withasstmt ::= expr
SETUP_WITH store suite_stmts_opt
POP_BLOCK LOAD_CONST COME_FROM_WITH
with_suffix
with ::= expr
SETUP_WITH POP_TOP suite_stmts_opt
POP_BLOCK LOAD_CONST COME_FROM_WITH
WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
with_suffix
withasstmt ::= expr
SETUP_WITH store suite_stmts_opt
POP_BLOCK LOAD_CONST COME_FROM_WITH
WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
with ::= expr
SETUP_WITH POP_TOP suite_stmts_opt
POP_BLOCK LOAD_CONST COME_FROM_WITH
WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
withasstmt ::= expr
SETUP_WITH store suite_stmts_opt
POP_BLOCK LOAD_CONST COME_FROM_WITH
WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
with_suffix
"""
if self.version < 3.8:
rules_str += """
with ::= expr SETUP_WITH POP_TOP suite_stmts_opt POP_BLOCK
LOAD_CONST
WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
with_suffix
"""
else:
rules_str += """
with ::= expr
with ::= expr
SETUP_WITH POP_TOP suite_stmts_opt
POP_BLOCK LOAD_CONST COME_FROM_WITH
WITH_CLEANUP_START WITH_CLEANUP_FINISH END_FINALLY
with_suffix
withasstmt ::= expr
SETUP_WITH store suite_stmts_opt
POP_BLOCK LOAD_CONST COME_FROM_WITH
with ::= expr SETUP_WITH POP_TOP suite_stmts_opt POP_BLOCK
withasstmt ::= expr
SETUP_WITH store suite_stmts
POP_BLOCK BEGIN_FINALLY COME_FROM_WITH with_suffix
with ::= expr SETUP_WITH POP_TOP suite_stmts_opt POP_BLOCK
BEGIN_FINALLY COME_FROM_WITH
WITH_CLEANUP_START WITH_CLEANUP_FINISH
END_FINALLY
with_suffix
"""
self.addRule(rules_str, nop_func)
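As a rough way to see what the `with_suffix` rules above must match, the sketch below disassembles a plain `with` statement with whatever interpreter runs it; on CPython 3.6-3.7 the block is expected to end in `WITH_CLEANUP_START` / `WITH_CLEANUP_FINISH` / `END_FINALLY`, and 3.8 additionally emits `BEGIN_FINALLY` after `POP_BLOCK`, matching the 3.8-only rules. The exact opcode sequence depends on the interpreter version.

```python
import dis

# A plain "with" statement; the tail of the compiled block is what the
# grammar above calls "with_suffix".
source = """
with open("somefile") as fp:
    data = fp.read()
"""
dis.dis(compile(source, "<with-example>", "exec"))
```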
@@ -1182,18 +1198,27 @@ class Python37BaseParser(PythonParser):
def reduce_is_invalid(self, rule, ast, tokens, first, last):
lhs = rule[0]
n = len(tokens)
last = min(last, n-1)
last = min(last, n - 1)
fn = self.reduce_check_table.get(lhs, None)
try:
if fn:
return fn(self, lhs, n, rule, ast, tokens, first, last)
except:
import sys, traceback
print("Exception in %s %s\n" +
"rule: %s\n" +
"offsets %s .. %s" %
(fn.__name__, sys.exc_info()[1], rule, tokens[first].offset, tokens[last].offset))
print(traceback.print_tb(sys.exc_info()[2],-1))
print(
("Exception in %s %s\n"
+ "rule: %s\n"
+ "offsets %s .. %s")
% (
fn.__name__,
sys.exc_info()[1],
rule2str(rule),
tokens[first].offset,
tokens[last].offset,
)
)
print(traceback.print_tb(sys.exc_info()[2], -1))
raise ParserError(tokens[last], tokens[last].off2int(), self.debug["rules"])
if lhs in ("aug_assign1", "aug_assign2") and ast[0][0] == "and":

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2017-2019 Rocky Bernstein
# Copyright (c) 2017-2020 Rocky Bernstein
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -38,14 +38,17 @@ class Python38Parser(Python37Parser):
stmt ::= forelselaststmtl38
stmt ::= tryfinally38stmt
stmt ::= tryfinally38rstmt
stmt ::= tryfinally38rstmt2
stmt ::= tryfinally38rstmt3
stmt ::= tryfinally38astmt
stmt ::= try_elsestmtl38
stmt ::= try_except_ret38
stmt ::= try_except38
stmt ::= whilestmt38
stmt ::= whileTruestmt38
stmt ::= call
stmt ::= call_stmt
call_stmt ::= call
break ::= POP_BLOCK BREAK_LOOP
break ::= POP_BLOCK POP_TOP BREAK_LOOP
break ::= POP_TOP BREAK_LOOP
@@ -53,15 +56,12 @@ class Python38Parser(Python37Parser):
# FIXME: this should be restricted to being inside a try block
stmt ::= except_ret38
stmt ::= except_ret38a
# FIXME: this should be added only when seeing GET_AITER or YIELD_FROM
async_for_stmt38 ::= expr
GET_AITER
SETUP_FINALLY
GET_ANEXT
LOAD_CONST
YIELD_FROM
POP_BLOCK
async_for ::= GET_AITER _come_froms
SETUP_FINALLY GET_ANEXT LOAD_CONST YIELD_FROM POP_BLOCK
async_for_stmt38 ::= expr async_for
store for_block
COME_FROM_FINALLY
END_ASYNC_FOR
@@ -80,7 +80,20 @@ class Python38Parser(Python37Parser):
END_ASYNC_FOR
else_suite
return ::= ret_expr ROT_TWO POP_TOP RETURN_VALUE
# Seems to be used to discard values before a return in a "for" loop
discard_top ::= ROT_TWO POP_TOP
discard_tops ::= discard_top+
return ::= ret_expr
discard_tops RETURN_VALUE
return ::= popb_return
return ::= pop_return
return ::= pop_ex_return
except_stmt ::= pop_ex_return
pop_return ::= POP_TOP ret_expr RETURN_VALUE
popb_return ::= ret_expr POP_BLOCK RETURN_VALUE
pop_ex_return ::= ret_expr ROT_FOUR POP_EXCEPT RETURN_VALUE
# 3.8 can push a looping JUMP_BACK into into a JUMP_ from a statement that jumps to it
lastl_stmt ::= ifpoplaststmtl
@@ -127,8 +140,14 @@ class Python38Parser(Python37Parser):
except_handler38
try_except38 ::= SETUP_FINALLY POP_BLOCK POP_TOP suite_stmts_opt
except_handler38a
try_except_ret38 ::= SETUP_FINALLY expr POP_BLOCK
RETURN_VALUE except_ret38a
# suite_stmts has a return
try_except38 ::= SETUP_FINALLY POP_BLOCK suite_stmts
except_handler38b
try_except_ret38 ::= SETUP_FINALLY returns except_ret38a
try_except_ret38a ::= SETUP_FINALLY returns except_handler38c
END_FINALLY come_from_opt
# Note: there is a suite_stmts_opt which seems
# to be bookkeeping which is not expressed in source code
@@ -148,19 +167,42 @@ class Python38Parser(Python37Parser):
tryfinallystmt ::= SETUP_FINALLY suite_stmts_opt POP_BLOCK
BEGIN_FINALLY COME_FROM_FINALLY suite_stmts_opt
END_FINALLY
tryfinally38rstmt ::= SETUP_FINALLY POP_BLOCK CALL_FINALLY
lc_setup_finally ::= LOAD_CONST SETUP_FINALLY
call_finally_pt ::= CALL_FINALLY POP_TOP
cf_cf_finally ::= come_from_opt COME_FROM_FINALLY
pop_finally_pt ::= POP_FINALLY POP_TOP
ss_end_finally ::= suite_stmts END_FINALLY
sf_pb_call_returns ::= SETUP_FINALLY POP_BLOCK CALL_FINALLY returns
# FIXME: DRY rules below
tryfinally38rstmt ::= sf_pb_call_returns
cf_cf_finally
ss_end_finally
tryfinally38rstmt ::= sf_pb_call_returns
cf_cf_finally END_FINALLY
suite_stmts
tryfinally38rstmt ::= sf_pb_call_returns
cf_cf_finally POP_FINALLY
ss_end_finally
tryfinally38rstmt ::= sf_bp_call_returns
COME_FROM_FINALLY POP_FINALLY
ss_end_finally
tryfinally38rstmt2 ::= lc_setup_finally POP_BLOCK call_finally_pt
returns
COME_FROM_FINALLY END_FINALLY suite_stmts
tryfinally38rstmt ::= SETUP_FINALLY POP_BLOCK CALL_FINALLY
returns
COME_FROM_FINALLY POP_FINALLY returns
END_FINALLY
tryfinally38stmt ::= SETUP_FINALLY suite_stmts_opt POP_BLOCK
BEGIN_FINALLY COME_FROM_FINALLY
POP_FINALLY suite_stmts_opt END_FINALLY
cf_cf_finally pop_finally_pt
ss_end_finally POP_TOP
tryfinally38rstmt3 ::= SETUP_FINALLY expr POP_BLOCK CALL_FINALLY RETURN_VALUE
COME_FROM COME_FROM_FINALLY
ss_end_finally
tryfinally38stmt ::= SETUP_FINALLY suite_stmts_opt POP_BLOCK
BEGIN_FINALLY COME_FROM_FINALLY
POP_FINALLY suite_stmts_opt END_FINALLY
tryfinally38astmt ::= LOAD_CONST SETUP_FINALLY suite_stmts_opt POP_BLOCK
BEGIN_FINALLY COME_FROM_FINALLY
POP_FINALLY POP_TOP suite_stmts_opt END_FINALLY POP_TOP

View File

@@ -95,6 +95,16 @@ IFELSE_STMT_RULES = frozenset(
"else_suite",
),
),
(
"ifelsestmt",
(
"testexpr",
"stmts",
"jf_cfs",
"else_suite_opt",
"opt_come_from_except",
),
),
])
def ifelsestmt(self, lhs, n, rule, ast, tokens, first, last):
@@ -108,7 +118,7 @@ def ifelsestmt(self, lhs, n, rule, ast, tokens, first, last):
return False
# Avoid if/else where the "then" is a "raise_stmt1" for an
# assert statemetn. Parse this as an "assert" instead.
# assert statement. Parse this as an "assert" instead.
stmts = ast[1]
if stmts in ("c_stmts",) and len(stmts) == 1:
raise_stmt1 = stmts[0]
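The "parse this as an assert instead" comment refers to how CPython compiles `assert`: as the sketch below shows, `assert cond, msg` becomes a conditional jump followed by a raise of `AssertionError`, which is bytecode-wise nearly identical to an `if` whose body is a single `raise`, hence the ambiguity this reduce check resolves.

```python
import dis

# assert compiles to a conditional jump plus "raise AssertionError(...)",
# roughly the same bytecode as:
#     if not cond:
#         raise AssertionError("message")
# (asserts disappear entirely when compiled with -O / optimize >= 1).
dis.dis(compile('assert cond, "message"', "<assert-example>", "exec"))
```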

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016, 2018-2019 by Rocky Bernstein
# Copyright (c) 2016, 2018-2020 by Rocky Bernstein
# Copyright (c) 2005 by Dan Pascu <dan@windowmaker.org>
# Copyright (c) 2000-2002 by hartmut Goebel <h.goebel@crazy-compilers.com>
# Copyright (c) 1999 John Aycock
@@ -27,9 +27,7 @@ import sys
from uncompyle6 import PYTHON3, IS_PYPY, PYTHON_VERSION
from uncompyle6.scanners.tok import Token
import xdis
from xdis.bytecode import Bytecode, instruction_size, extended_arg_val, next_offset
from xdis.magics import canonic_python_version
from xdis.util import code2num
from xdis import Bytecode, canonic_python_version, code2num, instruction_size, extended_arg_val, next_offset
if PYTHON_VERSION < 2.6:
from xdis.namedtuple24 import namedtuple

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2019 by Rocky Bernstein
# Copyright (c) 2019-2020 by Rocky Bernstein
"""
Python PyPy 3.3 decompiler scanner.
@@ -10,6 +10,7 @@ import uncompyle6.scanners.scanner33 as scan
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_33pypy as opc
JUMP_OPs = map(lambda op: opc.opname[op], opc.hasjrel + opc.hasjabs)
# We base this off of 3.3

File diff suppressed because it is too large

View File

@@ -40,8 +40,8 @@ if PYTHON_VERSION < 2.6:
else:
from collections import namedtuple
from xdis import iscode
from xdis.bytecode import instruction_size, _get_const_info
from xdis import iscode, instruction_size
from xdis.bytecode import _get_const_info
from uncompyle6.scanner import Token, parse_fn_counts
import xdis
@@ -896,6 +896,7 @@ class Scanner3(Scanner):
start, self.next_stmt[offset], self.opc.POP_JUMP_IF_FALSE, target
)
# FIXME: Remove this whole "if" block
# If we still have any offsets in set, start working on it
if match:
is_jump_forward = self.is_jump_forward(pre_rtarget)
@@ -964,7 +965,7 @@ class Scanner3(Scanner):
)
):
pass
else:
elif self.version <= 3.2:
fix = None
jump_ifs = self.inst_matches(
start,

View File

@@ -1,4 +1,4 @@
# Copyright (c) 2016, 2017 by Rocky Bernstein
# Copyright (c) 2016-2017, 2020 by Rocky Bernstein
"""
Python 3.0 bytecode scanner/deparser
@@ -8,17 +8,19 @@ scanner routine for Python 3.
# bytecode verification, verify(), uses JUMP_OPs from here
from xdis.opcodes import opcode_30 as opc
from xdis.bytecode import instruction_size
from xdis import instruction_size
import xdis
JUMP_TF = frozenset([opc.JUMP_IF_FALSE, opc.JUMP_IF_TRUE])
from uncompyle6.scanners.scanner3 import Scanner3
class Scanner30(Scanner3):
class Scanner30(Scanner3):
def __init__(self, show_asm=None, is_pypy=False):
Scanner3.__init__(self, 3.0, show_asm, is_pypy)
return
pass
def detect_control_flow(self, offset, targets, inst_index):
@@ -33,17 +35,18 @@ class Scanner30(Scanner3):
# Detect parent structure
parent = self.structs[0]
start = parent['start']
end = parent['end']
start = parent["start"]
end = parent["end"]
# Pick inner-most parent for our offset
for struct in self.structs:
current_start = struct['start']
current_end = struct['end']
if ((current_start <= offset < current_end)
and (current_start >= start and current_end <= end)):
start = current_start
end = current_end
current_start = struct["start"]
current_end = struct["end"]
if (current_start <= offset < current_end) and (
current_start >= start and current_end <= end
):
start = current_start
end = current_end
parent = struct
if op == self.opc.SETUP_LOOP:
@@ -54,28 +57,35 @@ class Scanner30(Scanner3):
start += instruction_size(op, self.opc)
target = self.get_target(offset)
end = self.restrict_to_parent(target, parent)
end = self.restrict_to_parent(target, parent)
self.setup_loops[target] = offset
if target != end:
self.fixed_jumps[offset] = end
(line_no, next_line_byte) = self.lines[offset]
jump_back = self.last_instr(start, end, self.opc.JUMP_ABSOLUTE,
next_line_byte, False)
jump_back = self.last_instr(
start, end, self.opc.JUMP_ABSOLUTE, next_line_byte, False
)
if jump_back:
jump_forward_offset = xdis.next_offset(code[jump_back], self.opc, jump_back)
jump_forward_offset = xdis.next_offset(
code[jump_back], self.opc, jump_back
)
else:
jump_forward_offset = None
return_val_offset1 = self.prev[self.prev[end]]
if (jump_back and jump_back != self.prev_op[end]
and self.is_jump_forward(jump_forward_offset)):
if (code[self.prev_op[end]] == self.opc.RETURN_VALUE or
(code[self.prev_op[end]] == self.opc.POP_BLOCK
and code[return_val_offset1] == self.opc.RETURN_VALUE)):
if (
jump_back
and jump_back != self.prev_op[end]
and self.is_jump_forward(jump_forward_offset)
):
if code[self.prev_op[end]] == self.opc.RETURN_VALUE or (
code[self.prev_op[end]] == self.opc.POP_BLOCK
and code[return_val_offset1] == self.opc.RETURN_VALUE
):
jump_back = None
if not jump_back:
# loop suite ends in return
@@ -90,56 +100,63 @@ class Scanner30(Scanner3):
if code[self.prev_op[next_line_byte]] not in JUMP_TF:
if_offset = self.prev[next_line_byte]
if if_offset:
loop_type = 'while'
loop_type = "while"
self.ignore_if.add(if_offset)
else:
loop_type = 'for'
loop_type = "for"
target = next_line_byte
end = jump_back + 3
else:
if self.get_target(jump_back) >= next_line_byte:
jump_back = self.last_instr(start, end, self.opc.JUMP_ABSOLUTE, start, False)
jump_back = self.last_instr(
start, end, self.opc.JUMP_ABSOLUTE, start, False
)
jb_inst = self.get_inst(jump_back)
jb_next_offset = self.next_offset(jb_inst.opcode, jump_back)
if end > jb_next_offset and self.is_jump_forward(end):
if self.is_jump_forward(jb_next_offset):
if self.get_target(jump_back+4) == self.get_target(end):
self.fixed_jumps[offset] = jump_back+4
if self.get_target(jump_back + 4) == self.get_target(end):
self.fixed_jumps[offset] = jump_back + 4
end = jb_next_offset
elif target < offset:
self.fixed_jumps[offset] = jump_back+4
self.fixed_jumps[offset] = jump_back + 4
end = jb_next_offset
target = self.get_target(jump_back)
if code[target] in (self.opc.FOR_ITER, self.opc.GET_ITER):
loop_type = 'for'
loop_type = "for"
else:
loop_type = 'while'
loop_type = "while"
test = self.prev_op[next_line_byte]
if test == offset:
loop_type = 'while 1'
loop_type = "while 1"
elif self.code[test] in self.opc.JUMP_OPs:
self.ignore_if.add(test)
test_target = self.get_target(test)
if test_target > (jump_back+3):
if test_target > (jump_back + 3):
jump_back = test_target
self.not_continue.add(jump_back)
self.loops.append(target)
self.structs.append({'type': loop_type + '-loop',
'start': target,
'end': jump_back})
self.structs.append(
{"type": loop_type + "-loop", "start": target, "end": jump_back}
)
after_jump_offset = xdis.next_offset(code[jump_back], self.opc, jump_back)
if (self.get_inst(after_jump_offset).opname == 'POP_TOP'):
after_jump_offset = xdis.next_offset(code[after_jump_offset], self.opc,
after_jump_offset)
if self.get_inst(after_jump_offset).opname == "POP_TOP":
after_jump_offset = xdis.next_offset(
code[after_jump_offset], self.opc, after_jump_offset
)
if after_jump_offset != end:
self.structs.append({'type': loop_type + '-else',
'start': after_jump_offset,
'end': end})
self.structs.append(
{
"type": loop_type + "-else",
"start": after_jump_offset,
"end": end,
}
)
elif op in self.pop_jump_tf:
start = offset + instruction_size(op, self.opc)
target = self.get_target(offset)
@@ -147,7 +164,7 @@ class Scanner30(Scanner3):
prev_op = self.prev_op
# Do not let jump to go out of parent struct bounds
if target != rtarget and parent['type'] == 'and/or':
if target != rtarget and parent["type"] == "and/or":
self.fixed_jumps[offset] = rtarget
return
@@ -156,12 +173,15 @@ class Scanner30(Scanner3):
# rocky: if we have a conditional jump to the next instruction, then
# possibly I am "skipping over" a "pass" or null statement.
if ((code[prev_op[target]] in self.pop_jump_if_pop) and
(target > offset) and prev_op[target] != offset):
if (
(code[prev_op[target]] in self.pop_jump_if_pop)
and (target > offset)
and prev_op[target] != offset
):
self.fixed_jumps[offset] = prev_op[target]
self.structs.append({'type': 'and/or',
'start': start,
'end': prev_op[target]})
self.structs.append(
{"type": "and/or", "start": start, "end": prev_op[target]}
)
return
# The op offset just before the target jump offset is important
@@ -174,35 +194,80 @@ class Scanner30(Scanner3):
# Search for another JUMP_IF_FALSE targetting the same op,
# in current statement, starting from current offset, and filter
# everything inside inner 'or' jumps and midline ifs
match = self.rem_or(start, self.next_stmt[offset],
opc.JUMP_IF_FALSE, target)
match = self.rem_or(
start, self.next_stmt[offset], opc.JUMP_IF_FALSE, target
)
# If we still have any offsets in set, start working on it
if match:
is_jump_forward = self.is_jump_forward(pre_rtarget)
if (is_jump_forward and pre_rtarget not in self.stmts and
self.restrict_to_parent(self.get_target(pre_rtarget), parent) == rtarget):
if (code[prev_op[pre_rtarget]] == self.opc.JUMP_ABSOLUTE
and self.remove_mid_line_ifs([offset]) and
target == self.get_target(prev_op[pre_rtarget]) and
(prev_op[pre_rtarget] not in self.stmts or
self.get_target(prev_op[pre_rtarget]) > prev_op[pre_rtarget]) and
1 == len(self.remove_mid_line_ifs(self.rem_or(start, prev_op[pre_rtarget], JUMP_TF, target)))):
if (
is_jump_forward
and pre_rtarget not in self.stmts
and self.restrict_to_parent(
self.get_target(pre_rtarget), parent
)
== rtarget
):
if (
code[prev_op[pre_rtarget]] == self.opc.JUMP_ABSOLUTE
and self.remove_mid_line_ifs([offset])
and target == self.get_target(prev_op[pre_rtarget])
and (
prev_op[pre_rtarget] not in self.stmts
or self.get_target(prev_op[pre_rtarget])
> prev_op[pre_rtarget]
)
and 1
== len(
self.remove_mid_line_ifs(
self.rem_or(
start, prev_op[pre_rtarget], JUMP_TF, target
)
)
)
):
pass
elif (code[prev_op[pre_rtarget]] == self.opc.RETURN_VALUE
and self.remove_mid_line_ifs([offset]) and
1 == (len(set(self.remove_mid_line_ifs(self.rem_or(start, prev_op[pre_rtarget],
JUMP_TF, target))) |
set(self.remove_mid_line_ifs(self.rem_or(start, prev_op[pre_rtarget],
(opc.JUMP_IF_FALSE,
opc.JUMP_IF_TRUE,
opc.JUMP_ABSOLUTE),
pre_rtarget, True)))))):
elif (
code[prev_op[pre_rtarget]] == self.opc.RETURN_VALUE
and self.remove_mid_line_ifs([offset])
and 1
== (
len(
set(
self.remove_mid_line_ifs(
self.rem_or(
start,
prev_op[pre_rtarget],
JUMP_TF,
target,
)
)
)
| set(
self.remove_mid_line_ifs(
self.rem_or(
start,
prev_op[pre_rtarget],
(
opc.JUMP_IF_FALSE,
opc.JUMP_IF_TRUE,
opc.JUMP_ABSOLUTE,
),
pre_rtarget,
True,
)
)
)
)
)
):
pass
else:
fix = None
jump_ifs = self.inst_matches(start, self.next_stmt[offset],
opc.JUMP_IF_FALSE)
jump_ifs = self.inst_matches(
start, self.next_stmt[offset], opc.JUMP_IF_FALSE
)
last_jump_good = True
for j in jump_ifs:
if target == self.get_target(j):
@@ -224,14 +289,19 @@ class Scanner30(Scanner3):
pass
elif self.is_jump_forward(next) and target == self.get_target(next):
if code[prev_op[next]] == opc.JUMP_IF_FALSE:
if (code[next] == self.opc.JUMP_FORWARD
if (
code[next] == self.opc.JUMP_FORWARD
or target != rtarget
or code[prev_op[pre_rtarget]] not in
(self.opc.JUMP_ABSOLUTE, self.opc.RETURN_VALUE)):
or code[prev_op[pre_rtarget]]
not in (self.opc.JUMP_ABSOLUTE, self.opc.RETURN_VALUE)
):
self.fixed_jumps[offset] = prev_op[next]
return
elif (code[next] == self.opc.JUMP_ABSOLUTE and self.is_jump_forward(target) and
self.get_target(target) == self.get_target(next)):
elif (
code[next] == self.opc.JUMP_ABSOLUTE
and self.is_jump_forward(target)
and self.get_target(target) == self.get_target(next)
):
self.fixed_jumps[offset] = prev_op[next]
return
@@ -239,13 +309,17 @@ class Scanner30(Scanner3):
if offset in self.ignore_if:
return
if (code[pre_rtarget] == self.opc.JUMP_ABSOLUTE and
pre_rtarget in self.stmts and
pre_rtarget != offset and
prev_op[pre_rtarget] != offset and
not (code[rtarget] == self.opc.JUMP_ABSOLUTE and
code[rtarget+3] == self.opc.POP_BLOCK and
code[prev_op[pre_rtarget]] != self.opc.JUMP_ABSOLUTE)):
if (
code[pre_rtarget] == self.opc.JUMP_ABSOLUTE
and pre_rtarget in self.stmts
and pre_rtarget != offset
and prev_op[pre_rtarget] != offset
and not (
code[rtarget] == self.opc.JUMP_ABSOLUTE
and code[rtarget + 3] == self.opc.POP_BLOCK
and code[prev_op[pre_rtarget]] != self.opc.JUMP_ABSOLUTE
)
):
rtarget = pre_rtarget
# Does the "jump if" jump beyond a jump op?
@@ -266,16 +340,17 @@ class Scanner30(Scanner3):
if_end = self.get_target(pre_rtarget, 0)
# If the jump target is back, we are looping
if (if_end < pre_rtarget and
(code[prev_op[if_end]] == self.opc.SETUP_LOOP)):
if (if_end > start):
if if_end < pre_rtarget and (
code[prev_op[if_end]] == self.opc.SETUP_LOOP
):
if if_end > start:
return
end = self.restrict_to_parent(if_end, parent)
self.structs.append({'type': 'if-then',
'start': start,
'end': pre_rtarget})
self.structs.append(
{"type": "if-then", "start": start, "end": pre_rtarget}
)
self.not_continue.add(pre_rtarget)
# if rtarget < end and (
@@ -289,20 +364,17 @@ class Scanner30(Scanner3):
# self.else_start[rtarget] = end
elif self.is_jump_back(pre_rtarget, 0):
if_end = rtarget
self.structs.append({'type': 'if-then',
'start': start,
'end': pre_rtarget})
self.structs.append(
{"type": "if-then", "start": start, "end": pre_rtarget}
)
self.not_continue.add(pre_rtarget)
elif code[pre_rtarget] in (self.opc.RETURN_VALUE,
self.opc.BREAK_LOOP):
self.structs.append({'type': 'if-then',
'start': start,
'end': rtarget})
elif code[pre_rtarget] in (self.opc.RETURN_VALUE, self.opc.BREAK_LOOP):
self.structs.append({"type": "if-then", "start": start, "end": rtarget})
# It is important to distingish if this return is inside some sort
# except block return
jump_prev = prev_op[offset]
if self.is_pypy and code[jump_prev] == self.opc.COMPARE_OP:
if self.opc.cmp_op[code[jump_prev+1]] == 'exception-match':
if self.opc.cmp_op[code[jump_prev + 1]] == "exception-match":
return
if self.version >= 3.5:
# Python 3.5 may remove as dead code a JUMP
@@ -330,7 +402,10 @@ class Scanner30(Scanner3):
if code[next_op] == self.opc.POP_TOP:
next_op = rtarget
for block in self.structs:
if block['type'] == 'while-loop' and block['end'] == next_op:
if (
block["type"] == "while-loop"
and block["end"] == next_op
):
return
next_op += instruction_size(self.code[next_op], self.opc)
if code[next_op] == self.opc.POP_BLOCK:
@@ -340,20 +415,21 @@ class Scanner30(Scanner3):
self.fixed_jumps[offset] = rtarget
self.not_continue.add(pre_rtarget)
elif op == self.opc.SETUP_EXCEPT:
target = self.get_target(offset)
end = self.restrict_to_parent(target, parent)
end = self.restrict_to_parent(target, parent)
self.fixed_jumps[offset] = end
elif op == self.opc.SETUP_FINALLY:
target = self.get_target(offset)
end = self.restrict_to_parent(target, parent)
end = self.restrict_to_parent(target, parent)
self.fixed_jumps[offset] = end
elif op in self.jump_if_pop:
target = self.get_target(offset)
if target > offset:
unop_target = self.last_instr(offset, target, self.opc.JUMP_FORWARD, target)
if unop_target and code[unop_target+3] != self.opc.ROT_TWO:
unop_target = self.last_instr(
offset, target, self.opc.JUMP_FORWARD, target
)
if unop_target and code[unop_target + 3] != self.opc.ROT_TWO:
self.fixed_jumps[offset] = unop_target
else:
self.fixed_jumps[offset] = self.restrict_to_parent(target, parent)
@@ -364,8 +440,11 @@ class Scanner30(Scanner3):
# misclassified as RETURN_END_IF. Handle that here.
# In RETURN_VALUE, JUMP_ABSOLUTE, RETURN_VALUE is never RETURN_END_IF
if op == self.opc.RETURN_VALUE:
if (offset+1 < len(code) and code[offset+1] == self.opc.JUMP_ABSOLUTE and
offset in self.return_end_ifs):
if (
offset + 1 < len(code)
and code[offset + 1] == self.opc.JUMP_ABSOLUTE
and offset in self.return_end_ifs
):
self.return_end_ifs.remove(offset)
pass
pass
@@ -375,8 +454,10 @@ class Scanner30(Scanner3):
# then RETURN_VALUE is not RETURN_END_IF
rtarget = self.get_target(offset)
rtarget_prev = self.prev[rtarget]
if (code[rtarget_prev] == self.opc.RETURN_VALUE and
rtarget_prev in self.return_end_ifs):
if (
code[rtarget_prev] == self.opc.RETURN_VALUE
and rtarget_prev in self.return_end_ifs
):
i = rtarget_prev
while i != offset:
if code[i] in [opc.JUMP_FORWARD, opc.JUMP_ABSOLUTE]:
@@ -386,15 +467,17 @@ class Scanner30(Scanner3):
pass
return
if __name__ == "__main__":
from uncompyle6 import PYTHON_VERSION
if PYTHON_VERSION == 3.0:
import inspect
co = inspect.currentframe().f_code
tokens, customize = Scanner30().ingest(co)
for t in tokens:
print(t)
pass
else:
print("Need to be Python 3.0 to demo; I am %s." %
PYTHON_VERSION)
print("Need to be Python 3.0 to demo; I am %s." % PYTHON_VERSION)

View File

@@ -29,8 +29,8 @@ For example:
Finally we save token information.
"""
from xdis import iscode
from xdis.bytecode import instruction_size, _get_const_info, Instruction
from xdis import iscode, instruction_size, Instruction
from xdis.bytecode import _get_const_info
from uncompyle6.scanner import Token
import xdis

View File

@@ -8,22 +8,29 @@ FIXME idea: extend parsing system to do same kinds of checks or nonterminal
before reduction and don't reduce when there is a problem.
"""
def checker(ast, in_loop, errors):
if ast is None:
return
in_loop = (in_loop or (ast.kind in ('while1stmt', 'whileTruestmt',
'whilestmt', 'whileelsestmt', 'while1elsestmt',
'for_block'))
or ast.kind.startswith('async_for'))
if ast.kind in ('aug_assign1', 'aug_assign2') and ast[0][0] == 'and':
in_loop = (
in_loop
or ast.kind.startswith("for")
or ast.kind.startswith("while")
or ast.kind.startswith("async_for")
)
if ast.kind in ("aug_assign1", "aug_assign2") and ast[0][0] == "and":
text = str(ast)
error_text = '\n# improper augmented assigment (e.g. +=, *=, ...):\n#\t' + '\n# '.join(text.split("\n")) + '\n'
error_text = (
"\n# improper augmented assigment (e.g. +=, *=, ...):\n#\t"
+ "\n# ".join(text.split("\n"))
+ "\n"
)
errors.append(error_text)
for node in ast:
if not in_loop and node.kind in ('continue', 'break'):
if not in_loop and node.kind in ("continue", "break"):
text = str(node)
error_text = '\n# not in loop:\n#\t' + '\n# '.join(text.split("\n"))
error_text = "\n# not in loop:\n#\t" + "\n# ".join(text.split("\n"))
errors.append(error_text)
if hasattr(node, '__repr1__'):
if hasattr(node, "__repr1__"):
checker(node, in_loop, errors)
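A minimal usage sketch of the checker above. Node here is a hypothetical stand-in for uncompyle6's SyntaxTree (a list-like object with a kind attribute), and the import path is an assumption about where this file lives.

from uncompyle6.semantics.check_ast import checker  # assumed module path

class Node(list):
    """Hypothetical stand-in for SyntaxTree: list of children plus a kind."""
    def __init__(self, kind, children=()):
        super(Node, self).__init__(children)
        self.kind = kind
    def __str__(self):
        return self.kind

# A "continue" that is not nested under any for/while/async_for construct:
tree = Node("stmts", [Node("continue")])
errors = []
checker(tree, False, errors)
print(errors)  # one "# not in loop:" entry mentioning the stray "continue"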

View File

@@ -339,7 +339,11 @@ TABLE_DIRECT = {
"raise_stmt1": ("%|raise %c\n", 0),
"raise_stmt3": ("%|raise %c, %c, %c\n", 0, 1, 2),
# "yield": ( "yield %c", 0),
# "return": ( "%|return %c\n", 0),
# Note: we have a custom rule that is used when we don't
# have "return None"
"return": ( "%|return %c\n", 0),
"return_if_stmt": ("return %c\n", 0),
"ifstmt": (
"%|if %c:\n%+%c%-",

View File

@@ -17,9 +17,8 @@
"""
from uncompyle6.semantics.consts import TABLE_DIRECT
from xdis.util import co_flags_is_async
from xdis import iscode
from xdis import co_flags_is_async, iscode
from uncompyle6.scanner import Code
from uncompyle6.semantics.helper import (
find_code_node,

View File

@@ -15,8 +15,7 @@
"""Isolate Python 3.5 version-specific semantic actions here.
"""
from xdis import iscode
from xdis.util import co_flags_is_async
from xdis import co_flags_is_async, iscode
from uncompyle6.semantics.consts import (
INDENT_PER_LEVEL,
PRECEDENCE,

View File

@@ -143,6 +143,12 @@ def customize_for_version37(self, version):
"importattr37": ("%c", (0, "IMPORT_NAME_ATTR")),
"importlist37": ("%C", (0, maxint, ", ")),
"list_afor": (
" async for %[1]{%c} in %c%[1]{%c}",
(1, "store"), (0, "get_aiter"), (3, "list_iter"),
),
"list_if37": (" if %p%c", (0, 27), 1),
"list_if37_not": (" if not %p%c", (0, 27), 1),
"testfalse_not_or": ("not %c or %c", (0, "expr"), (2, "expr")),

View File

@@ -29,9 +29,9 @@ def customize_for_version38(self, version):
# del TABLE_DIRECT[lhs]
TABLE_DIRECT.update({
'async_for_stmt38': (
'%|async for %c in %c:\n%+%c%-%-\n\n',
(7, 'store'), (0, 'expr'), (8, 'for_block') ),
"async_for_stmt38": (
"%|async for %c in %c:\n%+%c%-%-\n\n",
(2, "store"), (0, "expr"), (3, "for_block") ),
'async_forelse_stmt38': (
'%|async for %c in %c:\n%+%c%-%|else:\n%+%c%-\n\n',
@@ -46,6 +46,10 @@ def customize_for_version38(self, version):
(0, 'expr'), (6, 'store'),
(7, 'suite_stmts') ),
"call_stmt": (
"%|%c\n", 0
),
'except_handler38': (
'%c', (2, 'except_stmts') ),
@@ -101,12 +105,26 @@ def customize_for_version38(self, version):
'try_except38': (
'%|try:\n%+%c\n%-%|except:\n%|%-%c\n\n',
(-2, 'suite_stmts_opt'), (-1, 'except_handler38a') ),
'try_except_ret38': (
'%|try:\n%+%|return %c%-\n%|except:\n%+%|%c%-\n\n',
(1, 'expr'), (-1, 'except_ret38a') ),
"try_except_ret38": (
"%|try:\n%+%c%-\n%|except:\n%+%|%c%-\n\n",
(1, "returns"),
(2, "except_ret38a"),
),
'tryfinally38rstmt': (
'%|try:\n%+%c%-%|finally:\n%+%c%-\n\n',
(3, 'returns'), 6 ),
(0, "sf_pb_call_returns"),
(-1, ("ss_end_finally", "suite_stmts")),
),
"tryfinally38rstmt2": (
"%|try:\n%+%c%-%|finally:\n%+%c%-\n\n",
(4, "returns"),
-2, "ss_end_finally"
),
"tryfinally38rstmt3": (
"%|try:\n%+%|return %c%-\n%|finally:\n%+%c%-\n\n",
(1, "expr"),
(-1, "ss_end_finally")
),
'tryfinally38stmt': (
'%|try:\n%+%c%-%|finally:\n%+%c%-\n\n',
(1, "suite_stmts_opt"),

View File

@@ -65,8 +65,7 @@ The node position 0 will be associated with "import".
import re
from xdis import iscode
from xdis.magics import sysinfo2float
from xdis import iscode, sysinfo2float
from uncompyle6.semantics import pysource
from uncompyle6 import parser
from uncompyle6.scanner import Token, Code, get_scanner
@@ -968,7 +967,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
self.prune()
def n_listcomp(self, node):
def n_list_comp(self, node):
self.write("[")
if node[0].kind == "load_closure":
self.listcomprehension_walk2(node)
@@ -1174,6 +1173,7 @@ class FragmentsWalker(pysource.SourceWalker, object):
# modularity is broken here
p_insts = self.p.insts
self.p.insts = self.scanner.insts
self.p.offset2inst_index = self.scanner.offset2inst_index
ast = python_parser.parse(self.p, tokens, customize)
self.p.insts = p_insts
except (parser.ParserError, AssertionError) as e:
@@ -1211,10 +1211,11 @@ class FragmentsWalker(pysource.SourceWalker, object):
# modularity is broken here
p_insts = self.p.insts
self.p.insts = self.scanner.insts
self.p.offset2inst_index = self.scanner.offset2inst_index
ast = parser.parse(self.p, tokens, customize)
self.p.insts = p_insts
except (parser.ParserError, AssertionError) as e:
raise ParserError(e, tokens)
raise ParserError(e, tokens, {})
maybe_show_tree(self, ast)

View File

@@ -18,9 +18,7 @@ All the crazy things we have to do to handle Python functions in Python before 3
The saga of changes continues in 3.0 and above and in other files.
"""
from xdis import iscode, code_has_star_arg, code_has_star_star_arg
from xdis.util import CO_GENERATOR
from uncompyle6.scanner import Code
from uncompyle6.parsers.treenode import SyntaxTree
from uncompyle6 import PYTHON3
from uncompyle6.semantics.parser_error import ParserError
from uncompyle6.parser import ParserError as ParserError2

View File

@@ -16,8 +16,7 @@
All the crazy things we have to do to handle Python functions in 3.0-3.5 or so.
The saga of changes before and after is in other files.
"""
from xdis import iscode, code_has_star_arg, code_has_star_star_arg
from xdis.util import CO_GENERATOR
from xdis import iscode, code_has_star_arg, code_has_star_star_arg, CO_GENERATOR
from uncompyle6.scanner import Code
from uncompyle6.parsers.treenode import SyntaxTree
from uncompyle6.semantics.parser_error import ParserError
@@ -33,6 +32,7 @@ from uncompyle6.show import maybe_show_tree_param_default
# FIXME: DRY the below code...
def make_function3_annotate(
self, node, is_lambda, nested=1, code_node=None, annotate_last=-1
):
@@ -265,8 +265,8 @@ def make_function3_annotate(
self.write("\n" + indent)
line_number = self.line_number
self.write(" -> ")
if 'return' in annotate_dict:
self.write(annotate_dict['return'])
if "return" in annotate_dict:
self.write(annotate_dict["return"])
else:
# value, string = annotate_args['return']
# if string:
@@ -423,9 +423,7 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
lc_index = -3
pass
if (len(node) > 2
and (have_kwargs or node[lc_index].kind != "load_closure")
):
if len(node) > 2 and (have_kwargs or node[lc_index].kind != "load_closure"):
# Find the index in "node" where the first default
# parameter value is located. Note this is in contrast to
@@ -483,7 +481,7 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
if is_lambda:
kwargs = []
for i in range(kwonlyargcount):
paramnames.append(scanner_code.co_varnames[argc+i])
paramnames.append(scanner_code.co_varnames[argc + i])
pass
else:
kwargs = list(scanner_code.co_varnames[argc : argc + kwonlyargcount])
@@ -686,5 +684,5 @@ def make_function3(self, node, is_lambda, nested=1, code_node=None):
if need_bogus_yield:
self.template_engine(("%|if False:\n%+%|yield None%-",), node)
scanner_code._tokens = None # save memory
scanner_code._tokens = None # save memory
scanner_code._customize = None # save memory

View File

@@ -16,8 +16,13 @@
All the crazy things we have to do to handle Python functions in 3.6 and above.
The saga of changes before 3.6 is in other files.
"""
from xdis import iscode, code_has_star_arg, code_has_star_star_arg
from xdis.util import CO_GENERATOR, CO_ASYNC_GENERATOR
from xdis import (
iscode,
code_has_star_arg,
code_has_star_star_arg,
CO_GENERATOR,
CO_ASYNC_GENERATOR,
)
from uncompyle6.scanner import Code
from uncompyle6.parsers.treenode import SyntaxTree
from uncompyle6.semantics.parser_error import ParserError
@@ -103,9 +108,7 @@ def make_function36(self, node, is_lambda, nested=1, code_node=None):
if annotate_node == "dict" and annotate_name_node.kind.startswith(
"BUILD_CONST_KEY_MAP"
):
types = [
self.traverse(n, indent="") for n in annotate_node[:-2]
]
types = [self.traverse(n, indent="") for n in annotate_node[:-2]]
names = annotate_node[-2].attr
l = len(types)
assert l == len(names)
@@ -328,9 +331,7 @@ def make_function36(self, node, is_lambda, nested=1, code_node=None):
self.write(" -> %s" % annotate_dict["return"])
self.println(":")
if (
node[-2] == "docstring" and not is_lambda
):
if node[-2] == "docstring" and not is_lambda:
# docstring exists, dump it
self.println(self.traverse(node[-2]))
@@ -369,5 +370,5 @@ def make_function36(self, node, is_lambda, nested=1, code_node=None):
if need_bogus_yield:
self.template_engine(("%|if False:\n%+%|yield None%-",), node)
scanner_code._tokens = None # save memory
scanner_code._tokens = None # save memory
scanner_code._customize = None # save memory
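The need_bogus_yield template in this hunk ("%|if False:\n%+%|yield None%-") exists because a code object carrying the generator flag must contain a yield somewhere in the regenerated source, or recompiling it would produce an ordinary function. A small self-contained illustration:

import inspect

def g():
    # The unreachable yield keeps g a generator function after recompilation.
    if False:
        yield None
    return

print(inspect.isgeneratorfunction(g))  # True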

View File

@@ -80,7 +80,7 @@ Python.
#
# Escapes in the format string are:
#
# %c evaluate the node recursively. Its argument is a single
# %c evaluate/traverse the node recursively. Its argument is a single
# integer or tuple representing a node index.
# If a tuple is given, the first item is the node index while
# the second item is a string giving the node/nonterminal name.
@@ -91,7 +91,7 @@ Python.
# index and the precedence value, an integer. If 3 items are given,
# the second item is the nonterminal name and the precedence is given last.
#
# %C evaluate children recursively, with sibling children separated by the
# %C evaluate/traverse children recursively, with sibling children separated by the
# given string. It needs a 3-tuple: a starting node, the maximum
# value of an end node, and a string to be inserted between sibling children
#
@@ -99,7 +99,7 @@ Python.
# on the LHS of an assignment statement since BUILD_TUPLE_n pretty-prints
# other tuples. The specifier takes no arguments
#
# %P same as %C but sets operator precedence. Its argument is a 4-tuple:
# %P same as %C but sets operator precedence. Its argument is a 4-tuple:
# the node low and high indices, the separator string, and the precedence
# value, an integer.
#
@@ -115,7 +115,13 @@ Python.
#
# %- decrease current indentation level. Takes no arguments.
#
# %{...} evaluate ... in context of N
# %{EXPR} Python eval(EXPR) in context of node. Takes no arguments
#
# %[N]{EXPR} Python eval(EXPR) in context of node[N]. Takes no arguments
#
# %[N]{%X} evaluate/recurse on child node[N], using specifier %X.
# %X can be one of the above, e.g. %c, %p, etc. Takes the arguments
# that the specifier uses.
#
# %% literal '%'. Takes no arguments.
#
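To make the escape conventions above concrete, here is a tiny self-contained interpreter for just %|, %c (integer-argument form only), %+, %- and %% over a toy node type. It is an illustration of the documented semantics, not the engine's actual code.

import re

class Node(list):
    """Toy node: list of children, plus a kind and optional source text."""
    def __init__(self, kind, children=(), text=""):
        super(Node, self).__init__(children)
        self.kind, self.text = kind, text

def render(node, template, indent=0):
    fmt, args = template[0], template[1:]
    out, arg, i = "", 0, 0
    for m in re.finditer(r"%(.)", fmt):
        out += fmt[i:m.start()]
        spec = m.group(1)
        if spec == "|":      # emit current indentation
            out += "    " * indent
        elif spec == "+":    # increase indentation level
            indent += 1
        elif spec == "-":    # decrease indentation level
            indent -= 1
        elif spec == "%":    # literal percent sign
            out += "%"
        elif spec == "c":    # recurse on the child named by the next argument
            out += node[args[arg]].text
            arg += 1
        i = m.end()
    return out + fmt[i:]

stmt = Node("return", [Node("expr", text="x + 1")])
print(render(stmt, ("%|return %c\n", 0)), end="")  # -> "return x + 1"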
@@ -129,8 +135,7 @@ import sys
IS_PYPY = "__pypy__" in sys.builtin_module_names
PYTHON3 = sys.version_info >= (3, 0)
from xdis import iscode
from xdis.util import COMPILER_FLAG_BIT
from xdis import iscode, COMPILER_FLAG_BIT
from uncompyle6.parser import get_python_parser
from uncompyle6.parsers.treenode import SyntaxTree
@@ -504,14 +509,14 @@ class SourceWalker(GenericASTTraversal, object):
self.preorder(node[0])
self.prune()
else:
self.write(self.indent, "return")
# One reason we worry over whether we use "return None" or "return"
# is that inside a generator, "return None" is illegal.
# Thank you, Python!
if self.return_none or not self.is_return_none(node):
self.write(" ")
self.preorder(node[0])
self.println()
self.default(node)
else:
self.template_engine(("%|return\n",), node)
self.prune() # stop recursing
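# Illustrative aside (standalone demo, not part of the walker): why bare
# "return" matters here.  Before Python 3.3 a generator may not return a
# value, so emitting "return None" inside a decompiled generator would not
# recompile.
import sys

src = "def g():\n    yield 1\n    return None\n"
try:
    compile(src, "<demo>", "exec")
    print("accepted on", sys.version_info[:2])   # Python 3.3+
except SyntaxError as exc:
    print("rejected:", exc.msg)                  # Python 2.x and 3.0-3.2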
def n_return_if_stmt(self, node):
@@ -1205,7 +1210,7 @@ class SourceWalker(GenericASTTraversal, object):
ast = ast[0]
# Pick out important parts of the comprehension:
# * the variable we interate over: "store"
# * the variable we iterate over: "store"
# * the results we accumulate: "n"
is_30_dict_comp = False
@@ -1265,17 +1270,25 @@ class SourceWalker(GenericASTTraversal, object):
# Iterate to find the innermost store
# We'll come back to the list iteration below.
while n in ("list_iter", "comp_iter"):
while n in ("list_iter", "list_afor", "list_afor2", "comp_iter"):
# iterate one nesting deeper
if self.version == 3.0 and len(n) == 3:
assert n[0] == "expr" and n[1] == "expr"
n = n[1]
elif n == "list_afor":
n = n[1]
elif n == "list_afor2":
if n[1] == "store":
store = n[1]
n = n[3]
else:
n = n[0]
if n in ("list_for", "comp_for"):
if n[2] == "store" and not store:
store = n[2]
if not comp_store:
comp_store = store
n = n[3]
elif n in ("list_if", "list_if_not",
"list_if37", "list_if37_not",
@@ -2039,10 +2052,17 @@ class SourceWalker(GenericASTTraversal, object):
elif typ == "c":
index = entry[arg]
if isinstance(index, tuple):
assert node[index[0]] == index[1], (
"at %s[%d], expected '%s' node; got '%s'"
% (node.kind, arg, index[1], node[index[0]].kind)
)
if isinstance(index[1], str):
assert node[index[0]] == index[1], (
"at %s[%d], expected '%s' node; got '%s'"
% (node.kind, arg, index[1], node[index[0]].kind)
)
else:
assert node[index[0]] in index[1], (
"at %s[%d], expected to be in '%s' node; got '%s'"
% (node.kind, arg, index[1], node[index[0]].kind)
)
index = index[0]
assert isinstance(
index, int
@@ -2051,6 +2071,16 @@ class SourceWalker(GenericASTTraversal, object):
arg,
type(index),
)
try:
node[index]
except IndexError:
raise RuntimeError(
"""
Expanding '%s' in template '%s[%s]':
%s is invalid; has only %d entries
""" % (node.kind, entry, arg, index, len(node))
)
self.preorder(node[index])
arg += 1
@@ -2112,7 +2142,6 @@ class SourceWalker(GenericASTTraversal, object):
self.prec = p
arg += 1
elif typ == "{":
d = node.__dict__
expr = m.group("expr")
# Line mapping stuff
@@ -2123,10 +2152,16 @@ class SourceWalker(GenericASTTraversal, object):
):
self.source_linemap[self.current_line_number] = node.linestart
try:
self.write(eval(expr, d, d))
except:
raise
if expr[0] == "%":
index = entry[arg]
self.template_engine((expr, index), node)
arg += 1
else:
d = node.__dict__
try:
self.write(eval(expr, d, d))
except:
raise
m = escape.search(fmt, i)
self.write(fmt[i:])
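A condensed sketch of the %c argument validation added above, using hypothetical stand-ins (check_c_spec, class N) rather than the real SyntaxTree/template engine: the second element of an (index, ...) spec may be a single nonterminal name or a tuple of acceptable names, and an out-of-range index raises a descriptive RuntimeError.

def check_c_spec(node, spec):
    index, expect = spec
    if index >= len(node):
        raise RuntimeError(
            "index %d is invalid; node has only %d entries" % (index, len(node))
        )
    kind = node[index].kind
    if isinstance(expect, str):
        assert kind == expect, "expected %r node; got %r" % (expect, kind)
    else:  # tuple of acceptable nonterminal names
        assert kind in expect, "expected one of %r; got %r" % (expect, kind)
    return node[index]

class N(object):
    def __init__(self, kind):
        self.kind = kind

check_c_spec([N("expr"), N("store")], (1, "store"))
check_c_spec([N("returns")], (0, ("returns", "suite_stmts")))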

View File

@@ -1,5 +1,5 @@
#
# (C) Copyright 2015-2018 by Rocky Bernstein
# (C) Copyright 2015-2018, 2020 by Rocky Bernstein
# (C) Copyright 2000-2002 by hartmut Goebel <h.goebel@crazy-compilers.com>
#
# This program is free software: you can redistribute it and/or modify
@@ -23,12 +23,9 @@ import xdis.std as dis
from subprocess import call
import uncompyle6
from uncompyle6.scanner import (Token as ScannerToken, get_scanner)
from uncompyle6.scanner import Token as ScannerToken, get_scanner
from uncompyle6 import PYTHON3
from xdis import iscode
from xdis.magics import PYTHON_MAGIC_INT
from xdis.load import load_file, load_module
from xdis.util import pretty_flags
from xdis import iscode, load_file, load_module, pretty_code_flags, PYTHON_MAGIC_INT
# FIXME: DRY
if PYTHON3:
@@ -41,63 +38,77 @@ else:
def code_equal(a, b):
return a.co_code == b.co_code
BIN_OP_FUNCS = {
'BINARY_POWER': operator.pow,
'BINARY_MULTIPLY': operator.mul,
'BINARY_DIVIDE': truediv,
'BINARY_FLOOR_DIVIDE': operator.floordiv,
'BINARY_TRUE_DIVIDE': operator.truediv,
'BINARY_MODULO' : operator.mod,
'BINARY_ADD': operator.add,
'BINARY_SUBTRACT': operator.sub,
'BINARY_LSHIFT': operator.lshift,
'BINARY_RSHIFT': operator.rshift,
'BINARY_AND': operator.and_,
'BINARY_XOR': operator.xor,
'BINARY_OR': operator.or_,
"BINARY_POWER": operator.pow,
"BINARY_MULTIPLY": operator.mul,
"BINARY_DIVIDE": truediv,
"BINARY_FLOOR_DIVIDE": operator.floordiv,
"BINARY_TRUE_DIVIDE": operator.truediv,
"BINARY_MODULO": operator.mod,
"BINARY_ADD": operator.add,
"BINARY_SUBRACT": operator.sub,
"BINARY_LSHIFT": operator.lshift,
"BINARY_RSHIFT": operator.rshift,
"BINARY_AND": operator.and_,
"BINARY_XOR": operator.xor,
"BINARY_OR": operator.or_,
}
JUMP_OPS = None
# --- exceptions ---
class VerifyCmpError(Exception):
pass
class CmpErrorConsts(VerifyCmpError):
"""Exception to be raised when consts differ."""
def __init__(self, name, index):
self.name = name
self.index = index
def __str__(self):
return 'Compare Error within Consts of %s at index %i' % \
(repr(self.name), self.index)
return "Compare Error within Consts of %s at index %i" % (
repr(self.name),
self.index,
)
class CmpErrorConstsType(VerifyCmpError):
"""Exception to be raised when consts differ."""
def __init__(self, name, index):
self.name = name
self.index = index
def __str__(self):
return 'Consts type differ in %s at index %i' % \
(repr(self.name), self.index)
return "Consts type differ in %s at index %i" % (repr(self.name), self.index)
class CmpErrorConstsLen(VerifyCmpError):
"""Exception to be raised when length of co_consts differs."""
def __init__(self, name, consts1, consts2):
self.name = name
self.consts = (consts1, consts2)
def __str__(self):
return 'Consts length differs in %s:\n\n%i:\t%s\n\n%i:\t%s\n\n' % \
(repr(self.name),
len(self.consts[0]), repr(self.consts[0]),
len(self.consts[1]), repr(self.consts[1]))
return "Consts length differs in %s:\n\n%i:\t%s\n\n%i:\t%s\n\n" % (
repr(self.name),
len(self.consts[0]),
repr(self.consts[0]),
len(self.consts[1]),
repr(self.consts[1]),
)
class CmpErrorCode(VerifyCmpError):
"""Exception to be raised when code differs."""
def __init__(self, name, index, token1, token2, tokens1, tokens2):
self.name = name
self.index = index
@@ -106,57 +117,74 @@ class CmpErrorCode(VerifyCmpError):
self.tokens = [tokens1, tokens2]
def __str__(self):
s = reduce(lambda s, t: "%s%-37s\t%-37s\n" % (s, t[0], t[1]),
list(map(lambda a, b: (a, b),
self.tokens[0],
self.tokens[1])),
'Code differs in %s\n' % str(self.name))
return ('Code differs in %s at offset %s [%s] != [%s]\n\n' %
(repr(self.name), self.index,
repr(self.token1), repr(self.token2))) + s
s = reduce(
lambda s, t: "%s%-37s\t%-37s\n" % (s, t[0], t[1]),
list(map(lambda a, b: (a, b), self.tokens[0], self.tokens[1])),
"Code differs in %s\n" % str(self.name),
)
return (
"Code differs in %s at offset %s [%s] != [%s]\n\n"
% (repr(self.name), self.index, repr(self.token1), repr(self.token2))
) + s
class CmpErrorCodeLen(VerifyCmpError):
"""Exception to be raised when code length differs."""
def __init__(self, name, tokens1, tokens2):
self.name = name
self.tokens = [tokens1, tokens2]
def __str__(self):
return reduce(lambda s, t: "%s%-37s\t%-37s\n" % (s, t[0], t[1]),
list(map(lambda a, b: (a, b),
self.tokens[0],
self.tokens[1])),
'Code len differs in %s\n' % str(self.name))
return reduce(
lambda s, t: "%s%-37s\t%-37s\n" % (s, t[0], t[1]),
list(map(lambda a, b: (a, b), self.tokens[0], self.tokens[1])),
"Code len differs in %s\n" % str(self.name),
)
class CmpErrorMember(VerifyCmpError):
"""Exception to be raised when other members differ."""
def __init__(self, name, member, data1, data2):
self.name = name
self.member = member
self.data = (data1, data2)
def __str__(self):
return 'Member %s differs in %s:\n\t%s\n\t%s\n' % \
(repr(self.member), repr(self.name),
repr(self.data[0]), repr(self.data[1]))
return "Member %s differs in %s:\n\t%s\n\t%s\n" % (
repr(self.member),
repr(self.name),
repr(self.data[0]),
repr(self.data[1]),
)
# --- compare ---
# these members are ignored
__IGNORE_CODE_MEMBERS__ = ['co_filename', 'co_firstlineno', 'co_lnotab', 'co_stacksize', 'co_names']
__IGNORE_CODE_MEMBERS__ = [
"co_filename",
"co_firstlineno",
"co_lnotab",
"co_stacksize",
"co_names",
]
def cmp_code_objects(version, is_pypy, code_obj1, code_obj2, verify,
name=''):
def cmp_code_objects(version, is_pypy, code_obj1, code_obj2, verify, name=""):
"""
Compare two code-objects.
This is the main part of this module.
"""
# print code_obj1, type(code_obj2)
assert iscode(code_obj1), \
"cmp_code_object first object type is %s, not code" % type(code_obj1)
assert iscode(code_obj2), \
"cmp_code_object second object type is %s, not code" % type(code_obj2)
assert iscode(
code_obj1
), "cmp_code_object first object type is %s, not code" % type(code_obj1)
assert iscode(
code_obj2
), "cmp_code_object second object type is %s, not code" % type(code_obj2)
# print dir(code_obj1)
if isinstance(code_obj1, object):
# new style classes (Python 2.2)
@@ -168,11 +196,12 @@ def cmp_code_objects(version, is_pypy, code_obj1, code_obj2, verify,
assert dir(code_obj2) == code_obj2.__members__
assert code_obj1.__members__ == code_obj2.__members__
if name == '__main__':
if name == "__main__":
name = code_obj1.co_name
else:
name = '%s.%s' % (name, code_obj1.co_name)
if name == '.?': name = '__main__'
name = "%s.%s" % (name, code_obj1.co_name)
if name == ".?":
name = "__main__"
if isinstance(code_obj1, object) and code_equal(code_obj1, code_obj2):
# use the new style code-classes' __cmp__ method, which
@@ -184,22 +213,22 @@ def cmp_code_objects(version, is_pypy, code_obj1, code_obj2, verify,
pass
if isinstance(code_obj1, object):
members = [x for x in dir(code_obj1) if x.startswith('co_')]
members = [x for x in dir(code_obj1) if x.startswith("co_")]
else:
members = dir(code_obj1)
members.sort() # ; members.reverse()
tokens1 = None
for member in members:
if member in __IGNORE_CODE_MEMBERS__ or verify != 'verify':
if member in __IGNORE_CODE_MEMBERS__ or verify != "verify":
pass
elif member == 'co_code':
if verify != 'strong':
elif member == "co_code":
if verify != "strong":
continue
scanner = get_scanner(version, is_pypy, show_asm=False)
global JUMP_OPS
JUMP_OPS = list(scan.JUMP_OPS) + ['JUMP_BACK']
JUMP_OPS = list(scan.JUMP_OPS) + ["JUMP_BACK"]
# use changed Token class
# We (re)set this here to save exception handling,
@@ -208,25 +237,29 @@ def cmp_code_objects(version, is_pypy, code_obj1, code_obj2, verify,
try:
# ingest both code-objects
tokens1, customize = scanner.ingest(code_obj1)
del customize # save memory
del customize # save memory
tokens2, customize = scanner.ingest(code_obj2)
del customize # save memory
del customize # save memory
finally:
scanner.resetTokenClass() # restore Token class
scanner.resetTokenClass() # restore Token class
targets1 = dis.findlabels(code_obj1.co_code)
tokens1 = [t for t in tokens1 if t.kind != 'COME_FROM']
tokens2 = [t for t in tokens2 if t.kind != 'COME_FROM']
tokens1 = [t for t in tokens1 if t.kind != "COME_FROM"]
tokens2 = [t for t in tokens2 if t.kind != "COME_FROM"]
i1 = 0; i2 = 0
offset_map = {}; check_jumps = {}
i1 = 0
i2 = 0
offset_map = {}
check_jumps = {}
while i1 < len(tokens1):
if i2 >= len(tokens2):
if len(tokens1) == len(tokens2) + 2 \
and tokens1[-1].kind == 'RETURN_VALUE' \
and tokens1[-2].kind == 'LOAD_CONST' \
and tokens1[-2].pattr is None \
and tokens1[-3].kind == 'RETURN_VALUE':
if (
len(tokens1) == len(tokens2) + 2
and tokens1[-1].kind == "RETURN_VALUE"
and tokens1[-2].kind == "LOAD_CONST"
and tokens1[-2].pattr is None
and tokens1[-3].kind == "RETURN_VALUE"
):
break
else:
raise CmpErrorCodeLen(name, tokens1, tokens2)
@@ -235,87 +268,144 @@ def cmp_code_objects(version, is_pypy, code_obj1, code_obj2, verify,
for idx1, idx2, offset2 in check_jumps.get(tokens1[i1].offset, []):
if offset2 != tokens2[i2].offset:
raise CmpErrorCode(name, tokens1[idx1].offset, tokens1[idx1],
tokens2[idx2], tokens1, tokens2)
raise CmpErrorCode(
name,
tokens1[idx1].offset,
tokens1[idx1],
tokens2[idx2],
tokens1,
tokens2,
)
if tokens1[i1].kind != tokens2[i2].kind:
if tokens1[i1].kind == 'LOAD_CONST' == tokens2[i2].kind:
if tokens1[i1].kind == "LOAD_CONST" == tokens2[i2].kind:
i = 1
while tokens1[i1+i].kind == 'LOAD_CONST':
while tokens1[i1 + i].kind == "LOAD_CONST":
i += 1
if tokens1[i1+i].kind.startswith(('BUILD_TUPLE', 'BUILD_LIST')) \
and i == int(tokens1[i1+i].kind.split('_')[-1]):
t = tuple([ elem.pattr for elem in tokens1[i1:i1+i] ])
if tokens1[i1 + i].kind.startswith(
("BUILD_TUPLE", "BUILD_LIST")
) and i == int(tokens1[i1 + i].kind.split("_")[-1]):
t = tuple([elem.pattr for elem in tokens1[i1 : i1 + i]])
if t != tokens2[i2].pattr:
raise CmpErrorCode(name, tokens1[i1].offset, tokens1[i1],
tokens2[i2], tokens1, tokens2)
raise CmpErrorCode(
name,
tokens1[i1].offset,
tokens1[i1],
tokens2[i2],
tokens1,
tokens2,
)
i1 += i + 1
i2 += 1
continue
elif i == 2 and tokens1[i1+i].kind == 'ROT_TWO' and tokens2[i2+1].kind == 'UNPACK_SEQUENCE_2':
elif (
i == 2
and tokens1[i1 + i].kind == "ROT_TWO"
and tokens2[i2 + 1].kind == "UNPACK_SEQUENCE_2"
):
i1 += 3
i2 += 2
continue
elif i == 2 and tokens1[i1+i].kind in BIN_OP_FUNCS:
f = BIN_OP_FUNCS[tokens1[i1+i].kind]
if f(tokens1[i1].pattr, tokens1[i1+1].pattr) == tokens2[i2].pattr:
elif i == 2 and tokens1[i1 + i].kind in BIN_OP_FUNCS:
f = BIN_OP_FUNCS[tokens1[i1 + i].kind]
if (
f(tokens1[i1].pattr, tokens1[i1 + 1].pattr)
== tokens2[i2].pattr
):
i1 += 3
i2 += 1
continue
elif tokens1[i1].kind == 'UNARY_NOT':
if tokens2[i2].kind == 'POP_JUMP_IF_TRUE':
if tokens1[i1+1].kind == 'POP_JUMP_IF_FALSE':
elif tokens1[i1].kind == "UNARY_NOT":
if tokens2[i2].kind == "POP_JUMP_IF_TRUE":
if tokens1[i1 + 1].kind == "POP_JUMP_IF_FALSE":
i1 += 2
i2 += 1
continue
elif tokens2[i2].kind == 'POP_JUMP_IF_FALSE':
if tokens1[i1+1].kind == 'POP_JUMP_IF_TRUE':
elif tokens2[i2].kind == "POP_JUMP_IF_FALSE":
if tokens1[i1 + 1].kind == "POP_JUMP_IF_TRUE":
i1 += 2
i2 += 1
continue
elif tokens1[i1].kind in ('JUMP_FORWARD', 'JUMP_BACK') \
and tokens1[i1-1].kind == 'RETURN_VALUE' \
and tokens2[i2-1].kind in ('RETURN_VALUE', 'RETURN_END_IF') \
and int(tokens1[i1].offset) not in targets1:
elif (
tokens1[i1].kind in ("JUMP_FORWARD", "JUMP_BACK")
and tokens1[i1 - 1].kind == "RETURN_VALUE"
and tokens2[i2 - 1].kind in ("RETURN_VALUE", "RETURN_END_IF")
and int(tokens1[i1].offset) not in targets1
):
i1 += 1
continue
elif tokens1[i1].kind == 'JUMP_BACK' and tokens2[i2].kind == 'CONTINUE':
elif (
tokens1[i1].kind == "JUMP_BACK"
and tokens2[i2].kind == "CONTINUE"
):
# FIXME: should make sure that offset is inside loop, not outside of it
i1 += 2
i2 += 2
continue
elif tokens1[i1].kind == 'JUMP_FORWARD' and tokens2[i2].kind == 'JUMP_BACK' \
and tokens1[i1+1].kind == 'JUMP_BACK' and tokens2[i2+1].kind == 'JUMP_BACK' \
and int(tokens1[i1].pattr) == int(tokens1[i1].offset) + 3:
if int(tokens1[i1].pattr) == int(tokens1[i1+1].offset):
elif (
tokens1[i1].kind == "JUMP_FORWARD"
and tokens2[i2].kind == "JUMP_BACK"
and tokens1[i1 + 1].kind == "JUMP_BACK"
and tokens2[i2 + 1].kind == "JUMP_BACK"
and int(tokens1[i1].pattr) == int(tokens1[i1].offset) + 3
):
if int(tokens1[i1].pattr) == int(tokens1[i1 + 1].offset):
i1 += 2
i2 += 2
continue
elif tokens1[i1].kind == 'LOAD_NAME' and tokens2[i2].kind == 'LOAD_CONST' \
and tokens1[i1].pattr == 'None' and tokens2[i2].pattr is None:
elif (
tokens1[i1].kind == "LOAD_NAME"
and tokens2[i2].kind == "LOAD_CONST"
and tokens1[i1].pattr == "None"
and tokens2[i2].pattr is None
):
pass
elif tokens1[i1].kind == 'LOAD_GLOBAL' and tokens2[i2].kind == 'LOAD_NAME' \
and tokens1[i1].pattr == tokens2[i2].pattr:
elif (
tokens1[i1].kind == "LOAD_GLOBAL"
and tokens2[i2].kind == "LOAD_NAME"
and tokens1[i1].pattr == tokens2[i2].pattr
):
pass
elif tokens1[i1].kind == 'LOAD_ASSERT' and tokens2[i2].kind == 'LOAD_NAME' \
and tokens1[i1].pattr == tokens2[i2].pattr:
elif (
tokens1[i1].kind == "LOAD_ASSERT"
and tokens2[i2].kind == "LOAD_NAME"
and tokens1[i1].pattr == tokens2[i2].pattr
):
pass
elif (tokens1[i1].kind == 'RETURN_VALUE' and
tokens2[i2].kind == 'RETURN_END_IF'):
elif (
tokens1[i1].kind == "RETURN_VALUE"
and tokens2[i2].kind == "RETURN_END_IF"
):
pass
elif (tokens1[i1].kind == 'BUILD_TUPLE_0' and
tokens2[i2].pattr == ()):
elif (
tokens1[i1].kind == "BUILD_TUPLE_0" and tokens2[i2].pattr == ()
):
pass
else:
raise CmpErrorCode(name, tokens1[i1].offset, tokens1[i1],
tokens2[i2], tokens1, tokens2)
elif tokens1[i1].kind in JUMP_OPS and tokens1[i1].pattr != tokens2[i2].pattr:
if tokens1[i1].kind == 'JUMP_BACK':
raise CmpErrorCode(
name,
tokens1[i1].offset,
tokens1[i1],
tokens2[i2],
tokens1,
tokens2,
)
elif (
tokens1[i1].kind in JUMP_OPS
and tokens1[i1].pattr != tokens2[i2].pattr
):
if tokens1[i1].kind == "JUMP_BACK":
dest1 = int(tokens1[i1].pattr)
dest2 = int(tokens2[i2].pattr)
if offset_map[dest1] != dest2:
raise CmpErrorCode(name, tokens1[i1].offset, tokens1[i1],
tokens2[i2], tokens1, tokens2)
raise CmpErrorCode(
name,
tokens1[i1].offset,
tokens1[i1],
tokens2[i2],
tokens1,
tokens2,
)
else:
# import pdb; pdb.set_trace()
try:
@@ -329,17 +419,16 @@ def cmp_code_objects(version, is_pypy, code_obj1, code_obj2, verify,
i1 += 1
i2 += 1
del tokens1, tokens2 # save memory
elif member == 'co_consts':
del tokens1, tokens2 # save memory
elif member == "co_consts":
# partial optimization can make the co_consts look different,
# so we'll just compare the code consts
codes1 = ( c for c in code_obj1.co_consts if hasattr(c, 'co_consts') )
codes2 = ( c for c in code_obj2.co_consts if hasattr(c, 'co_consts') )
codes1 = (c for c in code_obj1.co_consts if hasattr(c, "co_consts"))
codes2 = (c for c in code_obj2.co_consts if hasattr(c, "co_consts"))
for c1, c2 in zip(codes1, codes2):
cmp_code_objects(version, is_pypy, c1, c2, verify,
name=name)
elif member == 'co_flags':
cmp_code_objects(version, is_pypy, c1, c2, verify, name=name)
elif member == "co_flags":
flags1 = code_obj1.co_flags
flags2 = code_obj2.co_flags
if is_pypy or version == 2.4:
@@ -348,56 +437,68 @@ def cmp_code_objects(version, is_pypy, code_obj1, code_obj2, verify,
# where or why
flags2 &= ~0x0100 # PYPY_SOURCE_IS_UTF8
# We also don't care about COROUTINE or GENERATOR for now
flags1 &= ~0x000000a0
flags2 &= ~0x000000a0
flags1 &= ~0x000000A0
flags2 &= ~0x000000A0
if flags1 != flags2:
raise CmpErrorMember(name, 'co_flags',
pretty_flags(flags1),
pretty_flags(flags2))
raise CmpErrorMember(
name,
"co_flags",
pretty_code_flags(flags1),
pretty_code_flags(flags2),
)
else:
# all other members must be equal
if getattr(code_obj1, member) != getattr(code_obj2, member):
raise CmpErrorMember(name, member,
getattr(code_obj1, member),
getattr(code_obj2, member))
raise CmpErrorMember(
name, member, getattr(code_obj1, member), getattr(code_obj2, member)
)
class Token(ScannerToken):
"""Token class with changed semantics for 'cmp()'."""
def __cmp__(self, o):
t = self.kind # shortcut
if t == 'BUILD_TUPLE_0' and o.kind == 'LOAD_CONST' and o.pattr == ():
t = self.kind # shortcut
if t == "BUILD_TUPLE_0" and o.kind == "LOAD_CONST" and o.pattr == ():
return 0
if t == 'COME_FROM' == o.kind:
if t == "COME_FROM" == o.kind:
return 0
if t == 'PRINT_ITEM_CONT' and o.kind == 'PRINT_ITEM':
if t == "PRINT_ITEM_CONT" and o.kind == "PRINT_ITEM":
return 0
if t == 'RETURN_VALUE' and o.kind == 'RETURN_END_IF':
if t == "RETURN_VALUE" and o.kind == "RETURN_END_IF":
return 0
if t == 'JUMP_IF_FALSE_OR_POP' and o.kind == 'POP_JUMP_IF_FALSE':
if t == "JUMP_IF_FALSE_OR_POP" and o.kind == "POP_JUMP_IF_FALSE":
return 0
if JUMP_OPS and t in JUMP_OPS:
# ignore offset
return t == o.kind
return (t == o.kind) or self.pattr == o.pattr
return (t == o.kind) or self.pattr == o.pattr
def __repr__(self):
return '%s %s (%s)' % (str(self.kind), str(self.attr),
repr(self.pattr))
return "%s %s (%s)" % (str(self.kind), str(self.attr), repr(self.pattr))
def __str__(self):
return '%s\t%-17s %r' % (self.offset, self.kind, self.pattr)
return "%s\t%-17s %r" % (self.offset, self.kind, self.pattr)
def compare_code_with_srcfile(pyc_filename, src_filename, verify):
"""Compare a .pyc with a source code file. If everything is okay, None
is returned. Otherwise a string message describing the mismatch is returned.
"""
(version, timestamp, magic_int, code_obj1, is_pypy,
source_size, sip_hash) = load_module(pyc_filename)
(
version,
timestamp,
magic_int,
code_obj1,
is_pypy,
source_size,
sip_hash,
) = load_module(pyc_filename)
if magic_int != PYTHON_MAGIC_INT:
msg = ("Can't compare code - Python is running with magic %s, but code is magic %s "
% (PYTHON_MAGIC_INT, magic_int))
msg = (
"Can't compare code - Python is running with magic %s, but code is magic %s "
% (PYTHON_MAGIC_INT, magic_int)
)
return msg
try:
code_obj2 = load_file(src_filename)
@@ -407,7 +508,7 @@ def compare_code_with_srcfile(pyc_filename, src_filename, verify):
print(pyc_filename)
return str(e).replace(src_filename, pyc_filename)
cmp_code_objects(version, is_pypy, code_obj1, code_obj2, verify)
if verify == 'verify-run':
if verify == "verify-run":
try:
retcode = call("%s %s" % (sys.executable, src_filename), shell=True)
if retcode != 0:
@@ -418,19 +519,35 @@ def compare_code_with_srcfile(pyc_filename, src_filename, verify):
pass
return None
def compare_files(pyc_filename1, pyc_filename2, verify):
"""Compare two .pyc files."""
(version1, timestamp, magic_int1, code_obj1, is_pypy,
source_size, sip_hash) = uncompyle6.load_module(pyc_filename1)
(version2, timestamp, magic_int2, code_obj2, is_pypy,
source_size, sip_hash) = uncompyle6.load_module(pyc_filename2)
if (magic_int1 != magic_int2) and verify == 'verify':
verify = 'weak_verify'
(
version1,
timestamp,
magic_int1,
code_obj1,
is_pypy,
source_size,
sip_hash,
) = uncompyle6.load_module(pyc_filename1)
(
version2,
timestamp,
magic_int2,
code_obj2,
is_pypy,
source_size,
sip_hash,
) = uncompyle6.load_module(pyc_filename2)
if (magic_int1 != magic_int2) and verify == "verify":
verify = "weak_verify"
cmp_code_objects(version1, is_pypy, code_obj1, code_obj2, verify)
if __name__ == '__main__':
t1 = Token('LOAD_CONST', None, 'code_object _expandLang', 52)
t2 = Token('LOAD_CONST', -421, 'code_object _expandLang', 55)
if __name__ == "__main__":
t1 = Token("LOAD_CONST", None, "code_object _expandLang", 52)
t2 = Token("LOAD_CONST", -421, "code_object _expandLang", 55)
print(repr(t1))
print(repr(t2))
print(t1.kind == t2.kind, t1.attr == t2.attr)
print(t1.kind == t2.kind, t1.attr == t2.attr)
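As a self-contained illustration of the BIN_OP_FUNCS table above: when one token stream still has LOAD_CONST a, LOAD_CONST b, BINARY_* and the other carries the already constant-folded LOAD_CONST, the comparison treats them as equivalent by applying the corresponding operator. The opcode names come from this file; the helper below is hypothetical.

import operator

BIN_OP_FUNCS = {"BINARY_ADD": operator.add, "BINARY_MULTIPLY": operator.mul}

def folds_to(kind, a, b, folded_const):
    """True if the binary op named by `kind` folds consts a, b to folded_const."""
    return kind in BIN_OP_FUNCS and BIN_OP_FUNCS[kind](a, b) == folded_const

print(folds_to("BINARY_MULTIPLY", 60, 60, 3600))  # True
print(folds_to("BINARY_ADD", 2, 2, 5))            # False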

View File

@@ -10,6 +10,6 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# This file is suitable for sourcing inside bash as
# This file is suitable for sourcing inside POSIX shell as
# well as importing into Python
VERSION="3.6.6" # noqa
VERSION="3.7.1" # noqa