Get ready for release 2.13.0

rocky
2017-10-10 21:50:06 -04:00
parent b4426931ef
commit 9cfd7d669e
9 changed files with 191 additions and 15 deletions

ChangeLog
View File

@@ -1,3 +1,90 @@
2017-10-10 rocky <rb@dustyfeet.com>
* HOW-TO-REPORT-A-BUG.md, test/Makefile, uncompyle6/parser.py,
uncompyle6/parsers/parse3.py, uncompyle6/scanners/scanner3.py,
uncompyle6/semantics/consts.py, uncompyle6/semantics/pysource.py:
Improve parse trace. lambda fixes yet again
2017-10-10 rocky <rb@dustyfeet.com>
* test/simple_source/branching/02_ifelse_lambda.py,
uncompyle6/semantics/consts.py: Address dead code in lambda ifelse
2017-10-10 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse24.py, uncompyle6/scanners/scanner3.py:
Misc bugs
2017-10-10 R. Bernstein <rocky@users.noreply.github.com>
* : Merge pull request #131 from rocky/type2kind-rework Adjust for spark-parser 2.7.0 incompatibilities
2017-10-10 rocky <rb@dustyfeet.com>
* __pkginfo__.py, pytest/test_grammar.py, pytest/test_pysource.py,
uncompyle6/parser.py, uncompyle6/parsers/astnode.py,
uncompyle6/parsers/parse2.py, uncompyle6/parsers/parse24.py,
uncompyle6/parsers/parse26.py, uncompyle6/parsers/parse27.py,
uncompyle6/parsers/parse3.py, uncompyle6/parsers/parse32.py,
uncompyle6/parsers/parse34.py, uncompyle6/parsers/parse35.py,
uncompyle6/parsers/parse36.py, uncompyle6/parsers/parse37.py,
uncompyle6/scanners/scanner22.py, uncompyle6/scanners/scanner26.py,
uncompyle6/scanners/scanner27.py, uncompyle6/scanners/scanner3.py,
uncompyle6/scanners/tok.py, uncompyle6/semantics/check_ast.py,
uncompyle6/semantics/fragments.py,
uncompyle6/semantics/make_function.py,
uncompyle6/semantics/pysource.py, uncompyle6/verify.py,
uncompyle6/version.py: Adjust for spark-parser 2.7.0
incompatibilities
2017-10-05 rocky <rb@dustyfeet.com>
* : One more test
2017-10-05 rocky <rb@dustyfeet.com>
* : commit b3359439f94c136619b198beaecbfce1b827d2db Author: rocky
<rb@dustyfeet.com> Date: Thu Oct 5 11:00:55 2017 -0400
2017-10-03 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse2.py, uncompyle6/parsers/parse24.py,
uncompyle6/parsers/parse26.py: handle newer parser reduction
behavior
2017-10-03 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: Remove schmutz
2017-10-03 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/pysource.py: More table doc tweaks
2017-10-03 rocky <rb@dustyfeet.com>
* uncompyle6/semantics/fragments.py,
uncompyle6/semantics/pysource.py: Go over table-semantics
description yet again
2017-10-02 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse2.py, uncompyle6/parsers/parse3.py:
spark-parser induced changes... reduce rules can be called without token streams.
2017-09-30 rocky <rb@dustyfeet.com>
* uncompyle6/parser.py, uncompyle6/scanners/scanner2.py,
uncompyle6/scanners/scanner3.py: Document hacky customize arg count
better.
2017-09-26 rocky <rb@dustyfeet.com>
* README.rst: Word hacking
2017-09-26 rocky <rb@dustyfeet.com>
* ChangeLog, NEWS: Get ready for release 2.12.0
2017-09-26 rocky <rb@dustyfeet.com>
* uncompyle6/parsers/parse3.py: No unicode in Python3, but we need it in Python2. The bug was probably introduced as a

HOW-TO-REPORT-A-BUG.md
View File

@@ -3,8 +3,9 @@
## The difficulty of the problem
There is no Python decompiler yet, that I know about that will
decompyle everything. This one probably does the best job of *any*
Python decompiler. But it is a constant work in progress: Python keeps
changing, and so does its code generation.
I have found bugs in *every* Python decompiler I have tried. Even
those where authors/maintainers claim that they have used it on
@@ -14,6 +15,55 @@ but that the program is *semantically* not equivalent.
So it is likely you'll find a mistranslation in decompiling.
## Is it really a bug?
If the code emitted is semantically equivalent, then this isn't a bug.
For example the code might be
```
if a:
if b:
x = 1
```
and we might produce:
```
if a and b:
x = 1
```
These are equivalent. Sometimes
```
else:
if ...
```
may come out as `elif`.
As mentioned in the README, it is possible that Python changes what
you write to be more efficient. For example, for:
```
if True:
x = 5
```
Python will generate code like:
```
x = 5
```
So just because the text isn't the same does not
necessarily mean there's a bug.
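You can check cases like this yourself with the standard `dis` module.
A small sketch (the snippet strings here are just examples):
```
import dis

# On current CPython the peephole optimizer drops the always-true test,
# so the two disassemblies come out essentially the same.
dis.dis(compile("if True:\n    x = 5\n", "<optimized>", "exec"))
dis.dis(compile("x = 5\n", "<plain>", "exec"))
```
If the listings match, the decompiler has no way to recover the original
`if True:` form, and the difference is not a bug.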
## What to send (minimum requirements)
The basic requirement is pretty simple:
@@ -21,6 +71,12 @@ The basic requirement is pretty simple:
* Python bytecode
* Python source text
Please don't put files on download services that one has to register
for. If you can't attach it to the issue, or create a github gist,
then the code you are sending is too large.
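If you have the source but no bytecode file, one way to produce a `.pyc`
to attach is the standard `py_compile` module (the file names below are
placeholders):
```
import py_compile

# Compile the problem source; attach the resulting .pyc to the issue.
py_compile.compile("buggy.py", cfile="buggy.pyc")
```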
Please also try to narrow the bug. See below.
## What to send (additional helpful information)
Some kind folks also give the invocation they used and the output

NEWS
View File

@@ -1,9 +1,17 @@
uncompyle6 2.13.0 2017-10-10
- Fixes in deparsing lambda expressions
- Improve table-semantics descriptions
- Document hacky customize arg count better (until we can remove it)
- Update to use xdis 3.7.0 or greater
uncompyle6 2.12.0 2017-09-26
- Use xdis 3.6.0 or greater now
- Small semantic table cleanups
- Name Python 3.4's terms a little better
- Slightly more Python 3.7, but still failing a lot
- Cross Python 2/3 compatibility with annotation arguments
uncompyle6 2.11.5 2017-08-31

test/Makefile
View File

@@ -47,7 +47,7 @@ check-3.5: check-bytecode
#: Run working tests from Python 3.6
check-3.6: check-bytecode
	$(PYTHON) test_pythonlib.py --bytecode-3.6 --weak-verify $(COMPILE)

#: Check deparsing only, but from a different Python version
check-disasm:

uncompyle6/parser.py
View File

@@ -91,14 +91,14 @@ class PythonParser(GenericASTBuilder):
        for i in dir(self):
            setattr(self, i, None)

    def debug_reduce(self, rule, tokens, parent, last_token_pos):
        """Customized format and print for our kind of tokens
        which gets called in debugging grammar reduce rules
        """
        def fix(c):
            s = str(c)
            last_token_pos = s.find('_')
            return s if last_token_pos == -1 else s[:last_token_pos]

        prefix = ''
        if parent and tokens:
@@ -110,13 +110,13 @@ class PythonParser(GenericASTBuilder):
            if hasattr(p_token, 'offset'):
                prefix += "%3s" % fix(p_token.offset)
                if len(rule[1]) > 1:
                    prefix += '-%-3s ' % fix(tokens[last_token_pos-1].offset)
                else:
                    prefix += ' '
        else:
            prefix = ' '

        print("%s%s ::= %s (%d)" % (prefix, rule[0], ' '.join(rule[1]), last_token_pos))

    def error(self, instructions, index):
        # Find the last line boundary

uncompyle6/parsers/parse3.py
View File

@@ -157,8 +157,13 @@ class Python3Parser(PythonParser):
# of missing "else" clauses. Therefore we include grammar
# rules with and without ELSE.

ifelsestmt ::= testexpr c_stmts_opt JUMP_FORWARD
               else_suite opt_come_from_except
ifelsestmt ::= testexpr c_stmts_opt jump_forward_else
               else_suite _come_from

# ifelsestmt ::= testexpr c_stmts_opt jump_forward_else
#                passstmt _come_from

ifelsestmtc ::= testexpr c_stmts_opt JUMP_ABSOLUTE else_suitec
ifelsestmtc ::= testexpr c_stmts_opt jump_absolute_else else_suitec
@@ -254,8 +259,14 @@ class Python3Parser(PythonParser):
POP_BLOCK LOAD_CONST COME_FROM_WITH
WITH_CLEANUP END_FINALLY
## FIXME: Right now we have erroneous jump targets
## This below is probably not correct when the COME_FROM is put in the right place
and ::= expr jmp_false expr COME_FROM
or ::= expr jmp_true expr COME_FROM

## something like the below is needed when the jump targets are fixed
## or ::= expr JUMP_IF_TRUE_OR_POP COME_FROM expr
## and ::= expr JUMP_IF_FALSE_OR_POP COME_FROM expr
'''
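For orientation, an illustrative snippet (made up here, not one of the
bundled tests) whose bytecode exercises the "and"/"or" rules above:

def check(a, b):
    # short-circuit tests compile to the expr/jmp_false and
    # expr/jmp_true shapes those rules match
    return (a and b) or not b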
def p_misc3(self, args):

uncompyle6/scanners/scanner3.py
View File

@@ -779,6 +779,10 @@ class Scanner3(Scanner):
if ((code[prev_op[target]] in self.pop_jump_if_pop) and
    (target > offset) and prev_op[target] != offset):
# FIXME: this is not accurate. The commented-out code below
# is what it should be. However grammar rules right now
# assume the incorrect offsets.
# self.fixed_jumps[offset] = target
self.fixed_jumps[offset] = prev_op[target]
self.structs.append({'type': 'and/or',
                     'start': start,

uncompyle6/semantics/consts.py
View File

@@ -175,10 +175,6 @@ TABLE_DIRECT = {
'ret_cond_not': ( '%p if not %p else %p', (2, 27), (0, 22), (-1, 27) ),
'conditional_lambda': ( '%c if %c else %c', 2, 0, 4),
# Python 3.x can have dead code as a result of its optimization?
# So we'll add a # at the end of the return lambda so the rest is ignored
'return_lambda': ('%c # Avoid dead code: ', 0),
'compare': ( '%p %[-1]{pattr.replace("-", " ")} %p', (0, 19), (1, 19) ),
'cmp_list': ( '%p %p', (0, 29), (1, 30)),
'cmp_list1': ( '%[3]{pattr} %p %p', (0, 19), (-2, 19)),
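As a reading aid, a minimal sketch (not the real formatting engine) of how
a TABLE_DIRECT template such as the 'conditional_lambda' entry above gets
applied, assuming each %c pulls in the already-rendered child at the given
node index:

def expand(entry, child_texts):
    # entry = (template, child_index, child_index, ...)
    template, indices = entry[0], entry[1:]
    pieces = template.split('%c')
    out = pieces[0]
    for piece, idx in zip(pieces[1:], indices):
        out += child_texts[idx] + piece
    return out

print(expand(('%c if %c else %c', 2, 0, 4), {0: 'a', 2: '1', 4: '2'}))
# -> 1 if a else 2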

uncompyle6/semantics/pysource.py
View File

@@ -647,6 +647,20 @@ class SourceWalker(GenericASTTraversal, object):
node == AST('return_stmt',
            [AST('ret_expr', [NONE]), Token('RETURN_VALUE')]))
    # Python 3.x can have dead code as a result of its optimization?
    # So we'll add a # at the end of the return lambda so the rest is ignored
    def n_return_lambda(self, node):
        if 1 <= len(node) <= 2:
            self.preorder(node[0])
            self.write(' # Avoid dead code: ')
            self.prune()
        else:
            # We can't comment out like above because there may be a trailing ')'
            # that needs to be written
            assert len(node) == 3 and node[2] == 'LAMBDA_MARKER'
            self.preorder(node[0])
            self.prune()
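An illustrative input (hypothetical, not the bundled test case) that takes
the short branch above: a conditional lambda such as

    f = lambda a: 1 if a else 2

can compile with an unreachable trailing return, and the appended
' # Avoid dead code: ' turns whatever trails it on the emitted line into
a comment.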
    def n_return_stmt(self, node):
        if self.params['isLambda']:
            self.preorder(node[0])
@@ -2198,7 +2212,7 @@ def deparse_code(version, co, out=sys.stdout, showasm=None, showast=False,
    debug_parser = dict(PARSER_DEFAULT_DEBUG)
    if showgrammar:
        debug_parser['reduce'] = showgrammar
        debug_parser['errorstack'] = 'full'

    # Build AST from disassembly.
    linestarts = dict(scanner.opc.findlinestarts(co))
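A rough sketch of driving this debug path (it assumes deparse_code takes
the bytecode version as a float, a code object, and a showgrammar flag, as
the signature above suggests; the sample function is made up):

import sys
from uncompyle6.semantics.pysource import deparse_code

def sample(a, b):
    return a if a else b

# showgrammar=True enables debug_parser['reduce'] and, with the change
# above, a 'full' error stack, so each grammar reduction is printed.
deparse_code(float(sys.version[:3]), sample.__code__,
             out=sys.stderr, showgrammar=True)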