#45 Doctest failure using python 2.7

release 1.1.1
Status: open
Labels: pyke (39)
Priority: 5
Updated: 2014-08-26
Created: 2011-05-16
Creator: Daniele Tricoli
Private: No

A doctest is failing with Python 2.7:

$ python2.7 /usr/bin/nosetests --with-doctest pyke/
.................writing [compiled_krb]/example_fc.py
writing [compiled_krb]/example_bc.py
writing [compiled_krb]/example_plans.py
writing [compiled_krb]/bc_example_bc.py
writing [compiled_krb]/bc2_example_bc.py
writing [compiled_krb]/family.fbc
writing [compiled_krb]/fc_example_fc.py
writing [compiled_krb]/compiled_pyke_files.py
................F................
======================================================================
FAIL: Doctest: pyke.krb_compiler.scanner.tokenize_file
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/doctest.py", line 2166, in runTest
raise self.failureException(self.format_failure(new.getvalue()))
AssertionError: Failed doctest test for pyke.krb_compiler.scanner.tokenize_file
File "/home/eriol/devel/debian/pyke/tmp/pyke-1.1.1/pyke/krb_compiler/scanner.py", line 572, in tokenize_file

----------------------------------------------------------------------
File "/home/eriol/devel/debian/pyke/tmp/pyke-1.1.1/pyke/krb_compiler/scanner.py", line 578, in pyke.krb_compiler.scanner.tokenize_file
Failed example:
tokenize_file(os.path.join(os.path.dirname(__file__),
'TEST/scan_test'))
Expected:
LexToken(NL_TOK,'\n# line 2 of comment\n\n# comment after blank line\n',1,19)
LexToken(IDENTIFIER_TOK,'name1',5,68)
LexToken(:,':',5,73)
LexToken(NL_TOK,'\n',5,74)
LexToken(INDENT_TOK,'\n ',6,74)
LexToken(FOREACH_TOK,'foreach',6,79)
LexToken(NL_TOK,'\n',6,86)
LexToken(INDENT_TOK,'\n\t',7,86)
LexToken(LP_TOK,'(',7,88)
LexToken(NUMBER_TOK,100,7,89)
LexToken(NUMBER_TOK,64,7,93)
LexToken(ANONYMOUS_VAR_TOK,"'_'",7,98)
LexToken(PATTERN_VAR_TOK,"'foo'",7,101)
LexToken(NUMBER_TOK,256,8,118)
LexToken(NUMBER_TOK,0,8,124)
LexToken(RP_TOK,')',8,125)
LexToken(NL_TOK,'\n',8,126)
LexToken(NUMBER_TOK,3.1400000000000001,9,129)
LexToken(NUMBER_TOK,0.98999999999999999,9,134)
LexToken(NUMBER_TOK,3.0,10,143)
LexToken(NUMBER_TOK,0.29999999999999999,10,146)
LexToken(NUMBER_TOK,3000000.0,10,149)
LexToken(NUMBER_TOK,3.0000000000000001e-06,10,153)
LexToken(NL_TOK,'\n',10,158)
LexToken(DEINDENT_TOK,'\n ',11,158)
LexToken(ASSERT_TOK,'assert',11,163)
LexToken(NL_TOK,'\n',11,169)
LexToken(INDENT_TOK,'\n\t',12,169)
LexToken(STRING_TOK,"'this is a string'",12,172)
LexToken(STRING_TOK,'"so is this"',12,191)
LexToken(STRING_TOK,"'''\n\tand this \\t too'''",12,204)
LexToken(STRING_TOK,"'should be\\\n able to do this too'",13,229)
LexToken(TRUE_TOK,'True',15,278)
LexToken(NL_TOK,'\n',15,283)
LexToken(!,'!',16,292)
LexToken(IDENTIFIER_TOK,'can',16,293)
LexToken(IDENTIFIER_TOK,'I',17,311)
LexToken(IDENTIFIER_TOK,'do',17,313)
LexToken(IDENTIFIER_TOK,'this',17,316)
LexToken(NL_TOK,'\n',17,320)
LexToken(IDENTIFIER_TOK,'too',18,329)
LexToken(NL_TOK,'\n',18,332)
LexToken(DEINDENT_TOK,'\n',19,332)
LexToken(DEINDENT_TOK,'\n',19,332)
Got:
LexToken(NL_TOK,'\n# line 2 of comment\n\n# comment after blank line\n',1,19)
LexToken(IDENTIFIER_TOK,'name1',5,68)
LexToken(:,':',5,73)
LexToken(NL_TOK,'\n',5,74)
LexToken(INDENT_TOK,'\n ',6,74)
LexToken(FOREACH_TOK,'foreach',6,79)
LexToken(NL_TOK,'\n',6,86)
LexToken(INDENT_TOK,'\n\t',7,86)
LexToken(LP_TOK,'(',7,88)
LexToken(NUMBER_TOK,100,7,89)
LexToken(NUMBER_TOK,64,7,93)
LexToken(ANONYMOUS_VAR_TOK,"'_'",7,98)
LexToken(PATTERN_VAR_TOK,"'foo'",7,101)
LexToken(NUMBER_TOK,256,8,118)
LexToken(NUMBER_TOK,0,8,124)
LexToken(RP_TOK,')',8,125)
LexToken(NL_TOK,'\n',8,126)
LexToken(NUMBER_TOK,3.14,9,129)
LexToken(NUMBER_TOK,0.99,9,134)
LexToken(NUMBER_TOK,3.0,10,143)
LexToken(NUMBER_TOK,0.3,10,146)
LexToken(NUMBER_TOK,3000000.0,10,149)
LexToken(NUMBER_TOK,3e-06,10,153)
LexToken(NL_TOK,'\n',10,158)
LexToken(DEINDENT_TOK,'\n ',11,158)
LexToken(ASSERT_TOK,'assert',11,163)
LexToken(NL_TOK,'\n',11,169)
LexToken(INDENT_TOK,'\n\t',12,169)
LexToken(STRING_TOK,"'this is a string'",12,172)
LexToken(STRING_TOK,'"so is this"',12,191)
LexToken(STRING_TOK,"'''\n\tand this \\t too'''",12,204)
LexToken(STRING_TOK,"'should be\\\n able to do this too'",13,229)
LexToken(TRUE_TOK,'True',15,278)
LexToken(NL_TOK,'\n',15,283)
LexToken(!,'!',16,292)
LexToken(IDENTIFIER_TOK,'can',16,293)
LexToken(IDENTIFIER_TOK,'I',17,311)
LexToken(IDENTIFIER_TOK,'do',17,313)
LexToken(IDENTIFIER_TOK,'this',17,316)
LexToken(NL_TOK,'\n',17,320)
LexToken(IDENTIFIER_TOK,'too',18,329)
LexToken(NL_TOK,'\n',18,332)
LexToken(DEINDENT_TOK,'\n',19,332)
LexToken(DEINDENT_TOK,'\n',19,332)

----------------------------------------------------------------------
Ran 50 tests in 0.534s

FAILED (failures=1)

The diff of expected and got is:
$ diff -u expected.txt got.txt
--- expected.txt 2011-05-16 04:39:29.000000000 +0200
+++ got.txt 2011-05-16 04:39:52.000000000 +0200
@@ -15,12 +15,12 @@
LexToken(NUMBER_TOK,0,8,124)
LexToken(RP_TOK,')',8,125)
LexToken(NL_TOK,'\n',8,126)
- LexToken(NUMBER_TOK,3.1400000000000001,9,129)
- LexToken(NUMBER_TOK,0.98999999999999999,9,134)
+ LexToken(NUMBER_TOK,3.14,9,129)
+ LexToken(NUMBER_TOK,0.99,9,134)
LexToken(NUMBER_TOK,3.0,10,143)
- LexToken(NUMBER_TOK,0.29999999999999999,10,146)
+ LexToken(NUMBER_TOK,0.3,10,146)
LexToken(NUMBER_TOK,3000000.0,10,149)
- LexToken(NUMBER_TOK,3.0000000000000001e-06,10,153)
+ LexToken(NUMBER_TOK,3e-06,10,153)
LexToken(NL_TOK,'\n',10,158)
LexToken(DEINDENT_TOK,'\n ',11,158)
LexToken(ASSERT_TOK,'assert',11,163)

So the problem is due to features backported to 2.7 from 3.1. In particular, quoting from What’s New in Python 2.7[¹]:
The repr() of a float x is shorter in many cases: it’s now based on the shortest decimal string that’s guaranteed to round back to x. As in previous versions of Python, it’s guaranteed that float(repr(x)) recovers x.
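The behavior change is easy to reproduce interactively. A minimal sketch (runs unchanged on Python 2.7 and 3.x; on 2.6 and earlier, repr() would print the 17-digit forms instead):

```python
# Under the new algorithm, repr() picks the shortest decimal string
# that is guaranteed to round-trip back to the same float.
assert repr(0.3) == '0.3'
assert repr(3e-06) == '3e-06'

# The old (2.6 and earlier) output can still be produced by forcing
# 17 significant digits, which is what the doctest currently expects:
assert '%.17g' % 0.3 == '0.29999999999999999'

# In both old and new behavior the round-trip guarantee holds:
assert float(repr(0.3)) == 0.3
```

This is why the doctest's literal expected output, written against the pre-2.7 repr(), can no longer match.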

Cheers,
Daniele Tricoli

[¹] http://docs.python.org/dev/whatsnew/2.7.html#python-3-1-features

Discussion

  • I'm using the attached patch to make the tests pass. I chose str() instead of format() so as not to break Python 2.5.

    Cheers,
    Daniele Tricoli
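    For reference, here is a sketch of why str() works as a portable fix (the helper below is hypothetical, not the attached patch): on Python 2.x, str() printed floats with only 12 significant digits, so its output already matched the shortest-repr form produced by 2.7+, while str.format() was only introduced in Python 2.6 and would break 2.5.

    ```python
    def format_number(value):
        # Hypothetical helper illustrating the approach: str() gives
        # '0.3' on every Python version (2.5 through 3.x), whereas
        # repr() gave '0.29999999999999999' before 2.7, so doctest
        # output built on str() is version-independent.
        return str(value)

    print(format_number(0.3))    # 0.3
    print(format_number(3e-06))  # 3e-06
    ```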

     
  • Jakub Wilk sent me a less invasive patch; it's attached.