I'm glad to have this discussion! More inline:

On Sat, Mar 22, 2014 at 11:26 AM, Jeff Allen <ja.py@farowl.co.uk> wrote:
Reasoning about this a little, unless there are fixed expectations for the quality of each successive beta release, we should base them largely on the passage of time. By "expectations" I mean you could decide e.g. that the number of test suite fails, guilty skips, or accepted bugs has to be half what it was last time, or matches some target. But I don't advocate this.

A red buildbot ought to be a blocker, although that's simply a matter of skipping enough tests. Apart from that, I advocate time-based betas to engage a community not prepared to build from source. It's nice to get a feature you care about in the next beta, but it's more important to maintain the habit of production. I believe I could argue that a period of steady bug-fixing makes a better prelude to a release than the first appearance of a shiny feature.

Good points, especially with respect to the following:

1. The significant console fixes you've done, which directly support a representative user (as we saw in another email thread). Another important factor is that the vast preponderance of our users likely do not build from source.

2. We are doing feature-based development, converging on 2.7.0 as set by the CPython reference implementation. But you're right: we should complement this with regular releases.

3. We will get more user feedback as they see and use betas. 

So +1 on an immediate beta release. Any others? Frank, what do you think?


We believe we have reached "beta quality"; hence b1 exists. Changes are almost always made without regressions, and we have added important features and fixes. Therefore we must be maintaining at least beta quality, and should eventually cross the threshold into release-candidate territory.

I'm not sure when that release-candidate threshold might be reached. I'm not aware of any neglected features (as buffer once was). MBCS, maybe. io is still messy. I don't think Windows is especially an issue now, as we've nearly equalised the number of test failures between it and Linux. (Sorry, I think I added one to Linux.)

I'm largely content to be guided by others on the project about what's good enough to release, but Jim's analysis strikes me as optimistic. We have quite a lot of fails and skips in the regression tests, and sometimes they represent knotty issues. On the plus side, when we fix a knotty one, we get rid of several skips at once. This is where my energy goes at the moment. Good fun if anyone wants to join in.

Absolutely agreed on the satisfaction of seeing those bugs get fixed!

Let's recall the reason for the socket-reboot work, and why I see it as important enough to be a blocker in our release cycle. Jython 2.7 needs to be a full participant in the Python ecosystem, especially since 2.5 already is. I was quite reluctant to put this time into fixing ssl, because I was not then an expert in low-level networking APIs, certainly not on the C side. But socket-reboot needed to be done, and I'm glad it is getting close to completion.

In particular, test_socket is quite close to passing, which is why I'm now rather optimistic about it. Perhaps the last feature to be implemented for socket itself is socket.dup, which is also important for actually using socket.makefile. The socket.dup method is just one more example of what is required to support socket closing semantics. So, perhaps unsurprisingly, much of my work over the last week has been on the closing of sockets overall, and the test_socket suites have been good at identifying issues there. Suffice to say, sockets can and must be closed for many reasons!
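To illustrate the closing semantics at stake (a minimal sketch against CPython's socket module, not the Jython implementation under discussion): dup() must yield an independent socket object that survives closing the original, and makefile() must keep the underlying connection alive until the file object is also closed.

```python
import socket

# A connected pair of local sockets to demonstrate with.
a, b = socket.socketpair()

# dup() creates an independent socket sharing the same connection;
# closing the original must not tear down the duplicate.
c = a.dup()
a.close()
c.sendall(b"hello")
print(b.recv(5))  # the duplicate's end still delivers data

# makefile() depends on similar semantics: the file object keeps the
# underlying connection open even after the socket object is closed.
f = b.makefile("rb")
b.close()
c.sendall(b"again")
print(f.read(5))  # readable via the file despite b.close() above

f.close()
c.close()
```

This is exactly the sort of behaviour the test_socket suite exercises: the real close of the file descriptor has to be deferred until every object referencing the connection is done with it.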

- Jim