Alex Fabijanic wrote:
> so much information (and no code or a coherent document) that intent
You are not going to get code and then be able to say 'no'. By the time
it's done, I'll not be in a mood to change it, because it will meet my
needs. You seem to be expecting that I make a big formal proposal as a
done deal for a design and then you get to veto it, but that isn't going
to happen. As, indeed, you have made clear for your IO code: you don't
want it to change very much.
I HAVE suggested discussion. I HAVE NOT suggested a finished design
be put on the table - that ignores any iterative design for a start.
If you have no comment to make on the design of an AIO wrapper, then
'fess up and we'll finish this farce. I've asked for opinion on a
design approach.
I think it would be important that a common approach to what constitutes
a channel be agreed, but you have avoided the issue again. Please, let's
at least try to agree what we could do for a common abstraction for
pushing data through a system to some sort of sink (ditto source),
whether it's a serial port, a related process, or a socket.
I really think this is important because otherwise there will be
divergent abstractions for nearly the same thing, and that can only be
bad. We might not have all the variations of blocking, non-blocking,
pulled, or automatically delivered on all channel types, but I'd hope
that where there IS commonality there is a common API.
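To make the kind of seam I mean concrete, here is a sketch only - all
names (ChannelSink, BufferSink) are invented for discussion and none of
this is existing Poco API. The point is the shape: a byte sink with
back-pressure signalled through the return value, plus half-close:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: a minimal "channel" seam that could sit in front
// of a serial port, a related process, or a socket. Invented names.
class ChannelSink {
public:
    virtual ~ChannelSink() {}
    // Returns the number of bytes accepted; fewer than 'len' signals
    // back-pressure to the caller.
    virtual std::size_t write(const char* data, std::size_t len) = 0;
    // Half-close: no more writes; reads on a duplex peer may continue.
    virtual void shutdownSend() = 0;
};

// A trivial in-memory sink, just so the interface can be exercised.
class BufferSink : public ChannelSink {
public:
    std::size_t write(const char* data, std::size_t len) override {
        buffer_.insert(buffer_.end(), data, data + len);
        return len;
    }
    void shutdownSend() override { closed_ = true; }
    const std::vector<char>& buffer() const { return buffer_; }
    bool closed() const { return closed_; }
private:
    std::vector<char> buffer_;
    bool closed_ = false;
};
```

Whether write() should block, fail partial, or queue internally is
exactly the blocking/non-blocking variation question above.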
You went so far as to implement an abstraction for Sockets - but it
stops there. Can we discuss how that sort of model works for active and
passive connections, and partial shutdown? Even if we shirk OOB? I
don't understand, and you're the architect of this code.
Do you not regard it as appropriate to discuss how to model streams (in
the sense of a data stream rather than your existing simple duplex
model)? There are more concerns than a pair of raw data streams.
I'm quite happy to get on with coding if you want, but we will have lost
the opportunity to collaborate over requirements or design. I won't be
sympathetic if you have objections later that could be addressed at this
stage. I understand that you don't want to go this way, but I don't
understand what you might propose instead except to wait for hardware
to get bigger and operating systems to be able to cope.
You want to know what I want to do by way of butchering existing Poco.
Here is a simple list. This isn't addressing the old issue of how
documentation is extracted from the sources.
> is partially clear to me, but scope is not at all. And we are still
> discussing whether you'll join the project or go on your own. Why
> don't you implement POCO::AIO or POCO::Coroutine library (after all,
> it's not that much work - they'll be just wrappers) in the sandbox?
Because there are issues in trunk that require a full branch, rather
than the treatment of additional modules tacked on the side of the
existing code.
I'm not expecting to create a POCO::Coroutine library. That's the point
of a design discussion.
I think it appropriate to consider a more abstracted 'message driven
task' that becomes logically runnable when it receives a message.
That might be:
- code that executes directly in the message delivery
- a coroutine task that becomes runnable when the message is queued,
and is queued onto a scheduler
- an active object (with a limited stack size) that is released by the
arrival of a message
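As a discussion aid only, the common seam those three variants share
might look like this - invented names (MessageTask, InlineTask,
QueuedTask), nothing here is proposed as final API:

```cpp
#include <deque>
#include <functional>
#include <string>
#include <utility>

// Sketch of the 'message driven task' idea: delivery is uniform, and
// *how* the task then executes is a policy of the concrete class.
struct Message { std::string payload; };

class MessageTask {
public:
    virtual ~MessageTask() {}
    virtual void deliver(const Message& m) = 0;
};

// Variant 1: code that executes directly in the message delivery.
class InlineTask : public MessageTask {
public:
    explicit InlineTask(std::function<void(const Message&)> fn)
        : fn_(std::move(fn)) {}
    void deliver(const Message& m) override { fn_(m); }
private:
    std::function<void(const Message&)> fn_;
};

// Variants 2/3: the message is queued and the task becomes logically
// runnable, to be resumed later by a coroutine scheduler or handed to
// a small-stack active-object thread.
class QueuedTask : public MessageTask {
public:
    void deliver(const Message& m) override { queue_.push_back(m); }
    bool runnable() const { return !queue_.empty(); }
    Message take() {
        Message m = queue_.front();
        queue_.pop_front();
        return m;
    }
private:
    std::deque<Message> queue_;
};
```

The scheduler that notices runnable() and resumes the task is the part
that actually needs the design discussion.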
> Then we can talk about concrete stuff, attacking one or two concerns
I am trying to talk about things without spending too much time coding.
Talk is cheap.
Code is not.
Seriously, we don't even have an agreed objective - how can you consider code?
> at a time. Even if you go on your own, the code is free for anyone, so
> you can use it, too.
I do not believe that code freedom necessarily implies any freedom over
the use of the
term 'Poco' however, and since that's a namespace name I'm not sure
where things stand.
Any statement on that needs to come from Guenter I suspect. I don't
believe I can take
your word for it.
Anyway, here's a list that I constructed from my notes on 'Gotchas in
POCO' and some bits
that I wrote already.
Changes to Poco:
First, split out the very basic configuration detection so it can be
used from the imported reference code, much of which is written in C.
And look at a unified build env for all the reference code and any
tools (eg re2c, lemon, ragel, etc - some of which might be GPL but
generate unencumbered sources, so care is needed in the core
build unification: perhaps use POSH and feed into the Foundation
configuration).
Import and reorganise reference code, so we can determine at config time
whether to build a custom library or (where appropriate) reference an
existing library maintained with the system:
4) new: ossp uuid (displace UUID code)
5) new: fast LZW compressor from ZFS
6) new: encoders/decoders base64 etc from google code (might displace
some of zlib)
7) new: json
8) new: atomic ops from *BSD, Solaris, HP, possibly PostgreSQL.
9) new: custom allocator (nedmalloc, jemalloc, ptmalloc3 - TBD)
10) new: high performance 'movable' collections etc from NTL (extract TBD)
11) New: CDR (from ACE)
12) New: ONC streams (TBD)
13) New: MUSCLE message codecs
14) New: coroutines (dekorte, look for other non-GPL sources for crossref)
15) New: libev (or possibly libevent - TBD)
16) New: OpenSSL
To think about: iconv, cryptopp, ajisai (sp?), YARD.
Immediate fixes (there will be more as I work through more of the library):
Make it harder to get the sharing wrong when passing pointers around.
Use atomic ops in the counting policy.
Use the custom allocator.
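The atomic counting policy point, sketched with std::atomic (invented
class name, standing in for whatever the counting policy would become):

```cpp
#include <atomic>

// Sketch: a reference counter whose increment/decrement are lock-free,
// so shared pointers can be copied across threads without a mutex.
class AtomicCounter {
public:
    explicit AtomicCounter(int v = 1) : value_(v) {}
    int increment() {
        // Relaxed is enough for a pure increment of a live object.
        return value_.fetch_add(1, std::memory_order_relaxed) + 1;
    }
    int decrement() {
        // Acquire/release so the thread that sees zero also sees all
        // prior writes before it deletes the object.
        return value_.fetch_sub(1, std::memory_order_acq_rel) - 1;
    }
    int value() const { return value_.load(std::memory_order_relaxed); }
private:
    std::atomic<int> value_;
};
```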
3) Timestamp, Timespan, Timestamp::TimeDiff etc
Clean up to avoid unexpected assignments between raw int64 values.
Use float instead of int64 for better performance on 32 bit systems
and atomic copy operations.
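The 'unexpected assignments' point can be made concrete with a strong
type over the raw count - a sketch with invented names (MicroSpan), not
a proposal for the actual Timestamp API:

```cpp
#include <cstdint>

// Sketch: wrapping the raw value in a distinct type stops a time value
// being silently assigned from, or mixed with, an unrelated int64.
class MicroSpan {
public:
    explicit MicroSpan(std::int64_t us) : us_(us) {}
    std::int64_t count() const { return us_; }
    MicroSpan operator+(MicroSpan other) const {
        return MicroSpan(us_ + other.us_);
    }
private:
    std::int64_t us_;
};

// MicroSpan s = 42;     // does not compile: constructor is explicit
// MicroSpan t = s + 5;  // does not compile: no implicit int mixing
```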
4) Configuration tweaks
I have some helpers and want to review what happens when you make changes
and try to persist them as application overrides. Plus json config.
5) Command line
I'm still in two minds about this: I prefer to use a code generator
where possible and this is a good candidate.
There are a number of hand-written parsers. I'd rather replace them
with ones generated from a specification, using re2c and/or ragel. TBD.
Ragel is attractive with the preprocessor that enables almost direct
use of RFC-style ABNF.
The simple select should go, in favour of a model based on libev or
libevent, even if we're going to do a reactive system rather than aio.
That will scale better on platforms which have the faster mechanisms.
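For illustration only, this is the readiness-notification shape that a
libev/libevent layer generalises - poll() as the portable floor, with
epoll/kqueue substituted where available. 'readable' is an invented
helper, not any library's API:

```cpp
#include <poll.h>
#include <unistd.h>  // pipe()/write()/close() for the usage example

// Sketch: level-triggered readiness on one descriptor. Unlike select(),
// poll() is not capped by FD_SETSIZE, and the libev-style backends
// (epoll, kqueue) avoid rescanning the whole descriptor set per call,
// which is where the scaling on large connection counts comes from.
bool readable(int fd, int timeoutMs) {
    pollfd p;
    p.fd = fd;
    p.events = POLLIN;
    p.revents = 0;
    return poll(&p, 1, timeoutMs) == 1 && (p.revents & POLLIN) != 0;
}
```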
There are some API cheats I had to use last time I tried making
some progress on C10K, so I could expose some of the API wrapping
in SocketImpl. It was all messy, and SocketImpl should just be
changed rather than doing these hacks that temporarily wrap a
poco_socket_t. I also have some code that is designed to help with
the socket retention and reuse that's allowed on Win32 after a
socket is shut down.
8) Single Thread Build
I'd like to make sure that the library will build for a single-threaded
runtime, with threading stubbed out. Some systems (eg PostgreSQL) that
have plugin APIs don't like to have threaded libraries loaded.
This is a low-ish priority, at least until I need to integrate with
PostgreSQL.
Compiler-provided TLS is a LOT faster - 30 or more times faster.
This requires some preprocessor trickery and for the application to
request it (won't work in an explicitly dynamically loaded addin)
but it is hugely helpful when available. It makes the implicit
TLS usage of the NDC stack not suck.
I have an implementation for this but it may be contentious.
Additionally, I have concerns about the way the TLS system reacts
to code that is executed by threads that were not started by POCO.
There seems to be an assumption that there is just one such thread,
the initial thread, but it's not true when you use the Tibco RV
libraries, for example, nor with upcalls from threaded 'scripting'
engines.
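To make the performance contrast concrete, here is a sketch of the two
mechanisms side by side (invented function names; the exact speedup is
platform-dependent, so treat the ratio above as the author's measurement,
not mine). thread_local compiles to a direct segment-relative load,
while a POSIX key costs a function call plus indirection per access:

```cpp
#include <pthread.h>
#include <cstdint>

// Compiler-provided TLS: one instruction-ish access per use.
static thread_local int fastCounter = 0;

// Classic POSIX TLS: key creation once, then get/set calls per use.
static pthread_key_t slowKey;
static pthread_once_t keyOnce = PTHREAD_ONCE_INIT;
static void makeKey() { pthread_key_create(&slowKey, nullptr); }

int bumpFast() { return ++fastCounter; }

long bumpSlow() {
    pthread_once(&keyOnce, makeKey);
    // Store the count directly in the slot pointer to keep the sketch
    // allocation-free.
    std::intptr_t v =
        reinterpret_cast<std::intptr_t>(pthread_getspecific(slowKey)) + 1;
    pthread_setspecific(slowKey, reinterpret_cast<void*>(v));
    return static_cast<long>(v);
}
```

The dynamic-loading caveat above applies to the thread_local path: some
platforms restrict it in explicitly dlopen()ed modules.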
The behaviour of ThreadPool (which throws an insert rejection when
it is 'full') is pretty naff. Some sort of queueing is necessary
because applications can't usually handle such a throw except by
sleeping and we then have the same deadlock possibilities we do with
a queue in front of a bounded thread pool - and the latter is a lot
cleaner in code and easier to document, especially if a manager thread
is able to (slowly) create threads over the limit if the pool looks
stuck (I think the .NET pool does this). Consider throwing in the
towel and hiding this in favour of a model like the Java executor
framework.
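The 'queue in front of a bounded pool' alternative, as a minimal sketch
with invented names (QueuedPool) - a real version would also need the
slow over-limit thread creation described above, which this omits:

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <utility>
#include <vector>

// Sketch: submit() enqueues instead of throwing when all workers are
// busy; the destructor drains the queue, then joins the workers.
class QueuedPool {
public:
    explicit QueuedPool(std::size_t threads) {
        for (std::size_t i = 0; i < threads; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~QueuedPool() {
        { std::lock_guard<std::mutex> l(mx_); stop_ = true; }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> l(mx_); queue_.push_back(std::move(job)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> l(mx_);
                cv_.wait(l, [this] { return stop_ || !queue_.empty(); });
                if (stop_ && queue_.empty()) return;  // drained; exit
                job = std::move(queue_.front());
                queue_.pop_front();
            }
            job();
        }
    }
    std::mutex mx_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> queue_;
    std::vector<std::thread> workers_;
    bool stop_ = false;
};
```

An unbounded queue just moves the deadlock/overload question, of course;
a bounded queue with a blocking submit() is where the real policy
decision lives.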
I have a set of templated synchronization primitives that provides
some additional functionality intra-process (which is probably
neither here nor there) but also implementations for inter-process
use, which is rather important for me.
I plan to review whether it makes sense to rip the existing
implementations and replace them, and leave legacy adapters.
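The templating I have in mind has roughly this shape - a sketch with
invented names (BasicMutex, InProcessMutexPolicy), with std::mutex
standing in for what would really be an inter-process capable backing
such as a pthread mutex in shared memory or a named semaphore:

```cpp
#include <mutex>

// Policy for purely intra-process use; an InterProcessMutexPolicy with
// the same two methods could be swapped in without touching call sites.
struct InProcessMutexPolicy {
    void lock()   { m_.lock(); }
    void unlock() { m_.unlock(); }
private:
    std::mutex m_;
};

template <typename Policy>
class BasicMutex {
public:
    void lock()   { policy_.lock(); ++locks_; }
    void unlock() { policy_.unlock(); }
    // Example of the extra intra-process functionality mentioned above.
    int lockCount() const { return locks_; }
private:
    Policy policy_;
    int locks_ = 0;
};

using Mutex = BasicMutex<InProcessMutexPolicy>;
```

Legacy adapters would then be thin typedefs/wrappers over the templated
forms.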
That's probably more than enough to be going on with. It's not
even touching on the things I think will be needed to support
AIO properly such as flow control propagation, rate limiting,
page-aligned IO buffer pooling, and an abstracted 'protocol
task' system that handles passive 'push model' tasks, coroutines,
and small-thread 'active tasks'.
Nor does it extend to writing and delegating to FastCGI,
SCGI, CGI or the Python servlet system (WSGI?) or subprocess
handlers running FastCGI over stdin/stdout.
I'm almost talking myself out of it.
Maybe I should just go back to playing with MINA from the comfort
of Eclipse. ;-)